WO2020251238A1 - Method for obtaining user interest information based on input image data, and method for customizing object design - Google Patents


Info

Publication number
WO2020251238A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
appearance
server
user
individual
Prior art date
Application number
PCT/KR2020/007445
Other languages
English (en)
Korean (ko)
Inventor
이종혁
전혜은
Original Assignee
(주)사맛디
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)사맛디
Publication of WO2020251238A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G06F 16/7837 Retrieval characterised by using metadata automatically derived from the content, using objects detected or recognised in the video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73 Querying
    • G06F 16/735 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Definitions

  • The present invention relates to a method for obtaining user interest information based on input image data and a method for customizing an object design.
  • Existing methods obtain user interest information from images based on keywords tagged directly by the user, so the acquired result becomes inaccurate when the user tags an image with an incorrect keyword.
  • Moreover, the obtained interest information differs depending on which keyword the user who inputs the image selects.
  • To solve the above problem, the present invention provides a method and program for obtaining user interest information by analyzing image data input by the user.
  • The present invention also provides a method and program that output specific image data to the user and let the user modify the output image data, so that user interest information can be obtained more accurately from the modified information.
  • The present invention further provides a method and program with which a user can easily customize an object design through a customizing interface.
  • An object of the present invention is to provide a method and program for customizing an object design using a plurality of appearance classification criteria and individual appearance characteristics corresponding to the object.
  • The present invention also provides a method and program for customizing an object design using a preset standard model.
  • The present invention also provides a method and program for recommending an object suited to a user by using abstract characteristics corresponding to the object or to the user's design data.
  • A method for obtaining user interest information includes: the server inputting first input image data into an appearance characteristic recognition model and calculating individual appearance characteristics for a plurality of appearance classification criteria; the server generating first appearance description data by combining a plurality of individual appearance characteristics of the first input image data; and the server generating and outputting first output image data based on the first appearance description data.
  • Here, the first input image data is image data input by a specific user, and an appearance classification criterion is a specific classification criterion for describing the appearance of a specific object and may include a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion.
  • The first input image data is image data of a specific article of a specific object received from the user, and the first output image data may be image data of a virtual article of the specific object generated based on the first appearance description data.
  • Generating the first appearance description data may include extracting code values corresponding to the plurality of individual appearance characteristics of the first input image data and combining the plurality of code values to generate the first appearance description data as a code string.
  • The first output image data may be image data of a virtual article that includes the plurality of individual appearance characteristics contained in the first appearance description data.
  • The method may further include: the server inputting second input image data into the appearance characteristic recognition model and calculating individual appearance characteristics for the plurality of appearance classification criteria; and the server generating second appearance description data by combining a plurality of individual appearance characteristics of the second input image data, where the second input image data is the first output image data as modified by the user.
  • The method may further include the server storing the first appearance description data or the second appearance description data as the user's interest information.
  • A program for obtaining user interest information based on input image data is combined with hardware to execute the above method, and is stored in a recording medium.
  • An object design customizing method includes: the server determining, based on a first user input, an object, a plurality of appearance classification criteria corresponding to the object, and a plurality of individual appearance characteristics corresponding to each of the plurality of appearance classification criteria; the server providing a customizing interface based on these; and the server generating design data of the object based on a second user input detected through the customizing interface and a preset standard model.
  • An appearance classification criterion is a specific classification criterion for describing the appearance of a specific object and includes a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object; the customizing interface may include the design data and a plurality of menus matched with the plurality of individual appearance characteristics corresponding to the object.
  • The method may further include the server displaying the generated design data in the customizing interface.
  • The second user input may be an input that selects at least one menu from among the plurality of menus.
  • The standard model includes at least one of a standard human body model, a fixed joint line, and a length reference line for indicating the plurality of individual appearance characteristics, and the method may further include the server generating the design data based on at least one of a fixed joint line and a length reference line corresponding to the at least one menu selected by the second user input.
  • The method may further include the server changing the design data based on the standard model and a third user input detected through the customizing interface.
  • The method may further include: the server extracting, based on a matching algorithm, a recommended object corresponding to a combination of appearance classification criteria matched with an abstract characteristic that corresponds to the object or to the generated design data; and the server providing design data corresponding to the extracted recommended object to the user through the customizing interface.
  • The method may further include the server changing the design data of the recommended object based on a fourth user input detected through the customizing interface and the preset standard model.
  • An object design customization program is combined with hardware to execute the above-described object design customization method, and is stored in a recording medium.
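The claims above describe design data being generated by combining menu selections with a preset standard model. A minimal sketch of that flow is below; the menu names, the standard model's joint and length reference lines, and the anchoring rule are all hypothetical illustrations, not taken from the patent.

```python
# Hypothetical preset standard model: fixed joint lines and length reference
# lines, represented here as named y-coordinates on a standard human body model.
STANDARD_MODEL = {
    "shoulder_joint_line": 40,
    "waist_joint_line": 100,
    "knee_length_line": 160,
}

def generate_design_data(obj, menu_selections):
    """Combine user menu selections (the second user input) with the
    preset standard model to produce design data for the object."""
    design = {"object": obj}
    for criterion, characteristic in menu_selections.items():
        design[criterion] = characteristic
    # Anchor the hem geometry to a fixed reference line so only the changed
    # parts need re-rendering (the speed benefit of the preset standard model).
    if menu_selections.get("length") == "knee":
        design["hem_y"] = STANDARD_MODEL["knee_length_line"]
    return design

design = generate_design_data(
    "dress", {"color": "navy", "length": "knee", "fit": "slim"}
)
```

A third or fourth user input would be handled the same way, re-running the combination over the updated selections.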
  • According to the present invention, by analyzing image data and storing the user's interest information in the form of text-based appearance description data, user interest information can be acquired and stored efficiently.
  • By providing the user with a customizing interface, the user can easily create and change an object design.
  • The customizing interface gives the user design freedom, while the preset standard model increases the processing speed of the customizing method.
  • User satisfaction can be maximized because the user can easily and simply request creation of an object reflecting a desired design through the customizing interface.
  • FIG. 1 is a block diagram showing a server and related configurations according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a server including an external feature recognition model for each object according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method of obtaining user interest information based on input image data according to an embodiment of the present invention.
  • FIG. 4 is a block diagram of a system for obtaining user interest information based on input image data according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a method of generating outline description data according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for obtaining user interest information based on input image data, further comprising the step of receiving second input image data according to an embodiment of the present invention.
  • FIG. 7 is an exemplary diagram for describing a standard model according to an embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating a method of customizing an object design according to an embodiment of the present invention.
  • FIGS. 9 to 21 are exemplary diagrams for explaining a method of customizing an object design according to an embodiment of the present invention.
  • FIG. 22 is a flowchart illustrating a method of providing a recommended object according to an embodiment of the present invention.
  • FIG. 23 is an exemplary diagram for describing a method of providing a recommended object according to an embodiment of the present invention.
  • A 'computer' includes any device capable of performing arithmetic processing and providing results to a user.
  • A computer may be not only a desktop PC or notebook but also a smart phone, tablet PC, cellular phone, PCS phone, a synchronous/asynchronous mobile terminal of IMT-2000 (International Mobile Telecommunication-2000), a Palm Personal Computer (PC), a personal digital assistant (PDA), and the like.
  • A head mounted display (HMD) device that includes a computing function may also be a computer.
  • The computer may also correspond to a server that receives a request from a client and performs information processing.
  • A 'client' is any device with a communication function on which a user can install and use a program (or application); it may include, but is not limited to, a telecommunication device such as a smart phone, tablet, PDA, laptop, smart watch, or smart camera, and a remote controller.
  • An 'object' is an article of a specific classification or category on which a search is performed.
  • For example, when a user wants to search for an image of a desired item in a shopping mall and searches within the clothing category, the object is clothes.
  • 'Image data' is a two-dimensional or three-dimensional, static or dynamic image that includes a specific object; it may be static image data consisting of one frame, or dynamic image data (i.e., moving image data) in which a plurality of frames are consecutive.
  • 'Learning image data' means image data used for training a learning model.
  • An 'appearance classification criterion' is a classification criterion of appearance expressions needed to describe or annotate the appearance of a specific object; it is a specific classification criterion for describing the appearance of a specific object and includes a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object.
  • For clothing, the appearance classification criteria may correspond to pattern, color, fit, length, and the like; the more appearance classification criteria a specific object has, the more precisely the appearance of a specific article belonging to that object can be described.
  • 'Individual appearance characteristics' are the various characteristics included in a specific appearance classification criterion; for example, if the criterion is color, the individual appearance characteristics are the various individual colors.
  • The 'expert client 30' is the client of an expert who assigns individual appearance characteristics to learning image data (i.e., labels the learning image data) or assigns image data individual appearance characteristics within unlearned appearance classification criteria.
  • An 'abstract characteristic' is an abstract characteristic given to a specific object.
  • The abstract characteristic may be an emotional characteristic of a specific object (for example, in the case of clothing, an emotional or fashion expression such as 'vintage').
  • When the image data is a moving picture, an abstract characteristic may also mean a shape change or motion.
  • FIG. 1 is a block diagram showing a server and related configurations according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a server including an appearance characteristic recognition model for each object according to an embodiment of the present invention.
  • The image data search method of the server 10 accurately extracts image data desired by the user based on abstract terms representing the appearance of a specific object.
  • The object design customizing method may be performed based on this image data search method, so the image data search method is described first.
  • The server 10 inputs image data into the appearance characteristic recognition model 100 and obtains individual appearance characteristics for a plurality of appearance classification criteria.
  • The server 10 then generates appearance description data by combining a plurality of individual appearance characteristics of the image data; when the server 10 receives a search keyword from a specific user, the matching algorithm 200 extracts image data corresponding to a combination of appearance classification criteria matched with the abstract characteristic corresponding to the search keyword.
  • The server 10 may store the plurality of appearance classification criteria, the plurality of individual appearance characteristics, abstract characteristics, appearance description data, extracted image data, customized design data, and the like in the database 400.
  • The server 10 inputs image data into the appearance characteristic recognition model 100 to calculate individual appearance characteristics for a plurality of appearance classification criteria; that is, the server 10 provides new image data, for which appearance characteristic analysis has not yet been performed, to the appearance characteristic recognition model 100 to calculate the individual appearance characteristics for each appearance classification criterion of a specific object.
  • As shown in FIG. 1, the appearance characteristic recognition model 100 includes a plurality of individual characteristic recognition modules 110, each specialized to recognize a different appearance classification criterion. The more appearance classification criteria a specific object has, the more individual characteristic recognition modules 110 the server 10 includes in the appearance characteristic recognition model 100. Each individual characteristic recognition module 110 calculates the individual appearance characteristics of image data within a specific appearance classification criterion.
  • In this way, the server 10 acquires the individual appearance characteristics of every appearance classification criterion for the image data.
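The structure described above (one recognition module per appearance classification criterion, combined into a single model) can be sketched as follows. The stub functions and thresholds are hypothetical stand-ins for trained deep learning classifiers, not the patent's actual modules.

```python
# Each stub stands in for a trained individual characteristic recognition
# module 110 specialized for one appearance classification criterion.
def color_module(image):
    return "red" if image["mean_hue"] < 30 else "blue"

def pattern_module(image):
    return "striped" if image["edge_density"] > 0.5 else "plain"

# appearance characteristic recognition model 100 =
#   {appearance classification criterion: individual characteristic recognition module}
RECOGNITION_MODEL = {"color": color_module, "pattern": pattern_module}

def recognize(image):
    """Return one individual appearance characteristic per classification criterion."""
    return {criterion: module(image) for criterion, module in RECOGNITION_MODEL.items()}

features = recognize({"mean_hue": 10, "edge_density": 0.8})
```

Adding a new appearance classification criterion then amounts to adding one more entry to the model's module table.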
  • Each individual characteristic recognition module 110 is trained through a deep learning model by matching a plurality of training image data with the individual appearance characteristics of a specific appearance classification criterion; that is, each module 110 is built on a specific deep learning algorithm and is trained by matching training image data with one specific criterion among the plurality of appearance classification criteria.
  • The server 10 may train each individual characteristic recognition module 110 as follows.
  • First, the server 10 acquires a plurality of training image data for a specific object; for example, when the object is a specific type of clothing (e.g., a shirt), the server 10 acquires images of several shirts.
  • The training image data may be selected by an expert from previously stored image data, or the server 10 may newly acquire object images that are easy to learn from.
  • The server 10 then acquires the definition of each appearance classification criterion and the plurality of individual appearance characteristics within each criterion; that is, the server 10 sets the initial number of individual characteristic recognition modules 110 according to the plurality of appearance classification criteria, and, as the individual appearance characteristics within each criterion are set, fixes the types of labels to be attached to the training image data for each criterion.
  • The server 10 may receive, from the expert client 30, the plurality of appearance classification criteria for analyzing the appearance of a specific object and the plurality of individual appearance characteristics within each criterion.
  • For example, for clothing, the server 10 may receive the appearance classification criteria and the individual appearance characteristics they contain from the client of a designer who is a clothing expert.
  • The server 10 labels the training image data with the individual appearance characteristics of each appearance classification criterion; that is, for each training image data the server 10 receives and matches at least one individual appearance characteristic for each of the plurality of appearance classification criteria. For example, when 10 appearance classification criteria are set for a specific object, the server 10 receives one individual appearance characteristic for each of the 10 criteria for each training image data containing the object, forming a training data set that matches the image data with 10 individual appearance characteristics.
  • The server 10 performs training by matching the training image data with the individual appearance characteristics of the specific appearance classification criterion labeled for it; that is, when training the individual characteristic recognition module 110 for appearance classification criterion A, the server 10 extracts from the training data set only the training image data and the matched individual appearance characteristics of criterion A and inputs them into the deep learning model. In this way, the server 10 builds an individual characteristic recognition module 110 capable of recognizing the individual appearance characteristics of each appearance classification criterion.
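The per-criterion training step can be illustrated with a deliberately simple stand-in for the deep learning model: a nearest-centroid classifier trained only on the labels of one appearance classification criterion extracted from the full training data set. The feature vectors and labels are invented for illustration.

```python
def train_module(dataset, criterion):
    """dataset: list of (feature_vector, labels_dict), where labels_dict maps
    each appearance classification criterion to its labeled characteristic.
    Returns a module that predicts that one criterion's characteristic."""
    sums, counts = {}, {}
    for vec, labels in dataset:
        label = labels[criterion]  # extract only this criterion's label
        s = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    centroids = {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

    def module(vec):
        # predict the individual appearance characteristic with the nearest centroid
        return min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(vec, centroids[lab])))
    return module

dataset = [
    ([1.0, 0.0], {"color": "red"}),
    ([0.9, 0.1], {"color": "red"}),
    ([0.0, 1.0], {"color": "blue"}),
]
color_module = train_module(dataset, "color")
```

In the patent's setting the classifier would be a deep network and the feature vectors would be images; the extraction of one criterion's labels from the shared training data set is the point being illustrated.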
  • The appearance characteristic recognition model 100 includes a different combination of individual characteristic recognition modules 110 for each object type (for example, fashion accessory types such as shoes, wallets, and bags).
  • The server 10 creates a combination of individual characteristic recognition modules 110 for each object type, producing a specialized appearance characteristic recognition model for recognizing the appearance of each specific object.
  • The appearance characteristic recognition models 100 for a plurality of objects may share a specific individual characteristic recognition module 110; for example, since a color recognition module can be used regardless of object type, the server 10 can use one universal color recognition module across the plurality of recognition models 100 that are otherwise distinguished per object.
  • The server 10 generates appearance description data by combining or listing a plurality of individual appearance characteristics of the image data; when the appearance classification criteria for a specific object are finely divided, the appearance description data describes the appearance of the object in detail through the individual appearance characteristics.
  • Generating the appearance description data includes extracting the code values corresponding to the plurality of individual appearance characteristics of the image data and combining the plurality of code values to generate the appearance description data in the form of a code string; by coding the individual appearance characteristics in this way, the server 10 can process the appearance description data efficiently.
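Coding individual appearance characteristics into a code string might look like the following sketch; the code table and the ordering of criteria are assumptions for illustration, not values from the patent.

```python
# Hypothetical code table: each appearance classification criterion maps its
# individual appearance characteristics to fixed-width code values.
CODE_TABLE = {
    "color":   {"red": "01", "blue": "02", "navy": "03"},
    "pattern": {"plain": "01", "striped": "02"},
    "length":  {"short": "01", "knee": "02", "long": "03"},
}
CRITERIA_ORDER = ["color", "pattern", "length"]

def to_description_code(characteristics):
    """Combine per-criterion code values into appearance description data
    in the form of a code string."""
    return "".join(CODE_TABLE[c][characteristics[c]] for c in CRITERIA_ORDER)

code = to_description_code({"color": "navy", "pattern": "striped", "length": "knee"})
```

Because each criterion occupies a fixed position in the string, comparing or indexing appearance description data reduces to cheap string operations.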
  • When a specific object has an unlearned appearance classification criterion for which no individual characteristic recognition module 110 has been built (for example, a criterion that is difficult to recognize through a deep learning model, or a newly created criterion whose module 110 has not yet been built), the server 10 receives the individual appearance characteristics for that unlearned criterion for each image data from the expert client 30 or the image provider client 40.
  • The server 10 then generates the appearance description data by combining the input individual appearance characteristics with the calculated individual appearance characteristics.
  • The input individual appearance characteristics are obtained for the unlearned appearance classification criteria from the image provider client 40 that provided the image data or from an expert client, while the calculated individual appearance characteristics are produced by inputting the image data into the individual characteristic recognition modules 110.
  • The matching algorithm 200 extracts image data corresponding to a combination of appearance classification criteria matched with the abstract characteristic corresponding to the search keyword (S600).
  • When a user searches for desired image data with a search keyword that is one of the abstract characteristics of a specific object, or a keyword similar to such an abstract characteristic, the server 10 uses the matching algorithm 200 to extract the combination of appearance classification criteria matching that abstract characteristic, and then extracts the image data whose appearance description data contains that combination.
  • An abstract characteristic may be matched with a plurality of individual appearance characteristics of a specific appearance classification criterion, and the server 10 may leave a specific appearance classification criterion unmatched with the abstract characteristic.
  • For example, the server 10 may match appearance classification criterion 1 with abstract characteristic X, and may match a plurality of individual appearance characteristics of appearance classification criterion 2 with abstract characteristic X.
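A sketch of such a matching algorithm: the abstract characteristic maps to a combination in which one criterion is matched to a single individual appearance characteristic, another to several, and the rest are left unmatched (ignored). All names and matching rules are hypothetical illustrations.

```python
# Hypothetical matching table: abstract characteristic -> allowed individual
# appearance characteristics per appearance classification criterion.
MATCHING = {
    "vintage": {
        "pattern": {"check", "floral"},  # criterion matched to several characteristics
        "color": {"brown"},              # criterion matched to one characteristic
        # "length" is deliberately absent: unmatched, so any value is acceptable
    }
}

def matches(description, abstract):
    """True if the image's appearance description data satisfies every
    matched criterion of the abstract characteristic."""
    rule = MATCHING[abstract]
    return all(description.get(c) in allowed for c, allowed in rule.items())

catalog = [
    {"id": 1, "pattern": "check", "color": "brown", "length": "long"},
    {"id": 2, "pattern": "plain", "color": "brown", "length": "long"},
]
hits = [item["id"] for item in catalog if matches(item, "vintage")]
```

A keyword search then reduces to mapping the keyword to its abstract characteristic and filtering stored appearance description data with `matches`.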
  • When a new appearance classification criterion for a specific object is added, the method further includes: the server 10 obtaining individual appearance characteristics of the new criterion for the training image data and constructing a new training data set; and the server 10 training a new individual characteristic recognition module 110 based on the new training data set and adding it to the appearance characteristic recognition model. That is, when a new appearance classification criterion is added (for example, a new criterion for dividing the appearance characteristics of clothing), the server 10 does not change the existing individual characteristic recognition modules 110; by additionally building only the module 110 for the new criterion, it adapts the appearance characteristic recognition model 100 to the newly added criterion.
  • First, the server 10 obtains the individual appearance characteristics of the new appearance classification criterion for the training image data and constructs a new training data set.
  • When building the new individual characteristic recognition module 110 with the same image data previously used to train other modules 110, the server 10 receives from the expert client 30 the individual appearance characteristics of the new criterion for each training image data.
  • Alternatively, the server 10 may acquire new image data for training the module 110 for the new criterion, receive the individual appearance characteristics of the new criterion for each image, and construct a new training data set from these.
  • The server 10 trains the new individual characteristic recognition module 110 based on the new training data set and adds it to the appearance characteristic recognition model (S710); through this, the new module 110 is added alongside the existing plurality of individual characteristic recognition modules 110.
  • The method further includes the server 10 inputting the image data, whose appearance description data was already obtained by the established individual characteristic recognition modules 110, into the new module 110 and adding the individual appearance characteristic of the new appearance classification criterion; that is, the server 10 updates the previously acquired appearance description data to reflect the new criterion by inserting all the image data into the new module 110 and calculating the individual appearance characteristics.
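Back-filling previously analyzed image data with a newly added module, without touching the existing modules or the existing description entries, can be sketched as follows (the module, its threshold, and the stored records are all hypothetical):

```python
def sleeve_module(image):
    # New individual characteristic recognition module for a hypothetical
    # "sleeve" appearance classification criterion.
    return "long" if image.get("sleeve_ratio", 0) > 0.5 else "short"

stored = [
    {"image": {"sleeve_ratio": 0.8}, "description": {"color": "red"}},
    {"image": {"sleeve_ratio": 0.2}, "description": {"color": "blue"}},
]

def backfill(records, criterion, module):
    """Run every previously analyzed image through the new module only,
    adding the new criterion's characteristic to its description data."""
    for rec in records:
        rec["description"][criterion] = module(rec["image"])
    return records

backfill(stored, "sleeve", sleeve_module)
```

The existing modules never run again, which is why only the new criterion's cost is paid when the model grows.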
  • the server 10 further includes updating the matching algorithm 200 by matching the individual appearance characteristics of the new appearance classification criteria with each abstract characteristic. That is, when the user searches for image data based on a keyword corresponding to an abstract characteristic, the server 10 reflects the new appearance classification criteria and provides the optimal search result. For characteristics, the individual external characteristics of the new external classification standard are linked.
  • the server 10 further includes the step of setting the matching algorithm 200 by receiving setting data matching the combination of the abstract characteristic and the appearance classification criterion from the expert client.
  • the definition of abstract characteristics may be changed or different due to factors such as regional differences, changes in the times, and establishment of new definitions.
  • abstract characteristics representing specific fashion trends or emotional characteristics may change according to the change of the times, and may be defined differently according to regions around the world (for example, ' The abstract characteristic (ie, emotional characteristic) of'vintage' can be defined as having a different appearance in the past and the present.) Therefore, the server 10 is a matching relationship between the combination of abstract characteristics in the matching algorithm 200 and individual appearance characteristics. You can add or change settings.
  • the server 10 when the definition of a specific abstract characteristic is changed, the server 10 receives a combination of appearance classification criteria for the current abstract characteristic from the expert client 30.
  • the server 10 may set the combination of the abstract characteristic before the change and the appearance classification criterion as the definition of the corresponding abstract characteristic at a specific point in the past. Through this, the server 10 may accumulate definition or description information of specific abstract characteristics according to changes in the times.
  • the server 10 may receive and store a combination of appearance classification criteria for each region from the expert client 30.
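The era- and region-dependent definitions described above can be sketched as a versioned lookup table. This is a minimal illustrative sketch, not the patent's implementation; all function names and sample values (including the 'vintage' combinations) are assumptions.

```python
# Hypothetical sketch of the matching algorithm (200): each abstract
# characteristic maps to a combination of (criterion -> individual
# appearance characteristic), and definitions can be versioned by
# region and era. Sample values are illustrative only.

matching_algorithm = {}

def set_definition(abstract, combination, region="global", era="current"):
    """Store a combination for an abstract characteristic, keyed by
    region and era (e.g., received from the expert client)."""
    matching_algorithm.setdefault(abstract, {})[(region, era)] = dict(combination)

def get_definition(abstract, region="global", era="current"):
    """Look up a definition, falling back to the global/current one."""
    versions = matching_algorithm.get(abstract, {})
    return versions.get((region, era)) or versions.get(("global", "current"))

# 'vintage' may be defined differently in the past and in the present.
set_definition("vintage", {"pattern": "floral", "color": "faded"})
set_definition("vintage", {"pattern": "paisley", "color": "brown"}, era="1990s")

assert get_definition("vintage")["pattern"] == "floral"
assert get_definition("vintage", era="1990s")["color"] == "brown"
```

Keeping old versions instead of overwriting them is what allows the server to accumulate a history of how the appearance definition of a characteristic changed.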
• the method includes: a reference image data acquisition step in which the server 10 obtains reference image data from the user client 20; a step in which the server 10 inputs the reference image data into the appearance characteristic recognition model 100 and calculates individual appearance characteristics for a plurality of appearance classification criteria; a step in which the server 10 generates appearance description data by combining the plurality of individual appearance characteristics of the reference image data; and a step in which the server 10 extracts image data whose appearance description data is identical or similar to that of the reference image data. That is, when a user searches not with a keyword corresponding to an abstract characteristic but with a specific object image the user possesses (i.e., reference image data), the server 10 generates appearance description data for it, extracts image data with identical or similar appearance description data, and provides the result to the user client 20.
• the server 10 acquires reference image data from the user client 20. That is, the server 10 receives reference image data stored on the user client 20 or found online by the user.
• the server 10 inputs the reference image data into the appearance characteristic recognition model 100 and calculates the individual appearance characteristics included in each appearance classification criterion. That is, the server 10 obtains, through each individual characteristic recognition module 110, a plurality of individual appearance characteristics that describe the appearance of the reference image data as text information. Thereafter, the server 10 generates appearance description data by combining the plurality of individual appearance characteristics of the reference image data.
• the server 10 extracts image data whose appearance description data is identical to that of the reference image data.
• the server 10 searches for and provides image data having the same appearance description data as the reference image data.
• when searching for image data within a range similar to the reference image data, the server 10 expands the appearance classification criteria in the appearance description data of the reference image data, starting from those of low importance, to a similar range, and extracts image data that includes one or more of the expanded appearance description data.
• the server 10 may store an importance ranking for the plurality of appearance classification criteria of a specific object (for example, the higher the importance ranking of a criterion, the longer its search range is kept fixed), and may store the degree of similarity between the individual appearance characteristics within a specific appearance classification criterion.
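The expansion from an exact match to a similar range can be sketched as follows. This is an illustrative sketch under assumed data shapes: the importance ranking, the similarity table, and all sample characteristics are invented for the example and are not taken from the patent.

```python
# Sketch of the "similar range" search: criteria of low importance are
# relaxed to similar individual characteristics, while high-importance
# criteria stay fixed. IMPORTANCE and SIMILAR are assumed tables.

IMPORTANCE = ["silhouette", "color", "sleeve_length"]   # high -> low
SIMILAR = {"sleeve_length": {"short": {"short", "medium"}}}

def expand_query(description, relax_below_rank=2):
    """Return, per criterion, the set of acceptable characteristics."""
    query = {}
    for rank, criterion in enumerate(IMPORTANCE):
        value = description[criterion]
        if rank >= relax_below_rank:      # low importance: widen range
            query[criterion] = SIMILAR.get(criterion, {}).get(value, {value})
        else:                             # high importance: exact match
            query[criterion] = {value}
    return query

def search(database, description):
    query = expand_query(description)
    return [item for item in database
            if all(item[c] in allowed for c, allowed in query.items())]

db = [
    {"silhouette": "slim", "color": "pink", "sleeve_length": "medium"},
    {"silhouette": "loose", "color": "pink", "sleeve_length": "short"},
]
ref = {"silhouette": "slim", "color": "pink", "sleeve_length": "short"}
assert search(db, ref) == [db[0]]   # medium accepted as similar to short
```

Lowering `relax_below_rank` widens the search further, which matches the idea of expanding from the lowest-importance criteria first.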
• the method further includes: a step in which the server 10 receives a request for additional image data from the user client 20 and sequentially provides image data having different appearance classification criteria; and a step in which the server 10 sets a personalized abstract characteristic based on the appearance description data of image data additionally selected by the user. That is, when performing a search based on a search keyword, the server 10 expands the search range and provides additional image data while changing at least one appearance classification criterion in the description information of the abstract characteristic corresponding to the search keyword to a different individual appearance characteristic.
• the server 10 receives, from the user, one or more desired image data selected from the expanded search range.
• the server 10 personalizes the search keyword or abstract characteristic entered by the user based on the selected image data. For example, since the generally accepted appearance definition of an abstract characteristic may differ from the appearance definition the user has in mind, the server 10 sets the description information or appearance definition of the abstract characteristic as the user conceives it (that is, the description information of the personalized abstract characteristic) based on the appearance description data of the image data the user selected from the expanded search results. Through this, when the user later searches with the same search keyword or abstract characteristic, the server 10 performs the search based on the description information of the personalized abstract characteristic rather than the general one, so that the images the user wants can be provided first.
• when an abstract characteristic corresponding to the appearance description data of the selected image data exists, the method further includes a step in which the server 10 provides the user client 20 with an abstract characteristic suited to extracting the selected image data. That is, the server 10 notifies the user that the appearance definition the user associates with a specific abstract characteristic differs from the commonly used one, and extracts and provides an abstract characteristic (or search keyword) that matches the appearance definition the user actually has in mind. Through this, the user can learn the search keyword that will yield the desired results when searching again later.
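One simple way to derive a personalized definition from the user's selections is to keep only the characteristics shared by every selected image. This is a minimal sketch under assumed data shapes; the function name and sample values are illustrative, not from the patent.

```python
# Sketch of personalizing an abstract characteristic: the shared
# (criterion -> characteristic) pairs of the images the user selected
# become that user's personal definition of the search keyword.

def personalize(selected_descriptions):
    """Intersect the appearance description data of the selected images."""
    shared = dict(selected_descriptions[0])
    for desc in selected_descriptions[1:]:
        shared = {c: v for c, v in shared.items() if desc.get(c) == v}
    return shared

# The user searched 'vintage' but consistently picked brown items.
picked = [
    {"pattern": "floral", "color": "brown", "silhouette": "loose"},
    {"pattern": "floral", "color": "brown", "silhouette": "slim"},
]
personal_vintage = personalize(picked)
assert personal_vintage == {"pattern": "floral", "color": "brown"}
```

Criteria on which the selections disagree (here, silhouette) are dropped, so future searches with the personalized keyword constrain only what the user consistently prefers.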
• when the image data is moving image data including a plurality of frames, the abstract characteristic may be an expression representing a specific shape change or motion. That is, the abstract characteristic may be a textual expression representing a specific motion or shape change.
• the server 10 generates appearance description data in which the combinations of individual appearance characteristics (that is, the individual appearance characteristics belonging to each appearance classification criterion) for the plurality of frames of the moving image data are arranged in time series. Specifically, the step of calculating individual appearance characteristics is performed for each frame of the moving image data, and the step of generating appearance description data sequentially lists the plurality of individual appearance characteristics of each frame.
• the server 10 includes a matching algorithm 200 in which each abstract characteristic (for example, an expression representing a shape change or motion) is matched with time-series data of individual appearance characteristics within each appearance classification criterion.
  • the server 10 searches for and provides video data corresponding to an abstract characteristic (ie, a specific motion or shape change) desired by the user.
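Matching a motion expression against per-frame characteristics can be sketched as a subsequence test over the time-series data. The motion table and frame labels below are illustrative assumptions, not the patent's data.

```python
# Sketch of matching an abstract motion characteristic against video
# data: per-frame individual characteristics are listed in time series,
# and a motion such as "sitting down" matches when its characteristic
# sequence appears in order (frames may repeat between steps).

MOTIONS = {"sitting down": ["standing", "crouching", "seated"]}

def is_subsequence(pattern, frames):
    """True if every step of the pattern occurs in order in the frames."""
    it = iter(frames)
    return all(step in it for step in pattern)   # 'in' consumes the iterator

def find_motions(frames):
    """Return the abstract characteristics whose sequence occurs in order."""
    return [name for name, pattern in MOTIONS.items()
            if is_subsequence(pattern, frames)]

frames = ["standing", "standing", "crouching", "crouching", "seated"]
assert find_motions(frames) == ["sitting down"]
```

A search for the abstract characteristic "sitting down" would then return video data whose time-series appearance description data passes this test.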
  • FIG. 3 is a flowchart of a method for obtaining user interest information based on input image data according to an embodiment of the present invention
  • FIG. 4 is a block diagram of a system for obtaining user interest information based on input image data according to an embodiment of the present invention.
• the method includes: a step in which the server 10 inputs first input image data into the appearance characteristic recognition model 100 and calculates individual appearance characteristics for a plurality of appearance classification criteria (S1200); a step in which the server 10 generates first appearance description data by combining the plurality of individual appearance characteristics of the first input image data (S1400); and a step in which the server 10 generates and outputs first output image data based on the first appearance description data (S1600).
  • the server 10 inputs the first input image data into the appearance characteristic recognition model 100 to calculate individual appearance characteristics for a plurality of appearance classification criteria (S1200).
  • the first input image data refers to image data input from a specific user who wants to acquire interest information.
  • the first input image data includes image data for a real object or a virtual object.
  • the first input image data may be obtained by various methods.
  • the first input image data may be obtained through an input for a specific user's virtual space interior.
  • the user may input image data for a specific object he or she prefers.
  • the first input image data includes real image data of a specific article of a specific object.
• the user may input a picture (first input image data) of a 'B shirt of brand A' (a specific article) belonging to clothing (a specific object) to be placed in his or her virtual space.
• the server inputs the photo into the appearance characteristic recognition model 100 and can calculate the individual appearance characteristics 'shirt, light pink, floral pattern, slim, V-neck, sleeveless' (a plurality of individual appearance characteristics for the plurality of appearance classification criteria 'color, pattern, top silhouette, neck shape, sleeve length').
  • the first input image data includes virtual image data customized by a user.
  • the server calculates a plurality of individual appearance characteristics from the first input image data.
  • the first input image data may be input by a method for customizing an object design to be described later, but is not limited thereto.
• the user selects a plurality of individual appearance characteristics (e.g., shirt, light pink, floral pattern, slim, V-neck, sleeveless) from the list of individual appearance characteristics provided by the server and inputs the selected individual appearance characteristics. In this case, the server can acquire the individual appearance characteristics selected by the user without having to calculate them separately.
  • the server 10 generates first appearance description data by combining a plurality of individual appearance characteristics with respect to the first input image data (S1400).
• the first appearance description data can specifically describe the appearance of the corresponding object through the individual appearance characteristics.
• the first appearance description data can be generated in the form of {shirt, light pink, floral pattern, slim, V-neck, sleeveless}.
• the first appearance description data generation step (S1400) includes a step of extracting code values corresponding to the plurality of individual appearance characteristics of the first input image data (S1410) and a step of generating the first appearance description data in the form of a code string by combining the plurality of code values (S1420). That is, as the server 10 encodes the individual appearance characteristics, the appearance description data can be generated as a code string, through which the appearance description data can be processed efficiently.
• the first appearance description data may be generated as the code string "Ba01, Bb02, Bg01, Ie01, Ob01, Zb01".
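Steps S1410 and S1420 can be sketched as a table lookup followed by a join. The code table below is a guess modeled on the "Ba01, Bb02, ..." example above; the patent's actual code assignments are not disclosed here, so all mappings are assumptions.

```python
# Hypothetical sketch of S1410-S1420: map each individual appearance
# characteristic to a code value (S1410) and combine the code values
# into a code string (S1420). The CODE_TABLE entries are invented to
# match the example string, not the patent's real table.

CODE_TABLE = {
    ("category", "shirt"): "Ba01",
    ("color", "light pink"): "Bb02",
    ("pattern", "floral"): "Bg01",
    ("silhouette", "slim"): "Ie01",
    ("neck shape", "V-neck"): "Ob01",
    ("sleeve length", "sleeveless"): "Zb01",
}

def to_code_string(characteristics):
    """Combine (criterion, characteristic) pairs into a code string."""
    return ", ".join(CODE_TABLE[pair] for pair in characteristics)

desc = [("category", "shirt"), ("color", "light pink"),
        ("pattern", "floral"), ("silhouette", "slim"),
        ("neck shape", "V-neck"), ("sleeve length", "sleeveless")]
assert to_code_string(desc) == "Ba01, Bb02, Bg01, Ie01, Ob01, Zb01"
```

Fixed-width codes keyed by criterion make comparison and storage of appearance description data cheap, which is the efficiency the step above refers to.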
• the server 10 generates and outputs the first output image data based on the first appearance description data (S1600).
• the first output image data may mean image data for a virtual article of a specific object generated based on the first appearance description data.
• when the first appearance description data is the code string "Ba01, Bb02, Bg01, Ie01, Ob01, Zb01", the server can generate and output image data for a virtual shirt having the individual appearance characteristics corresponding to each code value.
  • the first output image data may mean virtual image data including a plurality of individual appearance characteristics identical to the individual appearance characteristics of the first input image data.
• the method further includes: a step in which the server inputs second input image data into the appearance characteristic recognition model and calculates individual appearance characteristics for the plurality of appearance classification criteria (S1240); a step of generating second appearance description data by combining the plurality of individual appearance characteristics of the second input image data (S1440); and a step in which the server stores appearance description data including the first appearance description data or the second appearance description data as the user's interest information (S1800).
  • the server inputs the second input image data into the appearance characteristic recognition model, and calculates individual appearance characteristics for a plurality of appearance classification criteria (S1240).
• the second input image data may mean image data in which the first output image data has been modified by the user who input the first input image data.
• if the image of the virtual shirt that the server generated and output based on the first input image data (the first output image data) differs from the characteristics of the image the user intends to place, the user can modify the first output image data and input it as the second input image data to be placed.
• this includes cases in which the server's calculation of individual appearance characteristics for the first input image data is incorrect, or in which correction is required for individual appearance characteristics of the first output image data other than those calculated from the first input image data.
• for example, when the server incorrectly calculates the individual appearance characteristic U-neck for first input image data showing a V-neck shirt, so that the first output image data has a U-neck, or when the first output image data includes the aforementioned individual appearance characteristics of the top (shirt, light pink, floral pattern, slim, V-neck, sleeveless) but also includes crop (an individual appearance characteristic of the appearance classification criterion top length), which the user does not prefer, the user may modify the first output image data and input it as the second input image data.
• the first output image data may be modified by the user directly using a program or the server, or by inputting a keyword indicating the direction of correction, but the method is not limited thereto and includes various correction methods.
• if the user wants to change the U-neck to a V-neck, the user can directly modify the collar of the first output image data to a V-neck, or input the direction of correction by entering the keyword 'V-neck'.
• the server recommends to the user a plurality of image data in which other features are combined in addition to the features included in the first output image data, and the user can input modified second input image data by selecting the features to be added.
• the server can easily obtain the user's preference through the appearance description data of the added features.
• the server generates second appearance description data by combining the plurality of individual appearance characteristics of the second input image data (S1440).
• the second appearance description data of second input image data in which the user has corrected the top length of the first output image data from crop to medium can be generated as {shirt, light pink, floral pattern, slim, medium, V-neck, sleeveless} or as "Ba01, Bb02, Bg01, Bi03, Ie01, Ob01, Zb01" (when the code value corresponding to medium is Bi03).
• the method may further include a step in which the server transmits a request for approval of the output image data to the user.
• the server outputs the first output image data and transmits a request for approval of the first output image data to the user. If the user approves, the second input image data is not input; if the user does not approve, second input image data in which the first output image data has been modified may be input.
• the steps of calculating the individual appearance characteristics of the input image data, generating the appearance description data, and generating and outputting the output image data may be repeated one or more times.
  • the server may output second output image data based on the second input image data, or a user may input third input image data based on the second output image data.
• the method may further include a step (S1800) in which the server stores appearance description data including the first appearance description data or the second appearance description data as the user's interest information.
• the server can store appearance description information including the first appearance description data of the first input image data, the second appearance description data of the second input image data, or the difference between the first and second appearance description data (for example, appearance description data of the modified features), and through this can obtain the user's interest information. That is, the server can easily obtain the user's interest information by storing and analyzing not only the image data, including the first or second input image data entered by the user, but also the appearance description data calculated from it.
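The interest signal carried by the difference between the first and second appearance description data can be sketched as a per-criterion diff. This is an illustrative sketch; the function name and sample values are assumptions based on the crop-to-medium example above.

```python
# Sketch of step S1800's diff: comparing the first and second appearance
# description data reveals which characteristics the user corrected,
# which is a strong signal of the user's preferences.

def description_diff(first, second):
    """Return {criterion: (before, after)} for every changed characteristic."""
    return {criterion: (first.get(criterion), value)
            for criterion, value in second.items()
            if first.get(criterion) != value}

first = {"top length": "crop", "neck shape": "U-neck", "color": "light pink"}
second = {"top length": "medium", "neck shape": "V-neck", "color": "light pink"}

interest = description_diff(first, second)
assert interest == {"top length": ("crop", "medium"),
                    "neck shape": ("U-neck", "V-neck")}
```

Storing these (before, after) pairs over time lets the server learn, for example, that this user consistently prefers medium-length V-neck tops.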
• the server can generate and output image data, or store appearance description data, from the input image data including the first input image data or the second input image data, taking into account not only the individual appearance characteristics and appearance description data but also abstract characteristics (e.g., vintage).
• the method may further include a step in which the server displays image data including the first output image data or the second input image data in a virtual space. That is, the server may display the image data the user wants to place in the user's virtual space according to the user's request.
• the user can display an image of a top in the style he or she desires and decorate his or her virtual space according to his or her taste, and the server can easily obtain the user's interest information by calculating individual appearance characteristics from the input image data entered by the user, creating appearance description data, and supplementing it through the user's corrections; the acquired interest information can be used in various ways, such as being provided to the clothing market.
• when the input image data is moving image data, the calculation of individual appearance characteristics may be performed for each frame of the data, and the appearance description data may be generated by sequentially listing the plurality of individual appearance characteristics of each frame.
  • An apparatus for obtaining user interest information based on input image data includes one or more computers, and performs the aforementioned method for obtaining user interest information based on input image data.
  • the method for obtaining user interest information based on input image data according to the present invention described above may be implemented as a program (or application) and stored in a medium to be executed by being combined with a computer that is hardware.
• design data refers to a two-dimensional or three-dimensional static or dynamic image including a specific object, like the 'image data' defined above. That is, 'design data' may be static image data consisting of one frame, or dynamic image data (i.e., moving image data) consisting of a plurality of consecutive frames.
• the term 'design data' is used for convenience of explanation and to distinguish it from the term 'image data'.
  • FIG. 7 is an exemplary diagram for describing a standard model according to an embodiment of the present invention.
  • the server 10 may include a customization module 300 that provides a customizing interface so that a user can customize an object design.
• the customizing interface may be a platform accessible to the user through a web page or a dedicated app application.
  • the server 10 may extract and store in advance the appearance classification criteria and individual appearance characteristics of various objects through the appearance characteristic recognition model 100 as described above.
• the server 10 may also extract, in real time through the appearance characteristic recognition model 100, the appearance classification criteria and individual appearance characteristics corresponding to a new object selected by the user.
  • the customizing interface may provide a user with functions such as searching for an object, selecting an object, creating and changing design data of the selected object, and purchasing an object.
• the customizing interface may include text indicating the object name, text (or menus) corresponding to the plurality of appearance classification criteria of the object, a plurality of menus matching the plurality of individual appearance characteristics of the object, and design data of the object.
  • the server 10 may display design data corresponding to an object in real time based on a user input detected through a customizing interface, and may change design data in real time according to a user input.
  • the server 10 may generate and store the standard model 310 in advance through the customization module 300 as shown in FIG. 7.
• the standard model 310 refers to a standard format in which, when the object is clothing, fixed joint lines and length reference lines of the clothing are preset based on the standard human body model 11 so that customization of the clothing design can be processed efficiently. That is, for example, rather than customizing the length of a bottom to an arbitrary value, the user may select one of the preset lengths provided through the standard model 310.
  • the standard model 310 may include a standard human body model 11, a plurality of fixed joint lines indicated by a solid line, and a plurality of length reference lines indicated by a dotted line, as shown in FIG. 7.
• the fixed joint lines are boundary areas where the respective components of the clothing (e.g., the body part and the sleeve) are joined, and may maintain constant positions regardless of the clothing.
• the length reference lines each represent one of the lengths of the clothing and may change depending on the clothing. That is, differently from FIG. 7, the positions of the length reference lines may change. Details of creating an object design based on the standard model will be described later with reference to FIG. 8.
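The distinction between fixed joint lines and length reference lines can be sketched as a small data structure. This is a minimal illustrative sketch; all coordinates, length names, and the class layout are assumptions, not the patent's standard model 310.

```python
# Sketch of the standard model (310): fixed joint lines keep constant
# positions on the standard human body model, while length reference
# lines expose a preset set of lengths the user picks from instead of
# entering an arbitrary value. Values are illustrative fractions of
# body height, invented for the example.

from dataclasses import dataclass, field

@dataclass
class StandardModel:
    # fixed joint lines: constant positions, unchanged across garments
    fixed_joint_lines: dict = field(default_factory=lambda: {
        "shoulder": 0.20, "armhole": 0.28,
    })
    # length reference lines: preset choices, variable per garment
    length_reference_lines: dict = field(default_factory=lambda: {
        "top length": {"crop": 0.45, "medium": 0.55, "long": 0.65},
    })

    def pick_length(self, criterion, choice):
        """The user selects one of the preset lengths, not a free value."""
        return self.length_reference_lines[criterion][choice]

model = StandardModel()
assert model.pick_length("top length", "medium") == 0.55
assert model.fixed_joint_lines["shoulder"] == 0.20
```

Generating design data then reduces to looking up the joint line or length line that corresponds to each individual appearance characteristic the user selected.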
  • the server 10 may additionally change the size of the clothing through the user's input. That is, the actual body size of the user may be additionally reflected in the generated clothing design.
  • the server 10 may manage information by registering a user as a member through a separate platform.
  • the user's member information may include a name, an address, a contact information, an object design creation and change history, an object purchase history, and the like.
  • FIG. 7 is an exemplary diagram for describing a standard model according to an embodiment of the present invention.
• FIG. 8 is a flowchart illustrating a method of customizing an object design according to an embodiment of the present invention.
• FIGS. 9 to 21 are exemplary diagrams for explaining a method of customizing an object design according to an embodiment of the present invention.
  • the operations of FIG. 8 may be performed by the server 10 of FIGS. 1 and 2. Meanwhile, for convenience of explanation, a case where the object is clothing will be described.
  • the server 10 may determine an object based on a first user input in operation 41.
• the object may be a top (e.g., shirt & blouse, jacket, coat), bottoms (e.g., pants, skirt, leggings & stockings), or a dress (one-piece).
  • the server 10 may provide a separate search interface so that the user can search for a desired object, and when the user selects a specific object through search, it may provide a customizing interface.
  • the object selection menu may be connected to a customizing interface through a link.
  • the server 10 may calculate individual appearance characteristics for a plurality of appearance classification criteria by inputting image data corresponding to the object into an appearance characteristic recognition model in operation 42.
  • the appearance classification criterion is a specific classification criterion for describing the appearance of a specific object, and may include a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object.
  • the appearance classification standard may include a specialized external classification standard and a general-purpose external classification standard that are different for each object.
• the specialized appearance classification criteria may be silhouette, collar & neckline, top length, opening, shoulder, sleeve, sleeve length, and sleeve cuff, and the general-purpose appearance classification criteria, which can be applied to tops, bottoms, and dresses alike, may be texture, pattern, color, and detail.
• the plurality of appearance classification criteria of the top may include at least one of silhouette, collar & neckline, top length, opening, shoulder, sleeve, sleeve length, sleeve cuff, texture, pattern, color, and detail.
  • the silhouette may be the overall appearance of the clothing, and the individual appearance characteristics of the silhouette may be slim, regular and loose.
  • the collar & neckline may be a neckline of the clothes, and the individual appearance characteristics of the collar & neckline may include at least one of a round neckline, a V neckline, a plunging V neckline, a surplice, and a V neck camisole.
• the shoulder may be the shoulder portion of the clothes, and the individual appearance characteristics of the shoulder may include at least one of plain shoulder, raglan shoulder, halter, drop shoulder, dolman, off shoulder, strapless, and one shoulder.
• the individual appearance characteristics of the sleeve length may include extra-short sleeves, short sleeves, medium sleeves, and long sleeves.
  • Individual appearance characteristics of the top length may include crop, short, medium, long and maxi.
  • openings, sleeves, sleeve cuffs, textures, patterns, colors and details may each include known individual appearance characteristics.
  • the plurality of appearance classification criteria for the bottom may include at least one of a silhouette, a bottom length, a waist position, a texture, a pattern, a color, and a detail.
• the silhouette may be the overall appearance of the clothing, and the individual appearance characteristics of the silhouette may be straight, skinny, bell-bottom, baggy, and wide in the case of pants, and h-line, a-line, mermaid, flare, and balloon in the case of a skirt.
  • Individual appearance characteristics of the bottom length may include extra-short, short, midi and long.
  • Individual appearance characteristics of the waist position may include high waist, normal waist and low waist.
• texture, pattern, color, and detail may each include known individual appearance characteristics.
• the plurality of appearance classification criteria of a dress may be selected as a total of 14 types, combining the criteria of the top excluding top length with 3 criteria selected only from the bottom. That is, the plurality of appearance classification criteria of a dress may include silhouette top, silhouette bottom, collar & neckline, shoulder, sleeve, sleeve cuff, sleeve length, opening, bottom length, waist position, texture, pattern, color, and detail. The individual appearance characteristics for each of these criteria may include the characteristics described above or previously known ones.
  • operation 42 may be performed before operation 41. That is, a plurality of appearance classification criteria and individual appearance characteristics corresponding to the object may be calculated and stored in advance.
• the server 10 may provide, in operation 43, a customizing interface 500 based on the plurality of appearance classification criteria corresponding to the object and the plurality of individual appearance characteristics corresponding to each of those criteria.
  • the customizing interface 500 may include a plurality of menus 501 and design data 505 that match a plurality of individual appearance characteristics corresponding to an object.
• the customizing interface 500 may include a slim menu, a regular menu, and a loose menu 502 corresponding to the plurality of individual appearance characteristics of the silhouette, an enumerated menu 503 corresponding to the plurality of individual appearance characteristics of the collar & neckline, and a crop menu, a short menu, a medium menu, a long menu, and a maxi menu 504 corresponding to the plurality of individual appearance characteristics of the top length.
• when the slim menu, the crop menu, and the V-neckline menu are selected (displayed in dark shades), design data 505 corresponding to a top having a crop length, a slim silhouette, and a V-neck shape, as shown in FIG. 9, may be displayed.
• the customizing interface 500 may further include an enumerated menu 506 corresponding to the plurality of individual appearance characteristics of the shoulder, a menu 507 corresponding to those of the sleeve length, an enumerated menu 508 corresponding to those of the sleeve cuff, an enumerated menu 509 corresponding to those of the texture, an enumerated menu 511 corresponding to those of the pattern, an enumerated menu 512 corresponding to those of the color, and an enumerated menu 513 corresponding to those of the detail. Meanwhile, the enumerated menus 509, 511, 512, and 513 corresponding to texture, pattern, color, and detail, respectively, may be linked to separate detail pages, from which the user may select various textures, patterns, colors, and details.
• the customizing interface may also be configured differently; for example, the configuration of the menus may be changed so that the user can easily select the plurality of individual appearance characteristics set based on the standard model 310.
• the server 10 may generate design data of the object in operation 44, based on a second user input detected through the customizing interface 500 and the preset standard model 310.
  • the second user input may be an input for selecting at least one menu from among a plurality of menus.
• the preset standard model 310 may include at least one of the standard human body model 11, fixed joint lines (solid lines), and length reference lines (dotted lines) for indicating a plurality of individual appearance characteristics, as shown in FIG. 7.
  • the server 10 may generate design data based on at least one of a fixed joint line and a length reference line corresponding to at least one menu selected according to a second user input.
  • the standard model 310 may be set in advance, for each object, with fixed joint lines and length reference lines corresponding to the plurality of individual appearance characteristics, so that it can be used as a standard format for generating design data of the object. Therefore, when the user selects any one individual appearance characteristic of a specific object, the server 10 can generate design data corresponding to the object by using the corresponding fixed joint line or length reference line of the standard model 310.
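The lookup just described can be pictured as a simple mapping from each (appearance classification criterion, individual appearance characteristic) pair to the reference line used to generate design data. The sketch below is illustrative only; the data format, names, and structure are assumptions, not part of the disclosure:

```python
# Hypothetical sketch of the standard model 310 as a lookup table.
# Each (criterion, characteristic) pair maps to the fixed joint line or
# length reference line used to generate design data. Reference-line IDs
# follow the drawing numerals; everything else is illustrative.
STANDARD_MODEL = {
    ("silhouette", "loose"):   {"type": "length_reference_line", "id": 91},
    ("silhouette", "regular"): {"type": "length_reference_line", "id": 92},
    ("silhouette", "slim"):    {"type": "length_reference_line", "id": 93},
    ("top_length", "crop"):    {"type": "length_reference_line", "id": 80},
    ("collar_neckline", "turtleneck"): {"type": "fixed_joint_line", "id": 51},
}

def lines_for_selection(selection):
    """Return the reference lines for the user's menu selections."""
    return [STANDARD_MODEL[(criterion, characteristic)]
            for criterion, characteristic in selection.items()]

lines = lines_for_selection({"silhouette": "slim", "top_length": "crop"})
```

With this shape, generating design data for a selection reduces to collecting the matching reference lines and drawing the garment panels between them.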
  • the design data of the top can be completed when individual appearance characteristics are determined from a plurality of appearance classification criteria related to the top.
  • the plurality of appearance classification criteria related to the top may include silhouette, collar & neckline, top length, opening, shoulder, sleeve, sleeve length, sleeve cuff, texture, pattern, color, and detail.
  • the individual appearance characteristics of each of these appearance classification criteria may be determined by the standard model 310 and user input. For convenience of explanation, the top is divided into a body part, a sleeve part, and others.
  • the appearance classification criteria related to the body part are silhouette, collar & neckline, top length, opening, and shoulder; those related to the sleeve part are sleeve, sleeve length, and sleeve cuff; and the others are texture, pattern, color, and detail.
  • the body part of the top may mean the rest of the top excluding the sleeve, and may include an upper end and a lower end.
  • the upper end may be determined by the length reference line of the silhouette, the fixed joint line of the collar & neckline, and the fixed joint line of the shoulder, while the lower end may be determined by the length reference line of the silhouette, the fixed joint line of the collar & neckline, and the length reference line of the top length.
  • the length reference lines related to the plurality of individual appearance characteristics of the silhouette may include a first silhouette length reference line 91, a second silhouette length reference line 92, and a third silhouette length reference line 93.
  • the first silhouette length reference line 91, the second silhouette length reference line 92, and the third silhouette length reference line 93 may correspond to loose, regular, and slim, respectively.
  • the fixed joint lines related to the plurality of individual appearance characteristics of the collar & neckline may include a first shoulder fixed joint line 50, a first collar fixed joint line 51, a second collar fixed joint line 52, a third collar fixed joint line 53, and a fourth collar fixed joint line 54.
  • the first collar fixed joint line 51 and the second collar fixed joint line 52 may be the fixed joint lines used when the collar & neckline can be expressed above the chest line, and may be a collar top line and a collar top joint line, respectively.
  • the plurality of individual appearance characteristics that can be expressed through the first collar fixed joint line 51 and the second collar fixed joint line 52 may include Funnel, Turtleneck, Boat Neckline, Stand Collar, Mandarin Collar, Regular Straight Point Collar, and the like.
  • the third collar fixed joint line 53 may be the fixed joint line to which the lower end of the upper portion is connected when the collar & neckline can be expressed above the chest line, whereas the fourth collar fixed joint line 54 may be the fixed joint line to which the lower end of the body part is connected when the collar & neckline is a type that descends below the chest line. That is, either the third collar fixed joint line 53 or the fourth collar fixed joint line 54 is used as the fixed joint line, depending on the type of collar & neckline.
  • the plurality of individual appearance characteristics that can be expressed through the third collar fixed joint line 53 and the fourth collar fixed joint line 54 may include Tailored Jacket Collar, Convertible Collar, Sailor Collar, Lapel, Shawl Collar, Scoop Neckline, Surplice, and the like.
  • the design data of the collar & neckline may be generated as shown in Fig. 11(c); the horizontal width of the collar & neckline may change at the same rate as the body panel, while the vertical width may remain unchanged within a certain range. Of course, if there is a large difference in size between the standard human body model 11 and the user, the vertical width may also change.
  • the fixed joint lines connecting the lower end and the upper end of the upper portion may further include the first top length reference line 80.
  • the length reference lines related to the plurality of individual appearance characteristics of the top length may include the first top length reference line 80 corresponding to crop, a second top length reference line 81 corresponding to short, a third top length reference line 82 corresponding to medium, a fourth top length reference line 83 corresponding to long, and a fifth top length reference line 84 corresponding to maxi.
  • the opening may be a hole in the top through which the user's body can pass, and may be determined immediately when design data of the upper portion described above is determined.
  • the fixed joint lines related to the plurality of individual appearance characteristics of the shoulder may include a first shoulder fixed joint line 50 corresponding to a plain shoulder, a second shoulder fixed joint line corresponding to a raglan shoulder, and the like.
  • the body portion of the top may be generated as shown in FIGS. 14 to 18 by the length reference line of the silhouette, the fixed joint line of the collar & neckline, the length reference line of the top length, and the fixed joint lines of the opening and shoulder.
  • (a1) of FIG. 14 may be the upper end of the top to which the design data of FIG. 11(c) can be joined, determined according to the third collar fixed joint line 53 and the third silhouette length reference line 93 corresponding to the slim silhouette, and (a2) of FIG. 14 may be the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length. (a1) and (a2) of FIG. 14 may be combined to form the body of the top. If the user does not select a sleeve, the body portion of the top may become the design data of the top.
  • (b1) of FIG. 14 may be the upper end of the top, to be joined with a collar & neckline that descends below the chest line, determined according to the fourth collar fixed joint line 54 and the third silhouette length reference line 93 corresponding to the slim silhouette; (b2) of FIG. 14 may be the surplice collar & neckline determined according to the fourth collar fixed joint line 54 and the third silhouette length reference line 93 corresponding to the slim silhouette; and (b3) of FIG. 14 may be the lower end of the top determined according to the fourth collar fixed joint line 54 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length. (b1) and (b3) of FIG. 14 may be combined, or (b2) and (b3) may be combined, to become the body of the top. If the user does not select a sleeve, the body portion of the top may become the design data of the top.
  • (c) of FIG. 14 may be the body part of the top determined according to the plunging V neckline of the collar & neckline, the first top length reference line 80 corresponding to the crop of the top length, and the third silhouette length reference line 93 of the silhouette. If the user does not select a sleeve, the body portion of the top may become the design data of the top.
  • (a) of FIG. 15 may be the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length; (b) of FIG. 15 may be the upper end of the top determined according to the V neckline of the collar & neckline, the third collar fixed joint line 53, and the third silhouette length reference line 93 corresponding to the slim silhouette; and (c) of FIG. 15 may be the body of the top in which (a) and (b) are combined. If the user does not select a sleeve, the body portion of the top may become the design data of the top.
  • (a) of FIG. 16 may be the body part of the top determined according to the plunging V neckline of the collar & neckline, the first top length reference line 80 corresponding to the crop of the top length, and the third silhouette length reference line 93 of the silhouette, and (b1) and (b2) of FIG. 16 may be specific collar designs for the collar & neckline. (a) and (b2) may be combined to determine the body part (c1) of the top with a collar, and (a) and (b1) may be combined to determine the body part (c2) of the top with a collar. If the user does not select a sleeve, the body portion of the top may become the design data of the top.
  • the lower end of the body portion of the top may mainly be used in the form of (a) or (b). The (a) form may be the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length, and the (b) form may be the lower end of the top determined according to the fourth collar fixed joint line 54 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length.
  • (a) of FIG. 18 may be a top in which the upper end, to which the design data of FIG. 11(c) can be joined, determined according to the third collar fixed joint line 53 and the first silhouette length reference line 91 corresponding to the loose silhouette, is combined with the lower end of the top determined according to the first top length reference line 80.
  • (b) may be a top in which the upper end of (a) is combined with the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the second top length reference line 81 corresponding to the short of the top length.
  • (c) may be a top in which the upper end of (a) is combined with the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the third top length reference line 82 corresponding to the medium of the top length.
  • (d) may be a top in which the upper end of (a) is combined with the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the fourth top length reference line 83 corresponding to the long of the top length.
  • (e) may be a top in which the upper end of (a) is combined with the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the fifth top length reference line 84 corresponding to the maxi of the top length. That is, when the loose, regular, and slim silhouettes are combined with the crop, short, medium, long, and maxi top lengths, the body portion of the top may have a total of 15 outlines. Therefore, the user can easily create various design data.
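The 15 base outlines noted above follow directly from combining the three silhouettes with the five top lengths. A minimal sketch (labels assumed for illustration; the disclosure does not specify a data representation):

```python
from itertools import product

# The body part of a top is determined by one silhouette length reference
# line (91-93) and one top-length reference line (80-84), giving a
# 3 x 5 = 15 grid of base outlines.
SILHOUETTES = ["loose", "regular", "slim"]
TOP_LENGTHS = ["crop", "short", "medium", "long", "maxi"]

outlines = [f"{silhouette}/{length}"
            for silhouette, length in product(SILHOUETTES, TOP_LENGTHS)]
print(len(outlines))  # 15
```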
  • for the sleeve part, design data may be determined based on the appearance classification criteria including sleeve, sleeve length, and sleeve cuff, and the plurality of individual appearance characteristics corresponding thereto.
  • a plurality of individual appearance characteristics corresponding to the sleeve may be the presence or absence of the sleeve.
  • the length reference lines related to the plurality of individual appearance characteristics of the sleeve length may include a first sleeve length reference line 56 corresponding to the extra-short sleeve, a second sleeve length reference line 57 corresponding to the short sleeve, a third sleeve length reference line 58 corresponding to the medium sleeve, and a fourth sleeve length reference line 59 corresponding to the long sleeve.
  • the sleeve length may be a length including the sleeve cuff length.
  • when the shoulder is Dolman, the second sleeve length reference line 57 corresponding to the short sleeve cannot be selected, and if a sleeve cuff is not separately selected, Shirt Cuffs as shown in FIG. 19B may be automatically set.
  • if the sleeve length is not selected, the top may become sleeveless, without a sleeve.
  • the sleeve cuff may be sized to cover the end of the sleeve, and the length of the sleeve may also vary according to the user's body size; the size of the sleeve may change at the same ratio as the body part of the top. Also, the size of the sleeve cuff may vary according to the user's wrist circumference. In addition, the length of the end of the sleeve and part of the width of the sleeve cuff may be adjustable.
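The sleeve scaling described above can be sketched as a pair of proportional updates: the sleeve follows the body-part scaling ratio, while the cuff follows the wrist circumference. The function name and baseline values below are assumptions for illustration, not values from the disclosure:

```python
def scale_sleeve(sleeve_length, body_ratio, wrist_cm,
                 base_wrist_cm=16.0, base_cuff_width=8.0):
    """Illustrative sketch: the sleeve changes at the same ratio as the
    body part of the top, while the cuff width scales with the user's
    wrist circumference. Baseline values are hypothetical."""
    scaled_length = sleeve_length * body_ratio          # same ratio as body part
    cuff_width = base_cuff_width * (wrist_cm / base_wrist_cm)
    return scaled_length, cuff_width
```

For example, a 10% larger body would lengthen a 60 cm sleeve to 66 cm, while an 18 cm wrist (vs. a 16 cm baseline) would widen the cuff proportionally.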
  • for a bottom, design data may be determined based on the appearance classification criteria including silhouette, bottom length, and waist position, and the plurality of individual appearance characteristics corresponding thereto.
  • the fixed joint lines related to the plurality of individual appearance characteristics of the waist position may include a first waist fixed joint line 70 corresponding to the high waist of a skirt, a second waist fixed joint line 71 corresponding to the high waist of pants, a third waist fixed joint line 72 corresponding to the normal waist of pants, a fourth waist fixed joint line 73 corresponding to the normal waist of a skirt, a fifth waist fixed joint line 74 corresponding to a low waist, and a sixth waist fixed joint line 75 corresponding to the low waist of pants.
  • the length reference lines related to the plurality of individual appearance characteristics of the bottom length may include a first bottom length reference line 76 corresponding to extra-short, a second bottom length reference line 77 corresponding to short, and the like.
  • for example, the skirt design data of (c) may be generated according to the fourth waist fixed joint line 73 corresponding to the normal waist of the skirt and the second bottom length reference line 77 corresponding to short.
  • the waist position may fit exactly onto the standard human body model 11, and in the case of a dress, the end line of the top and the waist position of the bottom must match accurately. As the user's body size changes, the size of the bottom may change in the same way.
  • design data of a one-piece dress may be generated by applying the same method by which the top and bottom are determined.
  • since texture, pattern, and color are universal appearance classification criteria common to tops, bottoms, and dresses, the various types of textures, patterns, and colors applied to clothing may be their individual appearance characteristics, and may be applied to the design data of a top, bottom, or dress (e.g., cotton, stripe pattern, red) according to the user's selection.
  • the plurality of individual appearance characteristics of detail, which is a universal appearance classification criterion common to tops, bottoms, and dresses, may be various types of clothing accessories.
  • the plurality of individual appearance characteristics of the detail may include Pleats, Shirring, Gather, Trimming, Fur, Bow, Patch Pocket, Cubic, Quilting, Ruffle, Frill, Flounce, Banding, and Draw String. That is, (a) Pocket, (b) Bow, (c) String, (d) Set in Pocket and (e) Zipper of FIG. 21 may be added to the design data of a top, bottom, or one piece.
  • the change of the standard model 310 according to the user's body size mentioned above may be performed automatically when the user's body size is input; accordingly, the appearance of the standard human body model 11 of the standard model 310 and the positions/lengths of the fixed joint lines and length reference lines may be varied.
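The automatic rescaling described above can be sketched as a proportional update of every reference-line position. Using height as the scaling ratio, the function name, and the line labels are all assumptions for illustration; the disclosure does not specify the scaling rule:

```python
def rescale_model(reference_lines, base_height_cm, user_height_cm):
    """Hypothetical sketch: when the user's body size is entered, each
    fixed joint line / length reference line position in the standard
    model is rescaled by the ratio of the user's size to the standard
    human body model's size."""
    ratio = user_height_cm / base_height_cm
    return {name: position * ratio
            for name, position in reference_lines.items()}

# Illustrative positions (cm from a reference point), labels assumed.
scaled = rescale_model({"top_length_80": 40.0, "waist_70": 100.0},
                       base_height_cm=170.0, user_height_cm=187.0)
```

A 10% taller user thus shifts every line position by the same 10%, which keeps the joint lines and length reference lines mutually consistent.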
  • the server 10 may display the generated design data in the customizing interface in operation 45. Through this, the user can check the customized design data in real time and easily purchase or change it.
  • the server 10 may change the design data based on the third user input and the standard model 310 detected by the customizing interface 500. That is, the user can freely change the generated design data until it is saved or terminated.
  • FIG. 22 is a flowchart illustrating a method of providing a recommended object according to an embodiment of the present invention.
  • FIG. 23 is an exemplary view illustrating a method of providing a recommended object according to an embodiment of the present invention. The operations of FIG. 22 may be performed by the server 10 of FIGS. 1 and 2.
  • the server 10 may generate design data in operation 181.
  • the design data generation may be the same as the operation performed in FIG. 8.
  • design data 181 may be generated as shown in FIG. 23.
  • operation 181 may be omitted, and operation 182 may be directly performed based on the object.
  • in operation 182, based on the matching algorithm, the server 10 may extract a recommended object corresponding to the combination of appearance classification criteria or the abstract characteristic matched with the object or the generated design data.
  • a recommended object may be extracted by matching abstract characteristics based on an object selected by a user, or a recommended object may be extracted by matching abstract characteristics based on design data generated according to a user's input.
  • three tops arranged in the direction of the arrow may be recommended objects
  • three tops arranged in the direction of the arrow may be design data of the top changed according to the recommended object.
  • the server 10 may extract the first recommended object 182 when the abstract characteristic corresponding to the object or the generated design data is “neat”, extract the second recommended object 183 when the corresponding abstract characteristic is “individual”, and extract the third recommended object 184 when the corresponding abstract characteristic is “formal”.
  • in operation 183, the server 10 may provide the user, through the customizing interface, with design data corresponding to the extracted recommended object.
  • the server 10 may provide the user with design data 185 to which a color is added based on the first recommended object 182, design data 186 to which text is added based on the second recommended object 183, and design data 187 to which pockets are added based on the third recommended object 184.
  • the server 10 may provide all three changed design data or may provide one or more of them.
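The extraction in operation 182 can be pictured as filtering a catalog of objects tagged with abstract characteristics. The catalog entries, tags, and structure below are a hypothetical sketch, not the disclosed matching algorithm:

```python
# Hypothetical sketch of recommended-object extraction: each catalog
# entry carries an abstract-characteristic tag ("neat", "individual",
# "formal" per the description) and an illustrative change to apply.
CATALOG = [
    {"id": 182, "abstract": "neat",       "change": "add color"},
    {"id": 183, "abstract": "individual", "change": "add text"},
    {"id": 184, "abstract": "formal",     "change": "add pockets"},
]

def recommend(abstract_characteristic):
    """Return catalog entries whose tag matches the abstract
    characteristic of the object or generated design data."""
    return [item for item in CATALOG
            if item["abstract"] == abstract_characteristic]

picks = recommend("formal")
```

Operation 183 would then turn each pick into changed design data (e.g., adding pockets) and present one or more of them to the user.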
  • in operation 184, the server 10 may change the design data of the recommended object based on the fourth user input detected in the customizing interface and the preset standard model, and may display the changed design data in operation 185.
  • the user may further customize the changed design data provided through the server 10.
  • an object selected by the user, or an object suitable for the user, can be recommended by grasping the sensibility embodied in the generated design data, and the user can easily change the recommended object through the customizing interface.
  • An object design customizing apparatus includes one or more computers and performs the aforementioned object design customization method.
  • the object design customization method of the present invention described above may be implemented as a program (or application) and stored in a medium to be executed by being combined with a computer that is hardware.
  • the program may reside on a computer-readable recording medium such as RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other type of computer-readable recording medium well known in the art to which the present invention pertains.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a method for acquiring user interest information based on input image data and to a method for customizing the design of an object. A method for acquiring user interest information based on input image data according to an embodiment of the present invention comprises: a step in which a server inputs first input image data into an appearance characteristic recognition model and calculates individual appearance characteristics for a plurality of appearance classification criteria; a step in which the server generates first appearance description data by combining a plurality of the individual appearance characteristics for the first input image data; and a step in which the server generates and outputs first output image data on the basis of the first appearance description data, wherein the first input image data is image data input by a specific user, and the appearance classification criteria are specific classification criteria for describing the appearance of a specific object and may comprise a plurality of individual appearance characteristics for expressing various appearance characteristics under the same classification criterion of the object.
PCT/KR2020/007445 2019-06-10 2020-06-09 Method for acquiring user interest information based on input image data, and object design customizing method WO2020251238A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR10-2019-0067795 2019-06-10
KR20190067795 2019-06-10
KR10-2020-0009600 2020-01-28
KR1020200009600A KR102115573B1 (ko) 2019-06-10 2020-01-28 Method, apparatus, and program for acquiring user interest information based on input image data
KR1020200016533A KR102115574B1 (ko) 2019-06-10 2020-02-11 Object design customizing method, apparatus, and program
KR10-2020-0016533 2020-02-11

Publications (1)

Publication Number Publication Date
WO2020251238A1 true WO2020251238A1 (fr) 2020-12-17

Family

ID=70910841

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/KR2020/007445 WO2020251238A1 (fr) 2019-06-10 2020-06-09 Procédé d'obtention d'informations utilisateur d'intérêt sur la base de données d'image d'entrée et procédé de personnalisation de conception d'objet
PCT/KR2020/007426 WO2020251233A1 (fr) 2019-06-10 2020-06-09 Procédé, appareil et programme d'obtention de caractéristiques abstraites de données d'image

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/007426 WO2020251233A1 (fr) 2019-06-10 2020-06-09 Procédé, appareil et programme d'obtention de caractéristiques abstraites de données d'image

Country Status (2)

Country Link
KR (9) KR20200141373A (fr)
WO (2) WO2020251238A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807708A (zh) * 2021-09-22 2021-12-17 深圳市微琪思服饰有限公司 Distribution-based flexible garment production and manufacturing platform system

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200141373A (ko) Method, apparatus, and program for constructing a dataset for training an appearance recognition model
KR102387907B1 (ko) * 2020-06-26 2022-04-18 주식회사 이스트엔드 Method for customizing plain clothing designs with participation of creators and prosumers, and system therefor
KR102524049B1 (ko) * 2021-02-08 2023-05-24 (주)사맛디 Apparatus and method for recommending user coordination based on object characteristic information
KR102556642B1 (ko) 2021-02-10 2023-07-18 한국기술교육대학교 산학협력단 Data generation method for machine learning training
CN113360477A (zh) * 2021-06-21 2021-09-07 四川大学 Classification method for mass-customized women's leather shoes

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120078837A (ko) * 2011-01-03 2012-07-11 김건민 Product sales and management system using a coordination system
KR20150115475A (ko) * 2014-04-04 2015-10-14 홍익대학교세종캠퍼스산학협력단 Image conversion tool system for a 3D printing robot and driving method thereof
KR20180014495A (ko) * 2016-08-01 2018-02-09 삼성에스디에스 주식회사 Apparatus and method for object recognition
KR20180048536A (ко) * 2018-04-30 2018-05-10 오드컨셉 주식회사 Method, apparatus, and computer program for providing image search information
KR20180074565A (ko) * 2016-12-23 2018-07-03 삼성전자주식회사 Electronic device and operation method thereof
KR20190029567A (ko) * 2016-02-17 2019-03-20 옴니어스 주식회사 Product recommendation method using style features
KR102115573B1 (ко) * 2019-06-10 2020-05-26 (주)사맛디 Method, apparatus, and program for acquiring user interest information based on input image data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1183461A (ja) * 1997-09-09 1999-03-26 Mitsubishi Electric Corp Article type recognition system
KR101157744B1 (ко) * 2010-05-06 2012-06-25 윤진호 Method for recommending products based on taste factors, method for displaying recommended products, and product recommendation system using the same
WO2014031989A1 (fr) 2012-08-23 2014-02-27 Interdigital Patent Holdings, Inc. Operating with multiple schedulers in a wireless system
CN108268539A (zh) * 2016-12-31 2018-07-10 上海交通大学 Video matching system based on text analysis
KR20180133200A (ко) 2018-04-24 2018-12-13 김지우 Clothing management application program recorded on a recording medium, and clothing management system and method using the same

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120078837A (ko) * 2011-01-03 2012-07-11 김건민 Product sales and management system using a coordination system
KR20150115475A (ко) * 2014-04-04 2015-10-14 홍익대학교세종캠퍼스산학협력단 Image conversion tool system for a 3D printing robot and driving method thereof
KR20190029567A (ко) * 2016-02-17 2019-03-20 옴니어스 주식회사 Product recommendation method using style features
KR20180014495A (ко) * 2016-08-01 2018-02-09 삼성에스디에스 주식회사 Apparatus and method for object recognition
KR20180074565A (ко) * 2016-12-23 2018-07-03 삼성전자주식회사 Electronic device and operation method thereof
KR20180048536A (ко) * 2018-04-30 2018-05-10 오드컨셉 주식회사 Method, apparatus, and computer program for providing image search information
KR102115573B1 (ко) * 2019-06-10 2020-05-26 (주)사맛디 Method, apparatus, and program for acquiring user interest information based on input image data
KR102115574B1 (ко) * 2019-06-10 2020-05-27 (주)사맛디 Object design customizing method, apparatus, and program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807708A (zh) * 2021-09-22 2021-12-17 深圳市微琪思服饰有限公司 Distribution-based flexible garment production and manufacturing platform system
CN113807708B (zh) * 2021-09-22 2024-03-01 深圳市微琪思服饰有限公司 Distribution-based flexible garment production and manufacturing platform system

Also Published As

Publication number Publication date
KR102115573B1 (ko) 2020-05-26
KR102366580B1 (ko) 2022-02-23
KR20200141373A (ko) 2020-12-18
WO2020251233A1 (fr) 2020-12-17
KR20200141929A (ko) 2020-12-21
KR20210002410A (ko) 2021-01-08
KR20200141388A (ko) 2020-12-18
KR20200141375A (ko) 2020-12-18
KR102119253B1 (ko) 2020-06-04
KR102115574B1 (ko) 2020-05-27
KR102355702B1 (ko) 2022-01-26
KR102227896B1 (ko) 2021-03-15
KR20200141384A (ko) 2020-12-18

Similar Documents

Publication Publication Date Title
WO2020251238A1 (fr) Method for acquiring user interest information based on input image data, and object design customizing method
WO2017171418A1 (fr) Image composition method and electronic device therefor
WO2020032597A1 (fr) Apparatus and method for providing an item according to an avatar attribute
WO2020222623A9 (fr) System and method for automatically building content for strategic sales
WO2019156522A1 (fr) Device and method for creating image/text-based drawings
WO2020153796A1 (fr) Electronic device and operating method therefor
WO2020085786A1 (fr) Style recommendation method, device, and computer program
CN106156297A Clothing recommendation method and device
WO2018225939A1 (fr) Method, device, and computer program for providing image-based advertisements
WO2020171567A1 (fr) Method for recognizing an object and electronic device supporting the same
WO2019088358A1 (fr) Apparatus and method for providing personalized jewelry item information
WO2020032567A1 (fr) Electronic device for providing information on an item on the basis of the item category
JP2007280351A (ja) Information providing system, information providing method, and the like
WO2018182068A1 (fr) Method and apparatus for providing recommendation information for an item
WO2018226022A1 (fr) Fashion item recommendation server, and fashion item recommendation method using the same
WO2022039450A1 (fr) Method, apparatus, and system for providing a virtual fitting service
WO2021071240A1 (fr) Method, apparatus, and computer program for recommending a fashion product
WO2019117463A1 (fr) Wearable glasses for augmented-reality clothing shopping, and augmented-reality clothing shopping method
WO2023008617A1 (fr) System for automatically preparing an advertising banner for an online shopping mall, and method therefor
WO2020184855A1 (fr) Electronic device for providing a response method, and operating method therefor
WO2020251236A1 (fr) Method, device, and program for retrieving image data by using a deep-learning algorithm
WO2020060012A1 (fr) Computer-implemented platform for providing content to an augmented-reality device, and method therefor
WO2022025340A1 (fr) System for building a virtual closet and creating a coordinated outfit combination, and method therefor
WO2021215758A1 (fr) Advertising method for a recommended item, apparatus, and computer program
WO2021153964A1 (fr) Method, apparatus, and system for recommending a fashion product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20822567

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 260422)

122 Ep: pct application non-entry in european phase

Ref document number: 20822567

Country of ref document: EP

Kind code of ref document: A1