WO2020251238A1 - Method for obtaining user interest information on basis of input image data and method for customizing design of object - Google Patents

Method for obtaining user interest information on basis of input image data and method for customizing design of object

Info

Publication number
WO2020251238A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
appearance
server
user
individual
Prior art date
Application number
PCT/KR2020/007445
Other languages
French (fr)
Korean (ko)
Inventor
이종혁
전혜은
Original Assignee
(주)사맛디
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)사맛디
Publication of WO2020251238A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/7837 Retrieval using objects detected or recognised in the video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73 Querying
    • G06F 16/735 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Definitions

  • the present invention relates to a method for obtaining user interest information based on input image data and a method for customizing object design.
  • the existing method of obtaining user interest information through images relies on information directly tagged by the user, so there is a problem in that the acquisition result becomes inaccurate if the user tags an image with an incorrect keyword.
  • the results of obtaining interest information differ depending on the keyword selected by the user who inputs an image.
  • to solve the above-described problem, the present invention provides a method and program for obtaining user interest information based on input image data by analyzing the image data input by the user.
  • the present invention also provides a method and program for obtaining user interest information based on input image data that outputs specific image data to a user and allows the user to modify the output image data, so that user interest information can be obtained more accurately from the modified information.
  • another object of the present invention is to provide a method and program by which a user can easily customize an object design through a customizing interface.
  • another object of the present invention is to provide a method and program for customizing an object design using a plurality of appearance classification criteria and individual appearance characteristics corresponding to the object.
  • another object of the present invention is to provide a method and program for customizing an object design using a preset standard model.
  • another object of the present invention is to provide a method and program for recommending an object suitable for the user by using abstract characteristics corresponding to the object or the user's design data.
  • a method for obtaining user interest information based on input image data includes: the server inputting first input image data into an appearance characteristic recognition model and calculating individual appearance characteristics for a plurality of appearance classification criteria; the server generating first appearance description data by combining a plurality of individual appearance characteristics of the first input image data; and the server generating and outputting first output image data based on the first appearance description data. The first input image data is image data input from a specific user, and an appearance classification criterion is a specific classification criterion for describing the appearance of a specific object and may include a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion.
  • the first input image data is image data of a specific article of a specific object received from the user, and the first output image data may be image data for a virtual article of the specific object generated based on the first appearance description data.
  • the step of generating the first appearance description data may include extracting code values corresponding to the plurality of individual appearance characteristics of the first input image data, and combining the plurality of code values to generate the first appearance description data.
  • the first output image data may be image data for a virtual article including a plurality of individual appearance characteristics included in the first appearance description data.
  • the method may further include: the server inputting second input image data into the appearance characteristic recognition model and calculating individual appearance characteristics for the plurality of appearance classification criteria; and the server generating second appearance description data by combining a plurality of individual appearance characteristics of the second input image data, wherein the second input image data may be image data in which the first output image data has been modified by the user.
  • the method may further include the server storing the first appearance description data or the second appearance description data as the user's interest information.
  • a program for obtaining user interest information based on input image data is combined with hardware to execute the aforementioned method for obtaining user interest information, and is stored in a recording medium.
  • an object design customizing method includes: determining, by a server, an object based on a first user input; providing a customizing interface based on a plurality of appearance classification criteria corresponding to the object and a plurality of individual appearance characteristics corresponding to each of the plurality of appearance classification criteria; and generating, by the server, design data of the object based on a second user input detected through the customizing interface and a preset standard model.
  • an appearance classification criterion is a specific classification criterion for describing the appearance of a specific object and includes a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object, and the customizing interface may include a plurality of menus matched with the plurality of individual appearance characteristics corresponding to the object, together with the design data.
  • the method may further include the server displaying the generated design data in the customizing interface.
  • the second user input may be an input for selecting at least one menu from among the plurality of menus.
  • the standard model may include at least one of a standard human body model, a fixed joint line, and a length reference line for indicating the plurality of individual appearance characteristics, and the method may further include the server generating the design data based on at least one of a fixed joint line and a length reference line corresponding to the at least one menu selected according to the second user input.
  • the method may further include the server changing the design data based on the standard model and a third user input detected by the customizing interface.
  • the method may further include: the server extracting, based on a matching algorithm, a recommended object corresponding to a combination of appearance classification criteria matched with an abstract characteristic that corresponds to the object or to the generated design data; and the server providing design data corresponding to the extracted recommended object to the user through the customizing interface.
  • the method may further include the server changing the design data of the recommended object based on a fourth user input detected by the customizing interface and the preset standard model.
  • An object design customization program is combined with hardware to execute the above-described object design customization method, and is stored in a recording medium.
  • according to the present invention, by analyzing image data and storing the user's interest information in the form of text-based appearance description data, the user's interest information can be acquired and stored efficiently.
  • by providing the user with a customizing interface, the user can easily create and change the design of an object.
  • while design freedom is given to the user through the customizing interface, the processing speed of the customizing method can be increased by using a preset standard model.
  • the user's satisfaction can be maximized since it is possible to easily and simply request creation of an object reflecting a desired design using a customizing interface.
  • FIG. 1 is a block diagram showing a server and related configurations according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a server including an appearance characteristic recognition model for each object according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method of obtaining user interest information based on input image data according to an embodiment of the present invention.
  • FIG. 4 is a block diagram of a system for obtaining user interest information based on input image data according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a method of generating outline description data according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for obtaining user interest information based on input image data, further comprising the step of receiving second input image data according to an embodiment of the present invention.
  • FIG. 7 is an exemplary diagram for describing a standard model according to an embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating a method of customizing an object design according to an embodiment of the present invention.
  • FIGS. 9 to 21 are exemplary diagrams for explaining a method of customizing an object design according to an embodiment of the present invention.
  • FIG. 22 is a flowchart illustrating a method of providing a recommended object according to an embodiment of the present invention.
  • FIG. 23 is an exemplary diagram for describing a method of providing a recommended object according to an embodiment of the present invention.
  • a 'computer' includes all of the various devices capable of performing arithmetic processing and providing results to a user.
  • computers include not only desktop PCs and notebooks but also smart phones, tablet PCs, cellular phones, PCS phones, mobile terminals of the synchronous/asynchronous International Mobile Telecommunication-2000 (IMT-2000) system, Palm Personal Computers (PCs), personal digital assistants (PDAs), and the like.
  • when a head mounted display (HMD) device includes a computing function, the HMD device may be a computer.
  • the computer may correspond to a server that receives a request from a client and performs information processing.
  • a 'client' refers to any device including a communication function on which users can install and use a program (or application). That is, the client device may include at least one of a telecommunication device such as a smart phone, a tablet, a PDA, a laptop, a smart watch, a smart camera, and a remote controller, but is not limited thereto.
  • an 'object' refers to an article of a specific classification or category on which a search is performed.
  • for example, when a user wants to search for an image of a desired item in a shopping mall and searches within the clothes category among the item categories, the object may be clothes.
  • 'image data' refers to a two-dimensional or three-dimensional static or dynamic image including a specific object. That is, 'image data' may be static image data consisting of one frame, or dynamic image data (i.e., moving image data) in which a plurality of frames are consecutive.
  • 'learning image data' means image data used for training a learning model.
  • the 'appearance classification criterion' refers to a classification criterion of appearance expressions necessary for describing the appearance of a specific object or for annotation. That is, the 'appearance classification criterion' is a specific classification criterion for describing the appearance of a specific object, and includes a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object.
  • for example, when the object is clothing, the appearance classification criteria are classification criteria for the appearance of the clothing and may correspond to pattern, color, fit, length, and the like. That is, as more appearance classification criteria are set for a specific object, the appearance of a specific article belonging to the object can be described in more detail.
  • 'individual appearance characteristics' refers to various characteristics included in a specific appearance classification standard. For example, if the appearance classification criterion is color, the individual appearance characteristics mean various individual colors.
  • the 'expert client 30' means the client of an expert who assigns individual appearance characteristics to the learning image data (i.e., labels the learning image data) or assigns to image data individual appearance characteristics within an unlearned appearance classification criterion.
  • an 'abstract characteristic' refers to an abstract characteristic assigned to a specific object.
  • the 'abstract characteristic' may be an emotional characteristic of a specific object (for example, in the case of clothing, an emotional or fashion-related expression such as 'vintage').
  • 'abstract characteristic' may mean a shape change or motion when the image data is a moving picture.
  • FIG. 1 is a block diagram showing a server and related configurations according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a server including an appearance characteristic recognition model for each object according to an embodiment of the present invention.
  • the image data search method of the server 10 refers to a method of accurately extracting image data desired by a user based on abstract terms representing the appearance of a specific object.
  • the method of customizing the object design may be performed based on the image data search method. Therefore, first, a method of searching for image data will be described.
  • the server 10 inputs the image data into the appearance characteristic recognition model 100, and individual appearance characteristics for a plurality of appearance classification criteria are obtained.
  • the image data search method includes: the server 10 generating appearance description data by combining a plurality of individual appearance characteristics of the image data; the server 10 receiving a search keyword from a specific user; and the server 10 extracting, using the matching algorithm 200, image data corresponding to a combination of appearance classification criteria matched with an abstract characteristic corresponding to the search keyword.
  • the server 10 may store a plurality of appearance classification criteria, a plurality of individual appearance characteristics, abstract characteristics, appearance description data, extracted image data, customized design data, and the like in the database 400.
  • the server 10 inputs the image data into the appearance characteristic recognition model 100 to calculate individual appearance characteristics for a plurality of appearance classification criteria. That is, the server 10 provides new image data, for which appearance characteristic analysis has not yet been performed, to the appearance characteristic recognition model 100 to calculate individual appearance characteristics for each appearance classification criterion of a specific object.
  • the appearance characteristic recognition model 100 includes a plurality of individual characteristic recognition modules 110 for determining different appearance classification criteria, as shown in FIG. 1. That is, the appearance characteristic recognition model 100 includes a plurality of individual characteristic recognition modules 110, each specialized to recognize one appearance classification criterion. The more appearance classification criteria a specific object has, the more individual characteristic recognition modules 110 the server 10 includes in the appearance characteristic recognition model 100. Each individual characteristic recognition module 110 calculates the individual appearance characteristic of image data under a specific appearance classification criterion.
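  • The modular structure described above can be sketched in simplified form. This is an illustrative sketch only, not the patent's implementation: the class names, criteria, and stub classifiers are assumptions, and each stub stands in for a trained deep learning classifier.

```python
# Minimal sketch of the appearance characteristic recognition model (100):
# one recognition module (110) per appearance classification criterion.
# Module internals are stubbed; in the described system each would be a
# trained deep learning classifier.

class IndividualCharacteristicModule:
    """Recognizes the individual appearance characteristic for ONE criterion."""
    def __init__(self, criterion, classify_fn):
        self.criterion = criterion          # e.g. "color", "pattern", "fit"
        self._classify = classify_fn        # stand-in for a trained classifier

    def predict(self, image_data):
        return self._classify(image_data)   # e.g. "navy", "striped", "slim"

class AppearanceRecognitionModel:
    """Holds one module per appearance classification criterion."""
    def __init__(self, modules):
        self.modules = {m.criterion: m for m in modules}

    def recognize(self, image_data):
        # Run every module; collect one individual characteristic per criterion.
        return {name: m.predict(image_data) for name, m in self.modules.items()}

# Hypothetical stub classifiers (real ones would be deep learning models).
model = AppearanceRecognitionModel([
    IndividualCharacteristicModule("color", lambda img: img["dominant_color"]),
    IndividualCharacteristicModule("pattern", lambda img: img["pattern"]),
])

result = model.recognize({"dominant_color": "navy", "pattern": "striped"})
print(result)  # {'color': 'navy', 'pattern': 'striped'}
```

  The design point mirrored here is that each module recognizes exactly one appearance classification criterion, so new criteria can be supported by adding modules without changing the existing ones.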
  • the server 10 acquires all the individual appearance characteristics of each appearance classification criterion for image data.
  • each individual characteristic recognition module 110 is trained through a deep learning model by matching training image data with individual appearance characteristics of a specific appearance classification criterion. That is, the individual characteristic recognition module 110 is constructed with a specific deep learning algorithm, and learning is performed by matching the training image data with a specific one of the plurality of appearance classification criteria.
  • the server 10 may perform a process of training each individual characteristic recognition module 110 as follows.
  • the server 10 acquires a plurality of training image data for a specific object. For example, when the object is a specific type of clothing (eg, a shirt), the server 10 acquires images of several shirts.
  • the training image data may be selected by an expert from among previously stored image data, or the server 10 may newly acquire object images that are easy to learn from.
  • the server 10 acquires the definition of each appearance classification criterion and a plurality of individual appearance characteristics for each appearance classification criterion. That is, the server 10 sets the initial number of individual characteristic recognition modules 110 according to the setting of the plurality of appearance classification criteria. In addition, as the plurality of individual appearance characteristics within each appearance classification criterion are set, the server 10 sets the types of features with which the training image data is labeled for each appearance classification criterion.
  • the server 10 may receive, from the expert client 30, a plurality of appearance classification criteria for analyzing the appearance of a specific object and a plurality of individual appearance characteristics within each appearance classification criterion.
  • the server 10 may receive an appearance classification standard and individual appearance characteristics included therein from a client of a designer who is a clothing expert.
  • the server 10 labels the training image data with a plurality of individual appearance characteristics for each appearance classification criterion. That is, for each training image data, the server 10 receives and matches at least one individual appearance characteristic for each of the plurality of appearance classification criteria. For example, when 10 appearance classification criteria are set for a specific object, the server 10 receives one individual appearance characteristic for each of the 10 appearance classification criteria for each training image data including the corresponding object, and forms a training data set that matches each training image data with its 10 individual appearance characteristics.
  • the server 10 performs training by matching the training image data with the individual appearance characteristics of a specific appearance classification criterion labeled for it. That is, when the server 10 trains the individual characteristic recognition module 110 for appearance classification criterion A, it extracts from the training data set only the training image data and the matched individual appearance characteristics of criterion A, and inputs them into the deep learning model. Through this, the server 10 constructs each individual characteristic recognition module 110 capable of recognizing the individual appearance characteristics of its appearance classification criterion.
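  • The per-criterion training-set construction described above can be sketched as follows. The labels and file names are hypothetical, and the sketch only shows how one criterion's labels are extracted from the full training data set before being fed to that criterion's deep learning model.

```python
# Sketch of extracting a per-criterion training set from the labeled dataset:
# each training image is labeled with one individual appearance characteristic
# for each appearance classification criterion; to train the module for one
# criterion, only that criterion's label is paired with the image.

def build_criterion_dataset(training_data, criterion):
    """training_data: list of (image, labels), where labels maps each
    appearance classification criterion to one individual characteristic."""
    return [(image, labels[criterion]) for image, labels in training_data]

# Hypothetical labeled examples (10 criteria shortened to 3 for illustration).
training_data = [
    ("shirt_001.jpg", {"color": "white", "pattern": "plain", "fit": "slim"}),
    ("shirt_002.jpg", {"color": "blue",  "pattern": "check", "fit": "loose"}),
]

# Training set for the "color" recognition module only.
color_set = build_criterion_dataset(training_data, "color")
print(color_set)  # [('shirt_001.jpg', 'white'), ('shirt_002.jpg', 'blue')]
```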
  • the appearance characteristic recognition model 100 includes a different combination of individual characteristic recognition modules 110 for each object type (for example, fashion accessory types such as shoes, wallets, and bags).
  • the server 10 creates a combination of individual characteristic recognition modules 110 for each object type, so that a specialized appearance characteristic recognition model for recognizing the appearance of a specific object is created.
  • the appearance characteristic recognition models 100 for a plurality of objects may share a specific individual characteristic recognition module 110. For example, since a color recognition module can be used universally regardless of object type, the server 10 can use a universal color recognition module in the plurality of appearance characteristic recognition models 100 distinguished for each object.
  • the server 10 generates appearance description data by combining or listing a plurality of individual appearance characteristics of the image data. If the appearance classification criteria for a specific object are divided in detail, the appearance description data describes the appearance of the object in detail through individual appearance characteristics.
  • the step of generating the appearance description data includes extracting code values corresponding to the plurality of individual appearance characteristics of the image data, and combining the plurality of code values to generate the appearance description data in the form of a code string. That is, as the server 10 encodes the individual appearance characteristics, the appearance description data can be generated as a code string, through which the appearance description data can be processed efficiently.
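  • The code-string generation step can be sketched as follows. The code table, code format, and separator are assumptions for illustration, since the document specifies only that individual appearance characteristics are converted to code values and combined.

```python
# Sketch of generating appearance description data as a code string: each
# individual appearance characteristic maps to a short code value, and the
# codes are concatenated in a fixed criterion order. The code table and
# "-" separator are hypothetical.

CODE_TABLE = {
    ("color", "navy"): "C03",
    ("pattern", "striped"): "P02",
    ("fit", "slim"): "F01",
}
CRITERION_ORDER = ["color", "pattern", "fit"]

def to_description_code(characteristics):
    """characteristics: {criterion: individual appearance characteristic}."""
    return "-".join(
        CODE_TABLE[(c, characteristics[c])] for c in CRITERION_ORDER
    )

code = to_description_code({"color": "navy", "pattern": "striped", "fit": "slim"})
print(code)  # C03-P02-F01
```

  A fixed criterion order makes two descriptions directly comparable position by position, which supports the efficient processing the document mentions.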
  • when there is an unlearned appearance classification criterion of a specific object for which an individual characteristic recognition module 110 has not been constructed (for example, when one of the object's appearance classification criteria is difficult to recognize through a deep learning model, or when an individual characteristic recognition module 110 has not yet been constructed because a new appearance classification criterion has been created), the server 10 receives, from the expert client 30 or the image provider client 40, the individual appearance characteristics under the unlearned appearance classification criterion for each image data.
  • the server 10 generates the appearance description data by combining the input individual appearance characteristics and the calculated individual appearance characteristics.
  • the input individual appearance characteristics are obtained for the unlearned appearance classification criterion from the image provider client 40 or the expert client that provided the image data, and the calculated individual appearance characteristics are calculated by inputting the image data into the individual characteristic recognition modules 110.
  • using the matching algorithm 200, the server extracts image data corresponding to a combination of appearance classification criteria matched with an abstract characteristic corresponding to the search keyword (S600).
  • when a user wants to search for desired image data based on a search keyword that is one of the abstract characteristics of a specific object, or a search keyword similar to such an abstract characteristic, the server 10 uses the matching algorithm 200 to extract the combination of appearance classification criteria matching the corresponding abstract characteristic, and then extracts image data whose appearance description data contains the corresponding combination.
  • the abstract characteristic may be matched with a plurality of individual appearance characteristics for a specific appearance classification criterion.
  • alternatively, the server 10 may not match a specific appearance classification criterion with the corresponding abstract characteristic at all.
  • the server 10 may match the appearance classification criterion 1 with the abstract characteristic X.
  • the server 10 may match a plurality of individual appearance characteristics of the appearance classification criterion 2 with the abstract characteristic X.
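  • The matching behavior described in these examples can be sketched as follows. The abstract characteristic 'vintage' and its mapping are hypothetical; the sketch shows that a criterion may be matched with several individual appearance characteristics, or be omitted (unconstrained), as described above.

```python
# Sketch of the matching algorithm (200): an abstract characteristic maps to
# a combination of appearance classification criteria, where one criterion may
# match several individual appearance characteristics, and another criterion
# may be left unconstrained. The mapping below is hypothetical.

MATCHING = {
    "vintage": {
        "pattern": {"floral", "paisley"},   # several characteristics allowed
        "color": {"brown", "mustard"},
        # "fit" is absent: not matched with this abstract characteristic
    },
}

def search(abstract_characteristic, catalog):
    """catalog: list of (image_id, {criterion: characteristic})."""
    wanted = MATCHING[abstract_characteristic]
    return [
        image_id for image_id, desc in catalog
        if all(desc.get(c) in allowed for c, allowed in wanted.items())
    ]

catalog = [
    ("img1", {"pattern": "floral", "color": "brown", "fit": "loose"}),
    ("img2", {"pattern": "plain",  "color": "brown", "fit": "slim"}),
]
print(search("vintage", catalog))  # ['img1']
```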
  • when a new appearance classification criterion for a specific object is added, the method includes: the server 10 obtaining individual appearance characteristics of the new appearance classification criterion for the training image data and constructing a new training data set; and the server 10 training a new individual characteristic recognition module 110 based on the new training data set and adding it to the appearance characteristic recognition model. That is, when a new appearance classification criterion for a specific object is added (for example, a new criterion for dividing the appearance characteristics of clothing is added), the server 10 can adapt the appearance characteristic recognition model 100 to the newly added criterion by additionally constructing only an individual characteristic recognition module 110 for the new criterion, without changing the existing individual characteristic recognition modules 110.
  • the server 10 acquires individual appearance characteristics of the new appearance classification criteria for the training image data, and constructs a new training data set.
  • when constructing a new individual characteristic recognition module 110 using the same image data previously used to train other individual characteristic recognition modules 110, the server 10 receives, from the expert client 30, the individual appearance characteristics of the new appearance classification criterion for each training image data.
  • alternatively, the server 10 may acquire new image data for training the individual characteristic recognition module 110 for the new appearance classification criterion, receive the individual appearance characteristics of the new appearance classification criterion for each, and construct a new training data set.
  • the server 10 trains the new individual characteristic recognition module 110 based on the new training data set, and adds it to the appearance characteristic recognition model (S710). Through this, the server 10 operates the new individual characteristic recognition module 110 together with the plurality of existing individual characteristic recognition modules 110 in the existing appearance characteristic recognition models.
  • the method further comprises the server 10 inputting the image data, whose appearance description data was obtained by the already established individual characteristic recognition modules 110, into the new individual characteristic recognition module 110, and adding the individual appearance characteristic of the new appearance classification criterion to the appearance description data. That is, the server 10 updates the appearance description data of previously acquired image data to reflect the new appearance classification criterion. To this end, the server 10 inputs all of the image data into the new individual characteristic recognition module 110 to calculate the individual appearance characteristics.
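  • The update process can be sketched as follows. The data shapes and the new 'sleeve' criterion are hypothetical; the point illustrated is that only the new module is run over the stored images, and its output is appended to each existing appearance description.

```python
# Sketch of updating previously stored appearance description data when a new
# appearance classification criterion (and its recognition module) is added:
# every stored image is re-run through the NEW module only, and its output is
# appended to the existing description. Data shapes are hypothetical.

def update_descriptions(descriptions, images, new_criterion, new_module):
    """descriptions: {image_id: {criterion: characteristic}};
    images: {image_id: image_data};
    new_module: callable(image_data) -> individual appearance characteristic."""
    for image_id, desc in descriptions.items():
        desc[new_criterion] = new_module(images[image_id])
    return descriptions

images = {"img1": {"sleeve": "long"}, "img2": {"sleeve": "short"}}
descriptions = {"img1": {"color": "navy"}, "img2": {"color": "red"}}

# Hypothetical new module for a "sleeve" criterion (stub classifier).
updated = update_descriptions(descriptions, images, "sleeve",
                              lambda img: img["sleeve"])
print(updated["img1"])  # {'color': 'navy', 'sleeve': 'long'}
```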
  • the method further includes the server 10 updating the matching algorithm 200 by matching the individual appearance characteristics of the new appearance classification criterion with each abstract characteristic. That is, so that the server 10 can reflect the new appearance classification criterion and provide optimal search results when the user searches for image data based on a keyword corresponding to an abstract characteristic, the individual appearance characteristics of the new appearance classification criterion are linked to each abstract characteristic.
  • the method further includes the server 10 setting the matching algorithm 200 by receiving, from the expert client, setting data matching abstract characteristics with combinations of appearance classification criteria.
  • the definition of abstract characteristics may be changed or different due to factors such as regional differences, changes in the times, and establishment of new definitions.
  • abstract characteristics representing specific fashion trends or emotional characteristics may change over time, and may be defined differently across regions of the world (for example, the abstract characteristic (i.e., emotional characteristic) of 'vintage' can be defined as having a different appearance in the past and in the present). Therefore, the server 10 can add or change settings for the matching relationship between abstract characteristics and combinations of individual appearance characteristics in the matching algorithm 200.
  • when the definition of a specific abstract characteristic is changed, the server 10 receives a combination of appearance classification criteria for the current abstract characteristic from the expert client 30.
  • the server 10 may set the combination of the abstract characteristic before the change and the appearance classification criterion as the definition of the corresponding abstract characteristic at a specific point in the past. Through this, the server 10 may accumulate definition or description information of specific abstract characteristics according to changes in the times.
  • the server 10 may receive and store a combination of appearance classification criteria for each region from the expert client 30.
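The era- and region-specific definitions described above could be kept in a simple keyed store. The following is a minimal sketch of one way the matching algorithm 200 might retain multiple definitions of the same abstract characteristic; all class, method, and data names are illustrative assumptions, not the patent's actual implementation.

```python
class MatchingAlgorithm:
    """Hypothetical store: (abstract characteristic, region, era) ->
    combination of individual appearance characteristics per criterion."""

    def __init__(self):
        self.definitions = {}

    def set_definition(self, abstract, region, era, combination):
        # Register an expert-supplied combination without discarding older
        # entries, so definitions from past eras remain queryable.
        self.definitions[(abstract, region, era)] = combination

    def get_definition(self, abstract, region, era):
        return self.definitions.get((abstract, region, era))


algo = MatchingAlgorithm()
# Assumed example data: 'vintage' defined differently per era.
algo.set_definition("vintage", "KR", "1990s",
                    {"pattern": "floral", "silhouette": "loose"})
algo.set_definition("vintage", "KR", "2020s",
                    {"pattern": "check", "silhouette": "regular"})

# The same abstract characteristic resolves differently depending on era.
assert algo.get_definition("vintage", "KR", "1990s")["pattern"] == "floral"
assert algo.get_definition("vintage", "KR", "2020s")["pattern"] == "check"
```

A search for the keyword 'vintage' would then consult the definition matching the user's region and the current era, while older definitions stay available as accumulated description information.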
  • the method includes: a reference image data acquisition step in which the server 10 obtains reference image data from the user client 20; a step of inputting the reference image data into the appearance characteristic recognition model 100 and calculating individual appearance characteristics for a plurality of appearance classification criteria; a step in which the server 10 generates appearance description data by combining a plurality of individual appearance characteristics for the reference image data; and a step in which the server 10 extracts image data including appearance description data identical or similar to that of the reference image data. That is, when a user performs a search based not on a keyword corresponding to an abstract characteristic but on a specific object image the user possesses (i.e., reference image data), the server 10 generates appearance description data for the reference image data, and extracts image data including identical or similar appearance description data to provide to the user client 20.
  • the server 10 acquires reference image data from the user client 20. That is, the server 10 receives the reference image data stored in the user client 20 or searched online by the user.
  • the server 10 inputs the reference image data into the appearance characteristic recognition model 100 to calculate the individual appearance characteristics included in each appearance classification criterion. That is, the server 10 acquires, as text information, a plurality of individual appearance characteristics describing the appearance of the reference image data through each individual characteristic recognition module 110. Thereafter, the server 10 generates appearance description data by combining the plurality of individual appearance characteristics for the reference image data.
  • the server 10 extracts image data including the same appearance description data as the reference image data.
  • the server 10 searches for and provides the image data having the same appearance description data as the reference image data.
  • in the case of searching for image data within a range similar to the reference image data, the server 10 expands one or more of the appearance classification criteria included in the appearance description data of the reference image data, starting from those of low importance, to a similar range, and extracts image data including the one or more expanded appearance description data.
  • the server 10 may include an importance ranking for the plurality of appearance classification criteria of a specific object (for example, the higher a criterion's importance ranking, the more its search range is kept fixed), and may include the degree of similarity between individual appearance characteristics within a specific appearance classification criterion.
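The expansion strategy above (relax low-importance criteria first, keep high-importance ones fixed, and substitute only similar individual characteristics) can be sketched as follows. The function name, the importance scale, and the similarity table are assumptions for illustration.

```python
def expand_search(reference, importance, similar, max_relax=1):
    """Relax the least-important criteria of the reference description
    to their similar individual appearance characteristics."""
    # Order criteria from lowest to highest importance ranking.
    order = sorted(reference, key=lambda c: importance[c])
    variants = [dict(reference)]  # exact match is always included
    for criterion in order[:max_relax]:
        for alt in similar.get((criterion, reference[criterion]), []):
            v = dict(reference)
            v[criterion] = alt
            variants.append(v)
    return variants


# Assumed example: neck shape is more important than sleeve length.
reference = {"neck_shape": "V-neck", "sleeve_length": "sleeveless"}
importance = {"neck_shape": 2, "sleeve_length": 1}  # higher = more important
similar = {("sleeve_length", "sleeveless"): ["short sleeves"]}

variants = expand_search(reference, importance, similar)
# The high-importance neck shape stays fixed in every expanded variant.
assert all(v["neck_shape"] == "V-neck" for v in variants)
assert {"neck_shape": "V-neck", "sleeve_length": "short sleeves"} in variants
```

Image data whose appearance description data matches any of the returned variants would then be extracted as the similar-range search result.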
  • when the server 10 receives a request to provide additional image data from the user client 20, the method further includes sequentially providing image data having different appearance classification criteria, and setting a personalized abstract characteristic based on the appearance description data of the image data additionally selected by the user. That is, when performing a search based on a search keyword, the server 10 expands the search range and provides additional image data while changing at least one appearance classification criterion in the description information of the abstract characteristic corresponding to the search keyword to another individual appearance characteristic.
  • the server 10 receives, from the user, a selection of one or more desired images from the expanded search range.
  • the server 10 personalizes the search keyword or abstract characteristic input by the user based on the selected images. For example, since the general appearance definition of an abstract characteristic may differ from the appearance definition the user has in mind, the server 10 sets the description information or appearance definition of the abstract characteristic as the user understands it (that is, the description information of the personalized abstract characteristic) based on the appearance description data of the images the user selected from the expanded search results. Through this, when the user later performs a search with the same search keyword or abstract characteristic, the server 10 performs the search based not on the description information of the general abstract characteristic but on the description information of the personalized abstract characteristic, so that the user's desired images can be provided first.
  • when there is an abstract characteristic corresponding to the appearance description data of the selected image data, the method further includes the server 10 providing the user client 20 with an abstract characteristic suitable for extracting the selected image data. That is, the server 10 notifies the user that the appearance definition the user associates with a specific abstract characteristic differs from the commonly used appearance definition, and extracts and provides an abstract characteristic (or search keyword) that matches the appearance definition the user actually has in mind. Through this, the user can learn the search keyword for obtaining the desired search result when searching again later.
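One simple way to derive a personalized definition from the user's selections is to take, per appearance classification criterion, the individual appearance characteristic most common among the selected images. This is a minimal sketch under that assumption; the patent does not specify the aggregation rule, and all names are illustrative.

```python
from collections import Counter


def personalize(selected_descriptions):
    """Merge appearance description data of user-selected images into a
    personalized definition: most frequent characteristic per criterion."""
    criteria = set().union(*selected_descriptions)
    merged = {}
    for criterion in criteria:
        values = [d[criterion] for d in selected_descriptions if criterion in d]
        merged[criterion] = Counter(values).most_common(1)[0][0]
    return merged


# Assumed selections from an expanded search for a keyword like 'romantic'.
selections = [
    {"color": "light pink", "pattern": "floral"},
    {"color": "light pink", "pattern": "check"},
    {"color": "light pink", "pattern": "floral"},
]
personal = personalize(selections)
assert personal == {"color": "light pink", "pattern": "floral"}
```

The resulting combination would be stored as the description information of the personalized abstract characteristic and used in place of the general definition on the user's later searches.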
  • when the image data is moving image data including a plurality of frames, the abstract characteristic may be an expression representing a specific shape change or motion. That is, the abstract characteristic may be a textual expression representing a specific motion or shape change.
  • the server 10 generates appearance description data in which combinations of individual appearance characteristics (that is, individual appearance characteristics belonging to each appearance classification criterion) for the plurality of frames of the moving image data are arranged in time series. Specifically, the step of calculating the individual appearance characteristics is performed for each frame of the moving image data, and the step of generating the appearance description data sequentially lists the plurality of individual appearance characteristics of each frame.
  • the server 10 includes a matching algorithm 200 in which each abstract characteristic (for example, an expression representing a shape change or motion) is matched with time-series data of individual appearance characteristics within each appearance classification criterion.
  • the server 10 searches for and provides video data corresponding to an abstract characteristic (ie, a specific motion or shape change) desired by the user.
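Matching an abstract motion characteristic against per-frame appearance description data amounts to checking whether the time-series pattern associated with the characteristic occurs in the frame sequence. Below is a minimal sketch under the assumption that each frame contributes one silhouette value and that a motion is defined as an ordered subsequence of silhouette values; the motion definition itself is hypothetical.

```python
def matches_motion(frames, pattern):
    """True if the per-frame characteristic sequence contains the motion
    pattern as an ordered subsequence."""
    it = iter(frames)
    return all(any(f == step for f in it) for step in pattern)


# Assumed per-frame silhouette characteristics of a video clip.
frames = ["slim", "slim", "regular", "loose", "loose"]
# Assumed time-series definition of a 'flare out' shape change.
flare_out = ["slim", "regular", "loose"]

assert matches_motion(frames, flare_out)
assert not matches_motion(["loose", "regular", "slim"], flare_out)
```

Video data whose time-series appearance description data matches the pattern registered for the searched abstract characteristic would then be returned to the user.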
  • FIG. 3 is a flowchart of a method for obtaining user interest information based on input image data according to an embodiment of the present invention.
  • FIG. 4 is a block diagram of a system for obtaining user interest information based on input image data according to an embodiment of the present invention.
  • the method includes: a step (S1200) in which the server 10 inputs first input image data into the appearance characteristic recognition model 100 and calculates individual appearance characteristics for a plurality of appearance classification criteria; a step (S1400) in which the server 10 generates first appearance description data by combining a plurality of individual appearance characteristics for the first input image data; and a step (S1600) in which the server 10 generates and outputs first output image data based on the first appearance description data.
  • the server 10 inputs the first input image data into the appearance characteristic recognition model 100 to calculate individual appearance characteristics for a plurality of appearance classification criteria (S1200).
  • the first input image data refers to image data input by a specific user whose interest information is to be acquired.
  • the first input image data includes image data for a real object or a virtual object.
  • the first input image data may be obtained by various methods.
  • the first input image data may be obtained through an input for a specific user's virtual space interior.
  • the user may input image data for a specific object he or she prefers.
  • the first input image data includes real image data of a specific article of a specific object.
  • the user may input a picture (first input image data) of a 'B shirt of brand A' (a specific article) belonging to clothing (a specific object) to be placed in his or her virtual space.
  • the server inputs the photo into the appearance characteristic recognition model 100 and can calculate the plurality of individual appearance characteristics 'shirt, light pink, floral pattern, slim, V-neck, sleeveless' corresponding to the plurality of appearance classification criteria 'color, pattern, top silhouette, neck shape, sleeve length'.
  • the first input image data includes virtual image data customized by a user.
  • the server calculates a plurality of individual appearance characteristics from the first input image data.
  • the first input image data may be input by a method for customizing an object design to be described later, but is not limited thereto.
  • the user may select a plurality of individual appearance characteristics (e.g., shirt, light pink, floral pattern, slim, V-neck, sleeveless) from the list of individual appearance characteristics provided by the server, and input virtual image data combining the selected individual appearance characteristics.
  • the server can acquire the individual appearance characteristics selected by the user without having to calculate separate individual appearance characteristics.
  • the server 10 generates first appearance description data by combining a plurality of individual appearance characteristics with respect to the first input image data (S1400).
  • the first appearance description data may specifically describe the appearance of the corresponding object through the individual appearance characteristics.
  • the first appearance description data can be generated in the form of {shirt, light pink, floral pattern, slim, V-neck, sleeveless}.
  • the first appearance description data generation step (S1400) includes a step (S1410) of extracting code values corresponding to the plurality of individual appearance characteristics of the first input image data and a step (S1420) of generating first appearance description data in the form of a code string by combining the plurality of code values. That is, as the server 10 codes the individual appearance characteristics, the appearance description data can be generated as a code string, through which the appearance description data can be processed efficiently.
  • the first appearance description data may be generated as a code string of "Ba01, Bb02, Bg01, Ie01, Ob01, Zb01".
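The two-step coding above (S1410: extract a code value per individual appearance characteristic; S1420: combine the code values into a code string) can be sketched as follows. Only the resulting code string is taken from the example in the text; which characteristic maps to which code value is an order-based assumption for illustration.

```python
# Hypothetical code table (assumed mapping; codes as in the text's example).
CODE_TABLE = {
    "shirt": "Ba01", "light pink": "Bb02", "floral pattern": "Bg01",
    "slim": "Ie01", "V-neck": "Ob01", "sleeveless": "Zb01",
}


def to_code_string(characteristics):
    """S1410: look up each characteristic's code value;
    S1420: combine them into a single code string."""
    return ", ".join(CODE_TABLE[c] for c in characteristics)


desc = to_code_string(["shirt", "light pink", "floral pattern",
                       "slim", "V-neck", "sleeveless"])
assert desc == "Ba01, Bb02, Bg01, Ie01, Ob01, Zb01"
```

Comparing or indexing such fixed-format code strings is cheaper than comparing free-text characteristic names, which is the efficiency gain the coding step is meant to provide.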
  • the server 10 generates and outputs the first output image data based on the first appearance description data (S1600).
  • the first output image data may mean image data for a virtual article of a specific object generated based on the first appearance description data.
  • when the first appearance description data is a code string of "Ba01, Bb02, Bg01, Ie01, Ob01, Zb01", the server can generate and output image data for a virtual shirt based on the individual appearance characteristics corresponding to each code value.
  • the first output image data may mean virtual image data including a plurality of individual appearance characteristics identical to the individual appearance characteristics of the first input image data.
  • the method further includes: a step (S1240) in which the server inputs second input image data into the appearance characteristic recognition model and calculates individual appearance characteristics for the plurality of appearance classification criteria; a step (S1440) of generating second appearance description data by combining a plurality of individual appearance characteristics for the second input image data; and a step (S1800) in which the server stores appearance description data including the first appearance description data or the second appearance description data as the user's interest information.
  • the server inputs the second input image data into the appearance characteristic recognition model, and calculates individual appearance characteristics for a plurality of appearance classification criteria (S1240).
  • the second input image data may mean image data in which first output image data is modified by a user who has input the first input image data.
  • if the image of the virtual shirt output by the server (first output image data) differs from the characteristics of the image the user intends to place, the user can modify the first output image data and input it as second input image data to be placed.
  • this may include cases in which the server's calculation of individual appearance characteristics for the first input image data is incorrect, or cases in which correction is required for individual appearance characteristics included in the first output image data other than those calculated from the first input image data.
  • for example, when the server incorrectly calculates the individual appearance characteristic U-neck for first input image data of a V-neck shirt, so that the first output image data has a U-neck, or when the first output image data includes the aforementioned individual appearance characteristics of the top (shirt, light pink, floral pattern, slim, V-neck, sleeveless) but also includes crop (an individual appearance characteristic for the appearance classification criterion Top Length) that the user does not prefer, the user may modify the first output image data and input it as second input image data.
  • the first output image data may be modified by the user directly using a program or the server, or may be modified by inputting a keyword for a correction direction, but the modification is not limited thereto and includes various correction methods.
  • when the user wants to change the U-neck to a V-neck, the user can directly modify the collar of the first output image data to a V-neck, or input the correction direction by entering the keyword 'V-neck'.
  • the server recommends to the user a plurality of image data combining other features in addition to the features included in the first output image data, and the user can input modified second input image data by selecting a feature to be added.
  • the server can easily obtain the user's preference through the appearance description data for the added feature.
  • the server generates second appearance description data by combining a plurality of individual appearance characteristics with respect to the second input image data (S1440).
  • the second appearance description data of the second input image data, in which the user corrects the top length of the first output image data from crop to medium, can be generated as {shirt, light pink, floral pattern, medium, slim, V-neck, sleeveless} or "Ba01, Bb02, Bg01, Bi03, Ie01, Ob01, Zb01" (when the code value corresponding to medium is Bi03).
  • the method may further include the server transmitting a request for approval of the output image data to the user.
  • the server outputs the first output image data and transmits a request for approval of the first output image data to the user; if the user approves, second input image data is not input, and if the user does not approve, second input image data in which the first output image data is modified may be input.
  • the steps of calculating the individual appearance characteristics of the input image data, generating the appearance data, and generating and outputting the output image data may be repeated one or more times.
  • the server may output second output image data based on the second input image data, or a user may input third input image data based on the second output image data.
  • the method may further include a step (S1800) in which the server stores appearance description data including the first appearance description data or the second appearance description data as the user's interest information.
  • the server can store appearance description data information including the first appearance description data for the first input image data, the second appearance description data for the second input image data, or the difference between the first and second appearance description data (for example, appearance description data for the corrected features), and through this can obtain the user's interest information. That is, the server can easily obtain the user's interest information by storing and analyzing not only the image data, including the first input image data or the second input image data input by the user, but also the appearance description data calculated based on it.
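The "difference between the first and second appearance description data" mentioned above is itself a strong interest signal: the criteria the user bothered to correct. A minimal sketch of extracting that difference follows; the function name and criterion keys are assumptions for illustration.

```python
def description_diff(first, second):
    """Return the criteria whose individual appearance characteristic the
    user changed, mapped as criterion -> (before, after)."""
    return {c: (first.get(c), second.get(c))
            for c in set(first) | set(second)
            if first.get(c) != second.get(c)}


# Assumed example: the user corrected top length from crop to medium.
first = {"top_length": "crop", "neck_shape": "V-neck"}
second = {"top_length": "medium", "neck_shape": "V-neck"}

interest = description_diff(first, second)
assert interest == {"top_length": ("crop", "medium")}
```

Storing such diffs alongside the full appearance description data lets the server distinguish characteristics the user merely accepted from characteristics the user actively prefers.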
  • the server may generate and output output image data from the input image data, including the first input image data or the second input image data, in consideration of not only individual appearance characteristics and appearance description data but also abstract characteristics (e.g., vintage), or may store the appearance description data.
  • the server may further include displaying image data including first output image data or second input image data in a virtual space. That is, the server may display image data to be arranged by the user in the user's virtual space according to the user's request.
  • the user can display an image of clothing in the desired style and decorate his or her virtual space according to his or her taste; the server can calculate individual appearance characteristics based on the input image data input by the user, create the appearance description data, and supplement it through the user's corrections, thereby easily obtaining the user's interest information, which can be utilized in various ways, such as providing the acquired interest information to the clothing market.
  • when the input image data is moving image data, the calculation of individual appearance characteristics may be performed for each frame of the moving image data, and the appearance description data may be generated by sequentially listing the plurality of individual appearance characteristics of each frame.
  • An apparatus for obtaining user interest information based on input image data includes one or more computers, and performs the aforementioned method for obtaining user interest information based on input image data.
  • the method for obtaining user interest information based on input image data according to the present invention described above may be implemented as a program (or application) and stored in a medium to be executed by being combined with a computer that is hardware.
  • design data refers to a two-dimensional or three-dimensional static or dynamic image including a specific object, like the 'image data' defined above. That is, 'design data' may be static image data consisting of one frame, or dynamic image data (i.e., moving image data) in which a plurality of frames are consecutive.
  • the term 'design data' is used for convenience of explanation and to distinguish it from the term 'image data'.
  • FIG. 7 is an exemplary diagram for describing a standard model according to an embodiment of the present invention.
  • the server 10 may include a customization module 300 that provides a customizing interface so that a user can customize an object design.
  • the customizing interface may be a platform that can be accessed through a web page that can be used by the user or a dedicated app application that can be used by the user.
  • the server 10 may extract and store in advance the appearance classification criteria and individual appearance characteristics of various objects through the appearance characteristic recognition model 100 as described above.
  • the server 10 may extract the appearance classification criterion and individual appearance characteristics corresponding to the new object in real time through the appearance characteristic recognition model 100 even for a new object selected by the user.
  • the customizing interface may provide a user with functions such as searching for an object, selecting an object, creating and changing design data of the selected object, and purchasing an object.
  • the customizing interface may include text indicating the object name, text (or menus) corresponding to the plurality of appearance classification criteria of the object, a plurality of menus matching the plurality of individual appearance characteristics of the object, and design data of the object.
  • the server 10 may display design data corresponding to an object in real time based on a user input detected through a customizing interface, and may change design data in real time according to a user input.
  • the server 10 may generate and store the standard model 310 in advance through the customization module 300 as shown in FIG. 7.
  • the standard model 310 refers to a standard format in which, when the object is clothing, fixed joint lines and length reference lines of the clothing are preset based on the standard human body model 11 so that customization of the clothing design can be processed efficiently. That is, for example, rather than customizing the length of the bottom to an arbitrary value, the user may select any one of the preset lengths provided through the standard model 310.
  • the standard model 310 may include a standard human body model 11, a plurality of fixed joint lines indicated by a solid line, and a plurality of length reference lines indicated by a dotted line, as shown in FIG. 7.
  • the fixed joint lines are boundary areas where the respective components of the clothing (e.g., the upper body portion and the sleeve) are joined, and may maintain a constant position without changing depending on the clothing.
  • the length reference line is a line representing any one of the lengths of each clothing and may be changed according to the clothing. That is, differently from FIG. 7, the position of the length reference line may be changed. Details of creating an object design based on the standard model will be described later with reference to FIG. 8.
  • the server 10 may additionally change the size of the clothing through the user's input. That is, the actual body size of the user may be additionally reflected in the generated clothing design.
  • the server 10 may manage information by registering a user as a member through a separate platform.
  • the user's member information may include a name, an address, a contact information, an object design creation and change history, an object purchase history, and the like.
  • FIG. 8 is a flowchart illustrating a method of customizing an object design according to an embodiment of the present invention.
  • FIGS. 9 to 21 are exemplary diagrams for explaining a method of customizing an object design according to an embodiment of the present invention.
  • the operations of FIG. 8 may be performed by the server 10 of FIGS. 1 and 2. Meanwhile, for convenience of explanation, a case where the object is clothing will be described.
  • the server 10 may determine an object based on a first user input in operation 41.
  • the object may be a top (e.g., Shirt & Blouse, Jacket, Coat), a bottom (e.g., Pants, Skirt, Leggings & Stocking), or a dress (Onepiece).
  • the server 10 may provide a separate search interface so that the user can search for a desired object, and when the user selects a specific object through search, it may provide a customizing interface.
  • the object selection menu may be connected to a customizing interface through a link.
  • the server 10 may calculate individual appearance characteristics for a plurality of appearance classification criteria by inputting image data corresponding to the object into an appearance characteristic recognition model in operation 42.
  • the appearance classification criterion is a specific classification criterion for describing the appearance of a specific object, and may include a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object.
  • the appearance classification criteria may include specialized appearance classification criteria that differ for each object and general-purpose appearance classification criteria.
  • for example, the specialized appearance classification criteria may be silhouette, collar & neckline, top length, opening, shoulder, sleeve, sleeve length, and sleeve cuff, while the general-purpose appearance classification criteria, applicable to tops, bottoms, and dresses alike, may be texture, pattern, color, and detail.
  • the plurality of appearance classification criteria of the top may include at least one of silhouette, collar & neckline, top length, opening, shoulder, sleeve, sleeve length, sleeve cuff, texture, pattern, color, and detail.
  • the silhouette may be the overall appearance of the clothing, and the individual appearance characteristics of the silhouette may be slim, regular and loose.
  • the collar & neckline may be a neckline of the clothes, and the individual appearance characteristics of the collar & neckline may include at least one of a round neckline, a V neckline, a plunging V neckline, a surplice, and a V neck camisole.
  • the shoulder may be a shoulder portion of clothes, and the individual appearance characteristics of the shoulder may include at least one of a plain shoulder, a raglan shoulder, a halter, a drop shoulder, a dolman, an off shoulder, a strapless, and a one shoulder.
  • Individual appearance characteristics of the sleeve length may include extra-short sleeves, short sleeves, medium sleeves, and long sleeves.
  • Individual appearance characteristics of the top length may include crop, short, medium, long and maxi.
  • openings, sleeves, sleeve cuffs, textures, patterns, colors and details may each include known individual appearance characteristics.
  • the plurality of appearance classification criteria for the bottom may include at least one of a silhouette, a bottom length, a waist position, a texture, a pattern, a color, and a detail.
  • the silhouette may be the overall appearance of the clothing; the individual appearance characteristics of the silhouette may be straight, skinny, bell-bottom, baggy, and wide in the case of pants, and h-line, a-line, mermaid, flare, and balloon in the case of a skirt.
  • Individual appearance characteristics of the bottom length may include extra-short, short, midi and long.
  • Individual appearance characteristics of the waist position may include high waist, normal waist and low waist.
  • textures, patterns, colors, and details may each include known individual appearance characteristics.
  • the plurality of appearance classification criteria of a dress may be a total of 14 types, formed by combining the criteria of the top excluding top length with 3 criteria selected only from the bottom. That is, the plurality of appearance classification criteria of a dress may include silhouette top, silhouette bottom, collar & neckline, shoulder, sleeve, sleeve cuff, sleeve length, opening, bottom length, waist position, texture, pattern, color, and detail. The individual appearance characteristics for each of the plurality of appearance classification criteria may include the characteristics described above or previously known ones.
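The composition rule above (top criteria minus top length, plus the bottom-only criteria, yielding 14 types) can be checked mechanically. The following sketch uses the criterion lists from the text; treating the top's silhouette as 'silhouette top' and counting 'silhouette bottom' among the 3 bottom-side additions is an interpretive assumption.

```python
# Appearance classification criteria of the top, as listed in the text.
TOP = ["silhouette", "collar & neckline", "top length", "opening", "shoulder",
       "sleeve", "sleeve length", "sleeve cuff",
       "texture", "pattern", "color", "detail"]
# The 3 criteria taken only from the bottom (assumed grouping).
BOTTOM_ONLY = ["silhouette bottom", "bottom length", "waist position"]

# Drop "top length", rename the top's silhouette, then append the bottom criteria.
dress = (["silhouette top" if c == "silhouette" else c
          for c in TOP if c != "top length"] + BOTTOM_ONLY)

assert len(dress) == 14
assert "top length" not in dress and "waist position" in dress
```

This reproduces the 14 dress criteria enumerated in the text without maintaining a separate hand-written list for the dress object.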
  • operation 42 may be performed before operation 41. That is, a plurality of appearance classification criteria and individual appearance characteristics corresponding to the object may be calculated and stored in advance.
  • the server 10 may provide a customizing interface 500 based on the plurality of appearance classification criteria corresponding to the object and the plurality of individual appearance characteristics respectively corresponding to the plurality of appearance classification criteria in operation 43.
  • the customizing interface 500 may include a plurality of menus 501 and design data 505 that match a plurality of individual appearance characteristics corresponding to an object.
  • the customizing interface 500 may include a slim menu, a regular menu, and a loose menu 502 corresponding to the plurality of individual appearance characteristics of the silhouette, an enumeration menu 503 corresponding to the plurality of individual appearance characteristics of the collar & neckline, and a crop menu, a short menu, a medium menu, a long menu, and a maxi menu 504 corresponding to the plurality of individual appearance characteristics of the top length.
  • when the slim menu, the crop menu, and the V-neck menu (displayed in dark shades) are selected, design data 505 corresponding to a top that has a short length, a slim silhouette, and a V-neck shape may be displayed, as shown in FIG. 9.
  • the customizing interface 500 may further include an enumeration menu 506 corresponding to the plurality of individual appearance characteristics of the shoulder, a menu 507 corresponding to the plurality of individual appearance characteristics of the sleeve length, an enumeration menu 508 corresponding to the plurality of individual appearance characteristics of the sleeve cuff, an enumeration menu 509 corresponding to the plurality of individual appearance characteristics of the texture, an enumeration menu 511 corresponding to the plurality of individual appearance characteristics of the pattern, an enumeration menu 512 corresponding to the plurality of individual appearance characteristics of the color, and an enumeration menu 513 corresponding to the plurality of individual appearance characteristics of the detail. Meanwhile, the enumeration menus 509, 511, 512, and 513, respectively corresponding to texture, pattern, color, and detail, may be linked to separate detail pages, and the user may select various textures, patterns, colors, and details from those detail pages.
  • a customizing interface may be configured.
  • the configuration of a menu may be changed so that a user can easily select a plurality of individual appearance characteristics set based on the standard model 310.
  • in operation 44, the server 10 may generate design data of the object based on a second user input detected through the customizing interface 500 and the preset standard model 310.
  • the second user input may be an input for selecting at least one menu from among a plurality of menus.
  • as shown in FIG. 7, the preset standard model 310 may include at least one of the standard human body model 11, fixed joint lines (solid lines), and length reference lines (dotted lines) for indicating a plurality of individual appearance characteristics.
  • the server 10 may generate design data based on at least one of a fixed joint line and a length reference line corresponding to at least one menu selected according to a second user input.
  • the standard model 310 may be preset, for each object, with fixed joint lines and length reference lines corresponding to a plurality of individual appearance characteristics so that it can be used as a standard format for generating design data of the object. Therefore, when the user selects any one individual appearance characteristic of a specific object, the server 10 can generate design data corresponding to the object by using the corresponding fixed joint line or length reference line in the standard model 310.
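  • The per-object standard model described above behaves like a lookup table from each individual appearance characteristic to its fixed joint line or length reference line. A minimal sketch is given below; all dictionary keys and line names are illustrative assumptions, not values defined by the invention.

```python
# Illustrative sketch of a standard model (310) mapping each individual
# appearance characteristic to a fixed joint line or length reference line.
# All names are hypothetical; reference numerals echo the description only.
STANDARD_MODEL = {
    "top": {
        "silhouette": {                 # length reference lines (dotted)
            "loose": "silhouette_line_91",
            "regular": "silhouette_line_92",
            "slim": "silhouette_line_93",
        },
        "collar_neckline": {            # fixed joint lines (solid)
            "turtleneck": "collar_joint_line_51",
            "v_neck": "collar_joint_line_53",
        },
    },
}

def lookup_reference_line(obj_type: str, criterion: str, characteristic: str) -> str:
    """Return the joint/reference line used to draw the selected characteristic."""
    return STANDARD_MODEL[obj_type][criterion][characteristic]

line = lookup_reference_line("top", "silhouette", "slim")  # "silhouette_line_93"
```

  • In this sketch, the server would call `lookup_reference_line` once per menu selection and draw the returned line on the standard human body model.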
  • the design data of the top can be completed when individual appearance characteristics are determined from a plurality of appearance classification criteria related to the top.
  • the plurality of appearance classification criteria related to the top may include silhouette, collar & neckline, top length, opening, shoulder, sleeve, sleeve length, sleeve cuff, texture, pattern, color, and detail.
  • Individual appearance characteristics for each of these appearance classification criteria may be determined by the standard model 310 and user input. For convenience of explanation, the top is divided into a body part, a sleeve part, and the rest.
  • the appearance classification criteria related to the body part are silhouette, collar & neckline, top length, opening, and shoulder
  • the appearance classification criteria related to the sleeve part are sleeve, sleeve length, and sleeve cuff, and the remaining criteria are texture, pattern, color, and detail.
  • the body part of the top may mean the rest of the top excluding the sleeve, and may include an upper end and a lower end.
  • the upper end can be determined by the length reference line of the silhouette, the fixed joint line of the collar & neckline, and the fixed joint line of the shoulder
  • the lower end can be determined by the length reference line of the silhouette, the fixed joint line of the collar & neckline, and the length reference line of the top length.
  • the length reference lines related to the plurality of individual appearance characteristics of the silhouette may include a first silhouette length reference line 91, a second silhouette length reference line 92, and a third silhouette length reference line 93.
  • the first silhouette length reference line 91, the second silhouette length reference line 92, and the third silhouette length reference line 93 may correspond to loose, regular, and slim, respectively.
  • the fixed joint lines related to the plurality of individual appearance characteristics of the collar & neckline may include a first shoulder fixed joint line 50, a first collar fixed joint line 51, a second collar fixed joint line 52, a third collar fixed joint line 53, and a fourth collar fixed joint line 54.
  • the first collar fixed joint line 51 and the second collar fixed joint line 52 may be the fixed joint lines used when the collar & neckline can be expressed above the chest line, and may be a collar top line and a collar joining line, respectively.
  • the plurality of individual appearance characteristics that can be expressed through the first collar fixed joint line 51 and the second collar fixed joint line 52 may include Funnel, Turtleneck, Boat Neckline, Stand Collar, Mandarin Collar, Regular Straight Point Collar, and the like.
  • the third collar fixed joint line 53 may be a fixed joint line to which the lower end of the upper part is connected when the collar & neckline can be expressed above the chest line
  • the fourth collar fixed joint line 54 may be a fixed joint line to which the lower end of the body part is connected when the collar & neckline is a type that descends below the chest line.
  • depending on the type of the collar & neckline, either the third collar fixed joint line 53 or the fourth collar fixed joint line 54 may be used as the fixed joint line.
  • the plurality of individual appearance characteristics that can be expressed through the third collar fixed joint line 53 and the fourth collar fixed joint line 54 may include Tailored Jacket Collar, Convertible Collar, Sailor Collar, Lapel, Shawl Collar, Scoop Neckline, Surplice, and the like.
  • the design data of the collar & neckline may be generated as shown in FIG. 11(c); the width of the collar & neckline may change at the same rate as the body panel, while the vertical width may not change within a certain range. Of course, if there is a large difference in size between the standard human body model 11 and the user, the vertical width may also change.
  • the fixed joint line connecting the upper end and the lower end of the body part may further include the length reference line 80 of the first top length.
  • the length reference lines related to the plurality of individual appearance characteristics of the top length may include a first top length reference line 80 corresponding to crop, a second top length reference line 81 corresponding to short, a third top length reference line 82 corresponding to medium, a fourth top length reference line 83 corresponding to long, and a fifth top length reference line 84 corresponding to maxi.
  • the opening may be a hole in the top through which the user's body can pass, and may be determined immediately when design data of the upper portion described above is determined.
  • the fixed joint lines related to a plurality of individual external characteristics of the shoulder are a first shoulder fixed joint line 50 corresponding to a plain shoulder, and a second shoulder corresponding to a raglan shoulder (harter).
  • the body part of the top may be generated as shown in FIGS. 14 to 18 by the length reference line of the silhouette, the fixed joint line of the collar & neckline, the length reference line of the top length, and the fixed joint lines of the opening and the shoulder.
  • (a1) of FIG. 14 shows the upper end of the top, a body base to which the design data of FIG. 11(c) can be joined, determined according to the third collar fixed joint line 53 and the third silhouette length reference line 93 corresponding to the slim silhouette, and (a2) of FIG. 14 shows the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length.
  • (a1) and (a2) of FIG. 14 may be combined to form the body part of the top. If the user does not select a sleeve, the body part of the top may become the design data of the top.
  • (b1) of FIG. 14 shows the upper end of the top, a body base to be joined with a collar & neckline that descends below the chest line, determined according to the fourth collar fixed joint line 54 and the third silhouette length reference line 93 corresponding to the slim silhouette
  • (b2) of FIG. 14 shows the upper end of the top determined according to the surplice of the collar & neckline, the fourth collar fixed joint line 54, and the third silhouette length reference line 93 corresponding to the slim silhouette.
  • (b3) of FIG. 14 may be the lower end of the top determined according to the fourth collar fixed joint line 54 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length. (b1) and (b3) of FIG. 14 may be combined, or (b2) and (b3) may be combined, to become the body part of the top. If the user does not select a sleeve, the body part of the top may become the design data of the top.
  • (c) of FIG. 14 may be the body part of the top determined according to the plunging V neckline of the collar & neckline, the first top length reference line 80 corresponding to the crop of the top length, and the third silhouette length reference line 93 of the silhouette. If the user does not select a sleeve, the body part of the top may become the design data of the top.
  • (a) of FIG. 15 may be the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length; (b) of FIG. 15 may be the upper end of the top determined according to the V neckline of the collar & neckline, the third collar fixed joint line 53, and the third silhouette length reference line 93 corresponding to the slim silhouette; and (c) of FIG. 15 may be the body part of the top in which (a) and (b) are combined. If the user does not select a sleeve, the body part of the top may become the design data of the top.
  • (a) of FIG. 16 may be the body part of the top determined according to the plunging V neckline of the collar & neckline, the first top length reference line 80 corresponding to the crop of the top length, and the third silhouette length reference line 93 of the silhouette; (b1) and (b2) of FIG. 16 may be specific collar designs of the collar & neckline. (a) and (b2) may be combined to determine the body part (c1) of the top with a collar, and (a) and (b1) may be combined to determine the body part (c2) of the top with a collar. If the user does not select a sleeve, the body part of the top may become the design data of the top.
  • the lower end of the body part of the top may mainly be used in the form of (a) or (b).
  • the shape of (a) may be the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length; the shape of (b) may be the lower end of the top determined according to the fourth collar fixed joint line 54 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to the crop of the top length.
  • (a) of FIG. 18 may be a top in which the upper end, a body base to which the design data of FIG. 11(c) can be joined, determined according to the third collar fixed joint line 53 and the first silhouette length reference line 91 corresponding to the loose silhouette, is combined with the lower end of the top determined according to the first top length reference line 80 corresponding to the crop of the top length.
  • (b) may be a top in which the upper end of (a) is combined with the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the second top length reference line 81 corresponding to the short of the top length.
  • (c) may be a top in which the upper end of (a) is combined with the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the third top length reference line 82 corresponding to the medium of the top length.
  • (d) may be a top in which the upper end of (a) is combined with the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the fourth top length reference line 83 corresponding to the long of the top length.
  • (e) may be a top in which the upper end of (a) is combined with the lower end of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the fifth top length reference line 84 corresponding to the maxi of the top length. That is, by combining the loose, regular, and slim silhouettes with the crop, short, medium, long, and maxi top lengths, the body part of the top may have a total of 15 outlines. Therefore, the user can easily create various design data.
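  • The combination count above (3 silhouettes × 5 top lengths = 15 outlines) can be checked with a short sketch; the label strings are illustrative only.

```python
from itertools import product

# Sketch of the combination described above: 3 silhouettes x 5 top lengths
# yield 15 possible outlines for the body part of a top.
silhouettes = ["loose", "regular", "slim"]
top_lengths = ["crop", "short", "medium", "long", "maxi"]

outlines = [f"{s}/{l}" for s, l in product(silhouettes, top_lengths)]
print(len(outlines))  # 15
```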
  • for the sleeve part, design data may be determined based on the appearance classification criteria including sleeve, sleeve length, and sleeve cuff and a plurality of individual appearance characteristics corresponding thereto.
  • a plurality of individual appearance characteristics corresponding to the sleeve may be the presence or absence of the sleeve.
  • the length reference lines related to the plurality of individual appearance characteristics of the sleeve length may include a first sleeve length reference line 56 corresponding to an extra-short sleeve, a second sleeve length reference line 57 corresponding to a short sleeve, a third sleeve length reference line 58 corresponding to a medium sleeve, and a fourth sleeve length reference line 59 corresponding to a long sleeve.
  • the sleeve length may be a length including the sleeve cuff length.
  • when the shoulder is Dolman, the second sleeve length reference line 57 corresponding to the short sleeve cannot be selected, and if a sleeve cuff is not separately selected, Shirt Cuffs as shown in FIG. 19B may be automatically set.
  • if the sleeve length is not selected, the top may become Sleeveless without a sleeve.
  • the sleeve cuff may be sized to cover the end of the sleeve, the sleeve length may also vary according to the user's body size, and the size of the sleeve may change at the same ratio as the body part of the top. Also, the size of the sleeve cuff may vary according to the size of the user's wrist circumference. In addition, the length of the end of the sleeve and part of the width of the sleeve cuff may be adjustable.
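  • The sleeve selection rules above (a Dolman shoulder excludes the short-sleeve line, an unselected cuff defaults to Shirt Cuffs, and no sleeve length means Sleeveless) can be sketched as a small validation function. The function and option names are hypothetical, not part of the invention's interface.

```python
from typing import Optional

def resolve_sleeve(shoulder: str, sleeve_length: Optional[str], cuff: Optional[str]) -> dict:
    """Hedged sketch of the sleeve rules described in the text."""
    if sleeve_length is None:
        # No sleeve length selected -> sleeveless top.
        return {"sleeve": "sleeveless"}
    if shoulder == "dolman" and sleeve_length == "short":
        # The short-sleeve reference line cannot be selected with a Dolman shoulder.
        raise ValueError("short sleeve cannot be selected with a Dolman shoulder")
    return {
        "sleeve": sleeve_length,
        # If no cuff is selected, Shirt Cuffs are set automatically.
        "cuff": cuff if cuff is not None else "shirt_cuffs",
    }
```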
  • design data may be determined based on an appearance classification criterion including a silhouette, a length of a bottom, and a waist position for a bottom and a plurality of individual appearance characteristics corresponding thereto.
  • the fixed joint lines related to the plurality of individual appearance characteristics of the waist position may include a first waist fixed joint line 70 corresponding to the high waist of the skirt, a second waist fixed joint line 71 corresponding to the high waist of the pants, a third waist fixed joint line 72 corresponding to the normal waist of the pants, a fourth waist fixed joint line 73 corresponding to the normal waist of the skirt, a fifth waist fixed joint line 74 corresponding to a low waist, and a sixth waist fixed joint line 75 corresponding to the low waist of the pants.
  • the length reference lines related to the plurality of individual appearance characteristics of the bottom length may include a first bottom length reference line 76 corresponding to extra-short, a second bottom length reference line 77 corresponding to short, and so on.
  • the skirt design data of (c) may be generated according to the fourth waist fixed joint line 73 corresponding to the normal waist of the skirt and the second bottom length reference line 77 corresponding to short.
  • the waist position may fit precisely on the standard human body model 11, and in the case of a dress, the end line of the top and the waist position of the bottom must match accurately. As the user's body size changes, the size of the bottom may also change in the same way.
  • design data of a one-piece may be generated by applying the same method as that by which the top and bottom are determined.
  • texture, pattern, and color, which are universal appearance classification criteria common to tops, bottoms, and one-pieces, are well known, so the various types of textures, patterns, and colors applied to clothing may each be an individual appearance characteristic and may be applied to the design data of a top, bottom, or one-piece (e.g., cotton, stripe pattern, red) according to the user's selection.
  • the plurality of individual appearance characteristics of detail, which is a universal appearance classification criterion common to tops, bottoms, and one-pieces, may be various types of clothing accessories.
  • the plurality of individual appearance characteristics of the detail may include Pleats, Shirring, Gather, Trimming, Fur, Bow, Patch Pocket, Cubic, Quilting, Ruffle, Frill, Flounce, Banding, and Draw String. That is, (a) Pocket, (b) Bow, (c) String, (d) Set-in Pocket, and (e) Zipper of FIG. 21 may be added to the design data of a top, bottom, or one-piece.
  • the above-mentioned change of the standard model 310 according to the user's body size may be performed automatically upon input of the user's body size, and accordingly, the appearance of the standard human body model 11 of the standard model 310, the position/length of the fixed joint lines, and the position/length of the reference lines may vary.
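  • The automatic adjustment above can be sketched as proportional scaling of every joint/reference line when the user's body size is entered. The height-based ratio, line names, and numeric values below are illustrative assumptions only; the invention does not specify this formula.

```python
# Hedged sketch: scale each line's position and length by the ratio between
# the user's body size and the standard human body model's size.
# Line names and measurements (in arbitrary units) are hypothetical.

def scale_lines(lines: dict, standard_height: float, user_height: float) -> dict:
    """Scale (position, length) pairs at the same ratio as the body model."""
    ratio = user_height / standard_height
    return {
        name: {"position": pos * ratio, "length": length * ratio}
        for name, (pos, length) in lines.items()
    }

scaled = scale_lines({"top_length_line_80": (40.0, 30.0)}, 170.0, 187.0)
```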
  • the server 10 may display the design data generated in the customizing interface in operation 45. Through this, the user can check the customized design data in real time and easily purchase or change it.
  • the server 10 may change the design data based on the third user input and the standard model 310 detected by the customizing interface 500. That is, the user can freely change the generated design data until it is saved or terminated.
  • FIG. 22 is a flowchart illustrating a method of providing a recommended object according to an embodiment of the present invention.
  • FIG. 23 is an exemplary view illustrating a method of providing a recommended object according to an embodiment of the present invention. The operations of FIG. 22 may be performed by the server 10 of FIGS. 1 and 2.
  • the server 10 may generate design data in operation 181.
  • the design data generation may be the same as the operation performed in FIG. 8.
  • design data 181 may be generated as shown in FIG. 23.
  • operation 181 may be omitted, and operation 182 may be directly performed based on the object.
  • in operation 182, the server 10 may extract, based on the matching algorithm, a recommended object corresponding to an abstract characteristic matched with a combination of appearance classification criteria of the object or of the generated design data.
  • a recommended object may be extracted by matching abstract characteristics based on an object selected by a user, or a recommended object may be extracted by matching abstract characteristics based on design data generated according to a user's input.
  • in FIG. 23, the three tops arranged in the direction of one arrow may be recommended objects, and the three tops arranged in the direction of the other arrow may be design data of the top changed according to the recommended objects.
  • the server 10 may extract the first recommended object 182 when the abstract characteristic corresponding to the object or the generated design data is "neat", extract the second recommended object 183 when the corresponding abstract characteristic is "individual", and extract the third recommended object 184 when the corresponding abstract characteristic is "formal".
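  • The matching step above amounts to a mapping from an abstract characteristic to a recommended object. A minimal sketch follows; the table entries mirror the reference numerals of the description but are otherwise illustrative, and how the abstract characteristic itself is inferred is outside this sketch.

```python
# Illustrative mapping from abstract characteristics to recommended objects,
# echoing the "neat"/"individual"/"formal" examples in the description.
RECOMMENDATION_TABLE = {
    "neat": "recommended_object_182",
    "individual": "recommended_object_183",
    "formal": "recommended_object_184",
}

def recommend(abstract_characteristic: str) -> str:
    """Return the recommended object matched to the abstract characteristic."""
    return RECOMMENDATION_TABLE[abstract_characteristic]
```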
  • the server 10 may provide design data corresponding to the recommended object extracted in operation 183 to the user through the customization interface.
  • the server 10 may provide the user with design data 185 to which a color is added based on the first recommended object 182, design data 186 to which text is added based on the second recommended object 183, and design data 187 to which pockets are added based on the third recommended object 184.
  • the server 10 may provide all three changed design data or may provide one or more of them.
  • the server 10 may change the design data of the recommended object based on a fourth user input detected in the customizing interface in operation 184 and the preset standard model, and may display the changed design data in operation 185.
  • the user may further customize the changed design data provided through the server 10.
  • an object selected by the user or an object suitable for the user can be recommended by grasping the sensibility contained in the generated design data, and the user can easily change the recommended object through the customizing interface.
  • An object design customizing apparatus includes one or more computers and performs the aforementioned object design customization method.
  • the object design customization method of the present invention described above may be implemented as a program (or application) and stored in a medium to be executed by being combined with a computer that is hardware.
  • The program may reside on any type of computer-readable recording medium well known in the art to which the present invention pertains, such as RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, or a CD-ROM.

Abstract

Provided are a method for obtaining user interest information on the basis of input image data and a method for customizing a design of an object. A method for obtaining user interest information on the basis of input image data, according to one embodiment of the present invention, comprises: a step in which a server inputs first input image data into an appearance characteristic recognition model and calculates individual appearance characteristics for a plurality of appearance classification criteria; a step in which the server generates first appearance description data by combining a plurality of the individual appearance characteristics with respect to the first input image data; and a step in which the server generates and outputs first output image data on the basis of the first appearance description data, wherein the first input image data is image data inputted by a specific user, and the appearance classification criteria are specific classification criteria for describing the appearance of a specific object and may include a plurality of individual appearance characteristics for expressing various appearance characteristics within the same classification criteria of the object.

Description

Method for obtaining user interest information based on input image data and method for customizing object design
The present invention relates to a method for obtaining user interest information based on input image data and a method for customizing an object design.
Recently, consumers do not blindly select the objects (or products, for example, clothing) that brands propose; rather, they want to compose their own distinctive style and lead fashion trends. In other words, people want products that transcend brands to express their lifestyle and reveal their individuality, that is, products customized just for them. In addition, people pursue the scarcity of differentiated products rather than mass-market brands centered on popular items.
Accordingly, brands must shift their thinking from "how to sell well what has already been made" to "how to efficiently make what sells well" in line with the changed consumption market and purchase desires, and there is a growing need to move from simply selling individual items centered on key sales items toward product compositions that reflect consumers' sensibilities and tastes.
In particular, the trend of personalization services is spreading, centered on fashion consumer products. For example, the need for personalized designs through which fashion consumers can deliver their own message, driven by desires for individualization and differentiation, is gradually increasing. However, although the design and pattern lineups of existing fashion apparel products are based on consumer trends, the market is formed around producers and production systems, so such products cannot respond quickly to emerging issues related to consumer needs and trends.
Meanwhile, with the recent development of the Internet, the number of Internet users who purchase objects (or products) through electronic commerce is increasing. As Internet shopping malls have developed into an important means of commerce, the number of Internet shopping malls dealing in clothing and the number of products they handle are also increasing. However, the current methods of displaying and selling clothing in Internet shopping malls are uniform and passive: clothing is simply classified by brand or type, and image information and detailed descriptions for each item (product code, brand name, color, material, manufacturer, product description, etc.) are exposed on an Internet site such as a web site to await the Internet user's choice. In addition, a method of exposing clothing information centered on products purchased by many people is also used, but this refers to collective purchase records and has the problem of not reflecting each individual's particular taste.
In addition, the existing method of obtaining user interest information through images relies on information tagged directly by the user, so if the user tags an image with an incorrect keyword, the acquisition result becomes inaccurate. Moreover, since the keywords defined by each user may differ, the result of obtaining interest information varies depending on the keywords selected by the user who inputs the image.
To solve the above-described problems, the present invention is intended to provide a method and program for obtaining user interest information based on input image data, which obtains the user's interest information by analyzing image data input by the user.
In addition, the present invention is intended to provide a method and program for obtaining user interest information based on input image data, which outputs specific image data to a user and allows the user to modify the output image data, so that the user's interest information can be obtained more accurately through the modified information.
In addition, the present invention is intended to provide a method and program by which a user can easily customize an object design through a customizing interface.
In addition, the present invention is intended to provide a method and program for customizing an object design by using a plurality of appearance classification criteria and individual appearance characteristics corresponding to the object.
In addition, the present invention is intended to provide a method and program for customizing an object design by using a preset standard model.
In addition, the present invention is intended to provide a method and program for recommending an object suitable for a user by using abstract characteristics corresponding to the object or the user's design data.
The problems to be solved by the present invention are not limited to those mentioned above, and other problems not mentioned will be clearly understood by those skilled in the art from the following description.
A method for obtaining user interest information based on input image data according to an embodiment of the present invention for solving the above-described problems includes: calculating, by a server, individual appearance characteristics for a plurality of appearance classification criteria by inputting first input image data into an appearance characteristic recognition model; generating, by the server, first appearance description data by combining a plurality of individual appearance characteristics of the first input image data; and generating and outputting, by the server, first output image data based on the first appearance description data, wherein the first input image data is image data input by a specific user, and the appearance classification criteria are specific classification criteria for describing the appearance of a specific object and may include a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object.
In addition, the first input image data may be image data of a specific article of a specific object received from the user, and the first output image data may be image data of a virtual article of the specific object generated based on the first appearance description data.
In addition, the generating of the first appearance description data may include extracting code values corresponding to the plurality of individual appearance characteristics of the first input image data, and combining the plurality of code values to generate the first appearance description data in the form of a code string.
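The code-string construction described above can be sketched as follows: each individual appearance characteristic maps to a code value, and the codes are concatenated in a fixed criterion order. All code values, criterion names, and the separator are hypothetical illustrations, not values defined by the invention.

```python
# Hypothetical per-criterion code tables; the real appearance characteristic
# recognition model would supply the characteristics, not this dict.
CODE_TABLE = {
    "silhouette": {"loose": "S1", "regular": "S2", "slim": "S3"},
    "top_length": {"crop": "L1", "short": "L2", "medium": "L3"},
    "neckline": {"round": "N1", "v_neck": "N2"},
}
CRITERION_ORDER = ["silhouette", "top_length", "neckline"]

def build_description(characteristics: dict) -> str:
    """Combine per-criterion code values into a single code string."""
    return "-".join(CODE_TABLE[c][characteristics[c]] for c in CRITERION_ORDER)

desc = build_description({"silhouette": "slim", "top_length": "crop", "neckline": "v_neck"})
# desc == "S3-L1-N2"
```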
In addition, the first output image data may be image data of a virtual article including the plurality of individual appearance characteristics included in the first appearance description data.
In addition, the method may further include calculating, by the server, individual appearance characteristics for the plurality of appearance classification criteria by inputting second input image data into the appearance characteristic recognition model, and generating, by the server, second appearance description data by combining a plurality of individual appearance characteristics of the second input image data, wherein the second input image data may be image data in which the first output image data has been modified by the user.
또한, 상기 서버가 상기 제1 외형서술데이터 또는 상기 제2 외형서술데이터를 상기 사용자의 관심정보로 저장하는 단계를 더 포함할 수 있다.In addition, the server may further include storing the first appearance description data or the second appearance description data as the user's interest information.
본 발명의 다른 실시예에 따른 입력영상데이터 기반 사용자 관심정보 획득 프로그램은, 하드웨어와 결합되어 상기 언급된 사용자 관심정보 획득 방법을 실행하며, 기록매체에 저장된다.A program for obtaining user interest information based on input image data according to another embodiment of the present invention is combined with hardware to execute the aforementioned method for obtaining user interest information, and is stored in a recording medium.
An object design customizing method according to another embodiment of the present invention includes: determining, by a server, an object based on a first user input; providing, by the server, a customizing interface based on a plurality of appearance classification criteria corresponding to the object and a plurality of individual appearance characteristics respectively corresponding to the plurality of appearance classification criteria; and generating, by the server, design data of the object based on a second user input detected in the customizing interface and a preset standard model, wherein the appearance classification criterion is a specific classification criterion for describing the appearance of a specific object and includes a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object, and the customizing interface may include the design data and a plurality of menus matched with the plurality of individual appearance characteristics corresponding to the object.
In addition, the method may further include displaying, by the server, the generated design data in the customizing interface.
In addition, the second user input may be an input for selecting at least one menu from among the plurality of menus.
In addition, the standard model may include at least one of a standard human body model, a fixed junction line for representing the plurality of individual appearance characteristics, and a length reference line, and the method may further include generating, by the server, the design data based on at least one of a fixed junction line and a length reference line corresponding to the at least one menu selected according to the second user input.
In addition, the method may further include changing, by the server, the design data based on a third user input detected in the customizing interface and the standard model.
In addition, the method may further include extracting, by the server, based on a matching algorithm, a recommended object corresponding to a combination of appearance classification criteria matched with an abstract characteristic corresponding to the object or the generated design data, and providing design data corresponding to the extracted recommended object to the user through the customizing interface.
In addition, the method may further include changing, by the server, the design data of the recommended object based on a fourth user input detected in the customizing interface and the preset standard model.
An object design customizing program according to another embodiment of the present invention is combined with hardware to execute the above-mentioned object design customizing method, and is stored in a recording medium.
Other specific details of the present invention are included in the detailed description and drawings.
According to the present invention as described above, the user's interest information can be accurately obtained by analyzing the image data input by the user and the image data modified by the user.
In addition, according to the present invention, by storing the user's interest information in the form of text-based appearance description data obtained by analyzing image data, the user's interest information can be obtained and stored efficiently.
In addition, according to the present invention, by providing the user with a customizing interface, the user can easily create and change the design of an object.
In addition, according to the present invention, the efficiency of customization can be increased by using a plurality of appearance classification criteria and individual appearance characteristics corresponding to the object.
In addition, according to the present invention, the customizing interface grants the user freedom of design, while the preset standard model is used to increase the processing speed of the customizing method.
In addition, according to the present invention, the user can easily and conveniently request production of an object reflecting a desired design through the customizing interface, thereby maximizing user satisfaction.
In addition, according to the present invention, users who want a distinctive, one-of-a-kind design are provided with objects that only they can own, which increases the collectible value of the objects and amplifies the users' interest.
The effects of the present invention are not limited to the effects mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the following description.
FIG. 1 is a block diagram showing a server and related components according to an embodiment of the present invention.
FIG. 2 is a block diagram showing a server including a per-object appearance characteristic recognition model according to an embodiment of the present invention.
FIG. 3 is a flowchart of a method of obtaining user interest information based on input image data according to an embodiment of the present invention.
FIG. 4 is a block diagram of a system for obtaining user interest information based on input image data according to an embodiment of the present invention.
FIG. 5 is a flowchart of a method of generating appearance description data according to an embodiment of the present invention.
FIG. 6 is a flowchart of a method of obtaining user interest information based on input image data, further including a step of receiving second input image data, according to an embodiment of the present invention.
FIG. 7 is an exemplary diagram for describing a standard model according to an embodiment of the present invention.
FIG. 8 is a flowchart illustrating an object design customizing method according to an embodiment of the present invention.
FIGS. 9 to 21 are exemplary diagrams for describing an object design customizing method according to an embodiment of the present invention.
FIG. 22 is a flowchart illustrating a method of providing a recommended object according to an embodiment of the present invention.
FIG. 23 is an exemplary diagram for describing a method of providing a recommended object according to an embodiment of the present invention.
Advantages and features of the present invention, and methods of achieving them, will become apparent with reference to the embodiments described below in detail together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various different forms; the present embodiments are provided only so that the disclosure of the present invention will be complete and will fully convey the scope of the invention to those of ordinary skill in the art to which the present invention pertains, and the present invention is defined only by the scope of the claims.
The terms used in the present specification are for describing embodiments and are not intended to limit the present invention. In this specification, the singular form also includes the plural form unless the context clearly indicates otherwise. As used herein, "comprises" and/or "comprising" do not exclude the presence or addition of one or more elements other than the mentioned elements. Throughout the specification, the same reference numerals refer to the same elements, and "and/or" includes each of the mentioned elements and every combination of one or more of them. Although "first", "second", and the like are used to describe various elements, these elements are of course not limited by these terms; the terms are used only to distinguish one element from another. Therefore, a first element mentioned below may of course be a second element within the technical idea of the present invention.
Unless otherwise defined, all terms (including technical and scientific terms) used in this specification have the meanings commonly understood by those of ordinary skill in the art to which the present invention pertains. In addition, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless explicitly and specifically defined.
In this specification, "computer" includes all of the various devices capable of performing arithmetic processing and providing results to a user. For example, a computer may be not only a desktop PC or a notebook but also a smartphone, a tablet PC, a cellular phone, a PCS phone (Personal Communication Service phone), a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm PC, a personal digital assistant (PDA), or the like. In addition, when a head mounted display (HMD) device includes a computing function, the HMD device may be a computer. The computer may also correspond to a server that receives a request from a client and performs information processing.
In this specification, "client" refers to any device with a communication function on which users can install and use a program (or application). That is, the client device may include at least one of telecommunication devices such as a smartphone, a tablet, a PDA, a laptop, a smartwatch, and a smart camera, and a remote controller, but is not limited thereto.
In this specification, "object" refers to an article of a specific classification or category on which a search is performed. For example, when a user wants to search for an image of a desired article in a shopping mall and performs a clothing search among the article categories, the object may be clothing.
In this specification, "image data" (or "design data") refers to a two-dimensional or three-dimensional, static or dynamic image including a specific object. That is, "image data" may be static image data consisting of a single frame, or dynamic image data (i.e., video data) in which a plurality of frames are consecutive.
In this specification, "training image data" refers to image data used for training a learning model.
In this specification, "appearance classification criterion" refers to a classification criterion of appearance expression necessary for describing the appearance of a specific object or for annotation. That is, an "appearance classification criterion" is a specific classification criterion for describing the appearance of a specific object and includes a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object. For example, when the object is clothing, the appearance classification criteria are classification criteria for the appearance of the clothing and may correspond to pattern, color, fit, length, and the like. That is, as the number of appearance classification criteria for a specific object increases, the appearance of a specific article belonging to the object can be described in greater detail.
In this specification, "individual appearance characteristic" refers to each of the various characteristics included in a specific appearance classification criterion. For example, when the appearance classification criterion is color, the individual appearance characteristics are the various individual colors.
In this specification, "expert client" 30 refers to the client of an expert who assigns individual appearance characteristics to training image data (i.e., labels the training image data) or assigns to image data individual appearance characteristics within an unlearned appearance classification criterion.
In this specification, "abstract characteristic" refers to an abstract characteristic assigned to a specific object. For example, an "abstract characteristic" may be an emotional characteristic of a specific object (for example, in the case of clothing, an emotional or trend expression such as "vintage"). Also, for example, when the image data is a video, an "abstract characteristic" may be the meaning of a shape change or a motion.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram showing a server and related components according to an embodiment of the present invention. FIG. 2 is a block diagram showing a server including a per-object appearance characteristic recognition model according to an embodiment of the present invention.
Before describing the method for obtaining user interest information based on input image data and the object design customizing method of the present invention, the image data search method of the server 10 is described first. Here, the image data search method of the server 10 refers to a method of accurately extracting the image data desired by a user based on abstract terms representing the appearance of a specific object. The object design customizing method may be performed based on this image data search method; therefore, the image data search method is described first.
Referring to FIG. 1, an image data search method according to an embodiment of the present invention includes: calculating, by the server 10, individual appearance characteristics for a plurality of appearance classification criteria by inputting image data into the appearance characteristic recognition model 100; generating, by the server 10, appearance description data by combining a plurality of individual appearance characteristics of the image data; and, as the server 10 receives a search keyword from a specific user, extracting, by the matching algorithm 200, image data corresponding to a combination of appearance classification criteria matched with an abstract characteristic corresponding to the search keyword.
In an embodiment, the server 10 may store the plurality of appearance classification criteria, the plurality of individual appearance characteristics, abstract characteristics, appearance description data, extracted image data, customized design data, and the like in the database 400.
In an embodiment, the server 10 inputs image data into the appearance characteristic recognition model 100 to calculate individual appearance characteristics for a plurality of appearance classification criteria. That is, the server 10 provides new image data whose appearance characteristics have not yet been analyzed to the appearance characteristic recognition model 100 and calculates an individual appearance characteristic for each appearance classification criterion of a specific object.
In an embodiment, as shown in FIG. 1, the appearance characteristic recognition model 100 includes a plurality of individual characteristic recognition modules 110, each determining a different appearance classification criterion. That is, the appearance characteristic recognition model 100 includes a plurality of individual characteristic recognition modules 110, each specialized in recognizing one appearance classification criterion. The more appearance classification criteria a specific object has, the more individual characteristic recognition modules 110 the server 10 includes in the appearance characteristic recognition model 100. Each individual characteristic recognition module 110 calculates the individual appearance characteristic of the image data for one specific appearance classification criterion.
In an embodiment, in the individual appearance characteristic calculating step, the image data is input to each individual characteristic recognition module 110 in the appearance characteristic recognition model 100, and a plurality of individual appearance characteristics of the image data are calculated. Through this, the server 10 acquires the individual appearance characteristic of every appearance classification criterion for the image data.
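As a minimal illustrative sketch of this step (the module structure, criterion names, and stub classifiers below are hypothetical; the disclosure does not specify an implementation), running every per-criterion module over one piece of image data could look like:

```python
# Sketch of the appearance characteristic recognition model 100: one
# recognition module per appearance classification criterion. Criterion names
# ("color", "fit", "length") and the stub classifiers are illustrative only.

class IndividualCharacteristicModule:
    """Stands in for one trained individual characteristic recognition module 110."""
    def __init__(self, criterion, classifier):
        self.criterion = criterion    # the appearance classification criterion it handles
        self.classifier = classifier  # would be a deep-learning model in practice

    def recognize(self, image_data):
        # In the real system this would run inference on the image data.
        return self.classifier(image_data)

class AppearanceRecognitionModel:
    """Applies every per-criterion module to one piece of image data."""
    def __init__(self, modules):
        self.modules = modules

    def calculate_characteristics(self, image_data):
        # One individual appearance characteristic per appearance classification criterion.
        return {m.criterion: m.recognize(image_data) for m in self.modules}

# Stub classifiers standing in for trained networks.
model = AppearanceRecognitionModel([
    IndividualCharacteristicModule("color",  lambda img: img["dominant_color"]),
    IndividualCharacteristicModule("fit",    lambda img: "slim"),
    IndividualCharacteristicModule("length", lambda img: "long"),
])

shirt_image = {"dominant_color": "navy"}  # placeholder for real image data
characteristics = model.calculate_characteristics(shirt_image)
```

The result is one characteristic per criterion, which is the input to the appearance description data generation described later.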
In addition, in an embodiment, the individual characteristic recognition module 110 is trained through a deep learning model by matching a plurality of pieces of training image data with individual appearance characteristics of one specific appearance classification criterion. That is, the individual characteristic recognition module 110 is built with a specific deep learning algorithm and learns by matching the training image data with one specific criterion among the plurality of appearance classification criteria.
To this end, in an embodiment, the server 10 may perform the following process of training each individual characteristic recognition module 110.
In an embodiment, first, the server 10 acquires a plurality of pieces of training image data for a specific object. For example, when the object is a specific clothing type (for example, a shirt), the server 10 acquires images of various shirts. The training image data may be selected by an expert from among previously stored image data, or the server 10 may newly acquire object images suitable for training.
In an embodiment, the server 10 acquires a definition of each appearance classification criterion and a plurality of individual appearance characteristics for each appearance classification criterion. That is, by setting a plurality of appearance classification criteria, the server 10 sets the initial number of individual characteristic recognition modules 110. Then, by setting the plurality of individual appearance characteristics within each appearance classification criterion, the server 10 sets, for each criterion, the types of features with which the training image data will be labeled.
In an embodiment, the server 10 may receive, from an expert client 30 specialized in analyzing the appearance of a specific object, a plurality of appearance classification criteria for analyzing the appearance of the object and a plurality of individual appearance characteristics within each criterion. For example, when building the appearance characteristic recognition model 100 for clothing, the server 10 may receive the appearance classification criteria and the individual appearance characteristics included therein from the client of a designer who is a clothing expert.
Thereafter, the server 10 labels the training image data with the plurality of individual appearance characteristics of each appearance classification criterion. That is, for each piece of training image data, the server 10 receives and matches at least one individual appearance characteristic for each of the plurality of appearance classification criteria. For example, when 10 appearance classification criteria are set for a specific object, the server 10 receives one individual appearance characteristic for each of the 10 criteria for each piece of training image data including the object, and forms a training dataset in which each piece of training image data is matched with its 10 individual appearance characteristics.
Thereafter, the server 10 performs training by matching the training image data with the labeled individual appearance characteristics of one specific appearance classification criterion. That is, when the server 10 trains the individual characteristic recognition module 110 for appearance classification criterion A, it extracts from the training dataset only the training image data and the individual appearance characteristics of criterion A matched to it, and inputs them into the deep learning model. Through this, the server 10 builds an individual characteristic recognition module 110 capable of recognizing the individual appearance characteristics of each appearance classification criterion.
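The labeling and per-criterion slicing described above can be sketched as follows (a hedged illustration with hypothetical file names, criterion names, and labels; the actual deep learning algorithm is left unspecified here, as in the disclosure):

```python
# Each record pairs one piece of training image data with one individual
# appearance characteristic per appearance classification criterion.
# All names and labels below are illustrative, not from the disclosure.
training_dataset = [
    {"image": "shirt_001.jpg", "labels": {"color": "navy",  "fit": "slim",  "length": "long"}},
    {"image": "shirt_002.jpg", "labels": {"color": "white", "fit": "loose", "length": "short"}},
]

def slice_for_criterion(dataset, criterion):
    """Extract only the (image, label) pairs for one criterion, e.g. criterion A."""
    return [(rec["image"], rec["labels"][criterion]) for rec in dataset]

def train_module(dataset, criterion):
    # Placeholder for training a deep-learning classifier on the sliced pairs;
    # here we only return the pairs that would be fed to the learner.
    pairs = slice_for_criterion(dataset, criterion)
    return {"criterion": criterion, "num_examples": len(pairs), "pairs": pairs}

# Training the module for the "color" criterion uses only the color labels.
color_module = train_module(training_dataset, "color")
```

Repeating `train_module` once per criterion yields the set of per-criterion modules that together form one appearance characteristic recognition model.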
In addition, in an embodiment, referring to FIG. 2, the appearance characteristic recognition model 100 includes a different combination of individual characteristic recognition modules 110 for each object type. For example, since fashion accessory types belonging to the same major category (for example, shoes, wallets, and bags) have different appearance classification criteria, the server 10 creates a combination of individual characteristic recognition modules 110 for each object type, thereby creating a specialized appearance characteristic recognition model for recognizing the appearance of that specific object.
In addition, in an embodiment, the appearance characteristic recognition models 100 for a plurality of objects may share a specific individual characteristic recognition module 110. For example, when an individual characteristic recognition module 110 performs color recognition, the color recognition module can be used universally regardless of the object type, so the server 10 may use a single universal color recognition module across the plurality of per-object appearance characteristic recognition models 100.
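A small sketch of this composition pattern (object types, criteria, and module placeholders are hypothetical) shows per-object-type models that each hold their own type-specific modules while reusing one shared color module:

```python
# Hedged sketch: per-object-type recognition models sharing a universal module.
# Strings stand in for individual characteristic recognition modules 110.

shared_color_module = "universal-color-module"  # one instance reused everywhere

model_registry = {
    # Each object type gets its own combination of per-criterion modules,
    # but the universal color module is the very same object in every model.
    "shoes": {"color": shared_color_module, "heel_height": "heel-module"},
    "bags":  {"color": shared_color_module, "strap_type":  "strap-module"},
}

# The color module is shared (identical object); the other modules are type-specific.
shares_color = model_registry["shoes"]["color"] is model_registry["bags"]["color"]
```

Sharing the module this way means a single trained color recognizer serves every per-object model, while criteria unique to one object type keep their own modules.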
In an embodiment, the server 10 generates appearance description data by combining or listing the plurality of individual appearance characteristics of the image data. When the appearance classification criteria for a specific object are divided in detail, the appearance description data describes the appearance of the object concretely through the individual appearance characteristics.
In an embodiment, the appearance description data generating step includes extracting code values corresponding to the plurality of individual appearance characteristics of the image data, and combining the plurality of code values to generate appearance description data in the form of a code string. That is, by encoding the individual appearance characteristics, the server 10 can generate the appearance description data as a code string, through which the appearance description data can be processed efficiently.
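The code-string generation can be sketched as follows (the code table, criterion order, and separator are hypothetical; the disclosure does not specify a concrete encoding):

```python
# Hedged sketch of generating appearance description data as a code string:
# each individual appearance characteristic maps to a short code value, and the
# codes are concatenated in a fixed criterion order. All tables are illustrative.

CODE_TABLE = {
    "color":  {"navy": "C01", "white": "C02"},
    "fit":    {"slim": "F01", "loose": "F02"},
    "length": {"short": "L01", "long": "L02"},
}
CRITERION_ORDER = ["color", "fit", "length"]  # fixed order keeps code strings comparable

def to_description_code(characteristics):
    """Combine the per-criterion code values into one code-string description."""
    codes = [CODE_TABLE[c][characteristics[c]] for c in CRITERION_ORDER]
    return "-".join(codes)

description = to_description_code({"color": "navy", "fit": "slim", "length": "long"})
```

Because every image reduces to a short, fixed-order string, storing and comparing appearance descriptions becomes a cheap string operation rather than repeated image analysis.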
또한, 다른 일 실시예에서, 개별특성인식모듈(110)이 구축되지 않은, 특정한 대상체의 미학습 외형분류기준이 존재하는 경우(예를 들어, 대상체의 외형분류기준 중에서 딥러닝 학습모델을 통해 인식하기 어려운 것이 존재하거나 새로운 외형분류기준이 생성됨에 따라 아직 개별특성인식모듈(110)이 구축되지 못한 경우), 서버(10)는 전문가클라이언트 또는 영상제공자 클라이언트(40)로부터 해당 미학습 외형분류기준에 대해 영상데이터의 개별외형특성을 입력받는다.In addition, in another embodiment, when there is an unlearned appearance classification standard of a specific object for which the individual characteristic recognition module 110 is not constructed (for example, recognition through a deep learning learning model among the external classification criteria of the object) If there is something difficult to do or the individual characteristic recognition module 110 has not yet been constructed due to the creation of a new external classification standard), the server 10 is based on the unlearned external classification standard from the expert client or the image provider client 40. For each image data, the individual appearance characteristics are input.
Specifically, in the appearance description data generation step (S400), the server 10 generates the appearance description data by combining input individual appearance characteristics and computed individual appearance characteristics. The input individual appearance characteristics are obtained for the unlearned appearance classification criterion from the image provider client 40 that provided the image data or from an expert client, and the computed individual appearance characteristics are produced by inputting the image data into the individual characteristic recognition modules 110.
In one embodiment, when the server 10 receives a search keyword from a specific user, the matching algorithm 200 extracts image data corresponding to the combination of appearance classification criteria matched to the abstract characteristic corresponding to the search keyword (S600). When a user wants to search for image data based on a search keyword that is one of the abstract characteristics of a specific object, or a search keyword judged similar to such an abstract characteristic, the server 10 extracts from the matching algorithm 200 the combination of appearance classification criteria matched to the abstract characteristic corresponding to the search keyword, and then extracts the image data whose appearance description data contains that combination.
In one embodiment, an abstract characteristic may be matched to a plurality of individual appearance characteristics of a particular appearance classification criterion. Also, when a particular appearance classification criterion is not considered in defining a particular abstract characteristic, the server 10 may leave that criterion unmatched to the abstract characteristic. For example, if appearance classification criterion 1 need not be considered in defining abstract characteristic X (that is, an object having any of the individual appearance characteristics of criterion 1 can fall under abstract characteristic X), the server 10 may not match criterion 1 to abstract characteristic X. The server 10 may also match a plurality of individual appearance characteristics of appearance classification criterion 2 to abstract characteristic X.
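A minimal sketch of this matching behavior follows. It assumes, purely for illustration, that the matching algorithm 200 stores each abstract characteristic as a mapping from considered criteria to the set of individual appearance characteristics that satisfy it; criteria that are not considered (like criterion 1 above) are simply omitted and therefore match any value. All names are placeholders.

```python
# Hypothetical layout for the matching algorithm (200): an abstract
# characteristic maps each considered appearance classification criterion
# to its allowed individual appearance characteristics. Omitted criteria
# (e.g. "criterion 1") are not considered and match anything.
MATCHING = {
    "characteristic X": {
        "criterion 2": {"value 2a", "value 2b"},  # several characteristics matched
        "criterion 3": {"value 3a"},
    }
}

def matches(description, abstract_characteristic):
    """description: {criterion: individual appearance characteristic}."""
    required = MATCHING[abstract_characteristic]
    return all(description.get(crit) in allowed
               for crit, allowed in required.items())

def search(records, abstract_characteristic):
    """Return the image records whose appearance description data contains
    the combination matched to the abstract characteristic."""
    return [r for r in records
            if matches(r["description"], abstract_characteristic)]
```

Note that a record may carry any value for criterion 1 and still match characteristic X, which mirrors the "not considered" case described above.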
In another embodiment, when a new appearance classification criterion is added for a specific object, the method further includes the server 10 obtaining the individual appearance characteristics of the new criterion for the training image data to build a new training data set, and the server 10 training a new individual characteristic recognition module 110 on the new training data set and adding it to the appearance characteristic recognition model. That is, when a new appearance classification criterion is added for a specific object (for example, a new criterion for dividing the appearance characteristics of clothing), the server 10 can adapt the appearance characteristic recognition model 100 to the new criterion by building only an additional individual characteristic recognition module 110 for it, without modifying the existing individual characteristic recognition modules 110.
First, the server 10 obtains the individual appearance characteristics of the new appearance classification criterion for the training image data and builds a new training data set. In one embodiment, when the new individual characteristic recognition module 110 is built by reusing the same image data previously used to train the other individual characteristic recognition modules 110, the server 10 receives from the expert client 30 the individual appearance characteristic of the new criterion for each piece of training image data. In another embodiment, the server 10 obtains new image data for training the individual characteristic recognition module 110 for the new criterion, receives the individual appearance characteristic of the new criterion for each piece of that data, and builds the new training data set from them.
Thereafter, the server 10 trains the new individual characteristic recognition module 110 on the new training data set and adds it to the appearance characteristic recognition model (S710). In this way, the server 10 adds the new individual characteristic recognition module 110 to the appearance characteristic recognition model alongside the plurality of existing individual characteristic recognition modules 110.
In another embodiment, the method further includes the server 10 inputting the image data whose appearance description data was already obtained through the existing individual characteristic recognition modules 110 into the new individual characteristic recognition module 110, and adding the individual appearance characteristic of the new appearance classification criterion to that data. That is, the server 10 performs a process of updating the appearance description data of previously acquired image data to reflect the new appearance classification criterion. To this end, the server 10 feeds all of the image data into the new individual characteristic recognition module 110 and computes the individual appearance characteristics.
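The backfill process just described can be sketched as below. This is a hedged illustration: the "module" is a stand-in function rather than the actual trained individual characteristic recognition module 110, and the record layout and criterion names are assumptions.

```python
# Hypothetical sketch of the backfill step: every stored image record is fed
# through the newly trained module, and its appearance description data is
# extended with the new criterion's characteristic.
def new_module(image):
    # Stand-in for the new individual characteristic recognition module (110);
    # a real module would run deep-learning inference on the image data.
    return "cropped" if image.get("short") else "regular"

def backfill(records, criterion, module):
    """Add the new criterion's characteristic to each record's description."""
    for record in records:
        record["description"][criterion] = module(record["image"])
    return records

records = [{"image": {"short": True}, "description": {"neckline": "V-neck"}}]
backfill(records, "top length", new_module)
print(records[0]["description"])
# {'neckline': 'V-neck', 'top length': 'cropped'}
```

Because only the new criterion's value is appended, the previously computed characteristics remain untouched, matching the description above of leaving existing modules unchanged.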
In another embodiment, the method further includes the server 10 updating the matching algorithm 200 by matching the individual appearance characteristics of the new appearance classification criterion to each abstract characteristic. That is, so that a search performed with a keyword corresponding to an abstract characteristic returns optimal results reflecting the new criterion, the server 10 links the individual appearance characteristics of the new appearance classification criterion to each abstract characteristic in the matching algorithm 200.
In another embodiment, the method further includes the server 10 receiving, from an expert client, setting data that matches an abstract characteristic to a combination of appearance classification criteria, and configuring the matching algorithm 200 accordingly. The definition of an abstract characteristic may change or differ due to factors such as regional differences, changes over time, or the establishment of a new definition. For example, when the object is fashion clothing or fashion accessories, an abstract characteristic representing a particular fashion trend or emotional quality may change over time and may be defined differently in different regions of the world (for example, the abstract characteristic, that is, emotional characteristic, 'vintage' may be defined by different appearances in the past and in the present). Accordingly, the server 10 can add or change the matching relationships between abstract characteristics and combinations of individual appearance characteristics in the matching algorithm 200.
In one embodiment, when the definition of a particular abstract characteristic changes, the server 10 receives the current combination of appearance classification criteria for that abstract characteristic from the expert client 30. At this time, the server 10 may retain the pre-change combination as the definition of that abstract characteristic at the earlier point in time. In this way, the server 10 can accumulate the definition or description information of a particular abstract characteristic as it changes over time.
In another embodiment, since the same abstract characteristic must be defined by different appearances in different regions, the server 10 may receive and store a combination of appearance classification criteria for each region from the expert client 30.
In another embodiment, the method includes the server 10 obtaining reference image data from the user client 20, inputting the reference image data into the appearance characteristic recognition model 100 to compute the individual appearance characteristics for the plurality of appearance classification criteria, the server 10 generating appearance description data by combining the plurality of individual appearance characteristics of the reference image data, and the server 10 extracting image data containing appearance description data identical or similar to that of the reference image data. That is, when a user searches not with a keyword corresponding to an abstract characteristic but with a particular object image the user already has (i.e., reference image data), the server 10 generates appearance description data for the reference image data, extracts image data containing identical or similar appearance description data, and provides it to the user client 20.
First, the server 10 obtains the reference image data from the user client 20. That is, the server 10 receives reference image data stored on the user client 20 or found online by the user.
Thereafter, the server 10 inputs the reference image data into the appearance characteristic recognition model 100 and computes the individual appearance characteristics included in each appearance classification criterion. That is, the server 10 obtains, through each individual characteristic recognition module 110, the plurality of individual appearance characteristics needed to describe the appearance characteristics of the reference image data as text information. The server 10 then generates appearance description data by combining the plurality of individual appearance characteristics of the reference image data.
Thereafter, the server 10 extracts image data containing the same appearance description data as the reference image data. In one embodiment, when searching for image data with identical appearance description data, the server 10 finds and provides the image data whose appearance description data matches that of the reference image data exactly.
In another embodiment, when searching for image data up to a range similar to the reference image data, the server 10 widens the match to a similarity range starting from the least important of the plurality of appearance classification criteria contained in the reference image data's appearance description data, and extracts the image data containing one of the resulting expanded appearance description data. To this end, the server 10 may hold an importance ranking over the plurality of appearance classification criteria of a specific object (for example, the higher a criterion's importance ranking, the longer it is held at its fixed value as the search range is widened to the similarity range), as well as similarity values between the individual appearance characteristics within a particular appearance classification criterion.
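The similarity expansion above can be sketched as follows. The importance ranking, the similarity scores, the threshold, and all criterion names are assumptions for illustration; the point is only that low-ranked criteria are relaxed to similar values first while high-ranked criteria stay fixed.

```python
# Illustrative sketch of similarity expansion. Rankings, similarity scores,
# and names are assumptions, not values from the actual system.
IMPORTANCE = ["silhouette", "color", "pattern"]  # most to least important
SIMILARITY = {frozenset({"floral", "paisley"}): 0.8,
              frozenset({"floral", "striped"}): 0.2}

def similar_values(value, threshold=0.5):
    """The value itself plus every value similar enough to it."""
    out = {value}
    for pair, score in SIMILARITY.items():
        if value in pair and score >= threshold:
            out |= pair
    return out

def expand(description, levels=1):
    """Relax the `levels` least-important criteria to similar values;
    higher-ranked criteria stay fixed at their exact value."""
    relaxed = set(IMPORTANCE[-levels:]) if levels else set()
    return {crit: similar_values(val) if crit in relaxed else {val}
            for crit, val in description.items()}

query = {"silhouette": "slim", "color": "pink", "pattern": "floral"}
print(sorted(expand(query)["pattern"]))  # ['floral', 'paisley']
```

Raising `levels` widens the search one criterion at a time from the bottom of the ranking, which corresponds to expanding the search range while keeping the most important criteria fixed.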
In another embodiment, the method further includes the server 10, upon receiving a request for additional image data from the user client 20, sequentially providing image data differing in at least one appearance classification criterion, and, when the user selects one or more of the additional image data, the server 10 setting a personalized abstract characteristic based on the appearance description data of the selected image data. That is, when a search is performed with a search keyword, the server 10 widens the search range by changing at least one appearance classification criterion in the description information of the abstract characteristic corresponding to the search keyword to a different individual appearance characteristic, and provides the additional image data to the user client 20. The server 10 then receives from the user a selection of one or more desired images from the widened search range, and personalizes the search keyword or abstract characteristic that the user entered based on the selected images.
For example, since the general appearance definition of an abstract characteristic may differ from the appearance definition the user has in mind, the server 10 sets the description information or appearance definition of the abstract characteristic as the user conceives it (that is, the description information of the personalized abstract characteristic) based on the appearance description data of the images the user selected from the widened search results. Afterwards, when that user searches again with the same search keyword or abstract characteristic, the server 10 searches based on the personalized description information rather than the general description information, and can therefore present the images the user wants first.
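One plausible way to derive the personalized definition, sketched below under stated assumptions: the characteristics shared by all the images the user selected become the user's own definition of the keyword, overriding the general definition on later searches. The data layout, the general 'vintage' definition, and the intersection rule are all illustrative assumptions.

```python
# Hedged sketch of keyword personalization. The general definition and the
# "shared characteristics" rule are assumptions for illustration.
GENERAL = {"vintage": {"pattern": "floral", "color": "brown"}}
PERSONAL = {}  # (user, keyword) -> personalized description information

def personalize(user, keyword, selected_descriptions):
    """Keep only the (criterion, characteristic) pairs common to every
    selected image and store them as the user's personal definition."""
    common = dict(selected_descriptions[0])
    for desc in selected_descriptions[1:]:
        common = {c: v for c, v in common.items() if desc.get(c) == v}
    PERSONAL[(user, keyword)] = common
    return common

def definition(user, keyword):
    """Personalized definition if one exists, otherwise the general one."""
    return PERSONAL.get((user, keyword), GENERAL[keyword])

personalize("u1", "vintage",
            [{"pattern": "floral", "color": "navy"},
             {"pattern": "floral", "color": "beige"}])
print(definition("u1", "vintage"))  # {'pattern': 'floral'}
```

A user who has never made a selection still falls back to the general definition, so personalization never removes the default search behavior.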
In another embodiment, the method further includes, when an abstract characteristic corresponding to the appearance description data of the selected image data exists, the server 10 providing the user client 20 with the abstract characteristic suited to extracting the selected image data. That is, the server 10 informs the user that the appearance definition the user knows for a particular abstract characteristic differs from the commonly used appearance definition, and extracts and provides the abstract characteristic (or search keyword) that matches the appearance definition the user actually has in mind. This lets the user learn the search keyword that will yield the desired results when searching again later.
In another embodiment, when the image data is video data containing a plurality of frames, the abstract characteristic may be an expression representing a particular shape change or motion. That is, the abstract characteristic may be a textual expression representing a particular motion or shape change.
To this end, the server 10 generates appearance description data that lists, in time series, the combinations of individual appearance characteristics (that is, the individual appearance characteristics belonging to each appearance classification criterion) for the plurality of frames of the video data. Specifically, the individual appearance characteristic computation step is performed for each frame of the video data, and the appearance description data generation step sequentially lists the plurality of individual appearance characteristics of each frame.
In another embodiment, the server 10 includes a matching algorithm 200 that matches each abstract characteristic (for example, an expression representing a shape change or motion) to time-series data of the individual appearance characteristics within each appearance classification criterion. Through this, in the image data search step, the server 10 finds and provides the video data corresponding to the abstract characteristic (that is, the particular motion or shape change) the user wants.
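A minimal sketch of this time-series matching follows, under the assumption that a motion-type abstract characteristic is stored as an ordered pattern of per-frame characteristics for one criterion, and that a video matches when its frame sequence contains that pattern in order (not necessarily in consecutive frames). The pattern, frame values, and the subsequence rule are all illustrative assumptions.

```python
# Illustrative time-series matching for video data: a "motion" abstract
# characteristic is an ordered pattern of per-frame individual appearance
# characteristics. Names and the matching rule are assumptions.
MOTION_PATTERNS = {"sitting down": ["standing", "crouching", "seated"]}

def contains_in_order(frames, pattern):
    """True if `pattern` appears in `frames` as an ordered subsequence."""
    it = iter(frames)
    return all(any(p == f for f in it) for p in pattern)

frames = ["standing", "standing", "crouching", "crouching", "seated"]
print(contains_in_order(frames, MOTION_PATTERNS["sitting down"]))  # True
```

The subsequence rule tolerates repeated frames (a pose held across several frames still matches), which suits video where the frame rate is much higher than the rate of shape change.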
Hereinafter, a method and program for obtaining user interest information based on input image data according to an embodiment of the present invention are described in detail.
FIG. 3 is a flowchart of a method for obtaining user interest information based on input image data according to an embodiment of the present invention, and FIG. 4 is a block diagram of a system for obtaining user interest information based on input image data according to an embodiment of the present invention.
Referring to FIGS. 3 and 4, a method for obtaining user interest information based on input image data according to an embodiment of the present invention includes the server 10 inputting first input image data into the appearance characteristic recognition model 100 to compute individual appearance characteristics for a plurality of appearance classification criteria (S1200), the server 10 generating first appearance description data by combining the plurality of individual appearance characteristics of the first input image data (S1400), and the server 10 generating and outputting first output image data based on the first appearance description data (S1600). Each step is described in detail below.
The server 10 inputs the first input image data into the appearance characteristic recognition model 100 and computes individual appearance characteristics for a plurality of appearance classification criteria (S1200). The first input image data refers to image data received from the specific user whose interest information is to be obtained. The first input image data includes image data of a real object or of a virtual object.
The first input image data may be obtained in various ways. In one embodiment, the first input image data may be obtained through an input for a specific user's virtual-space interior. For example, a user may input image data of a preferred object in order to decorate the user's own community platform.
Also, in one embodiment, the first input image data includes real image data of a specific article of a specific object. For example, a user may input a photograph (first input image data) of 'brand A's shirt B' (a specific article) belonging to clothing (a specific object) that the user wants to place in the user's virtual space. In this case, the server may input the photograph into the appearance characteristic recognition model 100 and compute the individual appearance characteristics 'shirt, light pink, floral pattern, slim, V-neck, sleeveless' (a plurality of individual appearance characteristics for the plurality of appearance classification criteria 'color, pattern, top silhouette, neckline, sleeve length').
In another embodiment, the first input image data includes virtual image data customized by the user. When a user freely customizes a specific object and inputs it as the first input image data, the server computes a plurality of individual appearance characteristics from the first input image data. Specifically, the first input image data may be input through the object design customizing method described later, but is not limited thereto.
In yet another embodiment, when the user selects a plurality of individual appearance characteristics (for example, shirt, light pink, floral pattern, slim, V-neck, sleeveless) from a list of individual appearance characteristics provided by the server, and first input image data having the selected characteristics is input (customized), the server can obtain the individual appearance characteristics selected by the user without having to compute them separately.
Then, the server 10 generates first appearance description data by combining the plurality of individual appearance characteristics of the first input image data (S1400). As described above, if the appearance classification criteria for a specific object are subdivided in detail, the first appearance description data can describe the appearance of that object concretely through its individual appearance characteristics. For example, the first appearance description data can be generated in the form {shirt, light pink, floral pattern, slim, V-neck, sleeveless}.
Also, referring to FIG. 5, in one embodiment, the first appearance description data generation step (S1400) may include extracting code values corresponding to the plurality of individual appearance characteristics of the first input image data (S1410) and combining the plurality of code values to generate first appearance description data in the form of a code string (S1420). That is, because the server 10 encodes each individual appearance characteristic, the appearance description data can be generated as a code string, which allows the appearance description data to be processed efficiently.
For example, if the code values corresponding to the individual appearance characteristics are 'shirt-Zb01, light pink-Ob01, floral pattern-Ie01, slim-Ba01, V-neck-Bb02, sleeveless-Bg01', the first appearance description data may be generated as the code string "Ba01, Bb02, Bg01, Ie01, Ob01, Zb01".
Then, the server 10 generates and outputs first output image data based on the first appearance description data (S1600). In one embodiment, the first output image data may mean image data of a virtual article of a specific object generated based on the first appearance description data.
For example, when the first appearance description data is the code string "Ba01, Bb02, Bg01, Ie01, Ob01, Zb01", the server can generate and output image data of a virtual shirt having the individual appearance characteristics corresponding to each code value (shirt, light pink, floral pattern, slim, V-neck, sleeveless). That is, in one embodiment, the first output image data may mean virtual image data having the same plurality of individual appearance characteristics as the first input image data. In this case, providing virtual image data has the effect of minimizing problems (for example, copyright disputes) that could arise from outputting image data of an actual real-world article.
Also, referring to FIG. 6, the method for obtaining user interest information based on input image data according to an embodiment of the present invention further includes the server inputting second input image data into the appearance characteristic recognition model to compute individual appearance characteristics for a plurality of appearance classification criteria (S1240), the server generating second appearance description data by combining the plurality of individual appearance characteristics of the second input image data (S1440), and the server storing appearance description data including the first appearance description data or the second appearance description data as the user's interest information (S1800). Each step is described in detail below.
The server inputs the second input image data into the appearance characteristic recognition model and computes individual appearance characteristics for a plurality of appearance classification criteria (S1240). In one embodiment, the second input image data may mean image data in which the first output image data has been modified by the user who input the first input image data.
For example, suppose the user input a photograph of 'brand A's shirt B' as the first input image data in order to place a similar image in the user's virtual space, but the image of the virtual shirt that the server generated and output from it (the first output image data) differs in its characteristics from the image the user intended to place. The user can then modify the first output image data and input the result as the second input image data to be placed.
In one embodiment, the first output image data may require modification when the server has miscalculated an individual appearance characteristic of the first input image data, or when an individual appearance characteristic included in the first output image data other than those calculated from the first input image data needs to be changed.
For example, the first output image data may show a U-neck because the server incorrectly calculated the individual appearance characteristic 'U-neck' for first input image data showing a V-neck shirt. Alternatively, the first output image data may include all of the individual appearance characteristics of the aforementioned coat (shirt, light pink, floral pattern, slim, V-neck, sleeveless) but also include 'crop' (an individual appearance characteristic of the appearance classification criterion Top Length), which the user does not prefer. In either case, the user can modify the first output image data and enter it as second input image data.
The first output image data may be modified by the user directly, using a program or the server, or by entering a keyword indicating the desired direction of modification; however, the modification is not limited to these, and any method of modifying image data may be used. For example, if the user wants to change a U-neck into a V-neck, the user can directly edit the coat in the first output image data into a V-neck, or indicate the modification by entering the keyword 'V-neck'.
In one embodiment, the first output image data may also be modified as follows: the server recommends to the user a plurality of image data combining features other than those included in the first output image data, and the user selects and adds a desired feature from the recommended items, thereby entering the modified second input image data. In this case, the server can easily obtain the user's preference through the appearance description data of the added feature.
The server generates second appearance description data by combining a plurality of individual appearance characteristics of the second input image data (S1440). For example, when the user changes the top length of the first output image data from 'crop' to 'medium', the second appearance description data of the resulting second input image data can be generated in the form {shirt, light pink, floral pattern, slim, V-neck, sleeveless} or as "Ba01, Bb02, Bg01, Bi03, Ie01, Ob01, Zb01" (where Bi03 is the code value corresponding to 'medium').
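The combining step (S1440) can be sketched as follows. This is an illustrative sketch only: the patent does not specify an implementation, and the code table below (which criterion maps to which code value) is a hypothetical example modeled on the "Ba01, Bb02, ..." code values mentioned above.

```python
# Hypothetical mapping from (appearance classification criterion,
# individual appearance characteristic) to a code value.
CODE_TABLE = {
    ("category", "shirt"): "Ba01",
    ("color", "light pink"): "Bb02",
    ("pattern", "floral"): "Bg01",
    ("top_length", "medium"): "Bi03",
    ("silhouette", "slim"): "Ie01",
    ("neckline", "V-neck"): "Ob01",
    ("sleeve", "sleeveless"): "Zb01",
}

def build_appearance_description(characteristics):
    """Combine the per-criterion individual appearance characteristics
    into appearance description data (a list of code values)."""
    return [CODE_TABLE[(criterion, value)]
            for criterion, value in characteristics.items()]

characteristics = {
    "category": "shirt",
    "color": "light pink",
    "pattern": "floral",
    "top_length": "medium",
    "silhouette": "slim",
    "neckline": "V-neck",
    "sleeve": "sleeveless",
}
print(", ".join(build_appearance_description(characteristics)))
# Ba01, Bb02, Bg01, Bi03, Ie01, Ob01, Zb01
```

Either representation (the set of characteristic names or the code-value string) serves as the appearance description data; the code form is compact and directly comparable across image data.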
In one embodiment, although not shown in the drawings, the method may further include a step in which the server transmits a request for approval of the output image data to the user. For example, the server outputs the first output image data and transmits a request for approval of the first output image data to the user; if the user approves, no second input image data is received, and if the user does not approve, second input image data in which the first output image data has been modified is received.
In one embodiment, the above-described steps of calculating the individual appearance characteristics of the input image data, generating the appearance description data, and generating and outputting the output image data may be repeated one or more times. For example, the server may output second output image data based on the second input image data, or the user may enter third input image data based on the second output image data.
In one embodiment, the method may further include a step (S1800) in which the server stores appearance description data including the first appearance description data or the second appearance description data as the user's interest information. The server can store appearance description data information including the first appearance description data of the first input image data, the second appearance description data of the second input image data, or the difference between the first and second appearance description data (for example, the appearance description data of the feature modified by the user), and can thereby obtain the user's interest information. That is, the server can easily obtain the user's interest information by storing and analyzing not only the image data entered by the user, including the first input image data or the second input image data, but also the appearance description data calculated from it.
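The difference between the first and second appearance description data mentioned above can be computed as a simple per-criterion comparison. This is an illustrative sketch only; the criterion names are hypothetical, and the patent leaves the comparison method open.

```python
def description_diff(first, second):
    """Return the characteristics the user changed between the first and
    second appearance description data, keyed by classification criterion.
    Each value is a (before, after) pair."""
    return {criterion: (first[criterion], second[criterion])
            for criterion in first
            if criterion in second and first[criterion] != second[criterion]}

first = {"top_length": "crop", "neckline": "V-neck", "silhouette": "slim"}
second = {"top_length": "medium", "neckline": "V-neck", "silhouette": "slim"}
print(description_diff(first, second))
# {'top_length': ('crop', 'medium')}
```

The resulting pairs (here, 'crop' rejected in favor of 'medium') are exactly the modified features whose appearance description data the server stores as interest information.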
In one embodiment, the server may generate and output the output image data, or store the appearance description data, by further considering an abstract characteristic (for example, 'vintage') in addition to the individual appearance characteristics and the appearance description data obtained from the input image data, including the first input image data or the second input image data.
In one embodiment, although not shown in the drawings, the method may further include a step in which the server displays image data including the first output image data or the second input image data in a virtual space. That is, the server may display the image data the user wishes to place in the user's virtual space according to the user's request.
In the example described above, the user can thus display an image of a coat in the desired style and decorate the user's own virtual space according to the user's taste. Based on the input image data entered by the user, the server calculates individual appearance characteristics, generates appearance description data, and refines them through the user's modifications, so that it can easily obtain the user's interest information and put it to various uses, for example by providing it to a clothing market.
In an embodiment of the present invention, when the image data including the first input image data, the first output image data, or the second input image data is moving image data including a plurality of frames, the individual appearance characteristics may be calculated for each frame in the moving image data, and the appearance description data may be generated by sequentially listing the plurality of individual appearance characteristics of each frame.
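The per-frame processing above amounts to applying the recognition model frame by frame and concatenating the results in order. A minimal sketch, in which `recognize_appearance` is a hypothetical stand-in for the appearance characteristic recognition model:

```python
def recognize_appearance(frame):
    # Placeholder for the appearance characteristic recognition model;
    # a real model would infer per-criterion characteristics from the frame.
    return {"silhouette": "slim", "neckline": "V-neck"}

def describe_video(frames):
    """Calculate individual appearance characteristics for each frame and
    list them sequentially to form the appearance description data."""
    return [recognize_appearance(frame) for frame in frames]

frames = ["frame0", "frame1"]  # stand-ins for decoded video frames
print(describe_video(frames))
```

The sequential list preserves frame order, so characteristics that change over the course of the video (for example, a garment seen from different angles) remain attributable to their frames.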
An apparatus for obtaining user interest information based on input image data according to another embodiment of the present invention includes one or more computers and performs the above-described method for obtaining user interest information based on input image data.
The method for obtaining user interest information based on input image data according to the present invention described above may be implemented as a program (or application) and stored in a medium so as to be executed in combination with a computer, which is hardware.
Hereinafter, a method and program for customizing an object design according to yet another embodiment of the present invention are described in detail.
In the following, design data refers, like the 'image data' defined above, to a two-dimensional or three-dimensional static or dynamic image including a specific object. That is, 'design data' may be static image data consisting of a single frame, or dynamic image data (that is, moving image data) in which a plurality of frames are consecutive. The term 'design data' is used for convenience of description and to distinguish it from the term 'image data'.
FIG. 7 is an exemplary diagram for describing a standard model according to an embodiment of the present invention.
Referring to FIG. 7, the server 10 may include a customizing module 300 that provides a customizing interface so that a user can customize an object design. For example, the customizing interface may be a platform accessible through a web page or a dedicated application available to the user.
In one embodiment, as described above, the server 10 may extract the appearance classification criteria and individual appearance characteristics of various objects through the appearance characteristic recognition model 100 and store them in advance. Of course, for a new object selected by the user, the server 10 can likewise extract the corresponding appearance classification criteria and individual appearance characteristics in real time through the appearance characteristic recognition model 100.
In one embodiment, the customizing interface may provide the user with functions such as searching for an object, selecting an object, creating and changing design data of the selected object, and purchasing the object. For example, the customizing interface may include text indicating the object's name, text (or menus) corresponding to the plurality of appearance classification criteria of the object, a plurality of menus matching the plurality of individual appearance characteristics of the object, and design data representing the object.
In one embodiment, the server 10 may display design data corresponding to the object in real time based on user input detected through the customizing interface, and may change the design data in real time according to the user input.
Meanwhile, when the object is clothing, requiring the user to individually specify every length, type, and joint portion of the clothing can burden the user, and can also make the customizing method slow and inefficient. Accordingly, as shown in FIG. 7, the server 10 may generate and store the standard model 310 in advance through the customizing module 300.
In one embodiment, when the object is clothing, the standard model 310 refers to a standard format in which the fixed joint lines and length reference lines of clothing are preset on a standard human body model 11 so that customization of the clothing design can be processed efficiently. That is, for example, rather than customizing the length of a pair of bottoms to a specific numeric value, the user can select one of the preset lengths provided through the standard model 310. For example, as shown in FIG. 7, the standard model 310 may include the standard human body model 11, a plurality of fixed joint lines indicated by solid lines, and a plurality of length reference lines indicated by dotted lines. Here, the fixed joint lines are the boundaries where the components of the clothing (for example, the body of a top and a sleeve) are joined, and they maintain a constant position regardless of the garment. The length reference lines each represent one of the possible lengths of a garment and may change depending on the garment; that is, unlike what is shown in FIG. 7, the positions of the length reference lines may vary. The details of generating an object design based on the standard model are described later with reference to FIG. 8.
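A data-structure view of the standard model 310 can be sketched as follows. This is an illustrative sketch only: the patent defines the standard model at the level of FIG. 7, not as code, and the class fields and coordinate values below are hypothetical (line positions expressed as fractions of body height on the standard human body model).

```python
from dataclasses import dataclass, field

@dataclass
class StandardModel:
    """Standard format: fixed joint lines keep constant positions on the
    standard human body model, while length reference lines vary by garment."""
    # Fixed joint lines (solid lines in FIG. 7): name -> vertical position,
    # as a 0..1 fraction of body height (hypothetical values).
    fixed_joint_lines: dict = field(default_factory=lambda: {
        "shoulder": 0.18, "collar_top": 0.10,
    })
    # Length reference lines (dotted lines): criterion -> preset choices.
    length_reference_lines: dict = field(default_factory=lambda: {
        "top_length": {"crop": 0.38, "short": 0.45, "medium": 0.52,
                       "long": 0.60, "maxi": 0.80},
    })

    def resolve_length(self, criterion, characteristic):
        """Map a selected individual appearance characteristic to a preset
        length, instead of asking the user for a numeric value."""
        return self.length_reference_lines[criterion][characteristic]

model = StandardModel()
print(model.resolve_length("top_length", "medium"))
# 0.52
```

The split between the two fields mirrors the text: joint positions are fixed per body model, while the length presets are the only part that changes from garment to garment.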
Meanwhile, when the user's clothing design has been generated through the standard model, the server 10 may additionally change the size of the clothing based on user input. That is, the user's actual body measurements may additionally be reflected in the generated clothing design.
In one embodiment, the server 10 may register the user as a member through a separate platform and manage the user's information. The user's member information may include a name, address, contact information, object design creation and change history, object purchase history, and the like.
FIG. 7 is an exemplary diagram for describing a standard model according to an embodiment of the present invention. FIG. 8 is a flowchart for describing a method of customizing an object design according to an embodiment of the present invention. FIGS. 9 to 21 are exemplary diagrams for describing the method of customizing an object design according to an embodiment of the present invention. The operations of FIG. 8 may be performed by the server 10 of FIGS. 1 and 2. For convenience of description, the case where the object is clothing is described as an example.
Referring to FIGS. 7 to 21, in one embodiment, the server 10 may determine the object based on a first user input in operation 41. Here, the object may be a top (e.g., shirt & blouse, jacket, coat), a bottom (e.g., pants, skirt, leggings & stockings), or a one-piece dress. The server 10 may provide a separate search interface so that the user can search for a desired object, and may provide the customizing interface when the user selects a specific object through the search. For example, the object selection menu may be connected to the customizing interface through a link.
In one embodiment, in operation 42, the server 10 may input image data corresponding to the object into the appearance characteristic recognition model and calculate individual appearance characteristics for a plurality of appearance classification criteria. An appearance classification criterion is a specific classification criterion for describing the appearance of a specific object, and may include a plurality of individual appearance characteristics expressing the various appearances possible within that criterion. Accordingly, the appearance classification criteria may include specialized appearance classification criteria, which differ by object, and general-purpose appearance classification criteria. For example, for a top, the specialized appearance classification criteria are silhouette, collar & neckline, top length, opening, shoulder, sleeve, sleeve length, and sleeve cuffs, while the general-purpose appearance classification criteria, applicable to tops, bottoms, and one-piece dresses alike, are texture, pattern, color, and detail.
For example, the plurality of appearance classification criteria for a top may include at least one of silhouette, collar & neckline, top length, opening, shoulder, sleeve, sleeve length, sleeve cuffs, texture, pattern, color, and detail. The silhouette may be the overall shape of the garment, and its individual appearance characteristics may be slim, regular, and loose. The collar & neckline may be the neckline of the garment, and its individual appearance characteristics may include at least one of round neckline, V neckline, plunging V neckline, surplice, and V neck camisole. The shoulder may be the shoulder portion of the garment, and its individual appearance characteristics may include at least one of plain shoulder, raglan shoulder, halter, drop shoulder, dolman, off shoulder, strapless, and one shoulder. The individual appearance characteristics of sleeve length may include extra-short sleeve, short sleeve, medium sleeve, and long sleeve. The individual appearance characteristics of top length may include crop, short, medium, long, and maxi. In addition, opening, sleeve, sleeve cuffs, texture, pattern, color, and detail may each include their respective known individual appearance characteristics.
For example, the plurality of appearance classification criteria for a bottom may include at least one of silhouette, bottom length, waist position, texture, pattern, color, and detail. The silhouette may be the overall shape of the garment; its individual appearance characteristics may be straight, skinny, bell-bottom, baggy, and wide in the case of pants, and h-line, a-line, mermaid, flare, and balloon in the case of a skirt. The individual appearance characteristics of bottom length may include extra-short, short, midi, and long. The individual appearance characteristics of waist position may include high waist, normal waist, and low waist. In addition, texture, pattern, color, and detail may each include their respective known individual appearance characteristics.
For example, a total of 14 appearance classification criteria may be selected for a one-piece dress, combining the categories of the top excluding top length with the three selected only for the bottom. That is, the plurality of appearance classification criteria for a one-piece dress may include top silhouette, bottom silhouette, collar & neckline, shoulder, sleeve, sleeve cuffs, sleeve length, opening, bottom length, waist position, texture, pattern, color, and detail. The individual appearance characteristics of each of these appearance classification criteria may be as described above or may include known characteristics.
Meanwhile, operation 42 may be performed before operation 41. That is, the plurality of appearance classification criteria and individual appearance characteristics corresponding to the object may be calculated and stored in advance.
In one embodiment, in operation 43, the server 10 may provide a customizing interface 500 based on the plurality of appearance classification criteria corresponding to the object and the plurality of individual appearance characteristics corresponding to each of those criteria.
For example, as shown in FIG. 9, the customizing interface 500 may include a plurality of menus 501 matching the plurality of individual appearance characteristics of the object, and design data 505. For example, when the object is a top, the customizing interface 500 may include slim, regular, and loose menus 502 corresponding to the individual appearance characteristics of the silhouette; an enumeration menu 503 corresponding to the individual appearance characteristics of the collar & neckline; and crop, short, medium, long, and maxi menus 504 corresponding to the individual appearance characteristics of top length. Meanwhile, according to the slim, crop, and V neckline menus shown in dark shading, design data 505 corresponding to a top that is short in length, slim in silhouette, and V-necked may be displayed, as shown in FIG. 9.
Meanwhile, in one embodiment, as shown in FIG. 10, the customizing interface 500 may further include an enumeration menu 506 corresponding to the individual appearance characteristics of the shoulder, a menu 507 corresponding to the individual appearance characteristics of sleeve length, an enumeration menu 508 corresponding to the individual appearance characteristics of the sleeve cuff, an enumeration menu 509 corresponding to the individual appearance characteristics of texture, an enumeration menu 511 corresponding to the individual appearance characteristics of pattern, an enumeration menu 512 corresponding to the individual appearance characteristics of color, and an enumeration menu 513 corresponding to the individual appearance characteristics of detail. The enumeration menus 509, 511, 512, and 513 corresponding to texture, pattern, color, and detail may each be linked to a separate detail page, on which the user can select from various textures, patterns, colors, and details.
Meanwhile, the customizing interface may be configured differently from what is shown in FIGS. 9 and 10. For example, the configuration of the menus may be changed so that the user can easily select from the plurality of individual appearance characteristics set based on the standard model 310.
In one embodiment, in operation 44, the server 10 may generate design data of the object based on a second user input detected in the customizing interface 500 and the preset standard model 310. For example, the second user input may be an input selecting at least one of the plurality of menus.
For example, as shown in FIG. 7, the preset standard model 310 may include at least one of the standard human body model 11, and fixed joint lines (solid lines) and length reference lines (dotted lines) for representing the plurality of individual appearance characteristics.
In one embodiment, the server 10 may generate the design data based on at least one of the fixed joint lines and length reference lines corresponding to the at least one menu selected by the second user input. For example, so that the standard model 310 can serve as a standard format for generating design data of objects, fixed joint lines and length reference lines corresponding to the plurality of individual appearance characteristics may be set in advance on the standard human body model 11 for each object. Accordingly, when the user selects an individual appearance characteristic of a specific object, the server 10 can generate the design data corresponding to the object by using the corresponding fixed joint line or length reference line of the standard model 310.
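Operation 44 can thus be read as a lookup from selected characteristics to preset lines. The following is an illustrative sketch only: the menu-to-line mapping and the line names are hypothetical, since the patent describes this operation functionally rather than as code.

```python
def generate_design_data(selections, standard_model):
    """Resolve each selected individual appearance characteristic to the
    fixed joint line or length reference line preset in the standard model,
    and collect the lines as the design data of the object."""
    design = {}
    for criterion, characteristic in selections.items():
        lines = standard_model[criterion]  # per-criterion presets
        design[criterion] = lines[characteristic]
    return design

# Hypothetical standard-model presets for a top:
# criterion -> individual appearance characteristic -> line identifier.
standard_model = {
    "silhouette": {"slim": "silhouette_line_3", "loose": "silhouette_line_1"},
    "top_length": {"crop": "length_line_crop", "medium": "length_line_medium"},
    "neckline": {"V neckline": "collar_joint_line_2"},
}
selections = {"silhouette": "slim", "top_length": "crop",
              "neckline": "V neckline"}
print(generate_design_data(selections, standard_model))
```

Because every selectable characteristic already has a preset line, the server never needs a free-form numeric length from the user, which is the efficiency argument made for the standard model above.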
In the following, a method of generating design data for a top based on the standard model 310 is described in detail with reference to FIGS. 11 to 19, a method of generating design data for a bottom based on the standard model 310 is described with reference to FIG. 20, and a method of generating design data according to the general-purpose appearance classification criteria of clothing is described with reference to FIG. 21.
Referring to FIGS. 11 to 19, the design data of a top can be completed when an individual appearance characteristic is determined for each of the plurality of appearance classification criteria related to the top. As described above, the plurality of appearance classification criteria related to a top may include silhouette, collar & neckline, top length, opening, shoulder, sleeve, sleeve length, sleeve cuffs, texture, pattern, color, and detail, and the individual appearance characteristic for each of these criteria may be determined by the standard model 310 and user input. For convenience of explanation, the top is divided into a body part, a sleeve part, and others. The appearance classification criteria related to the body part are silhouette, collar & neckline, top length, opening, and shoulder; the criteria related to the sleeve part are sleeve, sleeve length, and sleeve cuffs; and the others are texture, pattern, color, and detail. That is, the body part of a top may mean the portion of the top excluding the sleeves, and may include an upper part and a lower part. Here, the upper part may be determined by the length reference line of the silhouette, the fixed joint line of the collar & neckline, and the fixed joint line of the shoulder, and the lower part may be determined by the length reference line of the silhouette, the fixed joint line of the collar & neckline, and the length reference line of the top length.
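The completion rule described above, namely that a top's design data is complete only when every classification criterion in the body, sleeve, and other groups has an individual appearance characteristic selected, can be sketched as follows. All names below are illustrative assumptions, not identifiers from the disclosure.

```python
# Hypothetical grouping of a top's appearance classification criteria,
# following the body / sleeve / other division described in the text.
TOP_CRITERIA = {
    "body": ["silhouette", "collar_neckline", "top_length", "opening", "shoulder"],
    "sleeve": ["sleeve", "sleeve_length", "sleeve_cuffs"],
    "other": ["texture", "pattern", "color", "detail"],
}

def is_complete(selections):
    """Design data is complete once every classification criterion
    has an individual appearance characteristic selected."""
    required = [c for group in TOP_CRITERIA.values() for c in group]
    return all(c in selections for c in required)
```

A selection missing any criterion, for example one containing only a silhouette, would not yet yield complete design data under this sketch.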
In one embodiment, referring to FIG. 7, the length reference lines related to the plurality of individual appearance characteristics of the silhouette may include a first silhouette length reference line 91, a second silhouette length reference line 92, and a third silhouette length reference line 93, corresponding to loose, regular, and slim, respectively.
In one embodiment, referring to (a) of FIG. 11, the fixed joint lines related to the plurality of individual appearance characteristics of the collar & neckline may include a first shoulder fixed joint line 50, a first collar fixed joint line 51, a second collar fixed joint line 52, a third collar fixed joint line 53, and a fourth collar fixed joint line 54.
For example, the first collar fixed joint line 51 and the second collar fixed joint line 52 may be the fixed joint lines used when the collar & neckline can be expressed above the chest line, and may be a collar top line and a collar top join line, respectively. The plurality of individual appearance characteristics that can be expressed through the first collar fixed joint line 51 and the second collar fixed joint line 52 may include Funnel, Turtleneck, Boat Neckline, Stand Collar, Mandarin Collar, Regular Straight Point Collar, and the like.
For example, the third collar fixed joint line 53 may be the fixed joint line to which the lower part of the top's body part is connected when the collar & neckline can be expressed above the chest line, and the fourth collar fixed joint line 54 may be the fixed joint line to which the lower part of the body part is connected when the collar & neckline is of a type that descends below the chest line. For example, as shown in (b) of FIG. 11, for a top that exposes only the upper portion of the chest, the third collar fixed joint line 53 may be used as the fixed joint line, and for a top that exposes the chest down to its center, the fourth collar fixed joint line 54 may be used as the fixed joint line. The plurality of individual appearance characteristics that can be expressed through the third collar fixed joint line 53 and the fourth collar fixed joint line 54 may include Tailored Jacket Collar, Convertible Collar, Sailor Collar, Lapel, Shawl Collar, Scoop Neckline, Surplice, and the like.
For example, the design data of the collar & neckline may be generated as shown in (c) of FIG. 11; the horizontal width of the collar & neckline may change at the same rate as the body panel, while the vertical width may remain unchanged within a certain range. Of course, when the size difference between the standard human body model 11 and the user is large, the vertical width may also change.
In one embodiment, referring to FIG. 12, the fixed joint lines connecting the lower part and the upper part of the top's body part may further include a first top length reference line 80.
In one embodiment, referring again to FIG. 7, the length reference lines related to the plurality of individual appearance characteristics of the top length may include a first top length reference line 80 corresponding to crop, a second top length reference line 81 corresponding to short, a third top length reference line 82 corresponding to medium, a fourth top length reference line 83 corresponding to long, and a fifth top length reference line 84 corresponding to maxi.
In one embodiment, the opening may be a hole in the top through which the user's body can pass, and may be determined immediately once the design data of the upper part described above is determined.
In one embodiment, referring to FIG. 13, the fixed joint lines related to the plurality of individual appearance characteristics of the shoulder may include the first shoulder fixed joint line 50 corresponding to plain shoulder, a second shoulder fixed joint line 60 corresponding to raglan shoulder (harter), a third shoulder fixed joint line 61 corresponding to drop shoulder, a fourth shoulder fixed joint line 62 corresponding to dolman, a fifth shoulder fixed joint line 63 corresponding to off shoulder (strapless), and a sixth shoulder fixed joint line 64 corresponding to one shoulder.
In this way, the body part of a top may be generated as shown in FIGS. 14 to 18 by the length reference line of the silhouette, the fixed joint line of the collar & neckline, the length reference line of the top length, the opening, and the fixed joint line of the shoulder.
For example, (a1) of FIG. 14 may be the upper part of a top determined according to the top body base to which the design data of (c) of FIG. 11 can be joined, the third collar fixed joint line 53, and the third silhouette length reference line 93 corresponding to slim; (a2) of FIG. 14 may be the lower part of the top determined according to the third collar fixed joint line 53 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to crop; and (a1) and (a2) of FIG. 14 may be combined into the body part of the top. If the user does not select a sleeve, the body part of the top may itself become the design data of the top.
Also, for example, (b1) of FIG. 14 may be the upper part of a top determined according to the top body base joined to a neckline & collar that descends below the chest, the fourth collar fixed joint line 54, and the third silhouette length reference line 93 corresponding to slim; (b2) of FIG. 14 may be the upper part of a top determined according to the surplice of the collar & neckline, the fourth collar fixed joint line 54, and the third silhouette length reference line 93 corresponding to slim; and (b3) of FIG. 14 may be the lower part of the top determined according to the fourth collar fixed joint line 54 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to crop. Either (b1) and (b3) or (b2) and (b3) of FIG. 14 may be combined into the body part of the top. If the user does not select a sleeve, the body part of the top may itself become the design data of the top.
Also, for example, (c) of FIG. 14 may be the body part of a top determined according to the plunging V neckline of the collar & neckline, the first top length reference line 80 corresponding to crop, and the third silhouette length reference line 93 of the silhouette. If the user does not select a sleeve, the body part of the top may itself become the design data of the top.
Also, for example, (a) of FIG. 15 may be the lower part of a top determined according to the third collar fixed joint line 53 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to crop; (b) of FIG. 15 may be the upper part of a top determined according to the V neckline of the collar & neckline, the third collar fixed joint line 53, and the third silhouette length reference line 93 corresponding to slim; and (c) of FIG. 15 may be the body part of the top in which (a) and (b) are combined. If the user does not select a sleeve, the body part of the top may itself become the design data of the top.
Also, for example, (a) of FIG. 16 may be the body part of a top determined according to the plunging V neckline of the collar & neckline, the first top length reference line 80 corresponding to crop, and the third silhouette length reference line 93 of the silhouette; (b1) and (b2) of FIG. 16 may be specific collar designs of the collar & neckline; (a) and (b2) may be combined to determine a collared body part (c1); and (a) and (b1) may be combined to determine a collared body part (c2). If the user does not select a sleeve, the body part of the top may itself become the design data of the top.
Also, for example, as shown in FIG. 17, form (a) or form (b) may mainly be used for the lower part of the top's body part. Form (a) may be the lower part of a top determined according to the third collar fixed joint line 53 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to crop, and form (b) may be the lower part of a top determined according to the fourth collar fixed joint line 54 of the collar & neckline, the third silhouette length reference line 93 of the silhouette, and the first top length reference line 80 corresponding to crop.
Also, for example, (a) of FIG. 18 may be a top in which the upper part, determined according to the top body base to which the design data of (c) of FIG. 11 can be joined, the third collar fixed joint line 53, and the first silhouette length reference line 91 corresponding to loose, is combined with the lower part determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the first top length reference line 80 corresponding to crop. (b) may be a top in which the upper part of (a) is combined with the lower part determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the second top length reference line 81 corresponding to short. (c) may be a top in which the upper part of (a) is combined with the lower part determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the third top length reference line 82 corresponding to medium. (d) may be a top in which the upper part of (a) is combined with the lower part determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the fourth top length reference line 83 corresponding to long. (e) may be a top in which the upper part of (a) is combined with the lower part determined according to the third collar fixed joint line 53 of the collar & neckline, the first silhouette length reference line 91, and the fifth top length reference line 84 corresponding to maxi. That is, when the loose, regular, and slim silhouettes are combined with the crop, short, medium, long, and maxi top lengths in this way, the body part of the top can have a total of 15 outlines. Accordingly, the user can easily generate a variety of design data.
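The count of 15 outlines stated above follows directly from the Cartesian product of the three silhouette characteristics and the five top length characteristics; a quick enumeration (the string labels are illustrative):

```python
from itertools import product

# Silhouette lines 91-93 and top length lines 80-84 as described above.
silhouettes = ["loose", "regular", "slim"]
top_lengths = ["crop", "short", "medium", "long", "maxi"]

# Every (silhouette, top length) pair yields one possible body outline.
outlines = list(product(silhouettes, top_lengths))
print(len(outlines))  # 15
```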
In one embodiment, referring to FIG. 19, the design data of the sleeve part may be determined by the appearance classification criteria of sleeve, sleeve length, and sleeve cuffs and the plurality of individual appearance characteristics corresponding thereto. For example, the plurality of individual appearance characteristics corresponding to the sleeve may be the presence or absence of a sleeve. Also, for example, as shown in (a) of FIG. 19, the length reference lines related to the plurality of individual appearance characteristics of the sleeve length may include a first sleeve length reference line 56 corresponding to extra-short sleeve, a second sleeve length reference line 57 corresponding to short sleeve, a third sleeve length reference line 58 corresponding to medium sleeve, and a fourth sleeve length reference line 59 corresponding to long sleeve. Here, the sleeve length may be a length that includes the sleeve cuff length. Meanwhile, when the shoulder is Dolman, the second sleeve length reference line 57 corresponding to short sleeve cannot be selected, and when no sleeve cuff is separately selected, Shirt Cuffs as shown in (b) of FIG. 19 may be set automatically. Meanwhile, when no sleeve length is selected, the top may become Sleeveless, with no sleeve present. Meanwhile, when the sleeve and the sleeve cuff overlap, the sleeve cuff may be made variable in size so that it can cover the end of the sleeve; the length of the sleeve may also vary according to the user's body size; and the size of the sleeve may vary at the same rate as the top's body part. In addition, the size of the sleeve cuff may vary according to the user's wrist circumference, and the end length of the sleeve and part of the width of the sleeve cuff may be adjustable.
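The sleeve selection rules above (no sleeve length selected gives Sleeveless, a Dolman shoulder excludes the short sleeve line, and Shirt Cuffs is the automatic default cuff) can be sketched as a small validation routine. The function and value names are assumptions for illustration, not the actual implementation.

```python
def resolve_sleeve(shoulder, sleeve_length=None, cuff=None):
    """Illustrative sketch of the sleeve-part rules described above:
    - no sleeve length selected -> the top is sleeveless
    - a dolman shoulder cannot take the short-sleeve line
    - if no cuff is chosen, shirt cuffs are set automatically
    """
    if sleeve_length is None:
        return {"sleeve": "sleeveless"}
    if shoulder == "dolman" and sleeve_length == "short":
        raise ValueError("short sleeve cannot be selected with a dolman shoulder")
    return {
        "sleeve": "present",
        "length": sleeve_length,
        "cuffs": cuff if cuff is not None else "shirt_cuffs",
    }
```

For example, selecting a long sleeve with no cuff would fall back to shirt cuffs under this sketch, while a dolman shoulder with a short sleeve would be rejected.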
In one embodiment, referring to FIG. 20, the design data of a bottom may be determined by the appearance classification criteria of silhouette, bottom length, and waist position and the plurality of individual appearance characteristics corresponding thereto. For example, as shown in (a) of FIG. 20, the fixed joint lines related to the plurality of individual appearance characteristics of the waist position may include a first waist fixed joint line 70 corresponding to the high waist of a skirt, a second waist fixed joint line 71 corresponding to the high waist of pants, a third waist fixed joint line 72 corresponding to the normal waist of pants, a fourth waist fixed joint line 73 corresponding to the normal waist of a skirt, a fifth waist fixed joint line 74 corresponding to the low waist of a skirt, and a sixth waist fixed joint line 75 corresponding to the low waist of pants. Also, as shown in (b) of FIG. 20, the length reference lines related to the plurality of individual appearance characteristics of the bottom length may include a first bottom length reference line 76 corresponding to extra-short, a second bottom length reference line 77 corresponding to short, a third bottom length reference line 78 corresponding to midi, and a fourth bottom length reference line 79 corresponding to long. For example, the skirt design data of (c) may be generated according to the fourth waist fixed joint line 73 corresponding to the normal waist of a skirt and the second bottom length reference line 77 corresponding to short. Meanwhile, regardless of the silhouette, the waist position can fit the standard human body model 11 exactly, and in the case of a one-piece dress, the end line of the top and the waist position of the bottom must match exactly. The size of the bottom may also vary in the same way as changes in the user's body size.
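The bottoms example above (a normal-waist short skirt determined by lines 73 and 77) amounts to a lookup of a waist fixed joint line and a bottom length reference line; a minimal sketch, with the reference numerals taken from the description and the key names assumed:

```python
# Waist fixed joint lines 70-75 keyed by (garment kind, waist position).
WAIST_LINES = {
    ("skirt", "high"): 70, ("pants", "high"): 71,
    ("pants", "normal"): 72, ("skirt", "normal"): 73,
    ("skirt", "low"): 74, ("pants", "low"): 75,
}
# Bottom length reference lines 76-79 keyed by length characteristic.
LENGTH_LINES = {"extra-short": 76, "short": 77, "midi": 78, "long": 79}

def bottom_lines(kind, waist, length):
    """Return the (waist joint line, length reference line) pair that,
    together with the silhouette, determines a bottom's design data."""
    return WAIST_LINES[(kind, waist)], LENGTH_LINES[length]
```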
In one embodiment, although not shown in the drawings, the design data of a one-piece dress may also be generated by applying the same method by which tops and bottoms are determined.
In one embodiment, although not shown in the drawings, for texture, pattern, and color, which are universal appearance classification criteria common to tops, bottoms, and one-piece dresses, the various known types of textures, patterns, and colors applied to clothing may serve as the individual appearance characteristics, and may be applied to the design data of a top, bottom, or one-piece dress (e.g., cotton, stripe pattern, red) according to the user's selection.
In one embodiment, referring to FIG. 21, the plurality of individual appearance characteristics of detail, a universal appearance classification criterion common to tops, bottoms, and one-piece dresses, may be various kinds of clothing accessories. For example, the plurality of individual appearance characteristics of detail may include Pleats, Shirring, Gather, Trimming, Fur, Bow, Patch Pocket, Cubic, Quilting, Ruffle, Frill, Flounce, Banding, and Draw String. That is, the (a) Pocket, (b) Bow, (c) String, (d) Set-in Pocket, and (e) Zipper of FIG. 21 may be added to the design data of a top, bottom, or one-piece dress.
Meanwhile, the above-mentioned change of the standard model 310 according to the user's body size may be performed automatically upon input of the user's body size, whereby the outline of the standard human body model 11 of the standard model 310, the position/length of the fixed joint lines, and the position/length of the length reference lines may change.
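The resizing behavior described in this section, where horizontal widths scale at the same rate as the body panel while some vertical dimensions stay fixed within a certain range, can be sketched as two helper functions. The function names, the ratio-based scaling, and the tolerance value are illustrative assumptions, not the disclosed implementation.

```python
def scale_width(width, user_size, standard_size):
    """Widths (e.g. of the collar & neckline or the sleeve) change at the
    same rate as the body panel when the standard model is resized to the
    user's measurements."""
    return width * (user_size / standard_size)

def scale_height(height, ratio, tolerance=0.1):
    """Vertical widths stay unchanged while the size change is within a
    certain range, and only scale when the difference is large."""
    return height if abs(ratio - 1.0) <= tolerance else height * ratio
```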
In one embodiment, in operation 45, the server 10 may display the design data generated in the customizing interface. Through this, the user can easily purchase or modify the design data that the user has customized while checking it in real time.
Meanwhile, although not shown in the drawings, the server 10 may change the design data based on a third user input detected in the customizing interface 500 and the standard model 310. That is, the user can freely change the generated design data until it is saved or the session is ended.
FIG. 22 is a flowchart illustrating a method of providing a recommended object according to an embodiment of the present invention. FIG. 23 is an exemplary view illustrating a method of providing a recommended object according to an embodiment of the present invention. The operations of FIG. 22 may be performed by the server 10 of FIGS. 1 and 2.
Referring to FIGS. 22 and 23, in one embodiment, the server 10 may generate design data in operation 181. The design data generation may be the same as the operation performed in FIG. 8. For example, design data 181 may be generated as shown in FIG. 23. Of course, operation 181 may be omitted, and the process may proceed directly to operation 182 based on the object.
In one embodiment, in operation 182, the server 10 may extract, based on a matching algorithm, a recommended object corresponding to the appearance classification criteria combination matched to the abstract characteristic corresponding to the object or the generated design data. For example, the server 10 may extract the recommended object by matching an abstract characteristic based on the object selected by the user before the design data is generated, or by matching an abstract characteristic based on the design data generated according to the user's input. The three tops arranged in the direction of arrow (a) of FIG. 23 may be recommended objects, and the three tops arranged in the direction of arrow (b) may be design data of tops changed according to the recommended objects. For example, the server 10 may extract a first recommended object 182 when the abstract characteristic corresponding to the object or the generated design data is "neat", a second recommended object 183 when the corresponding abstract characteristic is "unique", and a third recommended object 184 when the corresponding abstract characteristic is "formal".
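At its core, operation 182 maps an abstract characteristic to a recommended object whose appearance classification combination matches it. A minimal sketch follows; the dictionary contents and identifiers are assumptions for illustration, not the disclosed matching algorithm.

```python
# Hypothetical abstract-characteristic -> recommended-object mapping,
# mirroring the three cases described for FIG. 23.
RECOMMENDATIONS = {
    "neat": "recommended_object_182",    # leads to design data 185 (collar added)
    "unique": "recommended_object_183",  # leads to design data 186 (text added)
    "formal": "recommended_object_184",  # leads to design data 187 (pocket added)
}

def recommend(abstract_characteristic):
    """Return the recommended object matched to the abstract characteristic
    of the selected object or of the generated design data."""
    return RECOMMENDATIONS[abstract_characteristic]
```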
In one embodiment, in operation 183, the server 10 may provide the design data corresponding to the extracted recommended object to the user through the customizing interface. For example, the server 10 may provide the user with design data 185 to which a collar is added based on the first recommended object 182, design data 186 to which text is added based on the second recommended object 183, and design data 187 to which a pocket is added based on the third recommended object 184. Of course, the server 10 may provide all three pieces of changed design data, or one or more of them.
In one embodiment, in operation 184, the server 10 may change the design data of the recommended object based on a fourth user input detected in the customizing interface and the preset standard model, and in operation 185, may display the changed design data. For example, the user may further customize the changed design data provided through the server 10.
In this way, the present invention can recommend an object suitable for the user by identifying the sensibility contained in the object selected by the user or in the generated design data, and the user can easily make further changes to the recommended object through the customizing interface.
An object design customizing apparatus according to yet another embodiment of the present invention includes one or more computers and performs the object design customizing method described above.
The object design customizing method of the present invention described above may be implemented as a program (or application) and stored in a medium in order to be executed in combination with a computer, which is hardware.
The steps of the method or algorithm described in connection with the embodiments of the present invention may be implemented directly in hardware, in a software module executed by hardware, or in a combination of the two. The software module may reside in RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art to which the present invention pertains.
Although embodiments of the present invention have been described above with reference to the accompanying drawings, those of ordinary skill in the art to which the present invention pertains will understand that the present invention may be implemented in other specific forms without changing its technical spirit or essential features. Therefore, the embodiments described above should be understood as illustrative in all respects and not restrictive.

Claims (15)

  1. A method for obtaining user interest information based on input image data, the method comprising:
    calculating, by a server, individual appearance characteristics for a plurality of appearance classification criteria by inputting first input image data into an appearance characteristic recognition model;
    generating, by the server, first appearance description data by combining a plurality of individual appearance characteristics of the first input image data; and
    generating and outputting, by the server, first output image data based on the first appearance description data,
    wherein the first input image data is image data received from a specific user, and
    wherein each appearance classification criterion is a classification criterion for describing the appearance of a specific object and includes a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object.
  2. The method of claim 1, wherein the first input image data is image data of a specific article of a specific object received from the user, and
    wherein the first output image data is image data of a virtual article of the specific object generated based on the first appearance description data.
  3. The method of claim 1, wherein generating the first appearance description data comprises:
    extracting code values corresponding to the plurality of individual appearance characteristics of the first input image data; and
    generating the first appearance description data in the form of a code string by combining the plurality of code values.
  4. The method of claim 1, wherein the first output image data is image data of a virtual article including the plurality of individual appearance characteristics included in the first appearance description data.
  5. The method of claim 1, further comprising:
    calculating, by the server, individual appearance characteristics for the plurality of appearance classification criteria by inputting second input image data into the appearance characteristic recognition model; and
    generating, by the server, second appearance description data by combining a plurality of individual appearance characteristics of the second input image data,
    wherein the second input image data is image data obtained by the user modifying the first output image data.
  6. The method of claim 5, further comprising storing, by the server, the first appearance description data or the second appearance description data as interest information of the user.
  7. A program for obtaining user interest information based on input image data, the program being combined with a computer, which is hardware, and stored in a recording medium to execute the method of any one of claims 1 to 6.
  8. A method for customizing a design of an object, the method comprising:
    determining, by a server, an object based on a first user input;
    providing, by the server, a customizing interface based on a plurality of appearance classification criteria corresponding to the object and a plurality of individual appearance characteristics respectively corresponding to the plurality of appearance classification criteria; and
    generating, by the server, design data of the object based on a second user input detected in the customizing interface and a preset standard model,
    wherein each appearance classification criterion is a classification criterion for describing the appearance of a specific object and includes a plurality of individual appearance characteristics expressing various appearance characteristics within the same classification criterion of the object, and
    wherein the customizing interface includes the design data and a plurality of menus matched with the plurality of individual appearance characteristics corresponding to the object.
  9. The method of claim 8, further comprising displaying, by the server, the generated design data in the customizing interface.
  10. The method of claim 8, wherein the second user input is an input for selecting at least one menu among the plurality of menus.
  11. The method of claim 10, wherein the standard model includes at least one of a standard human body model, and a fixed joint line and a length reference line for indicating the plurality of individual appearance characteristics, and
    the method further comprises generating, by the server, the design data based on at least one of a fixed joint line and a length reference line corresponding to the at least one menu selected according to the second user input.
  12. The method of claim 8, further comprising changing, by the server, the design data based on a third user input detected in the customizing interface and the standard model.
  13. The method of claim 8, further comprising:
    extracting, by the server, based on a matching algorithm, a recommended object corresponding to a combination of appearance classification criteria matched with an abstract characteristic corresponding to the object or the generated design data; and
    providing design data corresponding to the extracted recommended object to a user through the customizing interface.
  14. The method of claim 13, further comprising changing, by the server, the design data of the recommended object based on a fourth user input detected in the customizing interface and a preset standard model.
  15. A program for customizing a design of an object, the program being combined with a computer, which is hardware, and stored in a medium to execute the method of any one of claims 8 to 14.
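Outside the claim language, the recognition-and-description pipeline of claim 1 can be sketched as follows. This is a minimal illustration, not the claimed implementation: the model interface, criterion names, characteristic labels, and description format are all assumptions made for the example.

```python
# Hypothetical sketch of the claim 1 pipeline: an appearance characteristic
# recognition model maps an input image to one individual appearance
# characteristic per appearance classification criterion, and the server
# combines those characteristics into appearance description data.

APPEARANCE_CLASSIFICATION_CRITERIA = {
    # criterion -> its individual appearance characteristics (example values)
    "sleeve_length": ["sleeveless", "short", "long"],
    "neckline": ["round", "v-neck", "collar"],
    "pattern": ["solid", "striped", "floral"],
}

def recognize_appearance(image_bytes: bytes) -> dict:
    """Stand-in for the trained recognition model: returns one individual
    appearance characteristic for each appearance classification criterion."""
    # A real model would run inference on the image; here we return fixed values.
    return {"sleeve_length": "long", "neckline": "round", "pattern": "striped"}

def build_appearance_description(characteristics: dict) -> str:
    """Combine the per-criterion characteristics into appearance description data."""
    return ";".join(f"{criterion}={value}"
                    for criterion, value in sorted(characteristics.items()))

characteristics = recognize_appearance(b"<first input image data>")
description = build_appearance_description(characteristics)
print(description)  # neckline=round;pattern=striped;sleeve_length=long
```

In the claimed method the description data then drives generation of the first output image data (a virtual article exhibiting the same characteristics), a step omitted here.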
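Claim 3 narrows the description data to a code string built from per-characteristic code values. A hedged sketch, with an invented code table and criterion ordering:

```python
# Hypothetical illustration of claim 3: each individual appearance
# characteristic maps to a code value, and the code values are combined
# into a code string that serves as the appearance description data.
# The code table and ordering below are assumptions for the example.

CODE_TABLE = {
    "sleeve_length": {"sleeveless": "0", "short": "1", "long": "2"},
    "neckline": {"round": "0", "v-neck": "1", "collar": "2"},
    "pattern": {"solid": "0", "striped": "1", "floral": "2"},
}

def to_code_string(characteristics: dict) -> str:
    """Extract the code value for each individual appearance characteristic
    (in a fixed criterion order) and combine them into a code string."""
    return "".join(CODE_TABLE[criterion][characteristics[criterion]]
                   for criterion in sorted(CODE_TABLE))

code_string = to_code_string({"sleeve_length": "long",
                              "neckline": "round",
                              "pattern": "striped"})
print(code_string)  # 012  (neckline, pattern, sleeve_length)
```

A fixed-width code string like this makes appearance descriptions compact and directly comparable, which is one plausible reason the claim encodes characteristics rather than storing free text.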
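The customizing flow of claim 8 — menus derived from the object's appearance classification criteria, and design data generated from menu selections plus a preset standard model — can likewise be sketched. Object names, menu shapes, and the standard-model representation are assumptions, not the claimed implementation:

```python
# Hypothetical sketch of claim 8: the server builds a customizing interface
# with one menu per appearance classification criterion of the chosen object,
# then generates design data from the user's menu selections (the second
# user input) and a preset standard model.

OBJECT_CRITERIA = {
    "dress": {
        "sleeve_length": ["sleeveless", "short", "long"],
        "skirt_length": ["mini", "knee", "maxi"],
    },
}

def build_customizing_interface(object_name: str) -> dict:
    """One menu per appearance classification criterion; each menu lists the
    criterion's individual appearance characteristics as selectable options."""
    menus = [{"criterion": c, "options": opts}
             for c, opts in OBJECT_CRITERIA[object_name].items()]
    return {"object": object_name, "menus": menus, "design_data": None}

def generate_design_data(interface: dict, selections: dict,
                         standard_model: str = "standard human body model") -> dict:
    """Combine the menu selections with the preset standard model into
    design data for the object."""
    return {"object": interface["object"],
            "standard_model": standard_model,
            "characteristics": dict(selections)}

ui = build_customizing_interface("dress")
design = generate_design_data(ui, {"sleeve_length": "long", "skirt_length": "knee"})
print(design["characteristics"])  # {'sleeve_length': 'long', 'skirt_length': 'knee'}
```

Claims 11 and 12 extend this step: the standard model may carry fixed joint lines and length reference lines that anchor each selected characteristic, and later user inputs may change the generated design data in place.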
PCT/KR2020/007445 2019-06-10 2020-06-09 Method for obtaining user interest information on basis of input image data and method for customizing design of object WO2020251238A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR20190067795 2019-06-10
KR10-2019-0067795 2019-06-10
KR10-2020-0009600 2020-01-28
KR1020200009600A KR102115573B1 (en) 2019-06-10 2020-01-28 System, method and program for acquiring user interest based on input image data
KR10-2020-0016533 2020-02-11
KR1020200016533A KR102115574B1 (en) 2019-06-10 2020-02-11 Method, device and program for customizing object design

Publications (1)

Publication Number Publication Date
WO2020251238A1

Family

ID=70910841

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/KR2020/007445 WO2020251238A1 (en) 2019-06-10 2020-06-09 Method for obtaining user interest information on basis of input image data and method for customizing design of object
PCT/KR2020/007426 WO2020251233A1 (en) 2019-06-10 2020-06-09 Method, apparatus, and program for obtaining abstract characteristics of image data

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/007426 WO2020251233A1 (en) 2019-06-10 2020-06-09 Method, apparatus, and program for obtaining abstract characteristics of image data

Country Status (2)

Country Link
KR (9) KR20200141373A (en)
WO (2) WO2020251238A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807708A (en) * 2021-09-22 2021-12-17 深圳市微琪思服饰有限公司 Flexible clothing production and manufacturing platform system based on distributed mode

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
KR20200141373A (en) * 2019-06-10 2020-12-18 (주)사맛디 System, method and program of constructing dataset for training appearance recognition model
KR102387907B1 (en) * 2020-06-26 2022-04-18 주식회사 이스트엔드 Creators and prosumers participate in the no design clothing design customizing method and system for the same
KR102524049B1 (en) * 2021-02-08 2023-05-24 (주)사맛디 Device and method for recommending apparel for user based on characteristic information
KR102556642B1 (en) 2021-02-10 2023-07-18 한국기술교육대학교 산학협력단 Method of generating data for machine learning training
CN113360477A (en) * 2021-06-21 2021-09-07 四川大学 Classification method for large-scale customized women's leather shoes

Citations (7)

Publication number Priority date Publication date Assignee Title
KR20120078837A (en) * 2011-01-03 2012-07-11 김건민 The commodity sales and management system that used a coordination system
KR20150115475A (en) * 2014-04-04 2015-10-14 홍익대학교세종캠퍼스산학협력단 Image converting tool system of 3D printing robot and Driving method thereof
KR20180014495A (en) * 2016-08-01 2018-02-09 삼성에스디에스 주식회사 Apparatus and method for recognizing objects
KR20180048536A (en) * 2018-04-30 2018-05-10 오드컨셉 주식회사 Method, apparatus and computer program for providing search information from video
KR20180074565A (en) * 2016-12-23 2018-07-03 삼성전자주식회사 Image display device and operating method for the same
KR20190029567A (en) * 2016-02-17 2019-03-20 옴니어스 주식회사 Method for recommending a product using style feature
KR102115573B1 (en) * 2019-06-10 2020-05-26 (주)사맛디 System, method and program for acquiring user interest based on input image data

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JPH1183461A (en) * 1997-09-09 1999-03-26 Mitsubishi Electric Corp Article assortment recognition system
KR101157744B1 (en) * 2010-05-06 2012-06-25 윤진호 Method and system for recommending products based on preference and presenting recommended products for customers
KR102040883B1 (en) 2012-08-23 2019-11-05 인터디지탈 패튼 홀딩스, 인크 Operating with multiple schedulers in a wireless system
CN108268539A (en) * 2016-12-31 2018-07-10 上海交通大学 Video matching system based on text analyzing
KR20180133200A (en) 2018-04-24 2018-12-13 김지우 Application program for managing clothes recorded in recording media, system and method for managing clothes using the same

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
KR20120078837A (en) * 2011-01-03 2012-07-11 김건민 The commodity sales and management system that used a coordination system
KR20150115475A (en) * 2014-04-04 2015-10-14 홍익대학교세종캠퍼스산학협력단 Image converting tool system of 3D printing robot and Driving method thereof
KR20190029567A (en) * 2016-02-17 2019-03-20 옴니어스 주식회사 Method for recommending a product using style feature
KR20180014495A (en) * 2016-08-01 2018-02-09 삼성에스디에스 주식회사 Apparatus and method for recognizing objects
KR20180074565A (en) * 2016-12-23 2018-07-03 삼성전자주식회사 Image display device and operating method for the same
KR20180048536A (en) * 2018-04-30 2018-05-10 오드컨셉 주식회사 Method, apparatus and computer program for providing search information from video
KR102115573B1 (en) * 2019-06-10 2020-05-26 (주)사맛디 System, method and program for acquiring user interest based on input image data
KR102115574B1 (en) * 2019-06-10 2020-05-27 (주)사맛디 Method, device and program for customizing object design

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN113807708A (en) * 2021-09-22 2021-12-17 深圳市微琪思服饰有限公司 Flexible clothing production and manufacturing platform system based on distributed mode
CN113807708B (en) * 2021-09-22 2024-03-01 深圳市微琪思服饰有限公司 Distributed clothing flexible production manufacturing platform system

Also Published As

Publication number Publication date
KR20200141929A (en) 2020-12-21
KR20210002410A (en) 2021-01-08
KR102115573B1 (en) 2020-05-26
KR20200141384A (en) 2020-12-18
KR102115574B1 (en) 2020-05-27
KR20200141375A (en) 2020-12-18
KR102227896B1 (en) 2021-03-15
KR102119253B1 (en) 2020-06-04
WO2020251233A1 (en) 2020-12-17
KR102366580B1 (en) 2022-02-23
KR20200141388A (en) 2020-12-18
KR20200141373A (en) 2020-12-18
KR102355702B1 (en) 2022-01-26

Similar Documents

Publication Publication Date Title
WO2020251238A1 (en) Method for obtaining user interest information on basis of input image data and method for customizing design of object
WO2017171418A1 (en) Method for composing image and electronic device thereof
WO2020222623A9 (en) System and method for automatically constructing content for strategic sales
WO2019156522A1 (en) Image/text-based design creating device and method
WO2020032597A1 (en) Apparatus and method for providing item according to attribute of avatar
WO2016105087A1 (en) Method and system for generating 3d synthetic image by combining body data and clothes data
WO2020085786A1 (en) Style recommendation method, device and computer program
WO2016200150A1 (en) Method and apparatus for providing content
WO2018225939A1 (en) Method, device, and computer program for providing image-based advertisement
WO2020171567A1 (en) Method for recognizing object and electronic device supporting the same
WO2020032567A1 (en) Electronic device for providing information on item based on category of item
JP2007280351A (en) Information providing system and method, or like
WO2018226022A1 (en) Fashion item recommendation server, and fashion item recommendation method using same
WO2019088358A1 (en) Apparatus and method for providing customized jewelry information
WO2018182068A1 (en) Method and apparatus for providing recommendation information for item
WO2021071240A1 (en) Method, apparatus, and computer program for recommending fashion product
WO2021040256A1 (en) Electronic device, and method thereof for recommending clothing
WO2020184855A1 (en) Electronic device for providing response method, and operating method thereof
WO2020251236A1 (en) Image data retrieval method, device, and program using deep learning algorithm
WO2020060012A1 (en) A computer implemented platform for providing contents to an augmented reality device and method thereof
WO2021071238A1 (en) Fashion product recommendation method, device, and computer program
WO2021215758A1 (en) Recommended item advertising method, apparatus, and computer program
WO2021153964A1 (en) Fashion product recommendation method, apparatus, and system
WO2021107556A1 (en) Method for providing recommended item based on user event information and device for executing same
WO2019117463A1 (en) Wearable glasses for augmented reality clothes shopping, and augmented reality clothes shopping method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20822567

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 260422)

122 Ep: pct application non-entry in european phase

Ref document number: 20822567

Country of ref document: EP

Kind code of ref document: A1