KR20170096971A - Method for recommending a product using style feature - Google Patents

Method for recommending a product using style feature

Info

Publication number
KR20170096971A
Authority
KR
South Korea
Prior art keywords
style
image
product
query
learning data
Prior art date
Application number
KR1020170021441A
Other languages
Korean (ko)
Inventor
장윤훈
전재영
박준철
최형원
전진우
Original Assignee
옴니어스 주식회사
Priority date
Filing date
Publication date
Application filed by 옴니어스 주식회사 filed Critical 옴니어스 주식회사
Priority to PCT/KR2017/001801 (WO2017142361A1)
Publication of KR20170096971A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53: Querying
    • G06F16/532: Query formulation, e.g. graphical querying
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241: Advertisements
    • G06Q30/0251: Targeted advertisements
    • G06Q30/0255: Targeted advertisements based on user history
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G06Q30/0601: Electronic shopping [e-shopping]
    • G06Q30/0631: Item recommendations

Abstract

A method for recommending a product using style features comprises: acquiring a query image from a user client; extracting a category and a style feature of the query image; searching a product image database, using a model trained with a plurality of learning data images, for at least one product image having style features similar to those of the query image, wherein the retrieved product image belongs to a category different from that of the query image; and providing the retrieved at least one product image to the user client. The plurality of learning data images include label information indicating the category and style feature of each of the plurality of learning data images.

Description

{METHOD FOR RECOMMENDING A PRODUCT USING STYLE FEATURE}

A product recommendation method using a style feature is disclosed. More particularly, the present invention relates to a product recommendation method and system for recognizing a product from a query image input by a user and recommending a product image of another category matching the recognized product to a user.

Fashion styling is important for people engaged in social activities, because fashion styling is often the basis of instant judgments.

As the importance of fashion styling has increased, fashion styling services have been offered over the Internet. In such services, when a user registers product information on a website, stylists manually select products that suit the registered product and present them to the user.

However, because the selection is performed manually by stylists, this approach requires considerable time and cost.

In addition, existing machine-learning recommendation engines rely predominantly on the user's purchase history and preferences. Because products are screened on the basis of such user data, collecting enough of it also requires considerable time and cost.

Patent Registration No. 10-0511210 (registered date: August 23, 2005)

SUMMARY OF THE INVENTION

It is an object of the present invention to provide a product recommendation method using a style feature.

The problems to be solved by the present invention are not limited to the above-mentioned problems, and other problems which are not mentioned can be clearly understood by those skilled in the art from the following description.

According to an aspect of the present invention, a product recommendation method using style features includes: obtaining a query image from a user client; extracting a category and a style feature of the query image; searching a product image database, using a model trained with a plurality of learning data images, for at least one product image that has style features similar to those of the query image and belongs to a category different from that of the query image; and providing the searched at least one product image to the user client, wherein the plurality of learning data images include label information indicating the category and style feature of each of the plurality of learning data images.

In addition, the learned model may include a feature extraction unit and an image search unit; the step of extracting the category and style features of the query image may include extracting them using the feature extraction unit of the learned model, and the step of searching for the at least one product image may include searching using the image search unit of the learned model.

In addition, the step of retrieving the at least one product image may include receiving from the user client a selection input for at least one style contained in the style feature of the query image, and searching for at least one product image corresponding to the selected style.

The learned model may be a model that maps the plurality of learning data images into a style feature space representing relationships among them. The style feature space may be constructed by extracting the style feature of each of the learning data images it contains, and the distance between the learning data images may be determined according to the extracted style features.

In the style feature space, learning data images having similar style features may be arranged close to each other and learning data images having different style features may be arranged far apart. The step of searching for the at least one product image may include determining a reference product image similar to the query image within the style feature space, and searching for at least one product image located within a predetermined distance from the determined reference product image.

The plurality of learning data images may further include relationship information indicating a connection relationship between the plurality of learning data images.

The learned model may be a model that maps the plurality of learning data images into one or more style feature spaces classified according to their style features. Each style feature space may be constructed by extracting the style feature of each of the learning data images it contains, and the distance between the learning data images may be determined according to the connection relationship among them.

The step of retrieving the at least one product image may further include determining a reference product image similar to the query image in a style feature space corresponding to at least one style included in the style feature of the query image, and searching for at least one product image located within a predetermined distance from the determined reference product image within that style feature space.

In addition, the learned model may include a generative model for generating a target product image having style features similar to those of the query image, and the step of searching for the at least one product image may include generating at least one target product image having style features similar to those of the query image using the generative model, and retrieving at least one product image similar to the at least one target product image from the product image database.

In addition, the step of generating the target product image may include generating a target product image corresponding to at least one of a style feature and a category corresponding to the selection input received from the user client.

Other specific details of the invention are included in the detailed description and drawings.

According to the disclosed embodiments, a product can be recognized from a query image input by a user, and a product of another category having similar style features can be recommended automatically, reducing the time and cost required for recommending products to the user.

In addition, since products are recommended automatically using style features extracted from the query image by a model trained on the product image database, recommendation is possible without relying on user data such as purchase history.

The effects of the present invention are not limited to the above-mentioned effects, and other effects not mentioned can be clearly understood by those skilled in the art from the following description.

FIG. 1 is a diagram illustrating a method of recommending a product according to an embodiment.
FIG. 2 is a diagram illustrating a method of searching for a recommended product matching a query product according to an embodiment.
FIG. 3 is a diagram showing a method of recommending a product considering the style of a query product.
FIG. 4 is a diagram showing an example of data for training a model that recommends a product using a style feature.
FIG. 5 is a diagram illustrating a style feature space according to an exemplary embodiment.
FIG. 6 is a diagram showing a style feature space according to another embodiment.
FIG. 7 is a diagram illustrating a method of using a generative model according to an embodiment.
FIG. 8 is a flowchart briefly illustrating a method for recommending a product using a style feature according to an embodiment.

The advantages and features of the present invention, and the manner of achieving them, will become apparent with reference to the embodiments described in detail below together with the accompanying drawings. The invention is not, however, limited to the embodiments disclosed below and may be embodied in many different forms; the embodiments are provided so that this disclosure fully conveys the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims.

The terminology used herein is for the purpose of describing embodiments and is not intended to limit the present invention. In this specification, the singular includes the plural unless the context indicates otherwise. The terms 'comprises' and/or 'comprising' as used in the specification do not exclude the presence or addition of one or more elements other than those stated. Like reference numerals refer to like elements throughout the specification, and 'and/or' includes each and every combination of one or more of the mentioned elements. Although the terms 'first', 'second', and so on are used to describe various components, these components are not limited by such terms; the terms are used only to distinguish one component from another. Therefore, a first component mentioned below may be a second component within the technical scope of the present invention.

Unless defined otherwise, all terms (including technical and scientific terms) used herein have the meanings commonly understood by one of ordinary skill in the art to which this invention belongs. In addition, commonly used predefined terms are not to be interpreted ideally or excessively unless explicitly defined otherwise.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a diagram illustrating a method of recommending a product according to an embodiment.

Referring to FIG. 1, an example is shown in which a user client 200 receives at least one recommended product image 20 and 30 for a query image 10 from a server 100.

The query image 10 may be a two-dimensional color image including items such as clothing, shoes, bags, and accessories. The query image may have been delivered to the user client 200 from another device (not shown), acquired through a camera (not shown) included in the user client 200, or obtained through a screen capture function of the user client 200.

The query image 10 may be uploaded from the user client 200 to the server 100, may be selected by the user client 200 from among images already stored in the server 100, or may be provided to the server 100 by transmitting link information indicating where the query image 10 is stored.

The server 100 acquires the query image 10 from the user client 200 and recognizes the query product from the query image 10. According to the disclosed embodiment, the server 100 recognizes the product from the query image 10 based on deep learning. The server 100 then searches for and acquires a recommendation image including a recommended product whose style is similar to that of the query product, among products belonging to categories different from that of the query product, and provides the obtained recommendation image to the user client 200.

In this specification, the goods represented by the query image 10 and the recommended product images 20 and 30 are shown and described as clothing and accessories, but the kinds of goods to which the query image 10 and the recommended product images 20 and 30 may relate are not limited thereto.

The server 100 that has acquired the query image 10 from the user client 200 extracts the category and style characteristics of the query product included in the query image 10.

In this specification, 'category' means the type of clothing. For example, categories include tops, bottoms, shoes, bags, hats, and the like.

As used herein, 'style' refers to the style of clothing commonly used in the industry. For example, a style may include casual and office styles, but the manner in which styles are classified is not limited. Also, the style may be classified into a large category and a small category belonging to each large category. For example, casual styles can be subdivided into business casual and young casual.

In one embodiment, the server 100 may extract one or more categories to which the query item may belong, and obtain a probability corresponding to each category. For example, it may be judged that the probability that the query item is a top is 90% and the probability that it is a bottom is 10%. In this case, the server 100 can determine that the query item is a top.

In addition, the server 100 may extract one or more styles to which the query item may belong, and obtain a probability corresponding to each style. For example, it can be judged that the probability of a query item being a casual style is 80% and the probability of being an office style is 20%. In this case, the server 100 can determine that the query product is a casual style.

However, unlike category classification, style classification can be relatively ambiguous. For example, jeans usually match a casual style, but they can also match an office style. Moreover, in recent years there are many business-casual garments that cannot be clearly classified as either office or casual, so style may be harder to classify definitively than category.

Therefore, the server 100 can extract the style features of the query product instead of assigning it a single style. In one embodiment, the style feature may be information indicating the probability that the query item corresponds to each style. In addition, the style feature may include information about at least one style to which the query item may belong.

For example, if the probability that a query item is a casual style is judged to be 60% and the probability that it is an office style is 40%, the query item can be regarded as belonging to both styles. The style feature may therefore indicate that the query item is both casual and office style, but somewhat closer to casual.
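One way to obtain such category and style probabilities is a classifier with two softmax heads over a shared backbone. The sketch below is only an illustration under assumed PyTorch tooling; the layer sizes, the five-category and two-style outputs, and the class lists in the comments are placeholders, not the patent's actual network.

```python
# Minimal sketch (not the patent's model): one CNN backbone feeding two
# softmax heads, one for category probabilities and one for style
# probabilities. The style probability vector can serve as the "style
# feature" described above.
import torch
import torch.nn as nn

class CategoryStyleNet(nn.Module):
    def __init__(self, num_categories=5, num_styles=2):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a deeper CNN
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.category_head = nn.Linear(32, num_categories)  # e.g. top/bottom/shoes/bag/hat
        self.style_head = nn.Linear(32, num_styles)          # e.g. casual/office

    def forward(self, x):
        h = self.backbone(x)
        category_probs = torch.softmax(self.category_head(h), dim=1)
        style_probs = torch.softmax(self.style_head(h), dim=1)
        return category_probs, style_probs

# A query judged 60% casual / 40% office keeps both probabilities as its
# style feature rather than being forced into a single label.
model = CategoryStyleNet()
query = torch.randn(1, 3, 224, 224)             # placeholder query image tensor
cat_p, style_p = model(query)
```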

The server 100 searches for at least one product image based on the category and style features extracted for the query product. The server 100 searches for at least one recommendation image including a recommended product that belongs to a category different from that of the query product, so that the recommended product can be worn together with the product in the query image. For example, if the category of the query item is tops, the server 100 searches for at least one recommendation image including a recommended item belonging to the bottoms, shoes, or bag categories.

In addition, the server 100 searches for at least one recommendation image that includes a recommended item having style features similar to those of the query item while belonging to a different category. An item that merely belongs to a different category can be worn with the query item, but does not necessarily go well with it. For example, if the query product is a casual style and an office-style product is recommended, the two products do not match.

Accordingly, the server 100 may search for and provide to the user client 200 at least one recommendation image including a recommended product having style features similar to those of the query product. In one embodiment, the style feature of the query item may correspond to a single specific style. For example, the query item may be determined to be a casual style. In this case, the server 100 may provide a recommendation image including a recommended product belonging to the casual style.

In another embodiment, the query item may belong to both a casual style and an office style. In this case, the server 100 can provide a recommendation image 20 including a recommended item belonging to the casual style and a recommendation image 30 including a recommended item belonging to the office style, respectively. As another example, the server 100 may provide the user client 200 with a recommendation image including a recommended item belonging to whichever of the two styles the user client selects.

In another embodiment, the query item may have a 60% probability of belonging to a casual style and a 40% probability of belonging to an office style. In this case, the server 100 may provide the user client 200 with a recommendation image including a recommended item having style features similar to those of the query image 10. For example, the server 100 may provide a recommendation image including a recommended item whose own probabilities of belonging to the casual style and the office style are similar to those of the query item.

In addition, the server 100 can search for a recommended product that belongs to a category different from the query product, has similar style features, and matches the query product when worn together with it. In the present embodiment, whether two products 'match' is judged according to the common sense of fashion professionals or general consumers, and means that wearing the two goods together produces an aesthetically pleasing effect.

Accordingly, the server 100 searches for at least one recommended product that belongs to a category different from the query product, has features similar to the query product, and is suitable for wearing together with the query product. The server 100 provides the user client 200 with a recommendation image including the searched recommended product. In addition, the server 100 may provide the user client 200 with a link to a shopping mall where each recommended product can be purchased.

FIG. 2 is a diagram illustrating a method of searching for a recommended product matching a query product according to an embodiment.

In one embodiment, the server 100 uses deep learning to retrieve a recommended product that matches the query item.

The server 100 extracts features from the query image 10. According to the embodiment, the server 100 extracts features of the query image 10 through a learned model based on deep learning.

Deep learning is defined as a set of machine learning algorithms that attempt to achieve a high level of abstraction (summarizing key content or functions in large amounts of data or complex data) through combinations of several nonlinear transformations. Broadly speaking, deep learning can be viewed as a field of machine learning that teaches computers to process information in a way analogous to human thinking.

Given some data, it is first represented in a form a computer can process (for example, the pixel information of an image can be represented as a column vector), and much research has focused on how to build better representations and how to model them for learning. As a result of these efforts, various deep learning techniques have been developed, including Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Deep Belief Networks (DBN). The server 100 can, for example, extract features of the query image using a model trained with a convolutional neural network.
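As a concrete but purely illustrative way to realize such a CNN feature extractor, a stock backbone can be truncated before its classification head. The patent does not name a specific architecture, so the torchvision ResNet, the input size, and the uninitialized weights below are all assumptions.

```python
# Minimal sketch: use a convolutional network (here a torchvision ResNet,
# chosen only for illustration) as a feature extractor for the query image.
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet18(weights=None)        # weights omitted to keep the sketch offline
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier head
feature_extractor.eval()

with torch.no_grad():
    query = torch.randn(1, 3, 224, 224)         # placeholder query image tensor
    feature = feature_extractor(query).flatten(1)   # (1, 512) feature vector
```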

In addition, the server 100 may extract features of the product images stored in its product image database. According to one embodiment, the feature extraction operation for the product images can be performed only once in advance; in this case, the features extracted from each product image are stored in association with that image. According to another embodiment, the feature extraction operation for the product images may be performed each time the feature extraction operation for a query image is completed. In addition, the product images stored in the product image database may be updated periodically, and the feature extraction operation may be performed whenever a new product image is stored in the database.

The server 100 searches for a product image matching the query image 10 based on features extracted from the query image 10. For this purpose, the server 100 may use a model that maps the image space into a feature space. The model can be trained, for example, with the Siamese CNN technique.

As learning data for learning the model, a learning image pair can be used. The learning image pair includes a first learning image pair and a second learning image pair.

The first learning image pair is made up of images of products that match each other, and the second learning image pair is made up of images of products that do not match each other.

In one embodiment, the first learning pair and the second learning pair may be built from a database obtained from an online shopping mall. For example, using data obtained from the shopping mall database, lists of goods purchased together by consumers can be obtained. The server 100 can determine that products purchased together more frequently by consumers match each other better.
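A simple way to derive such matching and non-matching pairs from shopping-mall data is to count how often two items appear in the same order, as in the hypothetical sketch below; the order data, item identifiers, and frequency threshold are made up for illustration.

```python
# Minimal sketch: treat item pairs co-purchased at least `threshold` times
# as matching pairs, and the remaining observed pairs as non-matching.
from collections import Counter
from itertools import combinations

orders = [                                   # hypothetical purchase records
    ["shirt_001", "jeans_014"],
    ["shirt_001", "jeans_014", "bag_003"],
    ["shirt_002", "slacks_021"],
]

co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(set(order)), 2):
        co_counts[(a, b)] += 1

threshold = 2                                # illustrative cut-off on co-purchase frequency
matching_pairs = [pair for pair, c in co_counts.items() if c >= threshold]
non_matching_pairs = [pair for pair, c in co_counts.items() if c < threshold]
print(matching_pairs)                        # e.g. [('jeans_014', 'shirt_001')]
```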

In another embodiment, the first learning pair and the second learning pair may be constructed using a predetermined algorithm that determines whether garments match each other based on their shape, color, and pattern.

In one embodiment, the first learning pair and the second learning pair may consist of images of goods that belong to different categories and that, respectively, match or do not match each other.

According to one embodiment, a learning image pair may include images of two items. For example, a first learning image pair may consist of an image of shoes and an image of a top that matches those shoes. As another example, a second learning image pair may consist of an image of shoes and an image of a top that does not match those shoes.

According to another embodiment, a learning image pair may include images of n items. For example, a first learning image pair may consist of images of shoes, a top that matches the shoes, trousers that match the shoes, a skirt that matches the shoes, and a bag that matches the shoes. As another example, a second learning image pair may consist of images of shoes, a top that does not match the shoes, trousers that do not match the shoes, a skirt that does not match the shoes, and a bag that does not match the shoes.

When training with such learning image pairs is completed, images of matching products from different categories are arranged close to each other in the feature space. For example, a white casual shirt and black casual pants are different colors, but both are casual style, so they can be regarded as matching products; the white casual shirt and the black casual pants are therefore placed close together within the feature space.

On the other hand, images of products from different categories that do not match each other are located far apart in the feature space. For example, a casual shirt and suit pants can be regarded as products that do not match, so they are placed far away from each other within the feature space.
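The behavior just described, matching pairs pulled close and non-matching pairs pushed apart, is what a Siamese network trained with a contrastive loss produces. The following is a minimal sketch under assumed PyTorch tooling; the embedding network, margin value, and random stand-in tensors are illustrative, not the patent's configuration.

```python
# Minimal sketch of Siamese training with a contrastive loss: matching
# pairs (label 1) are pulled together in the feature space, non-matching
# pairs (label 0) are pushed at least `margin` apart.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, match, margin=1.0):
    dist = F.pairwise_distance(z1, z2)
    pos = match * dist.pow(2)                          # matching pairs: pull together
    neg = (1 - match) * F.relu(margin - dist).pow(2)   # non-matching pairs: push apart
    return (pos + neg).mean()

embed = EmbeddingNet()
optimizer = torch.optim.Adam(embed.parameters(), lr=1e-3)

img_a = torch.randn(8, 3, 128, 128)        # e.g. shirts (stand-in batch)
img_b = torch.randn(8, 3, 128, 128)        # e.g. pants from another category
match = torch.randint(0, 2, (8,)).float()  # 1 = worn/bought together, 0 = not

loss = contrastive_loss(embed(img_a), embed(img_b), match)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```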

Referring to FIG. 2, a feature space 300 for searching for a recommended product matching the query product included in the query image 10 is shown.

The server 100 extracts the features of the query image 10 and searches the feature space 300 for a recommended product matching the query image 10. In one embodiment, the server 100 extracts features from the query image 10 using a model trained by deep learning. Here, the model may be a model trained with a CNN technique; however, it is not necessarily limited to a convolutional neural network, and a model trained with other types of deep learning techniques may of course be used.

The server 100 uses the features of the query image 10 to find, in the feature space 300, a product 310 similar to the query product included in the query image 10. The product 310 may be the same product as the query product, or a product having similar features.

The server 100 searches for and obtains at least one recommended item 320 located close to the item 310 within the feature space. For example, the server 100 may acquire jeans 320 located close, within the feature space, to a shirt 310 that is similar to the shirt included in the query image 10.

The server 100 can provide the user client 200 with a recommendation image including the obtained jeans 320, together with the price of the jeans 320 and a link to a shopping mall where the jeans 320 can be purchased.
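The retrieval step itself can be sketched as a nearest-neighbour search in the learned feature space, restricted to categories other than that of the query. The function and the random stand-in data below are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch: embed the query, rank all stored product features by
# distance, and keep the closest items from a different category.
import torch

def recommend(query_feature, product_features, product_categories, query_category, k=3):
    # query_feature: (d,) tensor; product_features: (N, d) tensor.
    dists = torch.cdist(query_feature.unsqueeze(0), product_features).squeeze(0)
    ranked = torch.argsort(dists)
    picks = [int(i) for i in ranked if product_categories[int(i)] != query_category]
    return picks[:k]  # indices of the k closest products from other categories

# Usage with random stand-in data: 100 products alternating between two categories.
product_features = torch.randn(100, 64)
product_categories = ["top" if i % 2 == 0 else "bottom" for i in range(100)]
query_feature = torch.randn(64)
print(recommend(query_feature, product_features, product_categories, "top"))
```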

FIG. 3 is a diagram showing a method of recommending a product considering the style of a query product.

The server 100 searches for a recommendation image including at least one recommendation product matching the query item.

In one embodiment, when the query product included in the query image 10 is a casual style image, the server 100 may recommend casual style pants, shoes, bags, etc. matching the query item.

When the query product included in the query image 10 is an office style image, the server 100 can recommend office style pants, shoes, bags, etc. matching the query product.

In addition, when the query product included in the query image 10 corresponds to both the casual style and the office style, the server 100 may recommend casual-style pants, shoes, and bags as well as office-style pants, shoes, and bags, respectively.

In addition, the server 100 may recommend pants, shoes, and bags of whichever of the casual and office styles the user client 200 selects.

Hereinafter, the product recommendation method using the style feature will be described in detail. In the disclosed embodiment, the server 100 creates a learned model that can recommend products using style features. The server 100 can perform product recommendation using style features using the learned model.

FIG. 4 is a diagram showing an example of data for training a model that recommends a product using a style feature.

The server 100 collects data for training a model that recommends products using style features. The server 100 can train the model using deep learning, but the training method is not limited.

For example, the server 100 can train the model using a Siamese CNN. As another example, the server 100 may use a generative model such as Generative Adversarial Networks (GAN) or a Variational Auto-Encoder (VAE). Specific methods using each model will be described later.

Referring to Fig. 4, an example of data used for learning a model is shown. According to the disclosed embodiment, the data used to train the model includes at least one merchandise image.

In one example, the product images may be uploaded by an administrator (not shown) of the server 100. As another example, the product images may be collected automatically from another apparatus (not shown) or another server (not shown) associated with the server 100. The uploaded or collected product images may include label information.

For example, the label information includes information on a category and a style of a product included in each product image.

In this specification, 'category' means the type of clothing. For example, categories include tops, bottoms, shoes, bags, hats, and the like.

As used herein, 'style' refers to the style of clothing commonly used in the industry. For example, a style may include casual and office styles, but the manner in which styles are classified is not limited. Also, the style may be classified into a large category and a small category belonging to each large category. For example, casual styles can be subdivided into business casual and young casual.

As shown in FIG. 4, each merchandise image includes style information such as a casual style or an office style. Further, each merchandise image includes category information such as a bag, shoes, or pants.

The server 100 may train a model using product images whose label information indicates their category and style. The learned model is used by the server 100 to recommend a product that has style features similar to the query product and belongs to a category different from that of the query product.

In one embodiment, the learning data used for the learning of the model further includes relationship information indicating a connection relationship between product images used for learning.

The connection relationship between product images includes information on whether they go together. For example, information on whether a particular top and bottom are worn together is stored in the learning data as connection relationship information between that top and bottom.

In the present embodiment, whether two products 'match' is judged according to the common sense of fashion professionals or general consumers, and means that wearing the two goods together produces an aesthetically pleasing effect. If a particular top and bottom have similar style features, they can generally match; however, they may still fail to match because of their specific colors or shapes. Accordingly, the learning data may include not only information on category and style but also information on whether the product images match each other.

Relationship information indicating whether product images from different categories match each other may be input by an administrator or generated automatically by the server 100. The server 100 may collect coordination information from shopping malls or fashion editorial photographs and determine whether product images match each other on that basis. For example, a top and bottom worn together by a model in an editorial photograph can be judged to match each other, and products purchased together by users in a shopping mall can likewise be judged to match.
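One possible shape for such learning-data records, carrying both the label information (category, style) and the relationship information (which items match), is sketched below; the field names and example values are illustrative, not taken from the patent.

```python
# Minimal sketch of a learning-data record combining label information and
# relationship (match) information for each product image.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProductImageRecord:
    image_path: str
    category: str                      # e.g. "top", "bottom", "shoes", "bag"
    style_probs: Dict[str, float]      # e.g. {"casual": 0.8, "office": 0.2}
    matches: List[str] = field(default_factory=list)   # ids/paths of items it matches

records = [
    ProductImageRecord("img/shirt_001.jpg", "top", {"casual": 0.9, "office": 0.1},
                       matches=["img/jeans_014.jpg"]),
    ProductImageRecord("img/jeans_014.jpg", "bottom", {"casual": 0.95, "office": 0.05},
                       matches=["img/shirt_001.jpg"]),
]
```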

FIG. 5 is a diagram illustrating a style feature space according to an exemplary embodiment.

According to the disclosed embodiment, the server 100 can train a model using the product images shown in FIG. 4 and a Siamese CNN. As a result of the training, the server 100 obtains the style feature space 400 shown in FIG. 5.

In FIG. 5, the style feature space 400 is drawn as a two-dimensional space for convenience of explanation. According to the disclosed embodiment, however, the style feature space 400 may be an n-dimensional space (n > 2).

Within the style feature space 400, product images with similar style features are located close to each other, and product images with different style features are located far from each other.

The server 100 obtains the features of the query image 10 and finds a product image 410 similar to the query image 10 in the style feature space 400. The server 100 then acquires at least one product image located close to the product image 410 and provides the user client 200 with the acquired product images and information on the products they contain.

In one embodiment, the server 100 may generate the style feature space 400 using the relationship information indicating the connection relationships between the product images.

According to the embodiment, in the style feature space 400, product images having similar style features are clustered close to each other, and product images having different style features or not matching each other are located far apart.

The server 100 obtains the features of the query image 10, finds a product image 410 similar to the query image 10 in the style feature space 400, and acquires at least one product image located close to the product image 410. The server 100 provides the acquired product images and information on the products they contain to the user client 200, so that products having style features similar to those of the query image 10 are recommended to the user.

FIG. 6 is a diagram showing a style feature space according to another embodiment.

According to the disclosed embodiment, the server 100 can learn a model using product images shown in FIG. 4 and Siamese CNN.

In one embodiment, the server 100 may classify the product images included in the training data based on their style features. For example, the server 100 can determine which style each product image belongs to from the style features of each product image in the learning data.

For example, the server 100 may obtain the probability that each merchandise image belongs to each style. The server 100 acquires information on styles having a probability of 30% or more for each merchandise image. The server 100 determines that each product image belongs to the obtained style.

The method of determining the style to which each product image belongs and the specific threshold value above are provided only for illustration; the actual method of determining the style of each product image is not limited.

The server 100 may train a model using the classified product images and, as a result of the training, generate a style feature space for each style.

Referring to FIG. 6, a first style feature space 500 containing casual-style product images and a second style feature space 502 containing office-style product images are shown. The shirt 510 shown in FIG. 6 corresponds to both the office style and the casual style, and is therefore included in both the first style feature space 500 and the second style feature space 502.

The method used to generate the feature space 300 shown in FIG. 2 may also be used to generate the first style feature space 500 and the second style feature space 502, respectively.

Therefore, in the first style feature space 500 and the second style feature space 502, product images that match each other are placed close together, and product images that do not match each other are placed far apart.

Using the query image 10 and the style feature spaces 500 and 502, the server 100 searches for a product image of another category that belongs to the same style as the query image 10 and matches it.

The server 100 extracts the features of the query image 10 and obtains the product image 510 similar to the query image 10 from the one or more style feature spaces corresponding to the style of the query image 10. Since the query image 10 belongs to both the office style and the casual style, the server 100 can obtain both an office-style product image and a casual-style product image.

The server 100 searches for one or more product images located close to the product 510 in the first style feature space 500 and the second style feature space 502. In one embodiment, the server 100 selects the style feature space of the style corresponding to a selection input obtained from the user client 200, and acquires one or more product images located near the product image similar to the query image 10 within the selected style feature space.

According to the disclosed embodiment, the server 100 can provide the user client 200 with product images of other categories that belong to the same style as the query image 10 and match it.
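Maintaining one feature space per style and searching only the spaces relevant to the query (or the one the user selected) can be sketched as follows. The 30% membership threshold echoes the earlier example; the stand-in data, dimensionality, and item identifiers are illustrative assumptions.

```python
# Minimal sketch: per-style feature spaces, searched only for the styles
# the query belongs to, or only the style selected by the user client.
import torch

style_spaces = {                       # style -> (product features, product ids)
    "casual": (torch.randn(50, 64), [f"casual_{i}" for i in range(50)]),
    "office": (torch.randn(40, 64), [f"office_{i}" for i in range(40)]),
}

def search_by_style(query_feature, query_style_probs, selected_style=None, k=3, thr=0.3):
    styles = [selected_style] if selected_style else [
        s for s, p in query_style_probs.items() if p >= thr]
    results = {}
    for s in styles:
        feats, ids = style_spaces[s]
        dists = torch.cdist(query_feature.unsqueeze(0), feats).squeeze(0)
        nearest = torch.argsort(dists)[:k]
        results[s] = [ids[int(i)] for i in nearest]
    return results

# A query judged 60% casual / 40% office is searched in both spaces.
print(search_by_style(torch.randn(64), {"casual": 0.6, "office": 0.4}))
```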

FIG. 7 is a diagram illustrating a method of using a generative model according to an embodiment.

In one embodiment, the server 100 may utilize a generative model such as Generative Adversarial Networks (GAN) or a Variational Auto-Encoder (VAE).

A generative model is trained on sample images to generate new images that resemble real ones. For example, a generative model trained on human face images can generate face images that look like real photographs.

Likewise, a generative model can be trained on learning data such as that shown in FIG. 4 to generate an image of a product that matches an actual product. For example, a generative model can be used to create a clothing image of a similar style that matches an actual garment.

Referring to FIG. 7, a database 600 of product images stored in the server 100 is shown. The server 100 can use the database 600 to train the generative model.

When the server 100 acquires the query image 10, the server 100 generates a product image 40 having style features similar to those of the query image 10. In one embodiment, the server 100 may generate a product image with a style feature corresponding to a selection input obtained from the user client 200.

The server 100 acquires a product image 610 similar to the generated product image 40 from the database 600, and may provide the user client 200 with the product image 610 and information on the goods it contains.
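The generative route can be summarized as: generate a target product image conditioned on the query's style feature, then look up the stored product image closest to that target. In the sketch below, a small decoder stands in for a GAN generator or VAE decoder, and pixel-space distance stands in for whatever similarity measure would actually be used; both are assumptions for illustration only.

```python
# Minimal sketch: a decoder maps a style/latent vector to a target product
# image, and the database is searched for the stored image closest to it.
import torch
import torch.nn as nn

decoder = nn.Sequential(               # style/latent vector -> 3x64x64 image
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
)

def recommend_via_generation(style_feature, db_images):
    # db_images: (N, 3, 64, 64) stand-in product image database tensor
    target = decoder(style_feature).view(1, 3, 64, 64)          # generated target image
    dists = (db_images - target).flatten(1).pow(2).sum(dim=1)   # pixel-space distance
    return int(torch.argmin(dists))                             # index of closest product

db = torch.randn(20, 3, 64, 64)
print(recommend_via_generation(torch.randn(1, 64), db))
```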

FIG. 8 is a flowchart briefly illustrating a method for recommending a product using a style feature according to an embodiment.

Referring to FIG. 8, the product recommendation method using a style feature consists of steps processed in time series by the server 100 shown in FIG. 1. Therefore, even if omitted below, the descriptions given above regarding the server 100 of FIG. 1 also apply to the method of FIG. 8.

In step S710, the server 100 acquires a query image from the user client 200.

In step S720, the server 100 extracts the category and style characteristics of the query image.

In step S730, the server 100 retrieves at least one merchandise image having style characteristics similar to the query image from the merchandise image database using the learned model using the plurality of learning data images. The retrieved product image belongs to a category different from the category of the query image.

The plurality of learning data images include label information indicating a category and a style characteristic of each of the plurality of learning data images.

In step S740, the server 100 provides the retrieved at least one product image to the user client 200.
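Steps S710 to S740 can be tied together in a single pipeline function. The helpers used below (`learned_model.extract`, `learned_model.search`, `user_client.send`) are hypothetical placeholders for the components sketched earlier, not an API defined by the patent.

```python
# Minimal sketch of the overall flow of FIG. 8; the model, database, and
# client objects are assumed to be provided by the surrounding system.
def recommend_products(query_image, learned_model, product_db, user_client):
    # S710: query_image is assumed to have already been obtained from the client.
    category, style_feature = learned_model.extract(query_image)        # S720
    results = learned_model.search(product_db, style_feature,           # S730
                                   exclude_category=category)
    user_client.send(results)                                           # S740
    return results
```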

The steps of a method or algorithm described in connection with the embodiments of the present invention may be embodied directly in hardware, in a software module executed by hardware, or in a combination of the two. The software module may reside in random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium known in the art to which the invention pertains.

While the present invention has been described with reference to what are presently considered to be practical exemplary embodiments, those of ordinary skill in the art will understand that the invention is not limited to the disclosed embodiments and that various modifications and equivalent other embodiments are possible. Therefore, the above-described embodiments should be understood as illustrative in all respects and not restrictive.

10: query image
20, 30: recommended product images
100: Server
200: User Client

Claims (10)

A product recommendation method using a style feature,
Obtaining a query image from a user client;
Extracting category and style characteristics of the query image;
Searching a product image database, using a model trained with a plurality of learning data images, for at least one product image having style features similar to those of the query image, the retrieved product image belonging to a category different from the category of the query image; and
Providing the retrieved at least one product image to the user client,
Wherein the plurality of learning data images comprise label information indicating a category and style characteristic of each of the plurality of learning data images.
The method according to claim 1,
Wherein the learned model includes a feature extraction unit and an image search unit,
Wherein the extracting of the category and style features of the query image comprises:
And extracting a category and a style feature of the query image using the feature extraction unit of the learned model,
The step of retrieving the at least one merchandise image comprises:
And retrieving the at least one merchandise image using an image retrieval unit of the learned model.
The method according to claim 1,
The step of retrieving the at least one merchandise image comprises:
Receiving a selection input for at least one style included in a style feature of the query image from the user client; And
Retrieving at least one product image corresponding to the selected style.
The method according to claim 1,
The learned model may include:
Wherein the plurality of learning data images correspond to a style feature space representing a relationship between the plurality of learning data images.
Wherein the style feature space is constructed by extracting a style feature of each of the plurality of learning data images included in the style feature space, and a distance between the plurality of learning data images is determined according to the extracted style features.
The method according to claim 4,
The style feature space is configured such that:
Learning data images having similar style characteristics among the plurality of learning data images are arranged close to each other and learning data images having different style characteristics are arranged far away from each other,
The step of retrieving the at least one merchandise image comprises:
Determining, within the style feature point space, a reference product image similar to the query image; And
Searching for at least one product image located within a predetermined distance from the determined reference product image.
The method according to claim 1,
Wherein the plurality of learning data images include:
Further comprising relationship information indicating a connection relationship between the plurality of learning data images.
The method according to claim 6,
The learned model may include:
Wherein the plurality of learning data images correspond to one or more style feature spaces classified on the basis of their style features,
Wherein each of the one or more style feature spaces is constructed by extracting a style feature of each of the plurality of learning data images included in it, and a distance between the plurality of learning data images is determined according to the connection relationship.
The method according to claim 7,
The step of retrieving the at least one merchandise image comprises:
Determining a reference merchandise image similar to the query image in a style feature point space corresponding to at least one style included in a style feature of the query image; And
Searching for at least one product image located within a predetermined distance from the determined reference product image within the style feature space.
The method according to claim 1,
The learned model may include:
And a generative model for generating a target product image having style characteristics similar to the query image,
The step of retrieving the at least one merchandise image comprises:
Generating at least one target product image having style characteristics similar to the query image using the generative model; And
Retrieving from the product image database at least one product image similar to the at least one target product image.
The method according to claim 9,
Wherein the step of generating the target product image comprises:
Generating a target product image corresponding to at least one of a style feature and a category corresponding to a selection input received from the user client.
KR1020170021441A 2016-02-17 2017-02-17 Method for recommending a product using style feature KR20170096971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2017/001801 WO2017142361A1 (en) 2016-02-17 2017-02-17 Method for recommending product using style characteristic

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020160018450 2016-02-17
KR20160018450 2016-02-17

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020190028948A Division KR20190029567A (en) 2016-02-17 2019-03-13 Method for recommending a product using style feature

Publications (1)

Publication Number Publication Date
KR20170096971A (en) 2017-08-25

Family

ID=59761700

Family Applications (2)

Application Number Title Priority Date Filing Date
KR1020170021441A KR20170096971A (en) 2016-02-17 2017-02-17 Method for recommending a product using style feature
KR1020190028948A KR20190029567A (en) 2016-02-17 2019-03-13 Method for recommending a product using style feature

Family Applications After (1)

Application Number Title Priority Date Filing Date
KR1020190028948A KR20190029567A (en) 2016-02-17 2019-03-13 Method for recommending a product using style feature

Country Status (1)

Country Link
KR (2) KR20170096971A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190093813A (en) * 2018-01-19 2019-08-12 네이버 주식회사 Method and system for recommending product based on artificial intelligence
KR20190109652A (en) * 2018-03-07 2019-09-26 네이버 주식회사 Method and system for recommending product based style space created using artificial intelligence
WO2019182378A1 (en) * 2018-03-21 2019-09-26 Lg Electronics Inc. Artificial intelligence server
KR20190119219A (en) * 2018-04-02 2019-10-22 카페24 주식회사 Main image recommendation method and apparatus, and system
JP2019207508A (en) * 2018-05-28 2019-12-05 株式会社リコー Image search apparatus, image search method, image search program, and product catalog generation system
KR20200013141A (en) * 2018-07-17 2020-02-06 주식회사 비주얼 Method and electric apparatus for ordering jewelry product
KR20200023705A (en) * 2018-08-22 2020-03-06 주식회사 비주얼 Method and electric apparatus for recommending jewelry product
CN110909754A (en) * 2018-09-14 2020-03-24 哈尔滨工业大学(深圳) Attribute generation countermeasure network and matching clothing generation method based on same
CN111179031A (en) * 2019-12-23 2020-05-19 第四范式(北京)技术有限公司 Training method, device and system for commodity recommendation model
KR20200104607A (en) * 2019-02-27 2020-09-04 주식회사 마크애니 Personalized item recommendation method and apparatus using image analysis
KR20210016593A (en) * 2018-01-19 2021-02-16 네이버 주식회사 Method and system for recommending product based on artificial intelligence
WO2021215758A1 (en) * 2020-04-23 2021-10-28 오드컨셉 주식회사 Recommended item advertising method, apparatus, and computer program
CN116127111A (en) * 2023-01-03 2023-05-16 百度在线网络技术(北京)有限公司 Picture searching method, picture searching device, electronic equipment and computer readable storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200141373A (en) * 2019-06-10 2020-12-18 (주)사맛디 System, method and program of constructing dataset for training appearance recognition model
KR102150720B1 (en) 2020-01-03 2020-09-02 주식회사 스타일쉐어 Image embedding apparatus and method for content-based user clustering
KR102133039B1 (en) * 2020-03-30 2020-07-10 서명교 Server for providing apparel shopping mall platform
KR102178962B1 (en) 2020-04-21 2020-11-13 주식회사 스타일쉐어 Creator recommendation artificail neural network apparatus and method for fashion brand
KR102178961B1 (en) 2020-04-21 2020-11-13 주식회사 스타일쉐어 Artificial neural network apparatus and method for recommending fashion item using user clustering
KR102642704B1 (en) * 2021-11-01 2024-03-04 소리달 주식회사 Shoes recommending apparatus based stereo image
KR102628994B1 (en) * 2023-04-24 2024-01-25 주식회사 엔피오이 AI-based personalized bag recommendation system for consumers

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100511210B1 (en) 2004-12-27 2005-08-30 주식회사지앤지커머스 Method for converting 2d image into pseudo 3d image and user-adapted total coordination method in use artificial intelligence, and service besiness method thereof

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210016593A (en) * 2018-01-19 2021-02-16 네이버 주식회사 Method and system for recommending product based on artificial intelligence
KR20190093813A (en) * 2018-01-19 2019-08-12 네이버 주식회사 Method and system for recommending product based on artificial intelligence
KR20190109652A (en) * 2018-03-07 2019-09-26 네이버 주식회사 Method and system for recommending product based style space created using artificial intelligence
WO2019182378A1 (en) * 2018-03-21 2019-09-26 Lg Electronics Inc. Artificial intelligence server
US11531864B2 (en) 2018-03-21 2022-12-20 Lg Electronics Inc. Artificial intelligence server
KR20190119219A (en) * 2018-04-02 2019-10-22 카페24 주식회사 Main image recommendation method and apparatus, and system
JP2019207508A (en) * 2018-05-28 2019-12-05 株式会社リコー Image search apparatus, image search method, image search program, and product catalog generation system
US11900423B2 (en) 2018-05-28 2024-02-13 Ricoh Company, Ltd. Image retrieval apparatus image retrieval method, product catalog generation system, and recording medium
KR20200013141A (en) * 2018-07-17 2020-02-06 주식회사 비주얼 Method and electric apparatus for ordering jewelry product
KR20200023705A (en) * 2018-08-22 2020-03-06 주식회사 비주얼 Method and electric apparatus for recommending jewelry product
CN110909754A (en) * 2018-09-14 2020-03-24 哈尔滨工业大学(深圳) Attribute generation countermeasure network and matching clothing generation method based on same
CN110909754B (en) * 2018-09-14 2023-04-07 哈尔滨工业大学(深圳) Attribute generation countermeasure network and matching clothing generation method based on same
KR20200104607A (en) * 2019-02-27 2020-09-04 주식회사 마크애니 Personalized item recommendation method and apparatus using image analysis
CN111179031A (en) * 2019-12-23 2020-05-19 第四范式(北京)技术有限公司 Training method, device and system for commodity recommendation model
CN111179031B (en) * 2019-12-23 2023-09-26 第四范式(北京)技术有限公司 Training method, device and system for commodity recommendation model
WO2021215758A1 (en) * 2020-04-23 2021-10-28 오드컨셉 주식회사 Recommended item advertising method, apparatus, and computer program
CN116127111A (en) * 2023-01-03 2023-05-16 百度在线网络技术(北京)有限公司 Picture searching method, picture searching device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
KR20190029567A (en) 2019-03-20

Similar Documents

Publication Publication Date Title
KR20190029567A (en) Method for recommending a product using style feature
JP7196885B2 (en) Search system, search method, and program
JP7272490B2 (en) Search support system and search support method
CN104504055B (en) The similar computational methods of commodity and commercial product recommending system based on image similarity
US20200320769A1 (en) Method and system for predicting garment attributes using deep learning
KR102072339B1 (en) Image feature data extraction and use
KR100687906B1 (en) System for recommendation the goods and method therefor
WO2018228448A1 (en) Method and apparatus for recommending matching clothing, electronic device and storage medium
US20220138831A1 (en) Method of Providing Fashion Item Recommendation Service Using User's Body Type and Purchase History
KR102317432B1 (en) Method, apparatus and program for fashion trend prediction based on integrated analysis of image and text
CN110110181A (en) A kind of garment coordination recommended method based on user styles and scene preference
US9727620B2 (en) System and method for item and item set matching
TW201411515A (en) Interactive clothes searching in online stores
US9460342B1 (en) Determining body measurements
KR20200045668A (en) Method, apparatus and computer program for style recommendation
De Melo et al. Content-based filtering enhanced by human visual attention applied to clothing recommendation
KR102295459B1 (en) A method of providing a fashion item recommendation service to a user using a date
KR20200066970A (en) Method and system for automatic, customized outfit coordination, product recommendation and information management based on user's fashion item possession and preference
KR20200141251A (en) Method of advertising personalized fashion item and server performing the same
KR20200042203A (en) Outfit coordination system and method based on user input Images
CN114201681A (en) Method and device for recommending clothes
KR102495868B1 (en) Fashion-related customized perfume recommendation system using ai
KR102200038B1 (en) A method of providing a fashion item recommendation service to a user using a date
KR20210131198A (en) Method, apparatus and computer program for advertising recommended product
CN106294419B (en) Method and device for collecting service object information

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E601 Decision to refuse application
A107 Divisional application of patent
WITB Written withdrawal of application