KR20160120674A - Clothes recommendation system using gpu - Google Patents

Clothes recommendation system using gpu Download PDF

Info

Publication number
KR20160120674A
Authority
KR
South Korea
Prior art keywords
garment
image file
tag information
information
image
Prior art date
Application number
KR1020160042321A
Other languages
Korean (ko)
Inventor
정권진
Original Assignee
주식회사 컴퍼니원헌드레드
Priority date
Filing date
Publication date
Application filed by 주식회사 컴퍼니원헌드레드
Publication of KR20160120674A publication Critical patent/KR20160120674A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/335 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0269 Targeted advertisements based on user profile or attribute
    • G06Q30/0271 Personalized advertisement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/02 Terminal devices

Abstract

A garment recommendation system using a graphics processing unit according to the present invention includes a separating unit that recognizes a garment in a first image file and generates garment region information by determining the degree to which a person's body is shown in the first image file; an extracting unit that extracts first tag information, including a feature vector of the garment, from the garment region information; and a recommendation unit that retrieves second tag information matching the first tag information from a database and selects and presents a second image file having the second tag information, wherein the separating unit generates the garment region information using the graphics processing unit.

Description

{CLOTHES RECOMMENDATION SYSTEM USING GPU}

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a recommendation system and, more particularly, to a clothes recommendation system using a graphics processing unit that analyzes an image file and recommends clothes suited to the user.

With the development of imaging technology, services that use images in various forms are being deployed across many industrial fields.

One example of such an image-based service is a clothing recommendation system, an image-based recommendation system used to recommend clothes.

The clothing recommendation system must extract tag information from a selected image, classify the extracted tag information, and search for products whose tag information matches.

In a conventional clothing recommendation system, tag information such as color, pattern, and type must be entered for an image when searching for a product matching the selected image. The searcher must visually inspect the selected image and manually enter its tag information. The conventional garment recommendation system therefore requires a considerably cumbersome process of visual inspection and manual tag entry.

For example, in an image-based garment recommendation system in which a shopping mall server, such as an online clothing store, recommends garments matching an image file presented by the user, the selected image must still be visually inspected and the user must manually enter its tag information.

If, to avoid manual tagging, the user transmits the image itself to a server, excessive network traffic may occur during transmission. In particular, when many clients access the server simultaneously and request extraction of tag information, excessive network traffic and server congestion may arise, in which case the server cannot respond properly to all requests.

Also, the central processing unit (CPU) used to extract tag information from image files is structurally limited to a small number of arithmetic cores and is therefore poorly suited to image processing; using multiple CPUs to compensate increases the cost of building the system.

In addition, when the user transmits the image itself to the server, the image typically shows not only the garment but also content irrelevant to the user's search, such as the person wearing the clothing or the background, which makes it difficult to obtain an accurate search result.

SUMMARY OF THE INVENTION

The present invention has been made in view of the above problems, and an object of the present invention is to provide a clothes recommendation system using a graphics processing unit that can easily extract tag information from an input image file and output a search result.

Another object of the present invention is to provide a clothes recommendation system using a graphics processing unit in which the server and the client share the work of preprocessing the image file selected for recommendation, thereby reducing network traffic and server load, lowering cost, and improving system efficiency.

A further object is to accelerate the extraction of tag information from image files and to reduce system construction cost by extracting tag information smoothly with only modest CPU resources.

In order to solve the above problems, a garment recommendation system using a graphics processing unit according to the present invention includes a separating unit that recognizes a garment in a first image file and generates garment region information by determining the degree to which a person's body is shown in the first image file; an extracting unit that extracts first tag information, including a feature vector of the garment, from the garment region information; and a recommendation unit that retrieves second tag information matching the first tag information from a database and selects and presents a second image file having the second tag information, wherein the separating unit generates the garment region information using the graphics processing unit.

Through the garment recommendation system using the graphics processing unit according to the present invention, the tag information is extracted from the image file and, through the series of processes that recommend garments to the user based on the extracted tag information, the inconvenience of manually entering tag information is removed, and the network traffic to the server and the amount of computation the server must perform are reduced.

Also, the speed of extracting tag information from the image file is increased, and the tag information can be extracted smoothly with only modest CPU resources, thereby reducing system construction cost.

FIG. 1 is a block diagram showing an embodiment of a clothes recommendation system according to the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It is to be understood that the terminology used herein is for the purpose of description and should not be interpreted as limiting the scope of the present invention.

The embodiments described in the present specification and the configurations shown in the drawings are merely preferred embodiments of the present invention and do not represent all of the technical ideas of the present invention; various equivalents and modifications that can replace them may exist.

FIG. 1 is a block diagram showing an embodiment of a clothes recommendation system using the graphics processing unit of the present invention.

Referring to FIG. 1, the present invention includes a separating unit 100, an extracting unit 200, and a recommending unit 300.

The separating unit 100 determines, from a first image file in which clothing is displayed, the degree to which the body of the person wearing the garment is shown, and separates the area in which the clothing is displayed to generate garment area information. That is, the separating unit 100 recognizes the garment in the first image file and generates the garment area information by determining how much of the person's body is displayed in the first image file.

Here, clothes refers to the various kinds of clothing that a person wears, and means anything worn on the body, such as a hat, gloves, and shoes, as well as tops and bottoms made of various materials.

The image file is a file in any of various image formats in which clothing is shown when the file is displayed on a screen; a photograph of the clothing, or of a model wearing it, is stored so that the photographed image can be displayed on an electronic device. The garment in the image file can be shown in various ways: the garment may appear alone, or a person wearing it may be shown as well. When a person wearing the garment is shown, the person's whole body may be displayed or only part of it. In the present invention, the first image file is the image file that the user inputs through a client terminal in order to receive clothing recommendations from the clothing recommendation system. The second image file is an image file stored in the database 400 of the server, and may be selected by the recommendation unit 300, described later, and presented to the user.

The separating unit 100 judges to what extent the body of the person wearing the clothing is displayed in the input first image file, and provides clothing area information on the position and type of the clothing shown in the first image file.

When the separating unit 100 receives the first image file, it determines whether the file shows only clothes or also a person wearing the clothes. If a person wearing the clothes is shown, it then judges whether the person's whole body is displayed. Various image processing algorithms may be used for the separating unit 100 to determine the degree to which the person's body is displayed in the first image file.

The separating unit 100 may apply a different algorithm in each case. That is, to separate only the area in which the clothing is displayed in the first image file, the separating unit 100 may vary the algorithm it uses according to how much of the wearer's body is displayed.

When the separating unit 100 determines that the whole body of the person wearing the garment is displayed in the first image file, it identifies the body parts of the person using a pose estimation method and generates the garment region information from the first image file.

The pose estimation method determines the posture of the person wearing the garment by using a machine learning algorithm to calculate, for each body part, a probability distribution over where that part is located in the first image file.

More specifically, the steps of the pose estimation method are as follows. The separating unit 100 first determines, for each point in the first image file, the probability that the point corresponds to a body part such as the head, a hand, a foot, a knee, or an elbow. The separating unit 100 then determines the posture of the person wearing the garment by applying, for each body part, a relative probability distribution describing where the other body parts are likely to be. Based on the determined posture, the separating unit 100 can identify where on the wearer's body the garment sits and generates the garment area information accordingly.

For example, the separating unit 100 may apply a relative probability distribution in which the probability that the neck lies just below the head, or that a hand lies within a certain distance of an elbow, is relatively high. The information for applying the relative probability distribution may be prepared in advance through machine learning.
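
The reasoning just described can be illustrated with a minimal Python/NumPy sketch. It assumes per-pixel probability maps for each body part and a hand-picked Gaussian prior on the offset between a parent part and a child part (for example, the neck lying just below the head); the function names, offsets, and sigma value are illustrative assumptions rather than the patent's actual implementation.

```python
import numpy as np

def locate_parts(part_maps, pairs, offsets, sigma=10.0):
    """part_maps: dict part_name -> (H, W) array of per-pixel part probabilities.
    pairs: list of (parent, child) part names, e.g. [("head", "neck")].
    offsets: dict (parent, child) -> expected (dy, dx) offset, prepared in advance.
    Returns dict part_name -> (y, x) location after applying the relative prior."""
    # Step 1: unary term, the most likely pixel for each part on its own.
    locs = {p: np.unravel_index(np.argmax(m), m.shape) for p, m in part_maps.items()}

    # Step 2: pairwise term, pull each child part toward where the parent expects it.
    for parent, child in pairs:
        py, px = locs[parent]
        dy, dx = offsets[(parent, child)]
        h, w = part_maps[child].shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Gaussian prior on the child location given the parent location.
        prior = np.exp(-((ys - (py + dy)) ** 2 + (xs - (px + dx)) ** 2) / (2 * sigma ** 2))
        locs[child] = np.unravel_index(np.argmax(part_maps[child] * prior), (h, w))
    return locs

# Toy usage: random "probability maps" for two parts and a head-to-neck offset prior.
rng = np.random.default_rng(0)
maps = {"head": rng.random((120, 80)), "neck": rng.random((120, 80))}
print(locate_parts(maps, [("head", "neck")], {("head", "neck"): (15, 0)}))
```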

If the separating unit 100 determines that only part of the body of the person wearing the garment is displayed in the first image file, it generates the garment area information from the first image file using an object recognition method.

The object recognition method identifies what kind of object each object in the first image file is. In the present invention, an object recognition method using a convolutional neural network algorithm is taken as the example of the machine learning algorithm.

A convolutional neural network is a type of artificial neural network used for object recognition and performs the convolution operations widely used in computer vision. An object recognition method using a convolutional neural network extracts features from the first image file through successive filtering operations in order to identify objects.
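
A minimal sketch of convolution-based feature extraction follows, written with PyTorch purely as an assumed framework (the patent names no library); the layer sizes and the three example classes are illustrative only.

```python
import torch
import torch.nn as nn

class TinyClothesNet(nn.Module):
    """A deliberately small convolutional feature extractor, for illustration only."""
    def __init__(self, num_classes=3):  # e.g. top / bottom / dress
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        # x: (batch, 3, H, W) image tensor; returns one score per garment class.
        return self.head(self.features(x))

model = TinyClothesNet()
scores = model(torch.randn(1, 3, 128, 128))  # dummy image stands in for the first image file
print(scores.shape)  # torch.Size([1, 3])
```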

When the separating unit 100 determines that the person wearing the garment is displayed in the first image file, whether in whole or in part, it needs to divide the first image file into a plurality of segments matching the extracted features.

More specifically, a process in which the separating unit 100 divides a first image file to generate a plurality of segments is as follows.

The separating unit 100 divides the first image file into superpixels, each of which is a set of pixels, and generates segments by clustering the superpixels using a color histogram and a local binary pattern (LBP) histogram.

More specifically, the separating unit 100 analyzes each pixel of the first image file and generates superpixels by grouping together pixels that are adjacent and similar in color or arrangement. The first image file is thus divided by the separating unit 100 into a number of superpixels far smaller than the total number of pixels.

The separating unit 100 then selects, from among adjacent superpixels, those whose color histograms and local binary pattern (LBP) histograms match, and clusters the selected superpixels using an Approximated Gaussian Mixture (AGM) algorithm to generate segments.
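
The superpixel step can be sketched as follows; scikit-image's SLIC algorithm is used here only as a stand-in for whatever superpixel method the implementation actually uses, and the segment count and compactness values are assumptions.

```python
from skimage.data import astronaut          # sample RGB image bundled with scikit-image
from skimage.segmentation import slic

image = astronaut()                          # (H, W, 3) array standing in for the first image file
labels = slic(image, n_segments=300, compactness=10, start_label=0)
print(image.shape[0] * image.shape[1], "pixels reduced to", labels.max() + 1, "superpixels")
```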

Here, the color histogram is obtained by dividing the colors of the pixels in the image into several groups and counting the number of pixels falling in each group. The separating unit 100 analyzes the first image file, divides its pixels into a plurality of color groups, and normalizes each group so that the histogram can also be used for the tag information described later.

The LBP histogram represents the characteristics of each pixel by determining whether the adjacent pixels around it are brighter or darker than it. For example, comparing the brightness of the eight neighbouring pixels around one pixel yields one of 256 possible values; the histogram is obtained by counting, over the image, the pixels taking each of the values that are important for determining a pattern.
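
The two descriptors can be sketched as below; the bin counts and the use of the uniform LBP variant (which keeps only the pattern-relevant codes among the 256 possible values) are assumptions made for illustration.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def color_histogram(rgb_pixels, bins=8):
    """rgb_pixels: (N, 3) array with the RGB values (0..255) of one superpixel's pixels."""
    hist = np.concatenate(
        [np.histogram(rgb_pixels[:, c], bins=bins, range=(0, 255))[0] for c in range(3)]
    ).astype(float)
    return hist / hist.sum()                # normalized so regions of different size compare

def lbp_histogram(gray_image, P=8, R=1):
    """Uniform LBP over a grayscale patch; 8 neighbours give P + 2 = 10 uniform codes."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=int(codes.max()) + 1, range=(0, codes.max() + 1))
    return hist / hist.sum()

# Toy usage on random data standing in for one superpixel and one grayscale patch.
rng = np.random.default_rng(0)
print(color_histogram(rng.integers(0, 256, size=(100, 3))))
print(lbp_histogram(rng.integers(0, 256, size=(16, 16)).astype(np.uint8)))
```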

The AGM algorithm is a kind of Gaussian Mixture (GM) algorithm used for clustering data. The GM algorithm assumes that the data to be classified are generated by a mixture of several Gaussian distributions and estimates the mean and covariance of each distribution. The general GM algorithm is awkward to use for clustering because the number of Gaussian components must be fixed before their parameters are estimated; the AGM algorithm can be applied to clustering by adding a step that merges Gaussian components that are sufficiently close to each other.
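
A hedged sketch of this clustering behaviour follows; scikit-learn's GaussianMixture plus a crude post-hoc merge of components with nearby means is used as an approximation of the AGM step described above, and the component count and merge threshold are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_with_merging(features, max_components=8, merge_dist=0.5, seed=0):
    """Fit a mixture with a generous number of components, then merge components
    whose means are closer than merge_dist (a rough stand-in for the AGM merge step)."""
    gmm = GaussianMixture(n_components=max_components, random_state=seed).fit(features)
    labels = gmm.predict(features)

    # Relabel: components whose means are close enough share one cluster id.
    remap = list(range(max_components))
    for i in range(max_components):
        for j in range(i + 1, max_components):
            if np.linalg.norm(gmm.means_[i] - gmm.means_[j]) < merge_dist:
                remap[j] = remap[i]
    return np.array([remap[label] for label in labels])

# Toy usage: two well-separated blobs of 2-D region descriptors collapse to two clusters.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
print(np.unique(cluster_with_merging(data)))
```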

Once the segments have been formed through the above process, the separating unit 100 can determine, using the pose estimation method or the object recognition method, which segments are garment segments, that is, segments in which the recognized garment appears. In addition, the separating unit 100 can determine, through the machine learning algorithm, what kind of garment each garment segment shows. The separating unit 100 may generate, as garment region information, information on the garment segments in the first image file and on the type of garment shown in each garment segment, and provide this information to the extracting unit 200 described later.

On the other hand, if the separating unit 100 determines that only the garment is displayed in the first image file, it separates the garment from the first image file using an image segmentation method to generate the garment region information, and the image segmentation method may use a graph cut algorithm.

The graph cut algorithm transforms the image into a graph and partitions that graph at minimum cost in order to separate the image along distinct boundaries. More specifically, the graph cut algorithm regards each pixel as a vertex of the graph and connects adjacent pixels by edges, assigning each edge a weight based on the color similarity of the pixels it connects. The algorithm then divides the weighted graph into two parts by removing edges, choosing the edges to remove so that the sum of the weights of the removed edges is as small as possible.
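
The idea can be sketched with OpenCV's GrabCut, which is built on graph cuts and is used here only as a stand-in for the implementation the patent has in mind; the rectangle and iteration count are assumptions.

```python
import numpy as np
import cv2

def segment_garment(bgr_image, rect):
    """rect = (x, y, w, h): rough bounding box of the garment, e.g. from object recognition.
    Returns a binary mask in which 1 marks pixels assigned to the garment (foreground)."""
    mask = np.zeros(bgr_image.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)   # internal mixture state used by GrabCut
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(bgr_image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    # GC_FGD and GC_PR_FGD label (probable) foreground; everything else is background.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

# Toy usage on a synthetic image: a bright "garment" rectangle on a dark background.
img = np.zeros((200, 200, 3), dtype=np.uint8)
img[50:150, 60:140] = (40, 40, 220)
print(segment_garment(img, (40, 40, 120, 120)).sum(), "foreground pixels")
```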

More specifically, the process by which the separating unit 100 generates the garment region information using the image segmentation method is as follows.

The separating unit 100 finds the type and position of the garment in the first image file using the convolutional neural network-based object recognition method. By first limiting the region of the first image file that corresponds to the garment, this object recognition step increases the efficiency of the image segmentation method.

Thereafter, the separating unit 100 can determine the area where the garment is displayed using the graph cut algorithm, and confirm the color or pattern of the garment using the color histogram or the LBP histogram to generate garment region information.

The various image processing algorithms used to determine how much of the wearer's body is shown in the first image file and to separate the area in which the garment is displayed involve a large amount of computation, and parallel processing of that computation is particularly important. When such image processing is performed on a CPU, the small number of CPU cores slows the processing down. Accordingly, when the separating unit 100 generates the garment area information, it uses a graphics processing unit (GPU), which speeds up the process, relieves the CPU of the computational load, and allows the garment area information to be generated efficiently.

More specifically, the GPU can be used to process the large number of matrix multiplication operations required when the garment in an image file is recognized by the object recognition method described above, by the pose estimation method, or by the graph cut algorithm, so that the processing speed of these operations is improved.
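
A minimal sketch of the speed-up (PyTorch is assumed purely for illustration): the batched matrix multiplications that dominate convolution and pose estimation workloads are dispatched to the GPU when one is present and fall back to the CPU otherwise.

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"   # use the GPU when available

a = torch.randn(64, 512, 512, device=device)   # a batch of large matrices
b = torch.randn(64, 512, 512, device=device)

start = time.perf_counter()
c = torch.matmul(a, b)            # the kind of operation the algorithms above rely on
if device == "cuda":
    torch.cuda.synchronize()      # wait for the asynchronous GPU kernels to finish
print(f"batched matmul on {device}: {time.perf_counter() - start:.4f} s, shape {tuple(c.shape)}")
```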

The extracting unit 200 receives the garment area information from the separating unit 100 and extracts the tag information of the garment.

The tag information is a word or keyword used to search for information about the clothes. In the present invention, the tag information represents the various categories used to classify the garment shown in the garment area information and may have feature vectors for type, color, pattern, and the like. The recommendation unit 300, described later, can search for clothes hierarchically using the tag information that matches the first image file.

The tag information may include a category vector denoting the type of clothing, a color vector denoting its color, and a pattern vector denoting its pattern, and the tag information may include one or more of these feature vectors.

The first tag information is the tag information for the feature vectors of the first image file input by the user through the client terminal, and the second tag information is the tag information for the feature vectors of a second image file stored in the database 400. Since tag information provides categories for searching for the corresponding image file, two different image files may have tag information whose feature vectors all match, and even when not all of the feature vectors match, some of them may. Accordingly, for a single item of tag information, one or more different image files having that tag information may be stored in the database 400.

More specifically, the category vector is a value for the type of garment determined from the position or form of the garment; for example, a value associated with the garment type, such as top, bottom, or dress, may be stored as the value of the category vector. The color vector is a value for the color that occupies the largest proportion of the garment's colors; for example, a value associated with the garment's color, such as yellow or red, may be stored as the value of the color vector. The pattern vector is a value for the dominant pattern of the garment, taking into account repeated or uniform color arrangements; for example, a value associated with a pattern such as a check pattern, a stripe pattern, or no pattern may be stored as the value of the pattern vector.
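
A minimal sketch of tag information carrying the three feature vectors follows; the vocabularies, one-hot encoding, and field names are assumptions chosen for illustration, not the patent's schema.

```python
from dataclasses import dataclass
import numpy as np

CATEGORIES = ["top", "bottom", "dress"]
COLORS = ["red", "yellow", "blue", "other"]
PATTERNS = ["check", "stripe", "plain"]

def one_hot(value, vocabulary):
    vec = np.zeros(len(vocabulary))
    vec[vocabulary.index(value)] = 1.0
    return vec

@dataclass
class TagInfo:
    """Tag information for one garment: category, color, and pattern feature vectors."""
    category: np.ndarray
    color: np.ndarray
    pattern: np.ndarray

    def as_vector(self):
        # Concatenate the feature vectors so a single distance can compare two tags.
        return np.concatenate([self.category, self.color, self.pattern])

# Example: the red check-patterned dress used later in the description.
first_tag = TagInfo(one_hot("dress", CATEGORIES), one_hot("red", COLORS), one_hot("check", PATTERNS))
print(first_tag.as_vector())
```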

Among these vectors, the category vector may be generated using the information included in the garment area information provided by the separating unit 100.

The process of extracting the first tag information from the first image file through the separation unit 100 and the extraction unit 200 is as follows.

For example, if the first image file shows a model wearing a red check-patterned dress, the separating unit 100 determines that the model's whole body is displayed in the first image file, identifies the region in which the dress appears, and separates it from the first image file. The separating unit 100 then generates garment region information for that region and provides it to the extracting unit 200.

The extracting unit 200 receives the garment area information from the separating unit 100 and extracts the tag information from the garment it describes. In this case, the first tag information may consist of a 'dress' category vector, a 'red' color vector, and a 'check pattern' pattern vector.

The separating unit 100 and the extracting unit 200 may perform their functions on the terminal of the client that inputs the first image file. More specifically, the separating unit 100 may generate the garment region information on the client terminal, and the extracting unit 200 may extract the first tag information on the client terminal. Alternatively, depending on the environment of the garment recommendation system, only the separating unit 100 may run on the client terminal.

That is, the first image file is processed into the first tag information by the separating unit 100 and the extracting unit 200. During this processing, the client terminal that received the first image file carries out the requested image processing and transmits only the first tag information, the processed result, to the recommendation unit 300 on the server. If only the separating unit 100 runs on the client terminal, the garment area information it generates may instead be transmitted to the extracting unit 200 on the server.

By having the client terminal handle the separation and extraction steps for the first image file, the image processing load on the server that selects and presents the second image file and the network traffic caused by uploading the first image file can both be reduced. As a result, the server can respond more quickly to garment recommendation requests from a plurality of clients.
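
As a sketch of this client/server split (the endpoint URL and JSON layout are invented for illustration and do not exist), the client terminal extracts the first tag information locally and sends only that small payload, rather than the image file, to the server-side recommendation unit.

```python
import requests

def request_recommendation(first_tag, server_url="https://example.com/api/recommend"):
    """Send only the extracted first tag information (a handful of values) to the server.
    first_tag: a JSON-serializable dict, e.g. {"category": "dress", "color": "red", ...}."""
    response = requests.post(server_url, json={"first_tag": first_tag}, timeout=10)
    response.raise_for_status()
    return response.json()   # expected: a list describing the matching second image files

# Usage sketch (hypothetical; the server above is a placeholder):
# results = request_recommendation({"category": "dress", "color": "red", "pattern": "check"})
```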

The recommendation unit 300 may search for second tag information matching the first tag information, and may select and present a second image file having that second tag information.

In this case, the recommendation unit 300 can determine that the feature vectors of the first tag information and the second tag information are matched when they lie within a predetermined vector distance of each other.

More specifically, the recommendation unit 300 takes the numerical value of each feature vector in the first tag information received from the extracting unit 200 and the numerical value of each feature vector in the second tag information, and computes the vector distance between them. The recommendation unit 300 determines that second tag information whose vector distance lies within a predetermined range is matched, uses that second tag information to search the database 400 for one or more second image files having it, and presents them to the user. The recommendation unit 300 can improve the accuracy of the clothing recommendation by presenting first the second image files whose second tag information has the smallest vector distance. The threshold vector distance can be set by the operator or administrator of the server on which the recommendation unit 300 runs, according to the environment or need.
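
A minimal sketch of the matching step (Euclidean distance and the threshold value are assumptions): stored second tag vectors within the threshold are kept and sorted so that the closest ones are presented first.

```python
import numpy as np

def match_tags(first_vec, second_vecs, max_distance=1.0):
    """first_vec: concatenated feature vector of the first tag information.
    second_vecs: dict image_id -> concatenated feature vector of a stored second tag.
    Returns the image ids whose vector distance is within max_distance, closest first."""
    distances = {img_id: float(np.linalg.norm(first_vec - vec))
                 for img_id, vec in second_vecs.items()}
    matched = [(dist, img_id) for img_id, dist in distances.items() if dist <= max_distance]
    return [img_id for dist, img_id in sorted(matched)]

# Toy usage with 3-D tag vectors.
first = np.array([1.0, 0.0, 0.0])
stored = {"img_001": np.array([1.0, 0.0, 0.1]),   # very close, so matched first
          "img_002": np.array([0.0, 1.0, 0.0])}   # too far, filtered out
print(match_tags(first, stored))  # ['img_001']
```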

Through the second image files presented by the recommendation unit 300, the user can thus be recommended clothing similar in character to the first image file input through the client terminal.

The database 400 may store, matched to one another, each pre-stored second image file, the second tag information extracted from it by the extracting unit 200, and the garment area information from which that second tag information was extracted.

Claims (11)

1. A garment recommendation system using a graphic processing unit, comprising:
A separating unit for recognizing a garment from a first image file and generating garment region information by determining a degree to which a body part of a person is displayed in the first image file;
An extracting unit for extracting first tag information including a feature vector of the garment from the garment area information; And
A recommendation unit for retrieving second tag information matched with the first tag information from a database and selecting and presenting a second image file having the second tag information,
Wherein the separating unit generates the clothing area information using the graphic processing unit.
2. The garment recommendation system of claim 1, wherein the separating unit generates the garment region information by identifying the body parts of the person using a pose estimation method when it recognizes that the whole body of the person is displayed in the first image file.
3. The garment recommendation system of claim 2, wherein the pose estimation method determines the posture of the person by calculating, through a machine learning algorithm, a probability distribution over the positions at which each body part of the person may exist in the first image file.
4. The garment recommendation system of claim 3, wherein the separating unit divides the first image file into superpixels, each being a set of pixels, selects from among adjacent superpixels those whose color histograms and local binary pattern (LBP) histograms match, clusters the selected superpixels using an Approximated Gaussian Mixture (AGM) algorithm to generate segments, and generates the garment region information using the segments and the pose estimation method.
5. The garment recommendation system of claim 1, wherein the separating unit generates the garment region information from the first image file using an object recognition method when it recognizes that only a part of the body of the person is displayed in the first image file.
6. The garment recommendation system of claim 5, wherein the separating unit divides the first image file into superpixels, each being a set of pixels, selects from among adjacent superpixels those whose color histograms and local binary pattern (LBP) histograms match, clusters the selected superpixels using an Approximated Gaussian Mixture (AGM) algorithm to generate segments, and generates the garment region information using the segments and the object recognition method.
7. The garment recommendation system of claim 1, wherein the separating unit generates the garment region information for the area in which the garment is displayed from the first image file using an image segmentation method when it recognizes that only the garment is displayed in the first image file, and wherein the image segmentation method uses a graph cut algorithm.
8. The garment recommendation system of claim 1, wherein the separating unit generates the garment region information using a terminal of a client that inputs the first image file.
9. The garment recommendation system of claim 8, wherein the extracting unit extracts the first tag information using the terminal of the client.
10. The garment recommendation system of claim 1, wherein each of the first tag information and the second tag information includes at least one of a category vector denoting a type of the garment, a color vector denoting a color of the garment, and a pattern vector denoting a pattern of the garment.
11. The garment recommendation system of claim 1, wherein the recommendation unit determines that the first tag information and the second tag information are matched when their feature vectors lie within a predetermined vector distance of each other.
KR1020160042321A 2015-04-08 2016-04-06 Clothes recommendation system using gpu KR20160120674A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20150049774 2015-04-08
KR1020150049774 2015-04-08

Publications (1)

Publication Number Publication Date
KR20160120674A true KR20160120674A (en) 2016-10-18

Family

ID=57244454

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160042321A KR20160120674A (en) 2015-04-08 2016-04-06 Clothes recommendation system using gpu

Country Status (1)

Country Link
KR (1) KR20160120674A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874344A (en) * 2016-12-25 2017-06-20 惠州市蓝微电子有限公司 A kind of method and terminal for recognizing clothing material
KR20180051449A (en) * 2018-03-16 2018-05-16 오드컨셉 주식회사 Method, apparatus and computer program for providing shopping informations
KR20180051448A (en) * 2018-03-16 2018-05-16 오드컨셉 주식회사 Method, apparatus and computer program for providing shopping informations
KR20180098098A (en) * 2017-02-24 2018-09-03 권오민 Online delivery method using image advertiging
KR20200076224A (en) 2018-12-19 2020-06-29 이정재 Method and user terminal device for recommending clothing store
WO2020141907A1 (en) * 2019-01-04 2020-07-09 삼성전자주식회사 Image generation apparatus for generating image on basis of keyword and image generation method
KR20210030190A (en) 2019-09-09 2021-03-17 주식회사 웨얼리 Method, apparatus and computer program for analyzing fashion image using artificial intelligence model of hierarchy structure
KR20220005323A (en) * 2020-07-06 2022-01-13 아주대학교산학협력단 Apparatus and method for classifying style based on deep learning using fashion attribute
KR20220046411A (en) * 2020-10-06 2022-04-14 주식회사 스마일벤처스 Product information tag device and method
KR20220111592A (en) * 2021-02-02 2022-08-09 주식회사 패션에이드 Fashion coordination style recommendation system and method by artificial intelligence

Similar Documents

Publication Publication Date Title
KR20160120238A (en) Clothes recommendation system
KR20160120674A (en) Clothes recommendation system using gpu
US10049308B1 (en) Synthesizing training data
CN110399890B (en) Image recognition method and device, electronic equipment and readable storage medium
US9460518B2 (en) Visual clothing retrieval
JP6825141B2 (en) Fashion coordination recommendation methods and devices, electronic devices, storage media
JP2022093550A (en) Information processing apparatus, information processing method, and program
US9817900B2 (en) Interactive clothes searching in online stores
US20110142335A1 (en) Image Comparison System and Method
WO2017114237A1 (en) Image query method and device
CN109614925A (en) Dress ornament attribute recognition approach and device, electronic equipment, storage medium
US9959480B1 (en) Pixel-structural reference image feature extraction
KR20150058663A (en) Apparatus and Method Searching Shoes Image Using Matching Pair
CN114937232B (en) Wearing detection method, system and equipment for medical waste treatment personnel protective appliance
CN107533547B (en) Product indexing method and system
CN112330383A (en) Apparatus and method for visual element-based item recommendation
US10474919B2 (en) Method for determining and displaying products on an electronic display device
JP2017084078A (en) Style search apparatus, method, and program
Roychowdhury Classification of large-scale fundus image data sets: a cloud-computing framework
JP2019109843A (en) Classification device, classification method, attribute recognition device, and machine learning device
US9785835B2 (en) Methods for assisting with object recognition in image sequences and devices thereof
CN108764232B (en) Label position obtaining method and device
Lezoray et al. A color object recognition scheme: application to cellular sorting
CN113392741A (en) Video clip extraction method and device, electronic equipment and storage medium
US11527090B2 (en) Information processing apparatus, control method, and non-transitory storage medium

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application