CN108960959A - Neural-network-based multi-modal complementary garment coordination method, system and medium - Google Patents
- Publication number
- CN108960959A CN108960959A CN201810501840.4A CN201810501840A CN108960959A CN 108960959 A CN108960959 A CN 108960959A CN 201810501840 A CN201810501840 A CN 201810501840A CN 108960959 A CN108960959 A CN 108960959A
- Authority
- CN
- China
- Prior art keywords
- implicit
- clothes
- indicate
- model
- text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
Abstract
The invention discloses a neural-network-based multi-modal complementary garment coordination method, system and medium. Visual features are obtained from garment pictures, and text features from the verbal descriptions of the garments. An autoencoder learns a compatible space for the visual and text features of different garments, yielding implicit representations of the visual features and of the text features; the implicit representations are decoded into reconstruction vectors, and a relational model between the reconstruction vectors and the input features is established. A garment compatibility model is then built, and on top of it a compatibility preference model is constructed with the Bayesian personalized ranking algorithm. A consistency model between the visual and text implicit representations is established, and from it a multi-modal hidden-feature consistency model for the garments. These components are combined into a multi-modal complementary garment coordination model based on a deep neural network; the model is trained, and the trained model is used to make garment coordination recommendations.
Description
Technical field
The present invention relates to a neural-network-based multi-modal complementary garment coordination method, system and medium.
Background art
Nowadays, beyond the basic need for clothing, more and more people pursue dress that is fashionable, graceful and appropriate. However, not everyone has a good sense of garment coordination. Faced with a massive number of clothing products, many people find matching garments difficult and tedious. We therefore develop an effective garment coordination scheme to help people find fashionable, well-coordinated matches for a given garment. Current garment coordination techniques mainly comprise collaborative-filtering-based methods and content-based methods. The former recommend from the historical behavior of users with similar tastes, for example a user's purchase behavior, clicks on the text descriptions of commodities, and the purchase behavior of other users. Such methods suffer from cold start: they cannot make recommendations for an item or a user without any relevant historical behavior. The latter recommend based on the visual compatibility between items; these methods usually consider only the visual information of an item and therefore cannot comprehensively model the compatibility between items. In addition, the garment coordination task also suffers from data sparsity.
Summary of the invention
To remedy the deficiencies of the prior art, the present invention provides a neural-network-based multi-modal complementary garment coordination method, system and medium, which can effectively solve the sparsity problem between garments and can comprehensively model the compatibility of different garments by mining the multi-modal relations of items.
As the first aspect of the present invention, a neural-network-based multi-modal complementary garment coordination method is provided, comprising:
Step (1): obtaining visual features from garment pictures and, at the same time, text features from the verbal descriptions of the garments;
Step (2): using an autoencoder to learn a compatible space for the visual and text features of different garments, obtaining implicit representations of the visual features and of the text features;
Step (3): using the decoders to decode the implicit representations of the visual and text features obtained in step (2) into reconstruction vectors, and establishing the relational model between the reconstruction vectors and the input features;
Step (4): based on the implicit representations of the visual and text features obtained in step (2), establishing a garment compatibility model, and then, based on it, constructing a compatibility preference model with the Bayesian personalized ranking algorithm;
Step (5): based on the implicit representations of the visual and text features obtained in step (2), establishing a consistency model between the visual and text implicit representations, and then establishing the multi-modal hidden-feature consistency model of the garments;
Step (6): based on the results of steps (3), (4) and (5), constructing the multi-modal complementary garment coordination model based on a deep neural network, training the constructed model, and using the trained model to make garment coordination recommendations.
As a further improvement of the present invention, in step (1):
the garments comprise: jackets, lower clothing and shoes;
a garment picture refers to the color image of a jacket, lower clothing or shoes;
the verbal description of a garment comprises its pattern, function and category;
the visual features comprise the visual features of the jacket, the lower clothing and the shoes;
the text features comprise the text features of the jacket, the lower clothing and the shoes.
As a further improvement of the present invention, in step (1):
the visual features are obtained from the garment pictures by a deep convolutional neural network;
the text features are obtained from the verbal descriptions of the garments by a bag-of-words model.
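The bag-of-words step can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the four-word vocabulary and the sample description are hypothetical, standing in for the filtered 3345-word vocabulary the patent reports.

```python
from collections import Counter

def bag_of_words(description, vocabulary):
    """Count how often each vocabulary word occurs in a garment description.

    `vocabulary` is a fixed, pre-filtered word list built from category and
    title text; the returned list is the garment's text feature vector.
    """
    counts = Counter(description.lower().split())
    return [counts[w] for w in vocabulary]

vocab = ["floral", "cotton", "dress", "sneaker"]  # hypothetical vocabulary
feat = bag_of_words("Floral cotton summer dress", vocab)  # -> [1, 1, 1, 0]
```

With the real vocabulary, the same function would produce the 3345-dimensional text features used in the method.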
As a further improvement of the present invention, in step (2):
the visual feature v_ti and text feature c_ti of jacket t_i, the visual feature v_bj and text feature c_bj of lower clothing b_j, and the visual feature v_sk and text feature c_sk of shoes s_k are input into the encoders of the autoencoders; the encoders output the visual implicit representation v~_ti and text implicit representation c~_ti of the jacket, the visual implicit representation v~_bj and text implicit representation c~_bj of the lower clothing, and the visual implicit representation v~_sk and text implicit representation c~_sk of the shoes.
As a further improvement of the present invention, in step (3):
through the decoders of the autoencoders, the implicit representations v~_ti, c~_ti, v~_bj, c~_bj, v~_sk and c~_sk are decoded into the visual reconstruction vector v^_ti and text reconstruction vector c^_ti of the jacket, the visual reconstruction vector v^_bj and text reconstruction vector c^_bj of the lower clothing, and the visual reconstruction vector v^_sk and text reconstruction vector c^_sk of the shoes.
As a further improvement of the present invention, in step (3), the relational model between the reconstruction vectors and the input features is established as:
l_AE(x) = l(v_x) + l(c_x), with l(v_x) = ||v^_x - v_x||^2 and l(c_x) = ||c^_x - c_x||^2;
wherein l_AE(x) denotes the relational model between the reconstruction vectors of garment x and its visual and text features; l(v_x) denotes the reconstruction error between the visual reconstruction vector v^_x and the visual feature v_x of garment x; l(c_x) denotes the reconstruction error between the text reconstruction vector c^_x and the text feature c_x of garment x.
The relational model for the entire outfit is:
l_AE = l_AE(t_i) + l_AE(b_j) + l_AE(s_k).
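The autoencoder objective l_AE can be sketched in a few lines. This is a minimal toy sketch under stated assumptions: the patent does not specify the encoder/decoder architecture, so a one-layer tanh encoder with a tied-weight decoder is assumed, and the small dimensions stand in for the 4096-dimensional visual and 3345-dimensional text features.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # one-layer encoder: feature -> implicit representation (assumed form)
    return np.tanh(W @ x)

def decode(h, W):
    # tied-weight decoder: implicit representation -> reconstruction vector
    return W.T @ h

def l_ae(v, c, Wv, Wc):
    """l_AE(x) = ||v^ - v||^2 + ||c^ - c||^2 for one garment x."""
    v_hat = decode(encode(v, Wv), Wv)
    c_hat = decode(encode(c, Wc), Wc)
    return np.sum((v_hat - v) ** 2) + np.sum((c_hat - c) ** 2)

v = rng.normal(size=8)              # toy visual feature
c = rng.normal(size=6)              # toy text feature
Wv = rng.normal(size=(4, 8)) * 0.1  # visual encoder weights
Wc = rng.normal(size=(4, 6)) * 0.1  # text encoder weights
loss = l_ae(v, c, Wv, Wc)           # non-negative reconstruction loss
```

The full-outfit loss l_AE is simply this quantity summed over the jacket, the lower clothing and the shoes.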
As a further improvement of the present invention, in step (4), the garment compatibility model comp_ijk is established as:
comp_ijk = comp_ik + comp_jk;
wherein comp_ik denotes the compatibility model of the jacket and the shoes, and comp_jk denotes the compatibility model of the lower clothing and the shoes.
As a further improvement of the present invention, in step (4), based on the garment compatibility model, the compatibility preference model l_bpr is constructed with the Bayesian personalized ranking algorithm:
l_bpr = Σ_(i,j,k+,k-) -ln σ(comp_ijk+ - comp_ijk-);
wherein comp_ijk+ denotes the compatibility preference of the jacket and lower clothing for the positive-example shoes, comp_ijk- denotes their compatibility preference for the negative-example shoes, σ(·) denotes the sigmoid function of the neural network, and the quadruple (i, j, k+, k-) indicates that jacket t_i and lower clothing b_j are better matched with shoes s_k+ than with shoes s_k-.
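The BPR term can be sketched as follows; the scores 2.0 and 0.5 are arbitrary illustrative compatibility values, not outputs of the patented model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def l_bpr(comp_pos, comp_neg):
    """BPR loss for one quadruple: -ln sigma(comp_ijk+ - comp_ijk-).

    The loss is small when the positive-example shoes score higher than
    the negative-example shoes, and large otherwise."""
    return -np.log(sigmoid(comp_pos - comp_neg))

# the outfit (t_i, b_j) should prefer shoes k+ (score 2.0) over k- (score 0.5)
loss_good = l_bpr(2.0, 0.5)   # correct ordering -> small loss
loss_bad  = l_bpr(0.5, 2.0)   # wrong ordering -> larger loss
```

Summing this quantity over all training quadruples (i, j, k+, k-) gives the l_bpr objective above.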
As a further improvement of the present invention, in step (5), the consistency model l_vc(t_i) between the visual implicit representation and the text implicit representation is:
l_vc(t_i) = ||σ(v~_ti) - σ(c~_ti)||^2;
wherein σ(·) denotes the sigmoid function of the neural network, v~_ti denotes the visual implicit representation of garment t_i, and c~_ti denotes the text implicit representation of garment t_i.
As a further improvement of the present invention, in step (5), the multi-modal hidden-feature consistency model l_mod of the garments is:
l_mod = l_vc(t_i) + l_vc(b_j) + l_vc(s_k+) + l_vc(s_k-);
wherein l_vc(t_i) denotes the consistency model between the visual and text implicit representations of the jacket, l_vc(b_j) that of the lower clothing, l_vc(s_k+) that of the positive-example shoes, and l_vc(s_k-) that of the negative-example shoes.
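A sketch of the consistency terms, assuming the squared-distance-after-sigmoid form given above (the original formula image is not reproduced in this text, so that form is a reconstruction):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def l_vc(v_tilde, c_tilde):
    """Consistency between one garment's visual and text implicit
    representations: squared distance after the sigmoid."""
    return np.sum((sigmoid(v_tilde) - sigmoid(c_tilde)) ** 2)

def l_mod(top, bottom, shoe_pos, shoe_neg):
    """l_mod sums l_vc over the jacket, the lower clothing and both
    candidate shoes; each argument is a (v_tilde, c_tilde) pair."""
    return sum(l_vc(v, c) for v, c in (top, bottom, shoe_pos, shoe_neg))

rng = np.random.default_rng(1)
pair = lambda: (rng.normal(size=4), rng.normal(size=4))  # toy implicit pairs
total = l_mod(pair(), pair(), pair(), pair())            # non-negative scalar
```

Note that l_vc is zero exactly when the two modalities' implicit representations agree, which is the consistency the model encourages.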
As a further improvement of the present invention, in step (6), the multi-modal complementary garment coordination model based on the deep neural network is:
L = l_AE + l_mod + l_bpr.
As a further improvement of the present invention, in step (6): the parameters of the constructed multi-modal complementary garment coordination model are trained by stochastic gradient descent; the model is iterated until convergence, and the final parameters are output.
As a further improvement of the present invention, in step (6), the trained multi-modal complementary garment coordination model based on the deep neural network is used to make garment coordination recommendations:
with the trained final parameters, all comp_ijk values are calculated;
the shoes corresponding to the maximum comp_ijk value are the shoes that best match the jacket and lower clothing.
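The recommendation step reduces to an argmax over candidate shoes; a minimal sketch, with the per-shoe scores below chosen arbitrarily for illustration:

```python
import numpy as np

def recommend_shoe(comp_ik, comp_jk):
    """Given jacket-shoe scores comp_ik and lower-clothing-shoe scores
    comp_jk (one entry per candidate shoe), return the index of the shoe
    with the largest combined score comp_ijk = comp_ik + comp_jk."""
    comp_ijk = np.asarray(comp_ik) + np.asarray(comp_jk)
    return int(np.argmax(comp_ijk))

# three candidate shoes; shoe 1 maximizes 0.9 + 0.3 = 1.2
best = recommend_shoe([0.2, 0.9, 0.4], [0.5, 0.3, 0.1])  # -> 1
```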
As the second aspect of the invention, a neural-network-based multi-modal complementary garment coordination system is provided, comprising: a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of any of the above methods are completed.
As the third aspect of the present invention, a computer-readable storage medium is provided, on which computer instructions are stored; when the computer instructions are run by a processor, the steps of any of the above methods are completed.
Compared with the prior art, the beneficial effects of the present invention are:
1. the multi-modal complementary garment coordination method based on the deep neural network can match multiple complementary goods;
2. the method can seamlessly mine the multi-modal information (i.e. the visual and text modalities) between commodities;
3. the model can effectively solve the sparsity problem of garments;
4. the model can effectively perform compatibility modeling of the matching preferences among multiple garments.
Description of the drawings
The accompanying drawings, which constitute a part of this application, provide a further understanding of the application; the illustrative embodiments and their explanations are used to explain the application and do not constitute an undue limitation on it.
Fig. 1 is the flow chart of the invention;
Fig. 2 shows the multi-modal commodity information used in the present invention, comprising the garment picture, the category hierarchy and the title description;
Fig. 3(a) shows the most popular jacket and lower-clothing category pairings;
Fig. 3(b) shows the most popular lower-clothing and shoe category pairings.
Specific embodiments
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by a person of ordinary skill in the technical field to which the application belongs.
It should be noted that the terms used herein are merely for describing specific embodiments and are not intended to limit the illustrative embodiments of the application. As used herein, unless the context clearly indicates otherwise, the singular forms are also intended to include the plural forms; additionally, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
The invention mainly includes the following contents:
mining the multi-modal information of commodities (i.e. the visual and text modalities) through a deep convolutional neural network and a bag-of-words model;
learning the latent compatible space of multiple multi-modal commodities through an autoencoder neural network;
using the implicit feedback between compatible commodities, further mining the matching preferences between complementary commodities with the Bayesian personalized ranking framework.
The proposed content-based multi-modal Bayesian personalized complementary garment coordination framework effectively solves the sparsity problem and jointly performs compatibility modeling of the multi-modal relations of commodities and the preferences implied between them.
As the first embodiment of the invention, a neural-network-based multi-modal complementary garment coordination method is provided, comprising:
Step (1): obtaining visual features from garment pictures and, at the same time, text features from the verbal descriptions of the garments.
As a further improvement of the present invention, in step (1):
the garments comprise: jackets, lower clothing and shoes;
a garment picture refers to the color image of a jacket, lower clothing or shoes;
the verbal description of a garment comprises its pattern, function and category;
the visual features comprise the visual features of the jacket, the lower clothing and the shoes;
the text features comprise the text features of the jacket, the lower clothing and the shoes;
the visual features are obtained from the garment pictures by a deep convolutional neural network;
the text features are obtained from the verbal descriptions of the garments by a bag-of-words model.
Step (2): using an autoencoder to learn a compatible space for the visual and text features of different garments, obtaining implicit representations of the visual features and of the text features.
As a further improvement of the present invention, in step (2):
the visual feature v_ti and text feature c_ti of jacket t_i, the visual feature v_bj and text feature c_bj of lower clothing b_j, and the visual feature v_sk and text feature c_sk of shoes s_k are input into the encoders of the autoencoders; the encoders output the visual implicit representation v~_ti and text implicit representation c~_ti of the jacket, the visual implicit representation v~_bj and text implicit representation c~_bj of the lower clothing, and the visual implicit representation v~_sk and text implicit representation c~_sk of the shoes.
Step (3): using the decoders to decode the implicit representations of the visual and text features obtained in step (2) into reconstruction vectors, and establishing the relational model between the reconstruction vectors and the input features.
As a further improvement of the present invention, in step (3):
through the decoders of the autoencoders, the implicit representations v~_ti, c~_ti, v~_bj, c~_bj, v~_sk and c~_sk are decoded into the visual reconstruction vector v^_ti and text reconstruction vector c^_ti of the jacket, the visual reconstruction vector v^_bj and text reconstruction vector c^_bj of the lower clothing, and the visual reconstruction vector v^_sk and text reconstruction vector c^_sk of the shoes.
As a further improvement of the present invention, in step (3), the relational model between the reconstruction vectors and the input features is established as:
l_AE(x) = l(v_x) + l(c_x), with l(v_x) = ||v^_x - v_x||^2 and l(c_x) = ||c^_x - c_x||^2;
wherein l_AE(x) denotes the relational model between the reconstruction vectors of garment x and its visual and text features; l(v_x) denotes the reconstruction error between the visual reconstruction vector v^_x and the visual feature v_x of garment x; l(c_x) denotes the reconstruction error between the text reconstruction vector c^_x and the text feature c_x of garment x.
The relational model for the entire outfit is:
l_AE = l_AE(t_i) + l_AE(b_j) + l_AE(s_k).
Step (4): based on the implicit representations of the visual and text features obtained in step (2), establishing a garment compatibility model, and then, based on it, constructing a compatibility preference model with the Bayesian personalized ranking algorithm.
As a further improvement of the present invention, in step (4), the garment compatibility model comp_ijk is established as:
comp_ijk = comp_ik + comp_jk;
wherein comp_ik denotes the compatibility model of the jacket and the shoes, and comp_jk denotes the compatibility model of the lower clothing and the shoes.
As a further improvement of the present invention, in step (4), based on the garment compatibility model, the compatibility preference model l_bpr is constructed with the Bayesian personalized ranking algorithm:
l_bpr = Σ_(i,j,k+,k-) -ln σ(comp_ijk+ - comp_ijk-);
wherein comp_ijk+ denotes the compatibility preference of the jacket and lower clothing for the positive-example shoes, comp_ijk- denotes their compatibility preference for the negative-example shoes, σ(·) denotes the sigmoid function of the neural network, and the quadruple (i, j, k+, k-) indicates that jacket t_i and lower clothing b_j are better matched with shoes s_k+ than with shoes s_k-.
Step (5): based on the implicit representations of the visual and text features obtained in step (2), establishing a consistency model between the visual and text implicit representations, and then establishing the multi-modal hidden-feature consistency model of the garments.
As a further improvement of the present invention, in step (5), the consistency model l_vc(t_i) between the visual implicit representation and the text implicit representation is:
l_vc(t_i) = ||σ(v~_ti) - σ(c~_ti)||^2;
wherein σ(·) denotes the sigmoid function of the neural network, v~_ti denotes the visual implicit representation of garment t_i, and c~_ti denotes the text implicit representation of garment t_i.
As a further improvement of the present invention, in step (5), the multi-modal hidden-feature consistency model l_mod of the garments is:
l_mod = l_vc(t_i) + l_vc(b_j) + l_vc(s_k+) + l_vc(s_k-);
wherein l_vc(t_i) denotes the consistency model between the visual and text implicit representations of the jacket, l_vc(b_j) that of the lower clothing, l_vc(s_k+) that of the positive-example shoes, and l_vc(s_k-) that of the negative-example shoes.
Step (6): based on the results of steps (3), (4) and (5), constructing the multi-modal complementary garment coordination model based on a deep neural network; training the constructed model; and using the trained model to make garment coordination recommendations.
As a further improvement of the present invention, in step (6), the multi-modal complementary garment coordination model based on the deep neural network is:
L = l_AE + l_mod + l_bpr.
As a further improvement of the present invention, in step (6): the parameters of the constructed multi-modal complementary garment coordination model are trained by stochastic gradient descent; the model is iterated until convergence, and the final parameters are output.
As a further improvement of the present invention, in step (6), the trained multi-modal complementary garment coordination model based on the deep neural network is used to make garment coordination recommendations:
with the trained final parameters, all comp_ijk values are calculated;
the shoes corresponding to the maximum comp_ijk value are the shoes that best match the jacket and lower clothing.
As the second embodiment of the invention, a neural-network-based multi-modal complementary garment coordination system is provided, comprising: a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of any of the above methods are completed.
As the third embodiment of the invention, a computer-readable storage medium is provided, on which computer instructions are stored; when the computer instructions are run by a processor, the steps of any of the above methods are completed.
The solution of the present invention is shown in Fig. 1. The technical solution is described below with specific examples.
S1: Fig. 2 shows several garment samples; each sample comprises the garment picture, the category to which the garment belongs, and its title description. Information such as the garment's color and pattern can be shown intuitively by the picture, while information such as its style, function and category can be obtained effectively from the text. The different modalities, i.e. the visual and text modalities, thus describe the same article from different aspects.
S2: We further explore the validity of the text information. Fig. 3(a) shows the most popular jacket and lower-clothing category pairings, where each dot represents a category, light gray represents jacket categories and dark gray represents lower-clothing categories; the area of a circle and the width of a line are proportional, respectively, to the number of garments in the category and to the co-occurrence of the matched categories. We can see that sweaters, T-shirts and knee-length skirts are the categories matched most often with others; outer coats are more often matched with dresses, while sweaters are more often matched with knee-length skirts. Likewise, from the most popular lower-clothing and shoe pairings in Fig. 3(b) we can see that knee-length skirts are often matched with high-heeled shoes, and ankle boots with skinny jeans.
S3: Visual features are extracted with the pre-trained AlexNet network provided by Caffe, which comprises five convolutional layers and three fully connected layers. We take the output of the 'fc7' layer of AlexNet as the visual feature, finally obtaining a 4096-dimensional feature; for jacket t_i, lower clothing b_j and shoes s_k, the visual features are denoted v_ti, v_bj and v_sk.
S4: A vocabulary is built from the commodity categories and title descriptions and then filtered. Through the bag-of-words model we obtain the text features c_ti, c_bj and c_sk of jacket t_i, lower clothing b_j and shoes s_k respectively; each feature is 3345-dimensional.
S5: Rather than computing the compatibility of jackets, lower clothing and shoes directly from the above feature vectors in their heterogeneous spaces, the present invention bridges the semantic gap between articles in heterogeneous spaces by learning a latent compatible space. This latent compatible space allows different types of articles — jackets, lower clothing and shoes with different styles, functions and so on — to attain maximum compatibility, so that garments of different types find their consistency in factors such as color, pattern, style and material. Here we learn the compatible space of the visual features v_ti, v_bj, v_sk and text features c_ti, c_bj, c_sk of jacket t_i, lower clothing b_j and shoes s_k with multiple autoencoders. Through the encoders we obtain the visual and text implicit representations v~_ti, c~_ti, v~_bj, c~_bj, v~_sk and c~_sk of the jacket, the lower clothing and the shoes; through the decoders these implicit representations are decoded into the reconstructions v^_ti, c^_ti, v^_bj, c^_bj, v^_sk and c^_sk. For an article x, the reconstructed vectors should be close to the input vectors, so we learn by minimizing the following model:
l_AE(x) = ||v^_x - v_x||^2 + ||c^_x - c_x||^2.
In the present invention, jacket t_i, lower clothing b_j and shoes s_k are learned simultaneously:
l_AE = l_AE(t_i) + l_AE(b_j) + l_AE(s_k)
S6: The compatibility of articles is modeled, here taking matching shoes for a jacket and lower clothing as an example. Having learned the latent compatible space and obtained the visual and text implicit representations of the jacket, the lower clothing and the shoes, we build the compatibility model:
comp_ijk = comp_ik + comp_jk
wherein comp_ik is the compatibility of the jacket and the shoes computed from their implicit representations, and comp_jk, the compatibility of the lower clothing and the shoes, can similarly be obtained.
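The pairwise terms comp_ik and comp_jk can be sketched as below. Note the inner-product form is an assumption: the patent's formula for comp_ik was an image that is not reproduced in this text, so any similarity of implicit representations could be substituted.

```python
import numpy as np

def comp_pair(v1, c1, v2, c2):
    """Pairwise compatibility of two garments from their visual (v~) and
    text (c~) implicit representations. The inner-product form here is an
    assumed placeholder for the patent's unreproduced formula."""
    return float(v1 @ v2 + c1 @ c2)

def comp_ijk(top, bottom, shoe):
    """comp_ijk = comp_ik + comp_jk; each argument is a (v~, c~) pair."""
    return comp_pair(*top, *shoe) + comp_pair(*bottom, *shoe)

rng = np.random.default_rng(2)
g = lambda: (rng.normal(size=4), rng.normal(size=4))  # toy implicit pairs
score = comp_ijk(g(), g(), g())                       # outfit score
```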
S7: To fully exploit the visual and text modalities of an article, we model the consistency of the visual and text hidden features of commodity x_i. Considering jacket t_i, lower clothing b_j and shoes s_k simultaneously, we obtain the multi-modal hidden-feature consistency model l_mod of the three.
S8: To fully mine the compatibility of the jacket, lower clothing and shoes in a match, we use the Bayesian personalized ranking framework; taking recommending shoes for a jacket and lower clothing as an example, the model is:
l_bpr = Σ_(i,j,k+,k-) -ln σ(comp_ijk+ - comp_ijk-)
wherein comp_ijk+ and comp_ijk- denote the compatibility preference of the jacket and lower clothing for different shoes, i.e. jacket t_i and lower clothing b_j are better matched with shoes s_k+ than with shoes s_k-.
S9: Finally we obtain the multi-modal complementary garment coordination model based on the deep neural network:
L = l_AE + l_mod + l_bpr
By training this model, the invention finally obtains the multi-modal compatible space of jackets, lower clothing and shoes, in which the shoes corresponding to the maximum comp_ijk value are the shoes that best match the jacket and lower clothing.
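The training of L by stochastic gradient descent can be illustrated with a toy objective; this sketch shows only the update rule — the patent's actual objective L = l_AE + l_mod + l_bpr and its gradients are not reproduced here, and the quadratic target below is purely illustrative.

```python
import numpy as np

def sgd_minimize(grad_fn, theta, lr=0.1, steps=200):
    """Plain gradient-descent loop of the kind used to train the joint
    objective L = l_AE + l_mod + l_bpr, applied here to a toy function
    so the update rule theta <- theta - lr * grad is visible."""
    for _ in range(steps):
        theta = theta - lr * grad_fn(theta)
    return theta

# toy objective L(theta) = ||theta - target||^2, gradient 2 * (theta - target)
target = np.array([1.0, -2.0])
theta = sgd_minimize(lambda t: 2.0 * (t - target), np.zeros(2))
# theta converges to target, i.e. the loss is driven to its minimum
```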
The foregoing is merely a preferred embodiment of the application and is not intended to limit it; for those skilled in the art, various modifications and changes to this application are possible. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of this application shall be included within its scope of protection.
Claims (10)
1. A neural-network-based multi-modal complementary garment coordination method, characterized by comprising:
Step (1): obtaining visual features from garment pictures and, at the same time, text features from the verbal descriptions of the garments;
Step (2): using an autoencoder to learn a compatible space for the visual and text features of different garments, obtaining implicit representations of the visual features and of the text features;
Step (3): using the decoders to decode the implicit representations of the visual and text features obtained in step (2) into reconstruction vectors, and establishing the relational model between the reconstruction vectors and the input features;
Step (4): based on the implicit representations of the visual and text features obtained in step (2), establishing a garment compatibility model, and then, based on it, constructing a compatibility preference model with the Bayesian personalized ranking algorithm;
Step (5): based on the implicit representations of the visual and text features obtained in step (2), establishing a consistency model between the visual and text implicit representations, and then establishing the multi-modal hidden-feature consistency model of the garments;
Step (6): based on the results of steps (3), (4) and (5), constructing the multi-modal complementary garment coordination model based on a deep neural network, training the constructed model, and using the trained model to make garment coordination recommendations.
2. The multi-modal complementary garment coordination method based on a neural network according to claim 1, characterized in that, in step (1):
the visual features are obtained from the pictures of the garments through a deep convolutional neural network; and
the text features are obtained from the textual descriptions of the garments through a bag-of-words model.
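The bag-of-words text-feature extraction of claim 2 can be sketched as follows; the vocabulary and the garment description are illustrative, and the visual branch (a deep convolutional neural network) is omitted:

```python
import numpy as np

def bag_of_words(description, vocabulary):
    """Map a garment's text description to a word-count vector over
    `vocabulary` -- a minimal bag-of-words model."""
    tokens = description.lower().split()
    return np.array([tokens.count(word) for word in vocabulary], dtype=float)

# Illustrative vocabulary and description (not from the patent).
vocab = ["red", "cotton", "slim", "shirt", "denim"]
c_x = bag_of_words("Red slim cotton shirt", vocab)   # text feature of garment x
```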
3. The multi-modal complementary garment coordination method based on a neural network according to claim 1, characterized in that, in step (2):
the visual feature and the text feature of the top t_i, the visual feature and the text feature of the bottom b_j, and the visual feature and the text feature of the shoes s_k are input into the encoder of the autoencoder, and the encoder outputs the latent visual representation and the latent text representation of the top t_i, the latent visual representation and the latent text representation of the bottom b_j, and the latent visual representation and the latent text representation of the shoes s_k.
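The encoding step of claim 3 can be sketched with a single-layer encoder; the sigmoid layer and the dimensions are assumptions, as the claim does not fix the encoder architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x, W, b):
    """Single-layer encoder: map an input feature vector to its latent
    representation.  One sigmoid layer is an assumption; the claim only
    specifies 'the encoder of the autoencoder'."""
    return sigmoid(W @ x + b)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 8)), np.zeros(4)   # illustrative 8-d -> 4-d mapping
v_top = rng.normal(size=8)                    # visual feature of a top (toy values)
h_v_top = encode(v_top, W, b)                 # latent visual representation
```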
4. The multi-modal complementary garment coordination method based on a neural network according to claim 3, characterized in that, in step (3):
through the decoder of the autoencoder, the latent visual representation and the latent text representation of the top t_i, of the bottom b_j and of the shoes s_k are decoded into the visual reconstruction vector and the text reconstruction vector of the top t_i, of the bottom b_j and of the shoes s_k, respectively.
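The decoding step of claim 4 can be sketched symmetrically to the encoder; the single sigmoid layer is again an assumption:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode(h, W_dec, b_dec):
    """Single-layer decoder: map a latent representation back to a
    reconstruction vector of the original feature dimension.  The layer
    form is an assumption, mirroring the encoder sketch."""
    return sigmoid(W_dec @ h + b_dec)

rng = np.random.default_rng(1)
W_dec, b_dec = rng.normal(size=(8, 4)), np.zeros(8)  # 4-d latent -> 8-d reconstruction
h = rng.uniform(size=4)                              # a latent representation (toy values)
v_rec = decode(h, W_dec, b_dec)                      # visual reconstruction vector
```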
5. The multi-modal complementary garment coordination method based on a neural network according to claim 1, characterized in that, in step (3), the relation model between the reconstruction vectors and the input features is established as:

l_AE(x) = l(v_x) + l(c_x)

where l_AE(x) denotes the relation model between the reconstruction vectors of garment x and its visual feature and text feature; l(v_x) denotes the reconstruction error between the visual reconstruction vector of garment x and its visual feature v_x; and l(c_x) denotes the reconstruction error between the text reconstruction vector of garment x and its text feature c_x;
the relation model between the reconstruction vectors and the input features of the entire outfit is then established as:

l_AE = l_AE(t_i) + l_AE(b_j) + l_AE(s_k).
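The reconstruction-loss computation of claim 5 can be sketched as follows; squared L2 error is an assumption, as the claim only names "reconstruction error":

```python
import numpy as np

def l_item(v, v_rec, c, c_rec):
    """l_AE(x) = l(v_x) + l(c_x): reconstruction errors of the visual and
    text modalities of one garment.  Squared L2 error is an assumption."""
    return float(np.sum((v_rec - v) ** 2) + np.sum((c_rec - c) ** 2))

def l_ae(top, bottom, shoes):
    """l_AE = l_AE(t_i) + l_AE(b_j) + l_AE(s_k) over the whole outfit.
    Each argument is a (v, v_rec, c, c_rec) tuple."""
    return sum(l_item(*item) for item in (top, bottom, shoes))

# Toy features: each garment as (visual, visual reconstruction, text, text reconstruction).
z = np.zeros(3)
top    = (np.array([1.0, 0.0, 0.0]), z, np.array([1.0, 1.0, 0.0]), z)
bottom = (z, z, z, z)                     # perfect reconstruction: zero error
shoes  = (np.array([0.0, 2.0, 0.0]), z, z, z)
total = l_ae(top, bottom, shoes)          # 1 + 2 + 0 + 4 = 7.0
```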
6. The multi-modal complementary garment coordination method based on a neural network according to claim 1, characterized in that, in step (4), the garment compatibility model comp_ijk is established as:

comp_ijk = comp_ik + comp_jk

where comp_ik denotes the compatibility model of the top and the shoes, and comp_jk denotes the compatibility model of the bottom and the shoes;
based on the garment compatibility model, the compatibility preference model l_bpr is constructed with the Bayesian personalized ranking algorithm:

l_bpr = -ln σ(comp_ijk+ - comp_ijk-)

where comp_ijk+ denotes the compatibility preference of the top and the bottom for the positive-example shoes; comp_ijk- denotes the compatibility preference of the top and the bottom for the negative-example shoes; σ(·) denotes the threshold function of the neural network; and the quadruple (i, j, k+, k-) indicates that the top t_i and the bottom b_j match the shoes s_k+ better than the shoes s_k-.
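The compatibility preference model of claim 6 can be sketched with the standard BPR objective; the -log-sigmoid form is an assumption, since the exact expression is not reproduced in the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l_bpr(comp_pos, comp_neg):
    """BPR loss for one training quadruple (i, j, k+, k-): the positive
    shoes k+ should score higher than the negative shoes k-.  The
    -log-sigmoid form is the standard BPR objective, assumed here."""
    return float(-np.log(sigmoid(comp_pos - comp_neg)))

loss_good = l_bpr(comp_pos=2.0, comp_neg=0.0)   # small loss: ranking is correct
loss_bad  = l_bpr(comp_pos=0.0, comp_neg=2.0)   # large loss: ranking is inverted
```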
7. The multi-modal complementary garment coordination method based on a neural network according to claim 1, characterized in that, in step (5), the consistency model l_vc(t_i) between the latent visual representation and the latent text representation of the garment t_i is established, where σ(·) denotes the threshold function of the neural network;
the multi-modal latent-feature consistency model l_mod of the garments is then established as:

l_mod = l_vc(t_i) + l_vc(b_j) + l_vc(s_k+) + l_vc(s_k-)

where l_vc(t_i) denotes the consistency model between the latent visual representation and the latent text representation of the top; l_vc(b_j) denotes that of the bottom; l_vc(s_k+) denotes that of the positive-example shoes; and l_vc(s_k-) denotes that of the negative-example shoes.
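The consistency model of claim 7 can be sketched as follows; the -log-sigmoid of the inner product of the two latent representations is an assumption consistent with the threshold function named in the claim:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l_vc(h_v, h_c):
    """Visual-text consistency of one garment.  A -log-sigmoid of the inner
    product of the latent visual and latent text representations is an
    assumption; the claim only names the threshold function."""
    return float(-np.log(sigmoid(np.dot(h_v, h_c))))

def l_mod(pairs):
    """l_mod = l_vc(t_i) + l_vc(b_j) + l_vc(s_k+) + l_vc(s_k-):
    `pairs` holds one (h_v, h_c) tuple per garment."""
    return sum(l_vc(h_v, h_c) for h_v, h_c in pairs)

aligned    = (np.array([1.0, 1.0]), np.array([1.0, 1.0]))    # modalities agree
misaligned = (np.array([1.0, 0.0]), np.array([-1.0, 0.0]))   # modalities disagree
total = l_mod([aligned, aligned, aligned, misaligned])
```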
8. The multi-modal complementary garment coordination method based on a neural network according to claim 1, characterized in that, in step (6), the multi-modal complementary garment coordination model based on the deep neural network is:

L = l_AE + l_mod + l_bpr

the parameters of the constructed multi-modal complementary garment coordination model are trained by stochastic gradient descent, the model is made to converge through iterations, and the final parameters are output;
garment coordination recommendation is performed with the trained multi-modal complementary garment coordination model based on the deep neural network: all values of comp_ijk are computed with the trained final parameters, and the shoes corresponding to the maximum value of comp_ijk are the shoes that best match the given top and bottom.
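The stochastic-gradient-descent training of claim 8 can be sketched as a single parameter update; the learning rate and parameter shapes are illustrative:

```python
import numpy as np

def sgd_step(params, grads, lr=0.1):
    """One stochastic-gradient-descent update of the model parameters, as
    used in claim 8 to train L = l_AE + l_mod + l_bpr until convergence.
    The learning rate is illustrative."""
    return {name: value - lr * grads[name] for name, value in params.items()}

params = {"W_enc": np.array([1.0, -1.0])}   # toy parameter of the model
grads  = {"W_enc": np.array([0.5, -0.5])}   # gradient of L w.r.t. W_enc (toy values)
params = sgd_step(params, grads, lr=0.1)    # -> [0.95, -0.95]
```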
9. A multi-modal complementary garment coordination system based on a neural network, characterized by comprising: a memory, a processor, and computer instructions stored in the memory and run on the processor, wherein, when the computer instructions are run by the processor, the steps of the method according to any one of claims 1-8 are completed.
10. A computer-readable storage medium, characterized in that computer instructions are stored thereon, and when the computer instructions are run by a processor, the steps of the method according to any one of claims 1-8 are completed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810501840.4A CN108960959B (en) | 2018-05-23 | 2018-05-23 | Multi-mode complementary clothing matching method, system and medium based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108960959A true CN108960959A (en) | 2018-12-07 |
CN108960959B CN108960959B (en) | 2020-05-12 |
Family
ID=64499884
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810501840.4A Active CN108960959B (en) | 2018-05-23 | 2018-05-23 | Multi-mode complementary clothing matching method, system and medium based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108960959B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016502713A (en) * | 2012-11-12 | 2016-01-28 | シンガポール・ユニバーシティ・オブ・テクノロジー・アンド・デザインSingapore University of Technologyand Design | Clothing matching system and method |
CN107123033A (en) * | 2017-05-04 | 2017-09-01 | 北京科技大学 | A kind of garment coordination method based on depth convolutional neural networks |
CN107870992A (en) * | 2017-10-27 | 2018-04-03 | 上海交通大学 | Editable image of clothing searching method based on multichannel topic model |
CN107909436A (en) * | 2017-11-14 | 2018-04-13 | 成都爆米花信息技术有限公司 | It is a kind of to recommend method suitable for the fitting based on big data of shopping online platform |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109871531A (en) * | 2019-01-04 | 2019-06-11 | 平安科技(深圳)有限公司 | Hidden feature extracting method, device, computer equipment and storage medium |
CN113692598A (en) * | 2019-02-14 | 2021-11-23 | 凯首公司 | System and method for automatic training and prediction of garment usage models |
CN110110181A (en) * | 2019-05-09 | 2019-08-09 | 湖南大学 | A kind of garment coordination recommended method based on user styles and scene preference |
CN110211196A (en) * | 2019-05-28 | 2019-09-06 | 山东大学 | A kind of virtually trying method and device based on posture guidance |
CN110458638A (en) * | 2019-06-26 | 2019-11-15 | 平安科技(深圳)有限公司 | A kind of Method of Commodity Recommendation and device |
CN110458638B (en) * | 2019-06-26 | 2023-08-15 | 平安科技(深圳)有限公司 | Commodity recommendation method and device |
CN110825963A (en) * | 2019-10-18 | 2020-02-21 | 山东大学 | Generation-based auxiliary template enhanced clothing matching scheme generation method and system |
CN110807477A (en) * | 2019-10-18 | 2020-02-18 | 山东大学 | Attention mechanism-based neural network garment matching scheme generation method and system |
CN110825963B (en) * | 2019-10-18 | 2022-03-25 | 山东大学 | Generation-based auxiliary template enhanced clothing matching scheme generation method and system |
CN110807477B (en) * | 2019-10-18 | 2022-06-07 | 山东大学 | Attention mechanism-based neural network garment matching scheme generation method and system |
CN111400525A (en) * | 2020-03-20 | 2020-07-10 | 中国科学技术大学 | Intelligent fashionable garment matching and recommending method based on visual combination relation learning |
CN111400525B (en) * | 2020-03-20 | 2023-06-16 | 中国科学技术大学 | Fashion clothing intelligent matching and recommending method based on vision combination relation learning |
CN111383081A (en) * | 2020-03-24 | 2020-07-07 | 东华大学 | Intelligent recommendation method for clothing matching |
CN112860928A (en) * | 2021-02-08 | 2021-05-28 | 天津大学 | Clothing retrieval method based on class perception graph neural network |
CN114707427A (en) * | 2022-05-25 | 2022-07-05 | 青岛科技大学 | Personalized modeling method of graph neural network based on effective neighbor sampling maximization |
Also Published As
Publication number | Publication date |
---|---|
CN108960959B (en) | 2020-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108960959A (en) | Multi-modal complementary garment coordination method, system and medium neural network based | |
Luce | Artificial intelligence for fashion: How AI is revolutionizing the fashion industry | |
Zhou et al. | Photorealistic facial expression synthesis by the conditional difference adversarial autoencoder | |
Cui et al. | FashionGAN: Display your fashion design using conditional generative adversarial nets | |
CN110110181A (en) | A kind of garment coordination recommended method based on user styles and scene preference | |
CN108921123A (en) | A kind of face identification method based on double data enhancing | |
US20160026926A1 (en) | Clothing matching system and method | |
CN109299396A (en) | Merge the convolutional neural networks collaborative filtering recommending method and system of attention model | |
Yan et al. | Toward intelligent design: An ai-based fashion designer using generative adversarial networks aided by sketch and rendering generators | |
CN110909754A (en) | Attribute generation countermeasure network and matching clothing generation method based on same | |
CN110956579B (en) | Text picture rewriting method based on generation of semantic segmentation map | |
CN109754317A (en) | Merge interpretation clothes recommended method, system, equipment and the medium of comment | |
CN108875910A (en) | Garment coordination method, system and the storage medium extracted based on attention knowledge | |
Yu et al. | DressUp!: outfit synthesis through automatic optimization. | |
Xu et al. | [Retracted] Innovative Design of Intangible Cultural Heritage Elements in Fashion Design Based on Interactive Evolutionary Computation | |
CN113592609A (en) | Personalized clothing matching recommendation method and system using time factors | |
Yang et al. | Combining users’ cognition noise with interactive genetic algorithms and trapezoidal fuzzy numbers for product color design | |
Jia et al. | Learning to appreciate the aesthetic effects of clothing | |
Wang et al. | Learning outfit compatibility with graph attention network and visual-semantic embedding | |
Wu et al. | A computer-aided coloring method for virtual agents based on personality impression, color harmony, and designer preference | |
Wang et al. | Learning compatibility knowledge for outfit recommendation with complementary clothing matching | |
Zhuo et al. | 3D modeling design and rapid style recommendation of polo shirt based on interactive genetic algorithm | |
Lai et al. | Theme-matters: fashion compatibility learning via theme attention | |
Wu et al. | An AIGC-empowered methodology to product color matching design | |
Mu et al. | Fashion intelligence in the Metaverse: promise and future prospects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||