CN109754317A - Comment-fused interpretable clothing recommendation method, system, device and medium - Google Patents
Comment-fused interpretable clothing recommendation method, system, device and medium
- Publication number
- CN109754317A (application CN201910024347.2A)
- Authority
- CN
- China
- Prior art keywords
- picture
- garment
- visual
- jacket
- lower garment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The present disclosure provides a comment-fused interpretable clothing recommendation method, system, device, and medium, comprising: constructing a deep learning-based encoder-decoder neural network model; training the deep learning-based encoder-decoder neural network model; and inputting the upper garment picture and the lower garment picture to be recommended simultaneously into the trained encoder-decoder neural network model, which scores the matching degree of the upper garment picture and the lower garment picture, gives recommendation results according to the score ranking, and at the same time gives a simulated comment on the matching degree. The recommendation model is trained with the useful information hidden in user comments, which improves the recommendation effect; meanwhile, the model can simulate the comments users would make on the recommended result as an explanation of the recommendation, improving the interpretability of the recommendation.
Description
Technical Field
The present disclosure relates to the field of clothing recommendation, and in particular, to a method, system, device, and medium for comment-fused interpretable clothing recommendation.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
The purpose of clothing recommendation is to promote people's interest and participation in online shopping by recommending fashionable clothing that may interest the user. On the one hand, clothing recommendation technology can help users quickly find satisfactory clothing among the dazzling array of online fashion goods; on the other hand, it can help online retailers improve service quality and increase revenue. Clothing recommendation technology therefore plays an increasingly important role in the online retail market and has drawn wide attention from both industry and academia.
The field of clothing recommendation contains a number of problems; the present invention is directed to the particular problem of recommending a suitable lower garment (e.g., skirt, shorts, etc.) for a given upper garment (e.g., T-shirt, coat, etc.) of a user, and vice versa. Solving this problem can help users match their own clothing better and dress more fashionably. Early garment recommendation studies were based on expert-labeled data sets; these data sets were too small, which limited the development of complex models (e.g., deep learning-based models). In recent years, with the advent of fashion-oriented online communities (e.g., Polyvore and Chictopia), people can share and comment on clothing matches. Besides the large number of clothing matches, this crowd-sourced data contains other valuable information (e.g., large numbers of user reviews) that can be used to build more accurate and intelligent recommendation systems.
Most current clothing recommendation technologies judge the matching degree between a given upper garment and a candidate lower garment simply by extracting visual features from the pictures of the two garments. They ignore the information in user comments and do not consult user comments to learn the matching rules of common clothing. Meanwhile, existing clothing recommendation technologies only give a judgment result; they do not simulate the comments a user would make to explain the recommendation. This makes the recommendation less transparent and less trustworthy.
Disclosure of Invention
In order to overcome the deficiencies of the prior art, the present disclosure provides a comment-fused interpretable clothing recommendation method, system, device and medium. The recommendation model is trained with the useful information hidden in user comments, which improves the recommendation effect; meanwhile, the comments a user would make on the recommended result can be simulated to serve as the explanation of the recommendation, which improves the interpretability of the recommendation.
In a first aspect, the present disclosure provides a method for interpretable garment recommendation incorporating comments;
the interpretable clothing recommending method fusing comments comprises the following steps:
constructing a deep learning-based encoder-decoder neural network model;
training a deep learning based encoder-decoder neural network model;
and simultaneously inputting the upper garment picture and the lower garment picture to be recommended into a trained encoder-decoder neural network model, scoring the matching degree of the upper garment picture and the lower garment picture by the model, giving a recommendation result according to the scoring sequence, and simultaneously giving a simulation comment of the matching degree.
As one possible implementation, the deep learning based encoder-decoder neural network model includes:
an upper garment encoder, a lower garment encoder, a matching decoder and a generating decoder;
the jacket encoder is used for receiving the jacket picture and extracting jacket visual characteristics and jacket code representation of the jacket picture; the jacket code representation comprises matching information between a jacket picture and a lower clothes picture;
the lower garment encoder is used for receiving the lower garment picture and extracting the lower garment visual characteristics and the lower garment encoding representation of the lower garment picture; the lower garment code representation comprises matching information between an upper garment picture and a lower garment picture;
the matching decoder is used for scoring the matching degree between the upper garment picture and the lower garment picture according to the upper garment code representation and the lower garment code representation;
the generation decoder is used for generating the simulation comment according to the coat visual characteristic, the coat code representation, the underwear visual characteristic and the underwear code representation.
As a possible implementation manner, the specific steps of extracting the jacket visual features of the jacket picture are as follows:
the jacket encoder includes: the first coiling layer, the second coiling layer, the first splicing layer and the first pooling layer are sequentially connected;
the first convolution layer extracts visual features of the jacket picture to obtain first visual features;
the second convolution layer extracts visual features of the jacket picture to obtain second visual features;
the first splicing layer is used for splicing the first visual feature and the second visual feature in series, and a third visual feature obtained by splicing is sent to the first pooling layer;
and the first pooling layer processes the third visual feature to obtain the jacket visual features of the jacket picture.
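By way of illustration, the following PyTorch sketch shows one possible realization of such an encoder. The channel counts, kernel sizes, activation functions, and the choice of max pooling are assumptions not fixed by this disclosure, as is the decision to feed the second convolution layer with the first layer's output.

```python
import torch
import torch.nn as nn

class GarmentEncoder(nn.Module):
    """Two convolution layers whose feature maps are spliced along the
    channel axis and then pooled, per the steps above. Sizes are
    illustrative assumptions."""
    def __init__(self, in_channels=3, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, picture):                  # picture: (B, 3, H, W)
        f1 = torch.relu(self.conv1(picture))     # first visual feature
        f2 = torch.relu(self.conv2(f1))          # second visual feature
        f3 = torch.cat([f1, f2], dim=1)          # third visual feature: channel-axis splice
        pooled = self.pool(f3)                   # pooling layer
        b, d, h, w = pooled.shape                # flatten positions into L features of dimension D
        return pooled.view(b, d, h * w).transpose(1, 2)   # (B, L, D)
```

The lower garment encoder described next has the same structure and, per the embodiment below, shares its parameters with the upper garment encoder, so a single `GarmentEncoder` instance can serve both pictures.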
As a possible implementation manner, the specific steps of extracting the visual characteristics of the lower garment picture are as follows:
the lower garment encoder comprises: the third convolution layer, the fourth convolution layer, the second splicing layer and the second pooling layer are connected in sequence;
the third convolution layer extracts visual features of the lower garment picture to obtain a fourth visual feature;
the fourth convolution layer extracts visual features of the lower garment picture to obtain fifth visual features;
the second splicing layer is used for splicing the fourth visual feature and the fifth visual feature in series, and the sixth visual feature obtained after splicing is sent to the second pooling layer;
and the second pooling layer processes the sixth visual characteristic to obtain the visual characteristic of the lower clothes picture.
As a possible implementation manner, the specific steps of extracting the jacket code representation of the jacket picture are as follows:
and coding the matching information between the upper garment picture and the lower garment picture into the extracted visual features of the upper garment picture by utilizing an interactive attention mechanism to obtain the coded representation of the upper garment picture.
As a possible implementation manner, the specific steps of extracting the lower garment code representation of the lower garment picture are as follows:
and coding the matching information between the upper garment picture and the lower garment picture into the extracted visual features of the lower garment picture by utilizing an interactive attention mechanism to obtain the coded representation of the lower garment picture.
As a possible implementation manner, the specific steps of coding and representing the upper garment picture by coding the matching information between the upper garment picture and the lower garment picture into the extracted visual features of the upper garment picture by using an interactive attention mechanism are as follows:
firstly, obtaining the global characteristics of a lower garment picture by calculating the average value of the visual characteristics of the lower garment;
then, for each visual feature of the jacket picture, calculating the attention weight of the global feature of the lower garment picture with respect to that visual feature, and normalizing the attention weights;
next, weighting and summing the visual features of the jacket picture with these attention weights to obtain the attention global feature of the jacket picture;
then, mapping the attention global feature of the jacket picture into a visual feature vector;
finally, splicing the visual feature vector of the jacket picture in series with the jacket item vector corresponding to the jacket picture; the spliced result is the final code representation of the jacket picture.
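The following PyTorch sketch illustrates this interactive attention for one direction (encoding the lower garment's matching information into the upper garment's features). The additive scoring form mirrors the parameter shapes W_a, U_a ∈ R^{D×D} and v_a ∈ R^D used in the embodiment; the final linear projection is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractiveAttention(nn.Module):
    """Encodes matching information from the other garment into this
    garment's features, following the steps above. The additive scoring
    form is inferred from the parameter shapes in the embodiment."""
    def __init__(self, dim, out_dim):
        super().__init__()
        self.W_a = nn.Linear(dim, dim, bias=False)
        self.U_a = nn.Linear(dim, dim, bias=False)
        self.v_a = nn.Linear(dim, 1, bias=False)
        self.proj = nn.Linear(dim, out_dim, bias=False)

    def forward(self, own_feats, other_feats):
        # own_feats, other_feats: (B, L, D) visual features of the two garments
        g = other_feats.mean(dim=1, keepdim=True)                     # global feature of the other garment
        e = self.v_a(torch.tanh(self.W_a(g) + self.U_a(own_feats)))  # attention scores (B, L, 1)
        a = F.softmax(e, dim=1)                                       # normalized attention weights
        attended = (a * own_feats).sum(dim=1)                         # attention global feature (B, D)
        return self.proj(attended)                                    # visual feature vector (B, out_dim)
```

The final code representation would then be the serial splice of this visual feature vector with the garment's item vector, e.g. `torch.cat([visual_vec, item_vec], dim=-1)`.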
As a possible implementation manner, the jacket item vector is acquired as follows:
firstly, randomly initializing a jacket item vector matrix, wherein each row of the jacket item vector matrix corresponds to one jacket;
then, according to the input jacket picture, acquiring the corresponding vector from the jacket item vector matrix for later calculation;
and finally, updating the jacket item vector matrix and the parameters of the neural network through the back propagation BP algorithm with the goal of minimizing the loss function value, finally obtaining the updated jacket item vectors.
The jacket item vector captures useful information in the historical matching records through the back propagation BP algorithm, supplementing the visual features of the jacket.
The random initialization uses one of an Xavier method, a uniform distribution method, or a normal distribution method.
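As a concrete illustration, the lookup can be realized with an embedding matrix; the sizes below are placeholders, and Xavier initialization is one of the options the disclosure names.

```python
import torch
import torch.nn as nn

num_jackets, m_v = 10000, 128                    # illustrative sizes
jacket_items = nn.Embedding(num_jackets, m_v)    # one row per jacket
nn.init.xavier_uniform_(jacket_items.weight)     # random initialization (Xavier)

jacket_ids = torch.tensor([3, 17])               # indices of the input jacket pictures
u_t = jacket_items(jacket_ids)                   # (2, m_v) item vectors, updated by BP with the rest of the network
```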
As a possible implementation manner, the matching information between the upper garment picture and the lower garment picture is coded into the extracted visual features of the lower garment picture by using an interactive attention mechanism, and the specific steps of obtaining the coded representation of the lower garment picture are as follows:
firstly, obtaining the global feature of the upper garment picture by calculating the average value of the upper garment visual features;
then, for each visual feature of the lower garment picture, calculating the attention weight of the global feature of the upper garment picture with respect to that visual feature, and normalizing the attention weights;
next, weighting and summing the visual features of the lower garment picture with these attention weights to obtain the attention global feature of the lower garment picture;
then, mapping the attention global feature of the lower garment picture into a visual feature vector;
finally, splicing the visual feature vector of the lower garment picture in series with the lower garment item vector corresponding to the lower garment picture; the spliced result is the final code representation of the lower garment picture.
As a possible implementation manner, the lower garment item vector is acquired as follows:
firstly, randomly initializing a lower garment item vector matrix, wherein each row corresponds to one lower garment;
then, according to the input lower garment picture, acquiring the corresponding vector from the lower garment item vector matrix for later calculation;
and finally, updating the lower garment item vector matrix and the parameters of the neural network through the back propagation BP algorithm with the goal of minimizing the loss function value, finally obtaining the updated lower garment item vectors.
The lower garment item vector captures useful information in the historical matching records through the back propagation BP algorithm as a supplement to the lower garment visual features.
As a possible implementation manner, the specific steps of scoring the matching degree between the upper garment picture and the lower garment picture according to the upper garment code representation and the lower garment code representation are as follows:
and inputting the upper garment code representation and the lower garment code representation into an MLP multilayer perceptron, whose output is the matching score of the upper garment picture and the lower garment picture.
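A minimal sketch of this decoder is given below; it mirrors formulas (7) and (8) of the embodiment described later (a ReLU hidden layer followed by a two-way softmax), with the hidden size n left as a free choice.

```python
import torch
import torch.nn as nn

class MatchDecoder(nn.Module):
    """Scores the match between the encoded upper garment v_t and the
    encoded lower garment v_b, per formulas (7)-(8) of the embodiment."""
    def __init__(self, m, n):
        super().__init__()
        self.W_s = nn.Linear(m, n, bias=False)
        self.U_s = nn.Linear(m, n, bias=False)
        self.W_r = nn.Linear(n, 2, bias=False)

    def forward(self, v_t, v_b):
        h_r = torch.relu(self.W_s(v_t) + self.U_s(v_b))   # formula (7)
        p = torch.softmax(self.W_r(h_r), dim=-1)          # formula (8)
        return p[..., 1]                                  # p(r_tb = 1), the matching score
```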
As a possible implementation manner, the step of generating the simulation comment for the combination of the upper garment picture and the lower garment picture according to the upper garment visual feature, the upper garment code representation, the lower garment visual feature and the lower garment code representation comprises the following steps:
step (1): constructing a gated recurrent neural network GRU;
step (2): calculating the initial state of a gated recurrent neural network GRU by using the coded representation of the upper garment and the lower garment;
step (3): the gated recurrent neural network GRU performs steps (31) to (33) in a loop until a complete sentence is generated:
step (31): firstly, processing the visual features of the upper garment and the visual features of the lower garment by using a cross-modal attention mechanism to obtain a context vector of the current time step;
step (32): inputting the state of the last time step of the gated recurrent neural network GRU, the word vector of the word generated in the last time step and the context vector of the current time step into the gated recurrent neural network GRU to obtain the new state of the current time step and the prediction probability distribution of the currently generated word;
step (33): selecting the word with the highest probability as the current generation result, where words include punctuation marks; if the current generation result is a period, a complete sentence has been generated: all words generated over the time steps are sequentially concatenated into a sentence, and the sentence is returned.
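The decoding loop of steps (31) to (33) can be sketched as follows; `gru` is an `nn.GRUCell`, `attend` is the cross-modal attention described next, and the helper names, the concatenation layout, the start token, and the maximum length cap are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def generate_comment(gru, out_proj, embed, attend, s, top_feats, bottom_feats,
                     start_id, period_id, max_len=30):
    """Greedy decoding per steps (31)-(33): attend over the garment
    features, advance the GRU, emit the most probable word, and stop
    once a period is generated."""
    words = []
    w = embed(torch.tensor([start_id]))                 # word vector of the previously generated word
    for _ in range(max_len):
        ctx = attend(s, top_feats, bottom_feats)        # step (31): context vector of this time step
        s = gru(torch.cat([w, ctx], dim=-1), s)         # step (32): new GRU state
        probs = torch.softmax(out_proj(torch.cat([s, ctx], dim=-1)), dim=-1)
        wid = int(probs.argmax(dim=-1))                 # step (33): word with the highest probability
        words.append(wid)
        if wid == period_id:                            # a period closes the sentence
            break
        w = embed(torch.tensor([wid]))
    return words                                        # word ids, joined into the returned sentence by the caller
```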
As a possible implementation manner, the specific steps of processing the visual features of the upper garment and the visual features of the lower garment by using a cross-modal attention mechanism to obtain a context vector are as follows:
firstly, splicing the visual features of the upper garment and the visual features of the lower garment in series in one-to-one correspondence;
then, for each serial combination, calculating its attention weight conditioned on the state of the gated recurrent neural network GRU at the previous time step;
then, the calculated attention weight value is used for carrying out weighted summation on all the series combinations, and finally, the returned result is the context vector of the current time step.
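A sketch of these three steps follows; the disclosure specifies the pairing, the state-conditioned weighting, and the weighted sum, but not the exact score function, so the linear scoring layer here is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalAttention(nn.Module):
    """Pairs upper and lower garment features one by one, scores each
    pair against the previous GRU state, and returns the weighted sum
    as the context vector. The linear score function is an assumption."""
    def __init__(self, state_dim, feat_dim):
        super().__init__()
        self.score = nn.Linear(state_dim + 2 * feat_dim, 1, bias=False)

    def forward(self, s_prev, top_feats, bottom_feats):
        # top_feats, bottom_feats: (B, L, D); s_prev: (B, q)
        pairs = torch.cat([top_feats, bottom_feats], dim=-1)     # serial pairwise combinations (B, L, 2D)
        s = s_prev.unsqueeze(1).expand(-1, pairs.size(1), -1)    # broadcast the previous state to each pair
        e = self.score(torch.cat([s, pairs], dim=-1))            # attention score per combination (B, L, 1)
        a = F.softmax(e, dim=1)                                  # normalized attention weights
        return (a * pairs).sum(dim=1)                            # context vector of the current time step
```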
As a possible implementation manner, the word vector is obtained by:
firstly, randomly initializing a word vector matrix, wherein each row corresponds to a word;
then, according to the currently input word, acquiring a corresponding vector from the word vector matrix for later calculation;
finally, the word vector matrix and the parameters of the neural network are updated through the back propagation BP algorithm with the goal of minimizing the loss function.
As a possible implementation manner, the specific steps of training the deep learning-based encoder-decoder neural network model are as follows:
the training set comprises matched upper garment and lower garment combinations given by real users crawled from an online fashion community website, and each combination comprises an upper garment picture, a lower garment picture, praise number and user comments;
considering the combination with the praise number larger than the threshold value as a matching combination; then obtaining a mismatching combination through negative sampling, namely randomly selecting an upper garment and a lower garment to form a combination, and if the combination does not appear in the matching combination, regarding the combination as the mismatching combination; respectively extracting the visual characteristics of the upper garment picture, the coded representation of the upper garment picture, the visual characteristics of the lower garment picture and the coded representation of the lower garment picture from the upper garment picture and the lower garment picture in the matched combination;
respectively extracting the visual characteristics of the upper garment picture, the coded representation of the upper garment picture, the visual characteristics of the lower garment picture and the coded representation of the lower garment picture from the upper garment picture and the lower garment picture in the unmatched combination;
and training the deep learning-based encoder-decoder neural network model with all the features and all the encoded representations extracted from the matched and unmatched combinations until the loss function value is minimized; training is then complete, yielding the trained deep learning-based encoder-decoder neural network model.
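The negative-sampling step described above can be sketched as follows; `matched` is assumed to be a set of (top_id, bottom_id) pairs built from the combinations whose praise count exceeds the threshold.

```python
import random

def negative_sample(matched, top_ids, bottom_ids, k):
    """Randomly pairs an upper garment with a lower garment and keeps
    the pair as a mismatched combination if it never occurs among the
    matched combinations, per the training procedure above."""
    negatives = set()
    while len(negatives) < k:
        pair = (random.choice(top_ids), random.choice(bottom_ids))
        if pair not in matched:          # unseen pair -> treated as mismatched
            negatives.add(pair)
    return list(negatives)
```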
The deep learning-based encoder-decoder neural network model learns the network parameters, upper garment item vectors, lower garment item vectors and word vectors through the training set.
In the training process, the loss function comprises: matching loss, generation loss and regularization loss; wherein,
the matching loss is measured by the accuracy degree of matching prediction, and the more accurate the prediction is, the smaller the loss is;
the generation loss is measured by the probability of generating real comments by the network, and the larger the probability is, the smaller the loss is.
The regularization loss is used for constraining the parameters in the network to avoid being too large, and the smaller the parameter value in the network is, the smaller the loss is.
Network parameters, top item vectors, bottom item vectors, and word vectors are updated using a back propagation BP algorithm to reduce losses.
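A sketch of the three-part objective is given below; cross-entropy for the matching term, negative log-likelihood of the real comment words for the generation term, and an L2 penalty for the regularization term are conventional readings of the description above, and the weight `lam` is an assumption.

```python
import torch
import torch.nn.functional as F

def total_loss(match_logits, match_labels, word_logits, word_targets, params, lam=1e-4):
    """L = L_mat + L_gen + L_reg, per the loss decomposition above.
    The generation term is computed on matched combinations only, since
    unmatched combinations carry no comments."""
    l_mat = F.cross_entropy(match_logits, match_labels)         # matching loss: prediction accuracy
    l_gen = F.cross_entropy(word_logits.flatten(0, 1),          # generation loss: likelihood of
                            word_targets.flatten())             # the real comment words
    l_reg = lam * sum(p.pow(2).sum() for p in params)           # regularization loss: keeps parameters small
    return l_mat + l_gen + l_reg
```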
Therefore, in the embodiments of the present application, the recommendation model is trained with the useful information hidden in user comments, which improves the recommendation effect; meanwhile, the comments a user would make on the recommended result can be simulated to serve as the explanation of the recommendation, which improves the interpretability of the recommendation.
In a second aspect, the present disclosure also provides a comment-fused interpretable garment recommendation system;
an interpretable garment recommendation system incorporating comments, comprising:
a model construction module configured to construct a deep learning based encoder-decoder neural network model;
a model training module configured to train a deep learning based encoder-decoder neural network model;
and the model using module is configured to simultaneously input the upper garment picture and the lower garment picture to be recommended into the trained encoder-decoder neural network model, score the matching degree of the upper garment picture and the lower garment picture, give a recommendation result according to the scoring sequence and simultaneously give a simulation comment of the matching degree.
Therefore, in the embodiments of the present application, the recommendation model is trained with the useful information hidden in user comments, which improves the recommendation effect; meanwhile, the comments a user would make on the recommended result can be simulated to serve as the explanation of the recommendation, which improves the interpretability of the recommendation.
In a third aspect, the present disclosure also provides an electronic device, including a memory, a processor, and computer instructions stored in the memory and executed on the processor, where the computer instructions, when executed by the processor, implement the method in any possible implementation manner of the first aspect.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium for storing computer instructions, which when executed by a processor, perform the steps of the method in any possible implementation manner of the first aspect.
Compared with the prior art, the beneficial effect of this disclosure is:
the invention aims to improve the effect of the clothing recommendation and the interpretability of the clothing recommendation by combining the comment generation. Compared with the conventional clothing recommendation method, the method provided by the invention has the advantages that the useful information in the user comments is utilized to train the model, so that a plurality of evaluation indexes in the clothing recommendation field are improved. Meanwhile, the recommendation method and the recommendation system can simulate the user to generate comments during recommendation, so that the interpretability of the recommendation is greatly improved, the recommendation system becomes more transparent and credible, and the user can be helped to make faster and better decisions.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a workflow diagram of NOR according to one or more embodiments;
FIG. 2 illustrates a top encoder and a bottom encoder in accordance with one or more embodiments;
FIG. 3 is a matching decoder of one or more embodiments;
FIG. 4 is a block diagram of the generation decoder according to one or more embodiments.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example one:
the invention adopts the popular encoder-decoder framework based on deep learning at present, which is named as neural out Recommendation (NOR for short) and comprises three parts: upper garment and lower garment encoders, match decoder and generate decoder. Wherein the coat and shirts encoders are used to extract visual features from the coat and shirts pictures. An interactive attention mechanism is provided for the upper garment and lower garment encoders, and matching information between the upper garment and the lower garment can be encoded into the extracted visual features. The match decoder derives a score based on the extracted visual features to evaluate the degree of match between a given top and a candidate bottom garment. The production decoder may then use the extracted visual features to produce a sentence as a comment on the jacket and under-coat combination. We propose a cross-modality attention mechanism for the generation decoder that can more efficiently use visual features to generate each word. The NOR operation flow diagram is shown in fig. 1.
The respective parts of NOR are described in detail below.
1. Upper garment and lower garment encoder
The upper garment encoder and the lower garment encoder use two convolutional neural networks (CNN for short) with the same structure and shared parameters, and the working flow is shown in FIG. 2;
Firstly, two convolution layers are applied to the input upper garment and lower garment pictures to extract visual features; the features from the two convolution layers are then spliced together along the channel axis; finally, a pooling layer is applied to obtain the feature sets, denoted F^t = {f^t_1, …, f^t_L} and F^b = {f^b_1, …, f^b_L} with f^t_i, f^b_i ∈ R^D, where L is the number of features and D is the feature dimension.
Then we apply an interactive attention mechanism to encode the matching information between the upper garment picture and the lower garment picture into the extracted features. Here, the attention of the upper garment picture to the lower garment picture is taken as an example. First, we compute the global feature g^t ∈ R^D of the upper garment picture by global pooling, as shown in formula (1):
g^t = (1/L) Σ_{i=1}^{L} f^t_i (1)
where f^t_i denotes the i-th feature of the upper garment picture. Then, for each feature f^b_i of the lower garment picture, we calculate the attention weight e_{t,i} of g^t with respect to it using formula (2):
e_{t,i} = v_a^T tanh(W_a g^t + U_a f^b_i) (2)
where W_a and U_a ∈ R^{D×D} and v_a ∈ R^D are parameters in the network. We then normalize e_{t,i} as shown in formula (3):
α_{t,i} = exp(e_{t,i}) / Σ_{j=1}^{L} exp(e_{t,j}) (3)
Finally, the visual features of the lower garment picture are weighted and summed with the attention weights of the upper garment picture to the lower garment picture, giving the attention global feature of the lower garment picture, as shown in formula (4):
ĝ^b = Σ_{i=1}^{L} α_{t,i} f^b_i (4)
where f^b_i denotes the i-th feature of the lower garment picture. The attention weights of the lower garment picture to the upper garment picture are calculated in the same way, yielding the attention global feature of the upper garment picture, denoted ĝ^t. We then map ĝ^t and ĝ^b into two visual feature vectors ṽ_t and ṽ_b ∈ R^{m_v}, as shown in formula (5):
ṽ_t = W_v ĝ^t, ṽ_b = W_v ĝ^b (5)
where W_v ∈ R^{m_v×D} is a parameter in the network. To learn useful information from the matching history of fashion items, we also learn an item vector for each upper garment and lower garment, denoted u_t and u_b ∈ R^{m_v}. We splice the visual feature vectors and the item vectors together as the final code representations v_t and v_b ∈ R^m of the upper and lower garments, as shown in formula (6):
v_t = [ṽ_t ; u_t], v_b = [ṽ_b ; u_b] (6)
where m = 2m_v.
2. Matching decoder
Based on the resulting code representations v_t and v_b of the upper and lower garments, we use a multilayer perceptron (MLP for short) to predict the matching score between the given upper garment and lower garment, as shown in FIG. 3.
The specific mathematical process is shown in formula (7) and formula (8):
h_r = ReLU(W_s v_t + U_s v_b) (7)
p(r_tb) = softmax(W_r h_r) (8)
where h_r ∈ R^n, W_s and U_s ∈ R^{n×m}, and W_r ∈ R^{2×n} are parameters in the network. The output p(r_tb) is a probability distribution consisting of p(r_tb = 0) and p(r_tb = 1), where r_tb = 1 denotes that the given upper garment and lower garment match and r_tb = 0 denotes that they do not. We take p(r_tb = 1) as the matching score of the upper garment and lower garment.
3. Generating a decoder
To generate comments for a given combination of upper and lower garments, we use a gated recurrent neural network (GRU) as the generation decoder, as shown in FIG. 4:
first we calculate the initial state s of the GRU using coded representations of the upper and lower garments0∈RqAs shown in formula (9):
s0=tanh(Wivt+Uivb) (9)
wherein WiAnd Ut∈Rq×mIs a parameter in the network. At each time step τ thereafter, we input to the GRU the word vector w of the previously output wordτ-1∈ReCurrent context vector ctxτ∈RDAnd the previous state sτ-1∈RqTo calculate a new state sτAnd the current output oτ∈RqAs shown in formula (10):
sτ,oτ=GRU(wτ-1,ctxτ,sτ-1) (10)
where context vector ctxτCalculated by our proposed cross-modal attention mechanism. Specifically, the extracted visual features of the upper garment and the visual features of the lower garment are combined together to obtain the visual feature combination of the upper garment and the lower garmentWe then calculate ctx as per equations (11) through (13)τ:
Wherein Wg∈Rq×DIs a parameter in the network. By means of the cross-modal attention mechanism, the generation decoder can focus on effective visual features to ensure full utilization of the extracted visual features. Finally we predict the word to be generated at the current time step according to equation (14):
p(wτ|w1,…,wτ-1)=softmax(Wooτ+Uoctxτ) (14)
wherein Wo∈R|V|×q,Uo∈R|V|×DAnd V is our dictionary. p (w)τ|w1,…,wτ-1) Returned is the τ th word wτAnd (4) in probability distribution on the whole dictionary, and taking the word with the highest probability as the current prediction result in prediction. If the current prediction result is a period number, which indicates that a complete sentence has been generated, all words generated at the time step are sequentially concatenated into a sentence, and the sentence is returned.
NOR must learn the network parameters, item vectors, and word vectors on a training set before it can be applied. The training set consists of the upper and lower garment combinations regarded as matching by real users, together with the corresponding user comments, crawled from the online fashion community. The combinations we regard as not matching are obtained by the negative sampling technique. We then define the loss function as shown in formulas (15) to (18); the matching loss L_mat, the generation loss L_gen, and the regularization loss L_reg are combined as:
L = L_mat + L_gen + L_reg (18)
where P^+ is the set of matched combinations, P^- is the set of unmatched combinations, C_tb is the set of comments of the matched combination (t, b), and Θ denotes all parameters in the network; L_mat corresponds to the matching loss over P^+ and P^-, L_gen to the generation loss, and L_reg to the regularization loss. Since there are no comments for the unmatched combinations, their generation loss is not considered. From these real user comments, NOR can learn useful clothing matching information. We then use the back propagation algorithm (BP algorithm for short) commonly used in deep learning to update the parameters of the network and reduce the loss.
After NOR training is complete, the parameters, item vectors, and word vectors are all fixed, and NOR can then be used to predict the matching score and generate a comment for a given pair of upper and lower garments. When recommending lower garments for a given upper garment, we first use NOR to score each candidate lower garment on its match with the upper garment, and then sort the candidates by score to obtain the recommendation; NOR also generates a comment as the reason for the recommendation. The same applies when recommending an upper garment for a given lower garment.
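At recommendation time the procedure above amounts to score-and-sort; the following is a sketch under the assumption of a trained model exposing `score` and `comment` methods (hypothetical names).

```python
import torch

def recommend_bottoms(model, top_img, candidate_bottoms, top_n=5):
    """Scores every candidate lower garment against the given upper
    garment, sorts by matching score, and returns the best candidates
    together with a generated comment as the recommendation reason."""
    ranked = []
    with torch.no_grad():
        for bottom_img in candidate_bottoms:
            score = float(model.score(top_img, bottom_img))   # p(r_tb = 1)
            reason = model.comment(top_img, bottom_img)       # simulated user comment
            ranked.append((score, bottom_img, reason))
    ranked.sort(key=lambda x: x[0], reverse=True)             # sort candidates by score
    return ranked[:top_n]
```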
The invention aims to improve both the effect and the interpretability of clothing recommendation by combining it with comment generation. Compared with existing clothing recommendation methods, the method provided by the invention trains the model with the useful information in user comments, improving multiple evaluation metrics in the clothing recommendation field. Meanwhile, the method can simulate user comments at recommendation time, which greatly improves the interpretability of the recommendation, makes the recommendation system more transparent and trustworthy, and helps users make faster and better decisions.
Example two:
the present disclosure also provides an interpretable garment recommendation system incorporating the comments;
an interpretable garment recommendation system incorporating comments, comprising:
a model construction module configured to construct a deep learning based encoder-decoder neural network model;
a model training module configured to train a deep learning based encoder-decoder neural network model;
and the model using module is configured to simultaneously input the upper garment picture and the lower garment picture to be recommended into the trained encoder-decoder neural network model, score the matching degree of the upper garment picture and the lower garment picture, give a recommendation result according to the scoring sequence and simultaneously give a simulation comment of the matching degree.
Example three:
the present disclosure also provides an electronic device, which includes a memory, a processor, and a computer instruction stored in the memory and executed on the processor, where when the computer instruction is executed by the processor, each operation in the method is completed, and details are not described herein for brevity.
It should be understood that in the present disclosure, the processor may be a central processing unit (CPU), but may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the present disclosure may be embodied directly in a hardware processor, or in a combination of hardware and software modules within the processor. The software modules may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in a memory, and a processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, this is not described in detail here. Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is merely a division of one logic function, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Example four:
the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. The interpretable clothing recommendation method fusing comments is characterized by comprising the following steps of:
constructing a deep learning-based encoder-decoder neural network model;
training a deep learning based encoder-decoder neural network model;
and simultaneously inputting the upper garment picture and the lower garment picture to be recommended into a trained encoder-decoder neural network model, scoring the matching degree of the upper garment picture and the lower garment picture by the model, giving a recommendation result according to the scoring sequence, and simultaneously giving a simulation comment of the matching degree.
2. The method of claim 1, wherein the deep learning based encoder-decoder neural network model comprises:
an upper garment encoder, a lower garment encoder, a matching decoder and a generating decoder;
the jacket encoder is used for receiving the jacket picture and extracting jacket visual characteristics and jacket code representation of the jacket picture; the jacket code representation comprises matching information between a jacket picture and a lower clothes picture;
the lower garment encoder is used for receiving the lower garment picture and extracting the lower garment visual characteristics and the lower garment encoding representation of the lower garment picture; the lower garment code representation comprises matching information between an upper garment picture and a lower garment picture;
the matching decoder is used for scoring the matching degree between the upper garment picture and the lower garment picture according to the upper garment code representation and the lower garment code representation;
the generation decoder is used for generating the simulation comment according to the coat visual characteristic, the coat code representation, the underwear visual characteristic and the underwear code representation.
3. The method as claimed in claim 2, wherein the step of extracting the visual features of the jacket picture comprises the following steps:
the jacket encoder includes: the first coiling layer, the second coiling layer, the first splicing layer and the first pooling layer are sequentially connected;
the first convolution layer extracts visual features of the jacket picture to obtain first visual features;
the second convolution layer extracts visual features of the jacket picture to obtain second visual features;
the first splicing layer is used for splicing the first visual feature and the second visual feature in series, and a third visual feature obtained by splicing is sent to the first pooling layer;
the first pooling layer processes the third visual feature to obtain the jacket visual features of the jacket picture;
or,
the specific steps for extracting the visual characteristics of the lower garment picture are as follows:
the lower garment encoder comprises: the third convolution layer, the fourth convolution layer, the second splicing layer and the second pooling layer are connected in sequence;
the third convolution layer extracts visual features of the lower garment picture to obtain a fourth visual feature;
the fourth convolution layer extracts visual features of the lower garment picture to obtain fifth visual features;
the second splicing layer is used for splicing the fourth visual feature and the fifth visual feature in series, and the sixth visual feature obtained after splicing is sent to the second pooling layer;
and the second pooling layer processes the sixth visual characteristic to obtain the visual characteristic of the lower clothes picture.
4. The method of claim 2, wherein the extracting the jacket code representation of the jacket picture comprises the steps of:
by utilizing an interactive attention mechanism, the matching information between the upper garment picture and the lower garment picture is coded into the visual characteristics of the extracted upper garment picture to obtain the coded representation of the upper garment picture;
or,
the specific steps of extracting the lower garment code representation of the lower garment picture are as follows:
and coding the matching information between the upper garment picture and the lower garment picture into the extracted visual features of the lower garment picture by utilizing an interactive attention mechanism to obtain the coded representation of the lower garment picture.
5. The method as claimed in claim 4, wherein the step of coding the matching information between the upper garment picture and the lower garment picture into the extracted visual features of the upper garment picture by using an interactive attention mechanism comprises the specific steps of:
firstly, obtaining the global characteristics of a lower garment picture by calculating the average value of the visual characteristics of the lower garment;
then, for each visual feature of the jacket picture, calculating the attention weight of the global feature of the lower garment picture with respect to that visual feature, and normalizing the attention weights;
next, weighting and summing the visual features of the jacket picture with these attention weights to obtain the attention global feature of the jacket picture;
then, mapping the attention global feature of the jacket picture into a visual feature vector;
finally, splicing the visual feature vector of the jacket picture in series with the jacket item vector corresponding to the jacket picture, wherein the spliced result is the final code representation of the jacket picture;
or,
by utilizing an interactive attention mechanism, the matching information between the upper garment picture and the lower garment picture is coded into the extracted visual features of the lower garment picture, and the specific steps of obtaining the coded representation of the lower garment picture are as follows:
firstly, obtaining the global feature of the upper garment picture by calculating the average value of the upper garment visual features;
then, for each visual feature of the lower garment picture, calculating the attention weight of the global feature of the upper garment picture with respect to that visual feature, and normalizing the attention weights;
next, weighting and summing the visual features of the lower garment picture with these attention weights to obtain the attention global feature of the lower garment picture;
then, mapping the attention global feature of the lower garment picture into a visual feature vector;
finally, splicing the visual feature vector of the lower garment picture in series with the lower garment item vector corresponding to the lower garment picture, wherein the spliced result is the final code representation of the lower garment picture.
6. The method of claim 5, wherein the jacket item vector is acquired by the following steps:
firstly, randomly initializing a jacket item vector matrix, wherein each row of the jacket item vector matrix corresponds to one jacket;
then, according to the input jacket picture, acquiring the corresponding vector from the jacket item vector matrix for later calculation;
finally, updating the jacket item vector matrix and the parameters of the neural network through the back propagation BP algorithm with the goal of minimizing the loss function value, finally obtaining the updated jacket item vectors;
or,
the lower garment item vector is acquired by the following steps:
firstly, randomly initializing a lower garment item vector matrix, wherein each row corresponds to one lower garment;
then, according to the input lower garment picture, acquiring the corresponding vector from the lower garment item vector matrix for later calculation;
and finally, updating the lower garment item vector matrix and the parameters of the neural network through the back propagation BP algorithm with the goal of minimizing the loss function value, finally obtaining the updated lower garment item vectors.
7. The method of claim 2, wherein the step of scoring the degree of match between the jacket picture and the jersey picture based on the jacket code representation and the jersey code representation comprises the steps of:
the upper garment code representation and the lower garment code representation are used as input values and input into an MLP multilayer perceptron, and the output is a matching scoring result of the upper garment picture and the lower garment picture;
or,
the steps of generating the simulation comment for the combination of the upper garment picture and the lower garment picture according to the upper garment visual features, the upper garment code representation, the lower garment visual features and the lower garment code representation are as follows:
step (1): constructing a gated recurrent neural network GRU;
step (2): calculating the initial state of a gated recurrent neural network GRU by using the coded representation of the upper garment and the lower garment;
step (3): the gated recurrent neural network GRU performs steps (31) to (33) in a loop until a complete sentence is generated:
step (31): firstly, processing the visual features of the upper garment and the visual features of the lower garment by using a cross-modal attention mechanism to obtain a context vector of the current time step;
step (32): inputting the state of the last time step of the gated recurrent neural network GRU, the word vector of the word generated in the last time step and the context vector of the current time step into the gated recurrent neural network GRU to obtain the new state of the current time step and the prediction probability distribution of the currently generated word;
step (33): selecting the word with the highest probability as the current generation result, where words include punctuation marks; if the current generation result is a period, a complete sentence has been generated: all words generated over the time steps are sequentially connected in series to form a sentence, and the sentence is returned;
or,
the specific steps of processing the visual features of the upper garment and the visual features of the lower garment by using a cross-modal attention mechanism to obtain the context vector of the current time step are as follows:
firstly, splicing the visual features of the upper garment and the visual features of the lower garment in series in one-to-one correspondence;
then, for each serial combination, calculating its attention weight conditioned on the state of the gated recurrent neural network GRU at the previous time step;
then, performing weighted summation on all the series combinations by using the calculated attention weight value, and finally returning a result which is the context vector of the current time step;
or,
the word vector is obtained in the following way:
firstly, randomly initializing a word vector matrix, wherein each row corresponds to a word;
then, according to the currently input word, acquiring a corresponding vector from the word vector matrix for later calculation;
finally, the word vector matrix and the parameters of the neural network are updated through the back propagation BP algorithm with the goal of minimizing the loss function;
or,
the specific steps for training the encoder-decoder neural network model based on deep learning are as follows:
the training set comprises matched upper garment and lower garment combinations given by real users crawled from an online fashion community website, and each combination comprises an upper garment picture, a lower garment picture, praise number and user comments;
considering the combination with the praise number larger than the threshold value as a matching combination; then obtaining a mismatching combination through negative sampling, namely randomly selecting an upper garment and a lower garment to form a combination, and if the combination does not appear in the matching combination, regarding the combination as the mismatching combination; respectively extracting the visual characteristics of the upper garment picture, the coded representation of the upper garment picture, the visual characteristics of the lower garment picture and the coded representation of the lower garment picture from the upper garment picture and the lower garment picture in the matched combination;
respectively extracting the visual characteristics of the upper garment picture, the coded representation of the upper garment picture, the visual characteristics of the lower garment picture and the coded representation of the lower garment picture from the upper garment picture and the lower garment picture in the unmatched combination;
and training the deep learning-based encoder-decoder neural network model with all the features and all the encoded representations extracted from the matched and unmatched combinations until the loss function value is minimized; training is then complete, yielding the trained deep learning-based encoder-decoder neural network model.
8. An interpretable garment recommendation system incorporating comments, comprising:
a model construction module configured to construct a deep learning based encoder-decoder neural network model;
a model training module configured to train a deep learning based encoder-decoder neural network model;
and the model using module is configured to simultaneously input the upper garment picture and the lower garment picture to be recommended into the trained encoder-decoder neural network model, score the matching degree of the upper garment picture and the lower garment picture, give a recommendation result according to the scoring sequence and simultaneously give a simulation comment of the matching degree.
9. An electronic device comprising a memory and a processor and computer instructions stored on the memory and executable on the processor, the computer instructions when executed by the processor performing the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the method of any one of claims 1 to 7.
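As a structural illustration of the system of claim 8, the scoring-and-ranking flow of the model using module could be organized as below. The class names and the assumption that the trained model exposes a score_and_comment method are hypothetical, not part of the claims.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    jacket_id: str
    lower_id: str
    score: float    # matching-degree score from the trained model
    comment: str    # simulated comment on the matching degree

class GarmentRecommender:
    """Hypothetical sketch of the model using module."""

    def __init__(self, model):
        self.model = model  # trained encoder-decoder neural network model

    def recommend(self, jacket_id, jacket_img, lower_imgs):
        results = []
        for lower_id, lower_img in lower_imgs.items():
            # Assumed interface: the model scores the pair and generates
            # a comment explaining the matching degree.
            score, comment = self.model.score_and_comment(jacket_img, lower_img)
            results.append(Recommendation(jacket_id, lower_id, score, comment))
        # Recommendation results are given in descending score order.
        return sorted(results, key=lambda r: r.score, reverse=True)
```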
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910024347.2A CN109754317B (en) | 2019-01-10 | 2019-01-10 | Comment-fused interpretable garment recommendation method, system, device and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109754317A (en) | 2019-05-14 |
CN109754317B CN109754317B (en) | 2020-11-06 |
Family ID: 66405439
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910024347.2A Active CN109754317B (en) | 2019-01-10 | 2019-01-10 | Comment-fused interpretable garment recommendation method, system, device and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109754317B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140129371A1 (en) * | 2012-11-05 | 2014-05-08 | Nathan R. Wilson | Systems and methods for providing enhanced neural network genesis and recommendations |
US20150339757A1 (en) * | 2014-05-20 | 2015-11-26 | Parham Aarabi | Method, system and computer program product for generating recommendations for products and treatments |
CN106815739A (en) * | 2015-12-01 | 2017-06-09 | 东莞酷派软件技术有限公司 | A kind of recommendation method of clothing, device and mobile terminal |
CN107590584A (en) * | 2017-08-14 | 2018-01-16 | 上海爱优威软件开发有限公司 | Dressing collocation reviewing method |
CN107993131A (en) * | 2017-12-27 | 2018-05-04 | 广东欧珀移动通信有限公司 | Wear to take and recommend method, apparatus, server and storage medium |
CN108734557A (en) * | 2018-05-18 | 2018-11-02 | 北京京东尚科信息技术有限公司 | Methods, devices and systems for generating dress ornament recommendation information |
CN109117779A (en) * | 2018-08-06 | 2019-01-01 | 百度在线网络技术(北京)有限公司 | One kind, which is worn, takes recommended method, device and electronic equipment |
Non-Patent Citations (2)
Title |
---|
JIANMO NI: "Estimating Reactions and Recommending Products with Generative Models of Reviews", Proceedings of the 8th International Joint Conference on Natural Language Processing *
JIN, Taiwei: "Research on Recommendation Models Based on User Behavior Data and Review Data", China Master's Theses Full-text Database (Information Science and Technology) *
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110321473A (en) * | 2019-05-21 | 2019-10-11 | 山东省计算中心(国家超级计算济南中心) | Diversity preference information method for pushing, system, medium and equipment based on multi-modal attention |
CN110321473B (en) * | 2019-05-21 | 2021-05-25 | 山东省计算中心(国家超级计算济南中心) | Multi-modal attention-based diversity preference information pushing method, system, medium and device |
CN110188449A (en) * | 2019-05-27 | 2019-08-30 | 山东大学 | Interpretable clothing information recommended method, system, medium and equipment based on attribute |
CN110688832A (en) * | 2019-10-10 | 2020-01-14 | 河北省讯飞人工智能研究院 | Comment generation method, device, equipment and storage medium |
CN110688832B (en) * | 2019-10-10 | 2023-06-09 | 河北省讯飞人工智能研究院 | Comment generation method, comment generation device, comment generation equipment and storage medium |
CN110765353B (en) * | 2019-10-16 | 2022-03-08 | 腾讯科技(深圳)有限公司 | Processing method and device of project recommendation model, computer equipment and storage medium |
CN110765353A (en) * | 2019-10-16 | 2020-02-07 | 腾讯科技(深圳)有限公司 | Processing method and device of project recommendation model, computer equipment and storage medium |
CN112667839A (en) * | 2019-10-16 | 2021-04-16 | 阿里巴巴集团控股有限公司 | Data processing method, data retrieval device and data retrieval equipment |
CN110807477A (en) * | 2019-10-18 | 2020-02-18 | 山东大学 | Attention mechanism-based neural network garment matching scheme generation method and system |
CN110807477B (en) * | 2019-10-18 | 2022-06-07 | 山东大学 | Attention mechanism-based neural network garment matching scheme generation method and system |
CN111476622A (en) * | 2019-11-21 | 2020-07-31 | 北京沃东天骏信息技术有限公司 | Article pushing method and device and computer readable storage medium |
CN111476622B (en) * | 2019-11-21 | 2021-05-25 | 北京沃东天骏信息技术有限公司 | Article pushing method and device and computer readable storage medium |
CN111046286A (en) * | 2019-12-12 | 2020-04-21 | 腾讯科技(深圳)有限公司 | Object recommendation method and device and computer storage medium |
CN111046286B (en) * | 2019-12-12 | 2023-04-18 | 腾讯科技(深圳)有限公司 | Object recommendation method and device and computer storage medium |
CN111400525A (en) * | 2020-03-20 | 2020-07-10 | 中国科学技术大学 | Intelligent fashionable garment matching and recommending method based on visual combination relation learning |
CN111400525B (en) * | 2020-03-20 | 2023-06-16 | 中国科学技术大学 | Fashion clothing intelligent matching and recommending method based on vision combination relation learning |
CN113158045A (en) * | 2021-04-20 | 2021-07-23 | 中国科学院深圳先进技术研究院 | Interpretable recommendation method based on graph neural network reasoning |
CN113158045B (en) * | 2021-04-20 | 2022-11-01 | 中国科学院深圳先进技术研究院 | Interpretable recommendation method based on graph neural network reasoning |
CN113850656A (en) * | 2021-11-15 | 2021-12-28 | 内蒙古工业大学 | Personalized clothing recommendation method and system based on attention perception and integrating multi-mode data |
CN114943035A (en) * | 2022-06-08 | 2022-08-26 | 青岛文达通科技股份有限公司 | User dressing recommendation method and system based on self-encoder and memory network |
CN117994007A (en) * | 2024-04-03 | 2024-05-07 | 山东科技大学 | Social recommendation method based on multi-view fusion heterogeneous graph neural network |
Also Published As
Publication number | Publication date |
---|---|
CN109754317B (en) | 2020-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109754317B (en) | Comment-fused interpretable garment recommendation method, system, device and medium | |
CN107679522B (en) | Multi-stream LSTM-based action identification method | |
CN111325579A (en) | Advertisement click rate prediction method | |
CN107066445B (en) | The deep learning method of one attribute emotion word vector | |
CN104598611B (en) | The method and system being ranked up to search entry | |
CN112561064B (en) | Knowledge base completion method based on OWKBC model | |
CN110826338B (en) | Fine-grained semantic similarity recognition method for single-selection gate and inter-class measurement | |
CN109584006B (en) | Cross-platform commodity matching method based on deep matching model | |
Deb et al. | Graph convolutional networks for assessment of physical rehabilitation exercises | |
CN110955826A (en) | Recommendation system based on improved recurrent neural network unit | |
CN113722583A (en) | Recommendation method, recommendation model training method and related products | |
CN110580341A (en) | False comment detection method and system based on semi-supervised learning model | |
Naim | E-learning engagement through convolution neural networks in business education | |
CN111538841B (en) | Comment emotion analysis method, device and system based on knowledge mutual distillation | |
CN113157678A (en) | Multi-source heterogeneous data association method | |
CN112364236A (en) | Target object recommendation system, method and device, and data processing method and device | |
CN114528490A (en) | Self-supervision sequence recommendation method based on long-term and short-term interests of user | |
CN110738314B (en) | Click rate prediction method and device based on deep migration network | |
CN115880027A (en) | Electronic commerce website commodity seasonal prediction model creation method | |
CN113888238A (en) | Advertisement click rate prediction method and device and computer equipment | |
Cao et al. | A dual attention model based on probabilistically mask for 3D human motion prediction | |
CN111241372B (en) | Method for predicting color harmony degree according to user preference learning | |
Ito et al. | Efficient and accurate skeleton-based two-person interaction recognition using inter-and intra-body graphs | |
CN113139133B (en) | Cloud exhibition content recommendation method, system and equipment based on generation countermeasure network | |
CN114758149A (en) | Fashion compatibility analysis method and system based on deep multi-modal feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |