CN110807477B - Attention mechanism-based neural network garment matching scheme generation method and system - Google Patents


Info

Publication number
CN110807477B
CN110807477B (application CN201910993603.9A)
Authority
CN
China
Prior art keywords
garment
jacket
text
visual
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910993603.9A
Other languages
Chinese (zh)
Other versions
CN110807477A (en)
Inventor
刘金环
宋雪萌
马军
聂礼强
陈竹敏
甘甜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN201910993603.9A
Publication of CN110807477A
Application granted
Publication of CN110807477B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an attention-mechanism-based neural network garment matching scheme generation method and system: constructing an attention-based neural network model; constructing a training set; inputting the training set into the constructed model, training it, stopping training when the loss function of the model converges, and outputting the trained attention-based neural network model; acquiring an upper garment to be matched and a plurality of existing lower garments; inputting the upper garment to be matched and the existing lower garments into the trained attention-based neural network model, which outputs the lower garment matching the upper garment; and finally outputting the upper garment to be matched and the matched lower garment as the optimal outfit matching scheme.

Description

Attention mechanism-based neural network garment matching scheme generation method and system
Technical Field
The disclosure relates to the technical field of clothing matching, in particular to a neural network clothing matching scheme generation method and system based on an attention mechanism.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Online clothing sales have become a major channel of clothing consumption. Beyond meeting the demands of daily life, more and more people pursue fashion and individuality. At present, clothes matching depends mainly on shopping guides or recommendations from friends, which is quite inconvenient. An automatic clothes-matching method can therefore effectively solve this problem.
In the course of implementing the present disclosure, the inventors found that the following technical problems exist in the prior art:
the garment matching problem presents the following challenges: 1) the clothing categories vary greatly: for example, T-shirts, jeans and skirts differ greatly in shape and function, so effectively modelling the compatibility of different categories of garments is a great challenge; 2) the clothing features have different attributes: different clothes have different characteristics, such as colours, patterns and lace, and different features contribute differently to clothes matching, so mining these different contributions with an attention mechanism is an important challenge; 3) the clothing modalities are diverse: a garment may be represented in a variety of ways, such as video, images, text descriptions and comments, and effectively exploiting the multi-modal information of different commodities to improve the matching effect is also a great challenge.
Disclosure of Invention
In order to overcome the defects of the prior art, the present disclosure provides a neural network garment collocation scheme generation method and system based on an attention mechanism;
in a first aspect, the present disclosure provides a neural network garment collocation scheme generation method based on an attention mechanism;
the neural network clothing matching scheme generation method based on the attention mechanism comprises the following steps:
constructing a neural network model based on an attention mechanism; constructing a training set;
inputting the training set into a constructed attention-based neural network model, training the attention-based neural network model, stopping training when a loss function of the model is converged, and outputting trained attention-based neural network model parameters;
acquiring a jacket to be matched; obtaining a plurality of existing lower clothes; inputting the upper garment to be matched and a plurality of existing lower garments into a trained attention mechanism-based neural network model, and outputting the lower garments matched with the upper garment to be matched;
and finally, outputting the upper garment to be matched and the matched lower garment as the optimal dressing matching scheme.
In a second aspect, the present disclosure also provides a neural network garment collocation scheme generation system based on an attention mechanism;
the neural network clothing matching scheme generation system based on the attention mechanism comprises:
a model building module configured to: constructing a neural network model based on an attention mechanism; constructing a training set;
a training module configured to: inputting the training set into a constructed attention-based neural network model, training the attention-based neural network model, stopping training when a loss function of the model is converged, and outputting the trained attention-based neural network model;
a recipe output module configured to: acquiring a jacket to be matched; obtaining a plurality of existing lower clothes; inputting the upper garment to be matched and a plurality of existing lower garments into a trained attention mechanism-based neural network model, and outputting the lower garments matched with the upper garment to be matched;
and finally, outputting the upper garment to be matched and the matched lower garment as the optimal dressing matching scheme.
In a third aspect, the present disclosure also provides an electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of the first aspect.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the steps of the method of the first aspect.
Compared with the prior art, the beneficial effect of this disclosure is:
different attribute information of different types of clothes can be effectively learned: through the attention model, the importance of different feature attributes is learned adaptively and different confidences are assigned to different features, so that different types of clothes are modelled comprehensively and automatic clothes matching is realised.
The end-to-end deep neural framework based on the attention mechanism can effectively perform compatibility matching on multi-modal information of different types of clothes.
The proposed feature-level attention mechanism can model different degrees of contribution of paired features for different garments.
The method can effectively mine the characteristic attributes (namely color, shape, pattern and the like) in the clothing matching.
The model can effectively model matching preference of complementary clothes.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a depiction of the main research task of the first embodiment of the present disclosure;
FIG. 2 is an architecture of a feature level attention model of embodiment one of the present disclosure;
fig. 3 is a workflow of an end-to-end attention-based neural model according to an embodiment of the disclosure.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; it should further be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
The first embodiment provides a neural network garment collocation scheme generation method based on an attention mechanism;
as shown in fig. 3, the method for generating a neural network clothing matching scheme based on attention mechanism includes:
s1: constructing a neural network model based on an attention mechanism; constructing a training set;
s2: inputting the training set into a constructed attention-based neural network model, training the attention-based neural network model, stopping training when a loss function of the model is converged, and outputting the trained attention-based neural network model;
s3: acquiring a jacket to be matched; obtaining a plurality of existing lower clothes; inputting the upper garment to be matched and a plurality of existing lower garments into a trained attention mechanism-based neural network model, and outputting the lower garments matched with the upper garment to be matched;
and finally, outputting the upper garment to be matched and the matched lower garment as the optimal dressing matching scheme.
In one or more embodiments, the attention-based neural network model is constructed through the following specific steps:
s101: crawling an upper garment from a fashionable garment matching website, finding a matched lower garment for the upper garment as a positive lower garment, and randomly finding a lower garment for the upper garment as a negative lower garment; each upper garment or each lower garment is provided with corresponding images and text descriptions;
s102: extracting visual features from the upper garment and the positive-example lower garment to obtain the upper-garment visual features and the positive-example lower-garment visual features;
extracting text features from the upper garment and the positive-example lower garment to obtain the upper-garment text features and the positive-example lower-garment text features;
s103: mapping the visual features of the jacket into visual implicit vector representation of the jacket; mapping the visual features of the positive example lower garment into visual implicit vector representation of the positive example lower garment;
mapping the jacket text features into a jacket text implicit vector representation; mapping the positive-example lower-garment text features into a positive-example lower-garment text implicit vector representation;
s104: calculating the visual feature confidences of the jacket and the positive-example lower garment based on the jacket visual implicit vector representation and the positive-example lower-garment visual implicit vector representation;
calculating the text feature confidences of the jacket and the positive-example lower garment based on the jacket text implicit vector representation and the positive-example lower-garment text implicit vector representation;
s105: calculating the attention-based matching degree of the jacket and the positive-example lower garment based on the jacket visual implicit vector representation, the positive-example lower-garment visual implicit vector representation, the visual feature confidences of the jacket and the positive-example lower garment, the jacket text implicit vector representation, the positive-example lower-garment text implicit vector representation, and the text feature confidences of the jacket and the positive-example lower garment;
s106: replacing the positive example lower garment in the S102-S105 with the negative example lower garment, and then calculating the matching degree of the upper garment based on the attention mechanism and the negative example lower garment;
s107: calculating the difference between the attention-based matching degree of the upper garment with the positive-example lower garment and that with the negative-example lower garment, and obtaining the attention-based neural network garment matching preference model from this difference.
Further, in S102, the visual features are extracted through AlexNet.
It should be understood that, in S102, the image of each garment is visually encoded by the pre-trained neural network AlexNet, finally obtaining a 4096-dimensional feature; for the top t_i and the bottom b_j, the visual features can be respectively expressed as v_{t_i}, v_{b_j} ∈ R^4096.
Further, in S102, the text features are obtained by mapping the text description of the upper or lower garment into word vectors through word2vec and then performing text coding through a Text-CNN model.
It should be understood that, in S102, each word of the garment description is mapped to a 300-dimensional vector by pre-trained word2vec. The word vectors of each garment are concatenated and text-coded via the Text-CNN model, which consists of a convolutional layer and a max-pooling layer; kernels of different sizes [2, 3, 4, 5] are used for encoding, and finally a 400-dimensional text feature is extracted for each garment, denoted t_{t_i} and t_{b_j} for the top t_i and the bottom b_j respectively.
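The Text-CNN encoding step above can be sketched in NumPy as follows. The split of the 400 output dimensions into 100 filters per kernel size, the random stand-in weights, and the ReLU activation are assumptions for illustration; they are not fixed by the patent.

```python
import numpy as np

def text_cnn_encode(word_vecs, rng, n_filters=100, kernel_sizes=(2, 3, 4, 5)):
    """Encode a sequence of 300-d word vectors into a single 400-d text
    feature via 1-d convolutions of kernel sizes 2..5 plus max pooling,
    mirroring the Text-CNN step of S102 (weights are random stand-ins)."""
    seq_len, dim = word_vecs.shape
    pooled = []
    for k in kernel_sizes:
        W = rng.standard_normal((n_filters, k * dim)) * 0.01
        # slide a window of k consecutive words and flatten each window
        windows = np.stack([word_vecs[i:i + k].ravel()
                            for i in range(seq_len - k + 1)])
        acts = np.maximum(windows @ W.T, 0.0)   # ReLU feature maps
        pooled.append(acts.max(axis=0))         # max-pool over positions
    return np.concatenate(pooled)               # 4 kernels x 100 = 400 dims

rng = np.random.default_rng(0)
word_vecs = rng.standard_normal((12, 300))      # a 12-word description
text_feat = text_cnn_encode(word_vecs, rng)
print(text_feat.shape)                          # (400,)
```

In a trained system the random matrices would be learned convolution filters, and the word vectors would come from a pre-trained word2vec model.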
Further, in S103, the visual features of the jacket are mapped into the visual implicit vector representation of the jacket through a convolutional neural network.
It will be appreciated that the multi-modal information (i.e., visual modality and text modality) of the heterogeneous-space garments (i.e., upper and lower garments) is unified by learning a latent space through the network, so as to model the semantic relations between different classes of fashion items. Taking the mapping of the visual encoding v_{t_i} of a jacket t_i to its implicit vector representation \tilde{v}_{t_i} as an example, the implicit vector is computed layer by layer as:

z_1 = σ(W_1 v_{t_i} + b_1), z_l = σ(W_l z_{l−1} + b_l), l = 2, …, L,

where W_l and b_l are the relevant parameters, σ(·) denotes a sigmoid function, and z_l is the output of each layer; the implicit vector representation is \tilde{v}_{t_i} = z_L. The same mapping is applied to the text encoding t_{t_i} of the jacket t_i, the visual encoding v_{b_j} of the lower garment b_j, and the text encoding t_{b_j} of the lower garment b_j, obtaining the jacket text implicit vector representation \tilde{t}_{t_i}, the positive-example lower-garment visual implicit vector representation \tilde{v}_{b_j}, and the positive-example lower-garment text implicit vector representation \tilde{t}_{b_j}.
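The layer-wise mapping z_l = σ(W_l z_{l−1} + b_l) can be sketched as below. The 4096 → 512 → 128 layer sizes and the random weights are illustrative assumptions; the patent does not fix the latent dimensionality.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def to_latent(x, layers):
    """Map a raw feature vector into the shared latent space by stacked
    sigmoid layers, z_l = sigmoid(W_l @ z_{l-1} + b_l), as in S103."""
    z = x
    for W, b in layers:
        z = sigmoid(W @ z + b)
    return z

rng = np.random.default_rng(1)
# visual branch: 4096 -> 512 -> 128 (sizes are assumed, not from the patent)
visual_layers = [(rng.standard_normal((512, 4096)) * 0.01, np.zeros(512)),
                 (rng.standard_normal((128, 512)) * 0.01, np.zeros(128))]
v_top = to_latent(rng.standard_normal(4096), visual_layers)
print(v_top.shape)                              # (128,)
```

The text branch would use the same function with 400-dimensional inputs, so that tops and bottoms of both modalities land in one comparable latent space.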
Further, in S104, the visual feature confidence \hat{α}^v_i of the jacket and the positive-example lower garment is calculated by the formula:

\hat{α}^v_i = w^T σ(W_a [(\tilde{v}_{t_i} ⊙ \tilde{v}_{b_j}); e_i] + b_a) + c,

where w^T, W_a, b_a and c are the relevant parameters of the network, ⊙ indicates element-wise multiplication, e_i is a one-hot code, and \hat{α}^v_i is the confidence of the i-th pair of visual features of the upper garment t_i and b_j.
Then, the visual feature confidences are normalised; the normalised confidence α^v_i can be expressed as:

α^v_i = exp(\hat{α}^v_i) / Σ_l exp(\hat{α}^v_l),

where exp(·) represents an exponential function with the natural constant e as the base. Similarly, the confidence α^t_i of the i-th pair of text features can be obtained.
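A minimal NumPy sketch of this feature-level attention follows. The exact way the original (image-only) formula combines the pairwise interaction with the one-hot position code is not recoverable from the text, so the concatenation used here, like all the weights, is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_confidences(v_top, v_bot, Wa, w, ba, c):
    """Feature-level attention of S104: score every coordinate pair of the
    two latent vectors (element-wise product plus a one-hot position code),
    then softmax-normalise the scores.  All weights are illustrative."""
    d = v_top.shape[0]
    pair = v_top * v_bot                        # element-wise interaction
    scores = np.empty(d)
    for i in range(d):
        e_i = np.zeros(d)
        e_i[i] = 1.0                            # one-hot code of pair i
        h = sigmoid(Wa @ np.concatenate([pair, e_i]) + ba)
        scores[i] = w @ h + c
    exp_s = np.exp(scores - scores.max())       # numerically stable softmax
    return exp_s / exp_s.sum()

rng = np.random.default_rng(2)
d = 8
Wa = rng.standard_normal((16, 2 * d)) * 0.1
w, ba, c = rng.standard_normal(16), np.zeros(16), 0.0
alpha_v = feature_confidences(rng.standard_normal(d), rng.standard_normal(d),
                              Wa, w, ba, c)
print(round(alpha_v.sum(), 6))                  # 1.0
```

Because of the softmax, the confidences are positive and sum to one, so they act as weights over the feature pairs.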
It should be understood that the present invention proposes a feature-level attention model as shown in fig. 2, whose inputs are the visual and textual implicit vector representations of the upper and lower garments. The model can learn the confidence of each pair of features for different classes of clothing.
Further, in S105, the attention-based matching degree m_{ij} of the jacket and the positive-example lower garment is calculated by the formula:

m_{ij} = Σ_i α^v_i · \tilde{v}_{t_i,i} \tilde{v}_{b_j,i} + Σ_i α^t_i · \tilde{t}_{t_i,i} \tilde{t}_{b_j,i},

where the subscript i on the implicit vectors denotes their i-th component.
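Given the normalised confidences, the matching degree reduces to a confidence-weighted sum of per-coordinate products; this sketch assumes exactly that reading of the formula:

```python
import numpy as np

def matching_degree(v_top, v_bot, t_top, t_bot, alpha_v, alpha_t):
    """Attention-based matching degree of S105: visual and textual
    per-coordinate products weighted by their normalised confidences."""
    return float((alpha_v * v_top * v_bot).sum() +
                 (alpha_t * t_top * t_bot).sum())

m = matching_degree(np.array([1.0, 2.0]), np.array([1.0, 1.0]),
                    np.zeros(2), np.zeros(2),
                    np.array([0.5, 0.5]), np.array([0.5, 0.5]))
print(m)                                        # 1.5
```

Here the text modality contributes nothing (zero vectors), so the score is 0.5·1·1 + 0.5·2·1 = 1.5.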
Further, in S107, the loss function l_EAN of the attention-based neural network model is expressed by the formula:

l_EAN = Σ_{(i,j,k)} −ln(σ(m_{ijk})), with m_{ijk} = m_{ij} − m_{ik},

where m_{ijk} denotes the compatibility preference of the jacket t_i between the positive-example lower garment b_j and the negative-example lower garment b_k, and m_{ik} can be obtained in a manner similar to m_{ij}.
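This BPR-style objective can be written directly from the formula; a small pure-stdlib sketch:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ean_loss(score_pairs):
    """Loss of S107: sum of -ln(sigmoid(m_ij - m_ik)) over the matching
    degrees of each (top, positive bottom) and (top, negative bottom) pair."""
    return sum(-math.log(sigmoid(m_ij - m_ik)) for m_ij, m_ik in score_pairs)

print(round(ean_loss([(0.0, 0.0)]), 6))         # 0.693147  (= ln 2)
```

The loss shrinks as the positive-pair score exceeds the negative-pair score, which is exactly the ranking preference the model is trained to converge on.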
In one or more embodiments, constructing the training set specifically includes:
crawling a plurality of upper garments from a fashion-outfit website, correspondingly setting an optimal matching lower garment for each upper garment, and setting a plurality of negative-example lower garments for each upper garment; a negative-example lower garment is a lower garment that does not match the upper garment; each upper or lower garment includes a corresponding image and text description.
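The resulting training triples can be sketched as below; the garment names are hypothetical, and drawing the negative uniformly at random follows the "randomly find a lower garment" step of S101:

```python
import random

def build_triples(matched, bottoms, seed=0):
    """Build (top, positive bottom, random negative bottom) training triples;
    `matched` maps each crawled top to its best-matching bottom, and the
    negative is drawn at random from the non-matching bottoms."""
    rng = random.Random(seed)
    triples = []
    for top, pos in matched.items():
        neg = rng.choice([b for b in bottoms if b != pos])
        triples.append((top, pos, neg))
    return triples

triples = build_triples({"top_1": "skirt_a", "top_2": "jeans_b"},
                        ["skirt_a", "jeans_b", "shorts_c"])
print(len(triples))                             # 2
```

Each triple feeds one term of the pairwise ranking loss, so every top is trained to score its matched bottom above a non-matched one.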
It should be understood that the invention also uses a Bayesian personalised ranking model to establish the matching preference among the upper garment t_i, the positive-example lower garment b_j, and the negative-example lower garment b_k, so that the matching preference between upper and lower garments is modelled comprehensively.
The main research task of the invention is as shown in fig. 1, given the picture, text description and classification information of a jacket and a lower garment, our model can capture each characteristic attribute of fashion articles, and learn different confidence values according to different paired characteristic contributions between the jacket and the lower garment, thereby realizing the complementary matching of clothes.
Multi-modal feature coding: extracting characteristic attributes (such as color, shape, material and the like) of a visual mode and a text mode through Alexnet and TextCNN respectively;
mapping different types of clothes to the same space through a convolutional neural network, and further learning semantic relations among different types of clothes features;
learning different contributions of different characteristics of different clothes to clothes matching in a self-adaptive manner through the characteristic-level attention model, and distributing corresponding confidence coefficients for the different contributions;
and further modeling compatibility preference between the upper garment and different lower garments through a Bayes personalized ranking model.
An end-to-end attention-based framework (EAN) is provided, which can effectively learn feature encodings of the multi-modal garment information and model the matching degrees of different types of clothes.
The second embodiment also provides a neural network clothing matching scheme generating system based on the attention mechanism;
in a third embodiment, the present embodiment further provides an electronic device, which includes a memory, a processor, and computer instructions stored in the memory and executed on the processor, where the computer instructions, when executed by the processor, implement the steps of the method in the first embodiment.
In a fourth embodiment, the present invention further provides a computer-readable storage medium for storing computer instructions, and when the computer instructions are executed by a processor, the computer instructions perform the steps of the method according to the first embodiment.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. The method for generating the neural network clothing matching scheme based on the attention mechanism is characterized by comprising the following steps of:
constructing a neural network model based on an attention mechanism; constructing a training set;
the neural network model based on the attention mechanism is constructed; the method comprises the following specific steps:
s101: crawling an upper garment from a fashionable garment matching website, finding a matched lower garment for the upper garment as a positive case lower garment, and randomly selecting a lower garment for the upper garment as a negative case lower garment; each upper garment or each lower garment is provided with corresponding images and text descriptions;
s102: extracting visual features from the upper garment and the positive-example lower garment to obtain the upper-garment visual features and the positive-example lower-garment visual features;
extracting text features from the upper garment and the positive-example lower garment to obtain the upper-garment text features and the positive-example lower-garment text features;
s103: mapping the visual features of the jacket into visual implicit vector representation of the jacket; mapping the visual features of the positive example lower garment into visual implicit vector representation of the positive example lower garment;
mapping the jacket text features into a jacket text implicit vector representation; mapping the positive-example lower-garment text features into a positive-example lower-garment text implicit vector representation;
s104: calculating the visual feature confidences of the jacket and the positive-example lower garment based on the jacket visual implicit vector representation and the positive-example lower-garment visual implicit vector representation;
calculating the text feature confidences of the jacket and the positive-example lower garment based on the jacket text implicit vector representation and the positive-example lower-garment text implicit vector representation;
s105: calculating the attention-based matching degree of the jacket and the positive-example lower garment based on the jacket visual implicit vector representation, the positive-example lower-garment visual implicit vector representation, the visual feature confidences of the jacket and the positive-example lower garment, the jacket text implicit vector representation, the positive-example lower-garment text implicit vector representation, and the text feature confidences of the jacket and the positive-example lower garment;
s106: replacing the positive example lower garment in the S102-S105 with the negative example lower garment, and then calculating the matching degree of the upper garment based on the attention mechanism and the negative example lower garment;
s107: calculating the difference between the attention-based matching degree of the upper garment with the positive-example lower garment and that with the negative-example lower garment, and obtaining the attention-based neural network garment matching preference model from this difference;
inputting the training set into a constructed attention-based neural network model, training the attention-based neural network model, stopping training when a loss function of the model is converged, and outputting the trained attention-based neural network model;
acquiring a jacket to be matched; obtaining a plurality of existing lower clothes; inputting the upper garment to be matched and a plurality of existing lower garments into a trained attention mechanism-based neural network model, and outputting the lower garments matched with the upper garment to be matched;
and finally, outputting the upper garment to be matched and the matched lower garment as the optimal dressing matching scheme.
2. The method as claimed in claim 1, wherein the visual features are extracted through AlexNet; the text features are extracted by mapping the text description of the upper or lower garment into word vectors through word2vec and then performing text coding through a Text-CNN model.
3. The method of claim 1, wherein mapping the visual features of the jacket to a visual implicit vector representation of the jacket is performed by a convolutional neural network.
4. The method of claim 1, wherein the confidence of the visual features of the upper garment and the positive-example lower garment is calculated by the formula:

\hat{α}^v_i = w^T σ(W_a [(\tilde{v}_{t_i} ⊙ \tilde{v}_{b_j}); e_i] + b_a) + c,

wherein e_i is a one-hot code; \hat{α}^v_i is the confidence of the i-th pair of visual features of the upper garment t_i and b_j; w^T, W_a, b_a and c are the relevant parameters of the network, and ⊙ indicates element-wise multiplication; \tilde{v}_{b_j} represents the visual implicit vector of the positive-example lower garment, and \tilde{v}_{t_i} is the implicit vector to which the visual encoding of the upper garment is mapped;
then, the confidence of the i-th visual feature is normalised:

α^v_i = exp(\hat{α}^v_i) / Σ_l exp(\hat{α}^v_l),

wherein exp(·) represents an exponential function with the natural constant e as a base; similarly, the confidence α^t_i of the i-th pair of text features can be obtained.
5. The method according to claim 1, wherein in S105 the attention-based matching degree of the upper garment with the positive-example lower garment is calculated by the formula:

m_{ij} = Σ_i α^v_i · \tilde{v}_{t_i,i} \tilde{v}_{b_j,i} + Σ_i α^t_i · \tilde{t}_{t_i,i} \tilde{t}_{b_j,i},

wherein α^v_i denotes the confidence after the i-th normalisation of the visual features, α^t_i denotes the confidence of the i-th normalised text feature, \tilde{v}_{t_i,i} denotes the i-th component of the implicit vector to which the visual encoding of the jacket is mapped, \tilde{v}_{b_j,i} denotes the i-th component of the visual implicit vector of the positive-example lower garment, \tilde{t}_{t_i,i} denotes the i-th component of the text implicit vector of the jacket, and \tilde{t}_{b_j,i} denotes the i-th component of the text implicit vector of the positive-example lower garment;
in S107, the attention-based neural network model has the expression formula:

l_EAN = Σ_{(i,j,k)} −ln(σ(m_{ijk}));

wherein m_{ijk} = m_{ij} − m_{ik}; m_{ik}, the matching degree between the upper garment and the negative-example lower garment, is obtained in a manner similar to m_{ij}; σ(·) denotes a sigmoid function, and l_EAN is the loss function of the attention-based neural network model.
6. The method of claim 1, wherein the step of constructing the training set comprises:
crawling a plurality of upper garments from a fashion-outfit website, correspondingly setting an optimal matching lower garment for each upper garment, and setting a plurality of negative-example lower garments for each upper garment; a negative-example lower garment is a lower garment that does not match the upper garment; each upper or lower garment includes a corresponding image and text description.
7. The neural network clothing matching scheme generation system based on the attention mechanism is characterized by comprising the following steps:
a model building module configured to: constructing a neural network model based on an attention mechanism; constructing a training set; the neural network model based on the attention mechanism is constructed; the method comprises the following specific steps:
S101: crawling an upper garment from a fashion outfit-matching website, finding a matched lower garment for the upper garment as a positive example lower garment, and randomly selecting a lower garment for the upper garment as a negative example lower garment; each upper garment or lower garment has a corresponding image and text description;
S102: extracting visual features from the upper garment and the positive example lower garment to obtain upper garment visual features and positive example lower garment visual features; extracting text features from the upper garment and the positive example lower garment to obtain upper garment text features and positive example lower garment text features;
S103: mapping the upper garment visual features into an upper garment visual implicit vector representation, and the positive example lower garment visual features into a positive example lower garment visual implicit vector representation; mapping the upper garment text features into an upper garment text implicit vector representation, and the positive example lower garment text features into a positive example lower garment text implicit vector representation;
S104: calculating visual feature confidences of the upper garment and the positive example lower garment based on the two visual implicit vector representations; calculating text feature confidences of the upper garment and the positive example lower garment based on the two text implicit vector representations;
S105: calculating the attention-based matching degree between the upper garment and the positive example lower garment from the visual implicit vector representations, the visual feature confidences, the text implicit vector representations, and the text feature confidences;
S106: replacing the positive example lower garment in S102-S105 with the negative example lower garment, and calculating the attention-based matching degree between the upper garment and the negative example lower garment in the same way;
S107: calculating the difference between the two matching degrees, i.e., between the attention-based matching degree of the upper garment with the positive example lower garment and that with the negative example lower garment, and obtaining the attention-mechanism-based neural network garment matching preference model from this difference;
a training module configured to: input the training set into the constructed attention-based neural network model, train the model, stop training when its loss function converges, and output the trained attention-based neural network model;
a scheme output module configured to: acquire an upper garment to be matched and a plurality of existing lower garments; input them into the trained attention-based neural network model and output the lower garment that matches the upper garment to be matched;
finally, output the upper garment to be matched together with the matched lower garment as the optimal outfit matching scheme.
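Steps S103-S105 above describe a confidence-weighted fusion of visual and text matching. The following is a minimal sketch under assumed forms: the dot-product modality scores, the linear confidence terms, and the softmax-style normalisation are illustrative assumptions, not the patent's exact formulas:

```python
import math

def dot(u, v):
    """Inner product of two latent vectors."""
    return sum(a * b for a, b in zip(u, v))

def matching_degree(top_vis, bot_vis, top_txt, bot_txt, w_vis=1.0, w_txt=1.0):
    """Attention-weighted matching degree between an upper and a lower garment.

    Per-modality scores come from the implicit vector representations (S103);
    scalar confidences (S104, assumed linear here) are softmax-normalised into
    attention weights that fuse the two modalities into one degree (S105).
    """
    s_vis = dot(top_vis, bot_vis)                  # visual-modality score
    s_txt = dot(top_txt, bot_txt)                  # text-modality score
    c_vis, c_txt = w_vis * s_vis, w_txt * s_txt    # assumed confidence form
    e_vis, e_txt = math.exp(c_vis), math.exp(c_txt)
    a_vis = e_vis / (e_vis + e_txt)                # attention weight, visual
    a_txt = 1.0 - a_vis                            # attention weight, text
    return a_vis * s_vis + a_txt * s_txt
```

The fused degree always lies between the two modality scores, so the modality the model is more confident about dominates the final matching decision.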
8. An electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the method of any one of claims 1 to 6.
CN201910993603.9A 2019-10-18 2019-10-18 Attention mechanism-based neural network garment matching scheme generation method and system Active CN110807477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910993603.9A CN110807477B (en) 2019-10-18 2019-10-18 Attention mechanism-based neural network garment matching scheme generation method and system

Publications (2)

Publication Number Publication Date
CN110807477A CN110807477A (en) 2020-02-18
CN110807477B true CN110807477B (en) 2022-06-07

Family

ID=69488798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910993603.9A Active CN110807477B (en) 2019-10-18 2019-10-18 Attention mechanism-based neural network garment matching scheme generation method and system

Country Status (1)

Country Link
CN (1) CN110807477B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597870B (en) * 2020-03-26 2022-05-03 中国电子科技集团公司第五十二研究所 Human body attribute identification method based on attention mechanism and multi-task learning
CN112860928A (en) * 2021-02-08 2021-05-28 天津大学 Clothing retrieval method based on class perception graph neural network
CN113850656B (en) * 2021-11-15 2022-08-23 内蒙古工业大学 Personalized clothing recommendation method and system based on attention perception and integrating multi-mode data
CN114707427B (en) * 2022-05-25 2022-09-06 青岛科技大学 Personalized modeling method of graph neural network based on effective neighbor sampling maximization

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875910A (en) * 2018-05-23 2018-11-23 山东大学 Garment coordination method, system and the storage medium extracted based on attention knowledge
CN108960959A (en) * 2018-05-23 2018-12-07 山东大学 Multi-modal complementary garment coordination method, system and medium neural network based
CN109754317A (en) * 2019-01-10 2019-05-14 山东大学 Merge interpretation clothes recommended method, system, equipment and the medium of comment
CN110188449A (en) * 2019-05-27 2019-08-30 山东大学 Interpretable clothing information recommended method, system, medium and equipment based on attribute

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190205761A1 (en) * 2017-12-28 2019-07-04 Adeptmind Inc. System and method for dynamic online search result generation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on interpretable fashion recommendation combined with review generation (结合评论生成的可解释性时尚推荐研究); 林于杰; 《信息科技辑》; 2019-09-30; full text *

Similar Documents

Publication Publication Date Title
CN110807477B (en) Attention mechanism-based neural network garment matching scheme generation method and system
CN108388876B (en) Image identification method and device and related equipment
CN108960959B (en) Multi-mode complementary clothing matching method, system and medium based on neural network
US11640634B2 (en) Deep learning based visual compatibility prediction for bundle recommendations
KR102326902B1 (en) Image-based Posture Preservation Virtual Fitting System Supporting Multi-Poses
CN108154156B (en) Image set classification method and device based on neural topic model
CN108319888A (en) The recognition methods of video type and device, terminal
Jia et al. Learning to appreciate the aesthetic effects of clothing
CN111985532B (en) Scene-level context-aware emotion recognition deep network method
KR102461863B1 (en) System and method for recommending personalized styling
CN111400525A (en) Intelligent fashionable garment matching and recommending method based on visual combination relation learning
CN111612090B (en) Image emotion classification method based on content color cross correlation
KR102524001B1 (en) Method for matching text and design using trend word and apparatus thereof
CN115953590B (en) Sectional type fine granularity commodity image description generation method, device and medium
CN104598866B (en) A kind of social feeling quotrient based on face promotes method and system
CN113420797B (en) Online learning image attribute identification method and system
US20240046529A1 (en) Method and apparatus for generating design based on learned condition
CN110298065A (en) Clothes fashion personalized designs method based on deep learning
CN113822183B (en) Zero sample expression recognition method and system based on AU-EMO association and graph neural network
CN110825963B (en) Generation-based auxiliary template enhanced clothing matching scheme generation method and system
CN115082963A (en) Human body attribute recognition model training and human body attribute recognition method and related device
KR102626945B1 (en) Method and apparatus for converting 3d clothing data based on artificial intelligence model
CN115331293A (en) Expression information processing method, expression recognition method, device, equipment and medium
CN117851625A (en) Picture description method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant