CN108829855A - Clothing wearing recommendation method, system and medium based on conditional generative adversarial network - Google Patents

Clothing wearing recommendation method, system and medium based on conditional generative adversarial network

Info

Publication number
CN108829855A
CN108829855A (application CN201810646047.3A)
Authority
CN
China
Prior art keywords
network
clothing
layer
clothing image
generates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810646047.3A
Other languages
Chinese (zh)
Other versions
CN108829855B (en)
Inventor
刘治
朱耀文
肖晓燕
曹艳坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201810646047.3A priority Critical patent/CN108829855B/en
Publication of CN108829855A publication Critical patent/CN108829855A/en
Application granted granted Critical
Publication of CN108829855B publication Critical patent/CN108829855B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a clothing wearing recommendation method, system and medium based on a conditional generative adversarial network, including: establishing a real clothing image dataset in which each picture is annotated with corresponding attribute labels; constructing a generation network G based on the conditional generative adversarial network; inputting attribute labels into the generation network G and outputting clothing image samples; constructing a discrimination network D based on the conditional generative adversarial network; inputting both the real clothing images of the dataset and the obtained clothing image samples into the discrimination network D, which outputs the attribute labels of the clothing image and a real/fake judgment result for the clothing image; alternately and iteratively training the discrimination network D and the generation network G; and receiving the attribute label content to be recommended and using the trained generation network G to generate a clothing wearing recommendation picture.

Description

Clothing wearing recommendation method, system and medium based on conditional generative adversarial network
Technical field
The invention belongs to the field of deep learning technology, and more particularly relates to a clothing wearing recommendation method, system and medium based on a conditional generative adversarial network.
Background technique
Deep learning is a key area of machine learning. It can complete artificial intelligence tasks that require highly abstract features, and in recent years it has made breakthrough progress in applications such as speech, image recognition and retrieval, and natural language understanding. The generative adversarial network (GAN) is a rising star of the deep learning field of the last two years. It consists of a generation network and a discrimination network and exploits the "two-player game" idea: the two networks are updated separately by the back-propagation algorithm to carry out competitive learning and reach the training objective. Based on the idea of generative adversarial networks, many variants have been derived and successfully applied. The representations learned by generative adversarial networks (GANs) can be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification.
With economic and social development, suitable clothing matching becomes more important. Most existing clothing recommendation software on the market recommends mainly according to gender, season and style; the clothing mostly comes with purchase links, and the software is mostly aimed at sales profit. However, when the premise is not buying new clothes, it becomes a problem for people to match their existing clothes according to the day's weather (such as rain or sunshine) and the applicable occasion (such as a party or school). Even young students who still rely on their parents to match their outfits may be unable to dress appropriately according to factors such as weather once they leave their parents.
Summary of the invention
In order to overcome the deficiencies of the prior art, the present invention provides a clothing wearing recommendation method, system and medium based on a conditional generative adversarial network, which can effectively combine user demands to obtain a suitable clothing wearing recommendation picture.
As a first aspect of the present invention, a clothing wearing recommendation method based on a conditional generative adversarial network is provided;
To achieve the above goals, the technical solution adopted by the present invention is as follows:
A clothing wearing recommendation method based on a conditional generative adversarial network, including:
Step (1): establishing a real clothing image dataset, each picture in the dataset being annotated with corresponding attribute labels;
Step (2): constructing a generation network G based on the conditional generative adversarial network; inputting the attribute labels of step (1) into the generation network G and outputting clothing image samples;
Step (3): constructing a discrimination network D based on the conditional generative adversarial network; inputting both the real clothing images of the dataset of step (1) and the clothing image samples obtained in step (2) into the discrimination network D, and outputting the attribute labels of the clothing image and the real/fake judgment result of the clothing image;
Step (4): alternately and iteratively training the discrimination network D and the generation network G;
Step (5): receiving the attribute label content to be recommended, and using the trained generation network G to generate a clothing wearing recommendation picture.
As a further improvement of the present invention, step (1) comprises:
Step (101): collecting whole-outfit clothing pictures with a uniform background color, each containing a jacket and lower clothing; normalizing all clothing pictures according to a set pixel size;
Step (102): annotating each clothing picture with corresponding attribute labels; the attribute labels include: gender, weather and occasion.
As a further improvement of the present invention, step (2) comprises:
The network structure of the generation network G uses the super-resolution deep residual network model SRResNet; the structure of the generation network G specifically includes 39 layers:
the first layer is a fully connected layer whose input produces, after the first layer, feature maps of 1024 channels at 16*16 resolution; the 2nd, 5th, 8th, 11th, 14th, 17th, 20th, 23rd, 28th, 32nd and 36th layers are batch normalization layers (BN); the 4th, 7th, 10th, 13th, 16th, 19th, 22nd, 26th, 30th, 34th and 38th layers are convolutional layers (Conv); the 3rd, 6th, 12th, 18th, 24th, 29th, 33rd and 37th layers are ReLU activation layers; the 9th, 15th, 21st and 25th layers are element-wise sum layers (Elementwise Sum); the 27th, 31st and 35th layers are pixel shuffle layers (Pixel Shuffler); and the 39th layer is the output layer.
The random noise signal and the attribute labels of each picture in the dataset of step (1) are input into the generation network G, which outputs clothing image samples;
As a further improvement of the present invention, step (3) comprises:
The network structure of the discrimination network D specifically includes 72 layers:
the 1st, 3rd, 5th, 8th, 10th, 13th, 15th, 17th, 21st, 23rd, 27th, 29th, 31st, 35th, 37th, 41st, 43rd, 45th, 49th, 51st, 55th, 57th, 59th, 63rd, 65th and 69th layers are convolutional layers (Conv);
the 2nd, 4th, 7th, 9th, 12th, 14th, 16th, 18th, 20th, 22nd, 24th, 26th, 28th, 30th, 32nd, 34th, 36th, 38th, 40th, 42nd, 44th, 46th, 48th, 50th, 52nd, 54th, 56th, 58th, 60th, 62nd, 64th, 66th, 68th and 70th layers are Leaky ReLU activation layers;
the 6th, 11th, 19th, 25th, 33rd, 39th, 47th, 53rd, 61st and 67th layers are element-wise sum layers (Elementwise Sum);
the 71st and 72nd layers are fully connected layers;
the real clothing images in the dataset of step (1) and the clothing image samples obtained in step (2) are input simultaneously into the discrimination network D, which outputs the attribute labels of the clothing image and the real/fake judgment result of the clothing image;
As a further improvement of the present invention, step (4) comprises:
Step (401): setting the total loss function of the discrimination network D during training;
Step (402): setting the total loss function of the generation network G during training;
Step (403): updating the parameters of the discrimination network D;
Step (404): updating the parameters of the generation network G;
Step (405): alternately and iteratively training the discrimination network D and the generation network G independently, repeating steps (403) and (404) until the set number of iterations is reached.
Step (401) is: the total loss function of the discrimination network D is composed of the first classification loss function, the first adversarial loss function and the gradient penalty loss function; the total loss function L(D) of the discrimination network D is:
L(D) = L_cls(D) + λ_adv·L_adv(D) + λ_gp·L_gp(D)  (1)
Wherein, L_cls(D) is the first classification loss function of the discrimination network D, L_adv(D) is the first adversarial loss function of the discrimination network D, L_gp(D) is the gradient penalty loss function of the discrimination network D, λ_adv is the balance factor of the adversarial loss function of the discrimination network D, set equal to the number of attribute labels, and λ_gp is the balance factor of the gradient penalty term of the discrimination network D;
Wherein, the first classification loss function L_cls(D) of the discrimination network D is:
L_cls(D) = E_{x~P_data}[log P_D[label_x | x]] + E_{z~P_noise, c~P_cond}[log P_D[c | G(z, c)]]  (2)
Wherein, P_data denotes the distribution of the real clothing image dataset x obtained in step (1), P_noise denotes the distribution of the random noise signal z, P_cond denotes the prior distribution of the assigned label c, E_{x~P_data}[log P_D[label_x | x]] is the expectation of log P_D[label_x | x] in the first classification loss function, and E_{z~P_noise, c~P_cond}[log P_D[c | G(z, c)]] is the expectation of log P_D[c | G(z, c)] in the first classification loss function;
Wherein, the first adversarial loss function L_adv(D) of the discrimination network D is:
L_adv(D) = E_{x~P_data}[log D(x)] + E_{z~P_noise, c~P_cond}[log(1 - D(G(z, c)))]  (3)
Wherein, E_{x~P_data}[log D(x)] is the expectation of log D(x) in the first adversarial loss function, and E_{z~P_noise, c~P_cond}[log(1 - D(G(z, c)))] is the expectation of log(1 - D(G(z, c))) in the first adversarial loss function;
Wherein, the gradient penalty loss function L_gp(D) of the discrimination network D is:
L_gp(D) = E_{x̂~P_perturbed_data}[(‖∇_x̂ D(x̂)‖ - 1)²]  (4)
Wherein, P_perturbed_data denotes the distribution of the perturbed data x̂, and E_{x̂~P_perturbed_data}[(‖∇_x̂ D(x̂)‖ - 1)²] is the expectation of (‖∇_x̂ D(x̂)‖ - 1)² in the gradient penalty function.
Step (402) is:
The total loss function of the generation network G during training is composed of the second classification loss function and the second adversarial loss function; the total loss function of the generation network G is as follows:
L(G) = L_cls(G) + λ_adv′·L_adv(G)  (5)
Wherein, L_cls(G) is the second classification loss function of the generation network G, L_adv(G) is the second adversarial loss function of the generation network G, and λ_adv′ is the balance factor of the adversarial loss function of the generation network G, set equal to the number of attribute labels;
Wherein, the second classification loss function L_cls(G) of the generation network G is as follows:
L_cls(G) = E_{z~P_noise, c~P_cond}[log P_D[c | G(z, c)]]  (6)
Wherein, P_noise denotes the distribution of the random noise signal z, P_cond denotes the prior distribution of the assigned label c, and E_{z~P_noise, c~P_cond}[log P_D[c | G(z, c)]] is the expectation of log P_D[c | G(z, c)] in the second classification loss function;
Wherein, the second adversarial loss function L_adv(G) of the generation network G is as follows:
L_adv(G) = E_{z~P_noise, c~P_cond}[log D(G(z, c))]  (7)
Wherein, E_{z~P_noise, c~P_cond}[log D(G(z, c))] denotes the expectation of log D(G(z, c)) in the second adversarial loss function.
Step (403) is:
The real clothing image dataset x obtained in step (1) and the corresponding attribute labels are input into the discrimination network D, and the parameters of the discrimination network D are updated according to the log P_D[label_x | x] part of the first classification loss function of the discrimination network D, the log D(x) part of the first adversarial loss function, and the gradient penalty loss function. Then, with the parameters of the generation network G fixed, the random noise signal z and the 3-dimensional attribute label vector c are input into the generation network G, the generated clothing image samples output by the generation network G are input into the discrimination network D, and the parameters of the discrimination network D are further updated according to the log(P_D[c | G(z, c)]) part of the first classification loss function of the discrimination network D, the log(1 - D(G(z, c))) part of the first adversarial loss function, and the gradient penalty loss function.
Step (404) is: with the model parameters of the discrimination network D obtained in step (403) fixed, the random noise signal z and the 3-dimensional attribute label vector c are input into the generation network G, and the parameters of the generation network G are updated according to the classification loss function and the adversarial loss function of the generation network G.
As a further improvement of the present invention, step (5) comprises:
The random noise signal and a user-defined 3-dimensional attribute label are input into the trained generation network G, which generates a clothing image with the corresponding attributes.
For example, if the input 3-dimensional attribute label is "female, sunny, party", a corresponding clothing image can be generated, so the user can take the custom-generated clothing wearing image as a reference to match his or her existing clothes into a suitable outfit for the day.
As a second aspect of the present invention, a clothing wearing recommendation system based on a conditional generative adversarial network is provided;
To achieve the above goals, the technical solution adopted by the present invention is as follows:
A clothing wearing recommendation system based on a conditional generative adversarial network, including: a memory, a processor, and computer instructions stored on the memory and running on the processor; when the computer instructions are run by the processor, the steps of any of the above methods are completed.
As a third aspect of the present invention, a computer-readable storage medium is provided;
To achieve the above goals, the technical solution adopted by the present invention is as follows:
A computer-readable storage medium having computer instructions stored thereon; when the computer instructions are run by a processor, the steps of any of the above methods are completed.
Compared with the prior art, the beneficial effects of the invention are:
The present invention constructs the conditional generative adversarial network model based on the DRAGAN network model; this model requires comparatively less computation than some other generative adversarial network models and converges faster during model training, allowing the generation network to quickly produce more stable generated images;
The present invention can effectively integrate the demands provided by the user. Unlike most clothing wearing software on the market, which is aimed at sales and led by clothing style, the user only needs to customize the three attributes of the clothing image, "gender", "weather" and "occasion", to quickly obtain the clothing wearing image output by the generation network, and can take it as a reference to match his or her existing clothes into a suitable and applicable outfit.
Detailed description of the invention
The accompanying drawings constituting a part of this application are used to provide further understanding of the application; the illustrative embodiments of the application and their explanations are used to explain the application and do not constitute an undue limitation on the application.
Fig. 1 is the overall method flowchart of the invention;
Fig. 2(a)-Fig. 2(c) show the network structure of the generation network G;
Fig. 3(a)-Fig. 3(c) show the network structure of the discrimination network D.
Specific embodiment
It should be noted that the following detailed description is illustrative and intended to provide further instruction for the application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by a person of ordinary skill in the technical field to which the application belongs.
It should be noted that the terms used herein are merely for describing specific embodiments and are not intended to limit the illustrative embodiments of the application. As used herein, unless the context clearly indicates otherwise, the singular forms are also intended to include the plural forms; additionally, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
As shown in Fig. 1, a clothing wearing recommendation method based on a conditional generative adversarial network includes the following steps:
(1) establishing a real clothing image dataset;
(2) constructing the generation network G based on the DRAGAN network model;
(3) constructing the discrimination network D based on the DRAGAN network model;
(4) training the discrimination network D and the generation network G;
(5) using the trained generation network G to generate a customized clothing wearing picture.
In step (1), establishing the real clothing image dataset specifically includes:
(1-1): collecting a large number of high-quality whole-outfit pictures of upper and lower clothing with a uniform background color, and performing size normalization on all clothing pictures so that the processed images are 128 × 128 pixels;
(1-2): setting up three attributes for clothing images, namely gender, weather and occasion, and annotating each clothing picture with these three attribute labels, so that every picture has three corresponding attributes.
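The three attributes annotated here later become the 3-dimensional attribute label vector c fed to the networks. A minimal encoding sketch in Python, with illustrative category sets that are assumptions (the patent fixes only the attribute names "gender", "weather" and "occasion", not their value sets):

```python
# Hypothetical numeric encoding of the three clothing attributes
# (gender, weather, occasion) into the 3-dimensional label vector c.
# The category lists are illustrative assumptions, not from the patent.
GENDER = ["male", "female"]
WEATHER = ["sunny", "rain"]
OCCASION = ["school", "party"]

def encode_label(gender: str, weather: str, occasion: str) -> list[float]:
    """Return a 3-dimensional attribute vector, one index per attribute."""
    return [
        float(GENDER.index(gender)),
        float(WEATHER.index(weather)),
        float(OCCASION.index(occasion)),
    ]
```

With these assumed categories, encode_label("female", "sunny", "party") returns [1.0, 0.0, 1.0].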
In step (2), constructing the generation network G based on the DRAGAN network model specifically includes:
The input of the generation network G is the random noise signal z and the 3-dimensional attribute label vector c; the clothing image samples output by the generation network G will serve as the input of the discrimination network D.
The network structure of the generation network G uses the super-resolution deep residual network model SRResNet, including 16 residual blocks (ResBlocks) and 3 sub-pixel convolutional layers (CNN) for image feature up-sampling; the specific structure is shown in Fig. 2(a)-Fig. 2(c). The input of the generation network G produces, after passing through the network, a clothing image of 3 channels at 128*128 resolution, which serves as the input of the discrimination network D.
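The generator just described can be sketched in PyTorch (a framework assumption; the patent names no implementation library). The sketch keeps the described shape (fully connected layer producing 16x16 feature maps, residual blocks with element-wise sums, three pixel-shuffle x2 up-samplings to a 3-channel 128*128 image) but uses far fewer channels and residual blocks than the patent's 1024-channel, 16-block design:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Conv-BN-ReLU-Conv-BN block whose output is element-wise summed with its input."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # the patent's "Elementwise Sum" layer

class Generator(nn.Module):
    """Scaled-down SRResNet-style conditional generator (assumed PyTorch port)."""
    def __init__(self, z_dim=128, c_dim=3, ch=64, n_res=4):
        super().__init__()
        self.ch = ch
        # first layer: fully connected, producing 16x16 feature maps
        self.fc = nn.Linear(z_dim + c_dim, ch * 16 * 16)
        self.res = nn.Sequential(*[ResBlock(ch) for _ in range(n_res)])
        ups = []
        for _ in range(3):  # three sub-pixel (pixel-shuffle) x2 up-samplings: 16 -> 128
            ups += [nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.ReLU()]
        self.up = nn.Sequential(*ups)
        self.out = nn.Conv2d(ch, 3, 3, padding=1)  # 3-channel RGB output layer

    def forward(self, z, c):
        h = self.fc(torch.cat([z, c], dim=1)).reshape(-1, self.ch, 16, 16)
        return torch.tanh(self.out(self.up(self.res(h))))
```

The noise vector and the attribute label are concatenated before the fully connected layer, which is one common way to condition a generator; the patent does not specify how z and c are combined.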
In step (3), constructing the discrimination network D based on the DRAGAN network model specifically includes:
The input of the discrimination network D is the clothing image samples generated by the generation network G and the real clothing image samples established in step (1); the output is used for judging whether a clothing image is real or fake and for judging the attribute labels of the clothing image. The network structure of the discrimination network D includes 10 residual blocks (ResBlocks); all batch normalization layers (BN) are removed from the discrimination network D, and an extra fully connected layer is added after the last convolutional layer as a multi-label classifier. The specific structure is shown in Fig. 3(a)-Fig. 3(c). The 3-channel 128*128-resolution clothing image generated by the generation network G in step (2) passes through the discrimination network D, which outputs the attribute labels of the clothing image and the real/fake judgment result of the clothing image.
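The discriminator can be sketched similarly (again a PyTorch assumption, and heavily scaled down from the 72-layer, 10-residual-block design): BN-free residual blocks with Leaky ReLU activations, followed by two fully connected heads, one scoring real/fake and one classifying the three attributes:

```python
import torch
import torch.nn as nn

class DResBlock(nn.Module):
    """Residual block without batch normalization, matching the patent's removal of BN in D."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.act(x + self.body(x))  # element-wise sum, then Leaky ReLU

class Discriminator(nn.Module):
    """Scaled-down discriminator with a real/fake head and a multi-label attribute head."""
    def __init__(self, ch=32, n_res=4, c_dim=3, img_size=128):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2))
        blocks = []
        for _ in range(n_res):  # each stage: residual block, then stride-2 down-sampling
            blocks += [DResBlock(ch), nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2)]
        self.blocks = nn.Sequential(*blocks)
        spatial = img_size // (2 ** (n_res + 1))  # stem plus n_res stride-2 stages
        self.adv_head = nn.Linear(ch * spatial * spatial, 1)      # real/fake score
        self.cls_head = nn.Linear(ch * spatial * spatial, c_dim)  # attribute classifier

    def forward(self, x):
        h = self.blocks(self.stem(x)).flatten(1)
        return torch.sigmoid(self.adv_head(h)), torch.sigmoid(self.cls_head(h))
```

The exact down-sampling schedule and channel widths here are illustrative; only the block style (no BN, Leaky ReLU, element-wise sums, final fully connected classifier) follows the description.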
In step (4), training the discrimination network D and the generation network G specifically includes:
(4-1): setting the total loss function of the discrimination network D during training, composed of the first classification loss function, the first adversarial loss function and the gradient penalty loss function; the total loss function of the discrimination network D is as follows:
L(D) = L_cls(D) + λ_adv·L_adv(D) + λ_gp·L_gp(D)  (1)
Wherein, L_cls(D) is the classification loss function of the discrimination network, L_adv(D) is the adversarial loss function of the discrimination network, L_gp(D) is the gradient penalty loss function of the discrimination network, λ_adv is the balance factor of the adversarial loss function of the discrimination network, set equal to the number of label categories (λ_adv = 3), and λ_gp is the balance factor of the gradient penalty term of the discrimination network, set to λ_gp = 0.5;
Wherein, the classification loss function L_cls(D) of the discrimination network D is as follows:
L_cls(D) = E_{x~P_data}[log P_D[label_x | x]] + E_{z~P_noise, c~P_cond}[log P_D[c | G(z, c)]]  (2)
Wherein, P_data denotes the distribution of the real clothing image dataset x obtained in step (1), P_noise denotes the distribution of the random noise signal z, P_cond denotes the prior distribution of the assigned label c, E_{x~P_data}[log P_D[label_x | x]] is the expectation of log P_D[label_x | x] in the first classification loss function, and E_{z~P_noise, c~P_cond}[log P_D[c | G(z, c)]] is the expectation of log P_D[c | G(z, c)] in the first classification loss function;
Wherein, the adversarial loss function L_adv(D) of the discrimination network D is as follows:
L_adv(D) = E_{x~P_data}[log D(x)] + E_{z~P_noise, c~P_cond}[log(1 - D(G(z, c)))]  (3)
Wherein, E_{x~P_data}[log D(x)] is the expectation of log D(x) in the first adversarial loss function, and E_{z~P_noise, c~P_cond}[log(1 - D(G(z, c)))] is the expectation of log(1 - D(G(z, c))) in the first adversarial loss function;
Wherein, the gradient penalty loss function L_gp(D) of the discrimination network D is as follows:
L_gp(D) = E_{x̂~P_perturbed_data}[(‖∇_x̂ D(x̂)‖ - 1)²]  (4)
Wherein, P_perturbed_data denotes the distribution of the perturbed data x̂, and E_{x̂~P_perturbed_data}[(‖∇_x̂ D(x̂)‖ - 1)²] is the expectation of (‖∇_x̂ D(x̂)‖ - 1)² in the gradient penalty function;
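The discriminator objective can be sketched in PyTorch (a framework assumption). The log-likelihood terms of L_cls(D) and L_adv(D) are written in their negated binary cross-entropy form, which is what gets minimized in practice, and the DRAGAN-style gradient penalty L_gp(D) perturbs real samples and penalizes gradient norms deviating from 1; the perturbation scale is an assumption, as the patent does not spell it out:

```python
import torch
import torch.nn.functional as F

def dragan_gradient_penalty(D, real, lambda_gp=0.5):
    # DRAGAN-style penalty: perturb the real data and penalize discriminator
    # gradients whose norm deviates from 1 in that neighborhood.
    alpha = torch.rand(real.size(0), 1, 1, 1)
    perturbed = (real + alpha * 0.5 * real.std() * torch.rand_like(real)).requires_grad_(True)
    score, _ = D(perturbed)
    grads, = torch.autograd.grad(score.sum(), perturbed, create_graph=True)
    grad_norm = grads.reshape(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

def discriminator_loss(D, G, real, labels, z, c, lambda_adv=3.0, lambda_gp=0.5):
    # Total D loss: L_cls(D) + lambda_adv * L_adv(D) + lambda_gp * L_gp(D),
    # with log-likelihood terms written as binary cross-entropies.
    # D is assumed to return (real/fake score, attribute probabilities).
    real_score, real_cls = D(real)
    fake_score, fake_cls = D(G(z, c).detach())  # G is held fixed while D updates
    l_cls = (F.binary_cross_entropy(real_cls, labels)   # P_D[label_x | x] term
             + F.binary_cross_entropy(fake_cls, c))     # P_D[c | G(z, c)] term
    l_adv = (F.binary_cross_entropy(real_score, torch.ones_like(real_score))
             + F.binary_cross_entropy(fake_score, torch.zeros_like(fake_score)))
    return l_cls + lambda_adv * l_adv + dragan_gradient_penalty(D, real, lambda_gp)
```

The balance factors default to the patent's stated values (λ_adv = 3, λ_gp = 0.5).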
(4-2): setting the total loss function of the generation network G during training, composed of the second classification loss function and the second adversarial loss function; the total loss function of the generation network G is as follows:
L(G) = L_cls(G) + λ_adv·L_adv(G)  (5)
Wherein, L_cls(G) is the classification loss function of the generation network, L_adv(G) is the adversarial loss function of the generation network, and λ_adv is the balance factor of the adversarial loss function of the generation network, set equal to the number of label categories (λ_adv = 3);
Wherein, the classification loss function L_cls(G) of the generation network G is as follows:
L_cls(G) = E_{z~P_noise, c~P_cond}[log P_D[c | G(z, c)]]  (6)
Wherein, P_noise denotes the distribution of the random noise signal z, P_cond denotes the prior distribution of the assigned label c, and E_{z~P_noise, c~P_cond}[log P_D[c | G(z, c)]] is the expectation of log P_D[c | G(z, c)] in the second classification loss function;
Wherein, the adversarial loss function L_adv(G) of the generation network G is as follows:
L_adv(G) = E_{z~P_noise, c~P_cond}[log D(G(z, c))]  (7)
Wherein, E_{z~P_noise, c~P_cond}[log D(G(z, c))] denotes the expectation of log D(G(z, c)) in the second adversarial loss function;
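The generator objective L(G) = L_cls(G) + λ_adv·L_adv(G) admits a short sketch in the same negated binary cross-entropy form (a PyTorch assumption; D is assumed to return a real/fake score and attribute probabilities):

```python
import torch
import torch.nn.functional as F

def generator_loss(D, G, z, c, lambda_adv=3.0):
    # L_cls(G): generated samples should be classified with the requested
    # condition c; L_adv(G): the discriminator should score them as real
    # (the non-saturating BCE form of the log D(G(z, c)) objective).
    fake_score, fake_cls = D(G(z, c))
    l_cls = F.binary_cross_entropy(fake_cls, c)
    l_adv = F.binary_cross_entropy(fake_score, torch.ones_like(fake_score))
    return l_cls + lambda_adv * l_adv
```

λ_adv defaults to 3, the patent's stated value (the number of attribute labels).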
(4-3): updating the parameters of the discrimination network D: the real clothing image dataset x obtained in step (1) and the corresponding class labels are input into the discrimination network D, and the parameters of the discrimination network D are updated according to the log P_D[label_x | x] part of the first classification loss function of the discrimination network D, the log D(x) part of the first adversarial loss function, and the gradient penalty loss function; then, with the parameters of the generation network G fixed, the random noise signal z and the 3-dimensional attribute label vector c are input into the generation network G, the generated clothing image samples output by the generation network G are input into the discrimination network D, and the parameters of the discrimination network D are further updated according to the log(P_D[c | G(z, c)]) part of the first classification loss function of the discrimination network D, the log(1 - D(G(z, c))) part of the first adversarial loss function, and the gradient penalty loss function;
(4-4): updating the parameters of the generation network G: with the model parameters of the discrimination network D obtained in step (4-3) fixed, the random noise signal z and the 3-dimensional attribute label vector c are input into the generation network G, and the parameters of the generation network G are updated according to the second classification loss function and the second adversarial loss function of the generation network G;
(4-5): alternately and iteratively training the discrimination network D and the generation network G independently, repeating steps (4-3) and (4-4) until the set number of iterations is reached.
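Steps (4-3) through (4-5) amount to a standard alternating GAN training loop. A minimal sketch, assuming PyTorch, with simplified BCE losses and the gradient penalty and balance factors omitted for brevity:

```python
import itertools
import torch
import torch.nn.functional as F

def train_gan(G, D, batches, z_dim, n_iters, lr=2e-4):
    # Alternating iteration: each step first updates D on real and (detached)
    # generated samples, then updates G while D receives no optimizer step.
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    for real, labels in itertools.islice(itertools.cycle(batches), n_iters):
        z = torch.randn(real.size(0), z_dim)
        c = labels  # condition the generator on the real batch's labels
        # update D: real scored as 1, fake as 0, plus attribute classification
        rs, rc = D(real)
        fs, _ = D(G(z, c).detach())
        d_loss = (F.binary_cross_entropy(rs, torch.ones_like(rs))
                  + F.binary_cross_entropy(fs, torch.zeros_like(fs))
                  + F.binary_cross_entropy(rc, labels))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # update G: fool D and satisfy the requested condition c
        fs, fc = D(G(z, c))
        g_loss = (F.binary_cross_entropy(fs, torch.ones_like(fs))
                  + F.binary_cross_entropy(fc, c))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G, D
```

`batches` stands in for a data loader over (image, label) pairs; optimizer choice and learning rate are assumptions, as the patent specifies neither.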
In step (5), using the trained generation network G to generate a customized clothing wearing picture specifically includes: inputting the random noise signal and a user-defined 3-dimensional attribute label into the trained generation network G, which generates a clothing image with the corresponding attributes. For example, if the input 3-dimensional attribute label is "female, sunny, party", a corresponding clothing image can be generated, so the user can take the custom-generated clothing wearing image as a reference to match his or her existing clothes into a suitable outfit for the day.
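At recommendation time, step (5) reduces to a single forward pass through the trained generator. A minimal sketch, assuming PyTorch and a generator module that takes a noise batch and a label batch; the attribute encoding (e.g. for "female, sunny, party") is whatever numeric scheme was used during training:

```python
import torch

def recommend_outfit(G, attribute_vector, z_dim=128):
    # Sample a random noise vector and pair it with the user's chosen
    # 3-dimensional attribute label; return one generated outfit image
    # tensor. `G` is assumed to be a module taking (noise_batch, label_batch).
    G.eval()
    with torch.no_grad():
        z = torch.randn(1, z_dim)
        c = torch.tensor([attribute_vector], dtype=torch.float32)
        return G(z, c)[0]
```

Because z is sampled fresh each call, repeated calls with the same attribute label yield different outfit variations, which matches the recommendation use case.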
The above are merely preferred embodiments of the application and are not intended to limit the application; for those skilled in the art, various modifications and changes to the application are possible. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the application shall be included within the protection scope of the application.

Claims (8)

1. A clothing wearing recommendation method based on a conditional generative adversarial network, characterized by including:
Step (1): establishing a real clothing image dataset, each picture in the dataset being annotated with corresponding attribute labels;
Step (2): constructing a generation network G based on the conditional generative adversarial network; inputting the attribute labels of step (1) into the generation network G and outputting clothing image samples;
Step (3): constructing a discrimination network D based on the conditional generative adversarial network; inputting both the real clothing images of the dataset of step (1) and the clothing image samples obtained in step (2) into the discrimination network D, and outputting the attribute labels of the clothing image and the real/fake judgment result of the clothing image;
Step (4): alternately and iteratively training the discrimination network D and the generation network G;
Step (5): receiving the attribute label content to be recommended, and using the trained generation network G to generate a clothing wearing recommendation picture.
2. The clothing wearing recommendation method based on a conditional generative adversarial network as claimed in claim 1, characterized in that
step (1) comprises:
Step (101): collecting whole-outfit clothing pictures with a uniform background color, each containing a jacket and lower clothing; normalizing all clothing pictures according to a set pixel size;
Step (102): annotating each clothing picture with corresponding attribute labels; the attribute labels include: gender, weather and occasion.
3. The clothing wearing recommendation method based on a conditional generative adversarial network as claimed in claim 1, characterized in that
step (2) comprises:
The network structure of the generation network G uses the super-resolution deep residual network model SRResNet; the structure of the generation network G specifically includes 39 layers:
the first layer is a fully connected layer whose input produces, after the first layer, feature maps of 1024 channels at 16*16 resolution; the 2nd, 5th, 8th, 11th, 14th, 17th, 20th, 23rd, 28th, 32nd and 36th layers are batch normalization layers (BN); the 4th, 7th, 10th, 13th, 16th, 19th, 22nd, 26th, 30th, 34th and 38th layers are convolutional layers (Conv); the 3rd, 6th, 12th, 18th, 24th, 29th, 33rd and 37th layers are ReLU activation layers; the 9th, 15th, 21st and 25th layers are element-wise sum layers (Elementwise Sum); the 27th, 31st and 35th layers are pixel shuffle layers (Pixel Shuffler); and the 39th layer is the output layer;
the random noise signal and the attribute labels of each picture in the dataset of step (1) are input into the generation network G, which outputs clothing image samples.
4. The clothing wearing recommendation method based on a conditional generative adversarial network as claimed in claim 1, characterized in that
step (3) comprises:
The network structure of the discrimination network D specifically includes 72 layers:
the 1st, 3rd, 5th, 8th, 10th, 13th, 15th, 17th, 21st, 23rd, 27th, 29th, 31st, 35th, 37th, 41st, 43rd, 45th, 49th, 51st, 55th, 57th, 59th, 63rd, 65th and 69th layers are convolutional layers (Conv);
the 2nd, 4th, 7th, 9th, 12th, 14th, 16th, 18th, 20th, 22nd, 24th, 26th, 28th, 30th, 32nd, 34th, 36th, 38th, 40th, 42nd, 44th, 46th, 48th, 50th, 52nd, 54th, 56th, 58th, 60th, 62nd, 64th, 66th, 68th and 70th layers are Leaky ReLU activation layers;
the 6th, 11th, 19th, 25th, 33rd, 39th, 47th, 53rd, 61st and 67th layers are element-wise sum layers (Elementwise Sum);
the 71st and 72nd layers are fully connected layers;
the real clothing images in the dataset of step (1) and the clothing image samples obtained in step (2) are input simultaneously into the discrimination network D, which outputs the attribute labels of the clothing image and the real/fake judgment result of the clothing image.
5. The clothing wearing recommendation method based on a conditional generative adversarial network as claimed in claim 1, characterized in that
step (4) comprises:
Step (401): setting the total loss function of the discrimination network D during training;
Step (402): setting the total loss function of the generation network G during training;
Step (403): updating the parameters of the discrimination network D;
Step (404): updating the parameters of the generation network G;
Step (405): alternately and iteratively training the discrimination network D and the generation network G independently, repeating steps (403) and (404) until the set number of iterations is reached.
6. The clothing wearing recommendation method based on a conditional generative adversarial network according to claim 1, characterized in that
step (5) comprises:
inputting the random noise signal and a user-defined 3-dimensional attribute label into the trained generation network G, which then generates a clothing image with the corresponding attributes.
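The recommendation step reduces to a single no-grad forward pass through the trained generator. In this sketch, what each of the three attribute dimensions encodes (e.g. category, colour, style) is not stated in this excerpt and is left as a caller-side convention; the function name is illustrative.

```python
import torch

def recommend_outfits(G, attributes, n_samples=4, z_dim=100):
    """Step (5): pair fresh random noise with one user-specified
    3-dimensional attribute label and return a batch of candidate
    clothing images from the trained generator G."""
    G.eval()
    with torch.no_grad():
        z = torch.randn(n_samples, z_dim)
        labels = torch.as_tensor(attributes, dtype=torch.float32)
        labels = labels.expand(n_samples, -1)  # same label for every sample
        return G(z, labels)
```

Because the noise differs per sample while the label is shared, the call returns several distinct images that all match the requested attributes.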
7. A clothing wearing recommendation system based on a conditional generative adversarial network, characterized by comprising: a memory, a processor, and computer instructions stored on the memory and run on the processor, the computer instructions, when run by the processor, completing the steps of the method of any one of claims 1-6.
8. A computer-readable storage medium, characterized in that computer instructions are stored thereon, the computer instructions, when run by a processor, completing the steps of the method of any one of claims 1-6.
CN201810646047.3A 2018-06-21 2018-06-21 Clothing wearing recommendation method, system and medium based on a conditional generative adversarial network Active CN108829855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810646047.3A CN108829855B (en) 2018-06-21 2018-06-21 Clothing wearing recommendation method, system and medium based on a conditional generative adversarial network

Publications (2)

Publication Number Publication Date
CN108829855A (en) 2018-11-16
CN108829855B CN108829855B (en) 2021-02-19

Family

ID=64143166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810646047.3A Active CN108829855B (en) 2018-06-21 2018-06-21 Clothing wearing recommendation method, system and medium based on a conditional generative adversarial network

Country Status (1)

Country Link
CN (1) CN108829855B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829537A * 2019-01-30 2019-05-31 华侨大学 Style transfer method and device for children's clothing based on a deep learning GAN network
CN110134395A * 2019-05-17 2019-08-16 广东工业大学 Icon generation method, generation system and related apparatus
CN110135032A * 2019-04-30 2019-08-16 厦门大学 Auxiliary clothing generation method and apparatus based on a generative adversarial network
CN110441061A * 2019-08-13 2019-11-12 哈尔滨理工大学 Planetary gear bearing life prediction method based on C-DRGAN and AD
CN110647986A * 2019-08-13 2020-01-03 杭州电子科技大学 Road damage image generation method based on an adversarial generative network
CN110909770A * 2019-11-05 2020-03-24 上海眼控科技股份有限公司 ACGAN-combined image sample processing method, apparatus, system and medium
CN111476200A * 2020-04-27 2020-07-31 华东师范大学 Face de-identification generation method based on a generative adversarial network
CN111783980A * 2020-06-28 2020-10-16 大连理工大学 Ranking learning method based on a dual cooperative generative adversarial network
CN111794741A * 2020-08-11 2020-10-20 中国石油天然气集团有限公司 Method for implementing a sliding directional drilling simulator
CN112000769A * 2020-08-17 2020-11-27 东北林业大学 Clothing commodity advertisement pattern generation method based on an adversarial network
CN112100908A * 2020-08-31 2020-12-18 西安工程大学 Garment design method based on a multi-condition deep convolutional generative adversarial network
CN112230210A * 2020-09-09 2021-01-15 南昌航空大学 HRRP radar target identification method based on improved LSGAN and CNN
CN112329116A * 2020-11-23 2021-02-05 恩亿科(北京)数据科技有限公司 Distortion zero-space planning design generation method and system based on a generative adversarial network
CN112435083A * 2019-08-26 2021-03-02 珠海格力电器股份有限公司 Article recommendation method based on brain wave recognition, and electronic device
CN112613445A * 2020-12-29 2021-04-06 深圳威富优房客科技有限公司 Face image generation method and apparatus, computer equipment and storage medium
CN112927801A * 2021-01-24 2021-06-08 武汉东湖大数据交易中心股份有限公司 Health prediction model construction method and apparatus based on a dressing meteorological index
CN113033595A * 2020-12-24 2021-06-25 重庆大学 Multi-label automobile model generation method based on a generative adversarial network
CN113555106A * 2020-04-23 2021-10-26 浙江远图互联科技股份有限公司 Intelligent traditional Chinese medicine remote auxiliary diagnosis and treatment platform based on a generative adversarial network

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803082A * 2017-01-23 2017-06-06 重庆邮电大学 Online handwriting recognition method based on a conditional generative adversarial network
CN107220600A * 2017-05-17 2017-09-29 清华大学深圳研究生院 Deep learning-based image generation method and generative adversarial network
CN107563509A * 2017-07-17 2018-01-09 华南理工大学 Dynamic adjustment algorithm for conditional DCGAN models based on feature regression
US20180082348A1 * 2016-09-16 2018-03-22 Conduent Business Services, Llc Method and system for data processing to recommend list of physical stores in real-time
US20180101770A1 * 2016-10-12 2018-04-12 Ricoh Company, Ltd. Method and system of generative model learning, and program product
CN107993131A * 2017-12-27 2018-05-04 广东欧珀移动通信有限公司 Clothing wearing recommendation method, apparatus, server and storage medium
CN108022213A * 2017-11-29 2018-05-11 天津大学 Video super-resolution reconstruction algorithm based on a generative adversarial network
CN108109049A * 2017-12-29 2018-06-01 广东欧珀移动通信有限公司 Clothing matching prediction method, apparatus, computer equipment and storage medium
CN108132983A * 2017-12-14 2018-06-08 北京小米移动软件有限公司 Clothing matching recommendation method and apparatus, readable storage medium, electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAO WANG et al.: "CGAN-PLANKTON: Towards large-scale imbalanced class generation and fine-grained classification", IEEE *
XU Yifeng et al.: "A survey of theoretical models and applications of generative adversarial networks", Journal of Jinhua Polytechnic *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829537B * 2019-01-30 2023-10-24 华侨大学 Style transfer method and device for children's clothing based on a deep learning GAN network
CN109829537A * 2019-01-30 2019-05-31 华侨大学 Style transfer method and device for children's clothing based on a deep learning GAN network
CN110135032A * 2019-04-30 2019-08-16 厦门大学 Auxiliary clothing generation method and apparatus based on a generative adversarial network
CN110134395A * 2019-05-17 2019-08-16 广东工业大学 Icon generation method, generation system and related apparatus
CN110441061A * 2019-08-13 2019-11-12 哈尔滨理工大学 Planetary gear bearing life prediction method based on C-DRGAN and AD
CN110647986A * 2019-08-13 2020-01-03 杭州电子科技大学 Road damage image generation method based on an adversarial generative network
CN112435083A * 2019-08-26 2021-03-02 珠海格力电器股份有限公司 Article recommendation method based on brain wave recognition, and electronic device
CN110909770A * 2019-11-05 2020-03-24 上海眼控科技股份有限公司 ACGAN-combined image sample processing method, apparatus, system and medium
CN113555106A * 2020-04-23 2021-10-26 浙江远图互联科技股份有限公司 Intelligent traditional Chinese medicine remote auxiliary diagnosis and treatment platform based on a generative adversarial network
CN111476200B * 2020-04-27 2022-04-19 华东师范大学 Face de-identification generation method based on a generative adversarial network
CN111476200A * 2020-04-27 2020-07-31 华东师范大学 Face de-identification generation method based on a generative adversarial network
CN111783980A * 2020-06-28 2020-10-16 大连理工大学 Ranking learning method based on a dual cooperative generative adversarial network
CN111794741A * 2020-08-11 2020-10-20 中国石油天然气集团有限公司 Method for implementing a sliding directional drilling simulator
CN111794741B * 2020-08-11 2023-08-18 中国石油天然气集团有限公司 Method for implementing a sliding directional drilling simulator
CN112000769A * 2020-08-17 2020-11-27 东北林业大学 Clothing commodity advertisement pattern generation method based on an adversarial network
CN112100908A * 2020-08-31 2020-12-18 西安工程大学 Garment design method based on a multi-condition deep convolutional generative adversarial network
CN112100908B * 2020-08-31 2024-03-22 西安工程大学 Garment design method based on a multi-condition deep convolutional generative adversarial network
CN112230210A * 2020-09-09 2021-01-15 南昌航空大学 HRRP radar target identification method based on improved LSGAN and CNN
CN112329116A * 2020-11-23 2021-02-05 恩亿科(北京)数据科技有限公司 Distortion zero-space planning design generation method and system based on a generative adversarial network
CN113033595A * 2020-12-24 2021-06-25 重庆大学 Multi-label automobile model generation method based on a generative adversarial network
CN112613445A * 2020-12-29 2021-04-06 深圳威富优房客科技有限公司 Face image generation method and apparatus, computer equipment and storage medium
CN112613445B * 2020-12-29 2024-04-30 深圳威富优房客科技有限公司 Face image generation method and apparatus, computer equipment and storage medium
CN112927801A * 2021-01-24 2021-06-08 武汉东湖大数据交易中心股份有限公司 Health prediction model construction method and apparatus based on a dressing meteorological index
CN112927801B * 2021-01-24 2024-05-14 武汉东湖大数据科技股份有限公司 Health prediction model construction method and apparatus based on a dressing meteorological index

Also Published As

Publication number Publication date
CN108829855B (en) 2021-02-19

Similar Documents

Publication Publication Date Title
CN108829855A (en) It is worn based on the clothing that condition generates confrontation network and takes recommended method, system and medium
Xu et al. Imagereward: Learning and evaluating human preferences for text-to-image generation
You et al. Relative CNN-RNN: Learning relative atmospheric visibility from images
Srinivasan et al. Biases in generative art: A causal look from the lens of art history
Liu et al. Cones: Concept neurons in diffusion models for customized generation
CN109657156A Personalized recommendation method based on a cyclic generative adversarial network
CN109754317A Interpretable clothing recommendation method, system, device and medium incorporating comments
CN112308115B (en) Multi-label image deep learning classification method and equipment
Yu et al. DressUp!: outfit synthesis through automatic optimization.
CN112380453B (en) Article recommendation method and device, storage medium and equipment
Li et al. Retrieving real world clothing images via multi-weight deep convolutional neural networks
CN109902912A Personalized image aesthetic evaluation method based on character traits
Zhou et al. Improved cross-label suppression dictionary learning for face recognition
CN110175469A Social media user privacy leakage detection method, system, device and medium
Wang et al. Proximity-based group formation game model for community detection in social network
He et al. Analysis of the communication method of national traditional sports culture based on deep learning
Wang et al. Learning outfit compatibility with graph attention network and visual-semantic embedding
CN110334185A Method and apparatus for processing data in a platform
El Zant et al. Interactions and influence of world painters from the reduced Google matrix of Wikipedia networks
Islam et al. Learning character design from experts and laymen
Mougiakakou et al. SCAPEVIEWER: preliminary results of a landscape perception classification system based on neural network technology
Wang et al. [Retracted] Design of Sports Training Simulation System for Children Based on Improved Deep Neural Network
Tripathi et al. Facial expression recognition using data mining algorithm
CN113947798A (en) Background replacing method, device and equipment of application program and storage medium
CN114283300A (en) Label determining method and device, and model training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant