CN111967930A - Clothing style recognition recommendation method based on multi-network fusion - Google Patents
- Publication number
- CN111967930A (application CN202010661708.7A)
- Authority
- CN
- China
- Prior art keywords
- clothing
- network
- image
- style
- human body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention discloses a clothing style identification recommendation method based on multi-network fusion, comprising the following steps. Step 1: train and save an MNF network model. Step 2: call the MNF network model, acquire a human body image with a camera, and preprocess it to obtain an original image. Step 3: pass the original image through a VGG16 network to obtain the global features of the garment. Step 4: segment the original image to obtain a human body segmentation image. Step 5: pass the human body segmentation image through DenseNet to obtain the local features of the garment. Step 6: fuse the global features from step 3 with the local features from step 5 to obtain the final features of the garment. Step 7: feed the final features to the classifier to obtain a clothing style classification label. Step 8: traverse the offline clothing database with the clothing classification label to obtain clothing recommendations of the same style. The method solves the problem of accurately identifying clothing of different styles in complex environments.
Description
Technical Field
The invention belongs to the technical field of image classification, and particularly relates to a clothing style identification recommendation method based on multi-network fusion.
Background
With the rapid development of online retail, offline stores are also increasingly popular. After the 2020 epidemic, a large number of garment enterprises resumed work and production, offline stores entered a new spring, and consumers began to flow back into garment outlets, stimulating offline consumption. Consumers demand fashionable brands and clothing of different styles, so offline stores need intelligent recognition machines; however, existing clothing recognition technology cannot accurately identify different clothing styles in complex environments. A new method is therefore needed to accurately recognize the style of a consumer entering the store, so that clothing matching the consumer's taste can be recommended quickly and accurately, helping the consumer make a quick selection and promoting the development of offline stores.
Disclosure of Invention
The invention aims to provide a clothing style identification recommendation method based on multi-network fusion (MNF), which solves the problem of accurately identifying clothing of different styles in complex environments and supports the more intelligent development of offline stores.
The technical scheme adopted by the invention is a clothing style identification recommendation method based on multi-network fusion, implemented according to the following steps:
step 1: training and saving an MNF network model;
step 2: calling an MNF network model, acquiring a human body image by using a camera, and preprocessing the human body image to obtain an original image;
step 3: pass the original image through a VGG16 network to obtain the global features of the garment;
step 4: obtain a human body segmentation image by segmenting the original image;
step 5: pass the human body segmentation image through DenseNet to obtain the local features of the garment;
step 6: fuse the global features obtained in step 3 and the local features obtained in step 5 to obtain the final features of the garment;
step 7: obtain a clothing style classification label from the final features through the classifier;
step 8: traverse the offline clothing database with the clothing classification label to obtain clothing recommendations of the same style.
The present invention is also characterized in that,
the step 1 is implemented according to the following steps:
step 1.1: establishing a clothing style database, dividing the clothing style indexes of the clothing style recommendation system into a plurality of categories through online and offline data collection, and giving the clothing style categories and their corresponding indexes;
step 1.2: defining the MNF network model with torch; defining the loss function, namely adopting the cross-entropy loss function; defining the optimizer, namely using SGD;
step 1.3: setting a group of input variables and inputting the data, where each image is represented by a 13-dimensional feature vector X = [x1, x2, …, x13], with X denoting all the clothing style feature vectors and x1, x2, …, x13 the feature vectors of the 13 clothing style categories respectively;
step 1.4: initializing the MNF network parameters, with the weights w^(l) set to random values between 0 and 1, where l denotes the network layer of the MNF;
step 1.5: inputting the training set samples D = {(X^(1), y^(1)), (X^(2), y^(2)), …, (X^(N), y^(N))} into the MNF network, where N denotes the number of samples in the training set, X^(N) denotes all the clothing style feature vectors of the Nth sample image, and y^(N) denotes the true clothing style label of the Nth sample image;
step 1.6: updating the weights W and biases b; forward propagation computes the net input z^(l) and the activation value a^(l) of each layer up to the last layer, and back propagation computes the error δ^(l) of each layer, where l denotes the network layer of the MNF; the derivatives of the parameters of each layer are ∂L(y^(N), ŷ^(N))/∂W^(l) = δ^(l)(a^(l−1))^T and ∂L(y^(N), ŷ^(N))/∂b^(l) = δ^(l), where ŷ^(N) denotes the predicted clothing style label of the Nth sample image, L(·) denotes the error function between y^(N) and ŷ^(N), W^(l) denotes the weights of layer l, b^(l) denotes the biases of layer l, and T denotes the transpose of the vector; the parameters are updated as W^(l) ← W^(l) − α(δ^(l)(a^(l−1))^T + λW^(l)) and b^(l) ← b^(l) − αδ^(l), where λ denotes the regularization coefficient and α denotes the learning rate; the trained MNF model and parameters are saved once the network converges.
The preprocessing of the human body image in step 2 is specifically: the image collected by the camera is transmitted to a computer for preprocessing and converted through DCT (discrete cosine transform) into an RGB original image I0 in jpg format with size 224 × 224 × 3.
Step 3 specifically comprises: the VGG16 network model has 13 convolutional layers and 3 fully connected layers; the first 13 convolutional layers are adopted and output a feature map of size 14 × 14 × 512; that is, inputting the original image I0 into the first 13 convolutional layers of the VGG16 network yields the global feature map f_global(I0).
The specific method for segmenting the human body image in step 4 is: Mask-RCNN is adopted to segment the original image I0, obtaining a background-free human body segmentation image I1.
In step 5, the method for obtaining the local characteristics of the clothing by the human body segmentation image through the DenseNet network specifically comprises the following steps:
step 5.1: constructing the DenseNet network: the DenseNet model combines layers in series: x_l = H_l([x_0, x_1, …, x_(l−1)]), where l denotes the network layer of the DenseNet and H_l(·) is a hybrid function combining three operations, namely BN > ReLU > Conv(3 × 3), with BN denoting the batch normalization algorithm, ReLU the activation function, and Conv(3 × 3) a 3 × 3 convolutional layer; x_l denotes the feature map of layer l obtained through the hybrid function, and [x_0, x_1, …, x_(l−1)] denotes the channel-wise concatenation of the feature maps output by layers 0 to l − 1; accordingly, a transition module of BN > Conv(1 × 1) > GAP, where GAP denotes a global average pooling layer, is added between the Dense Blocks of the DenseNet network, and the first three Dense Blocks are selected as the network framework;
step 5.2: acquiring the local feature map: the human body segmentation image I1 is input into the DenseNet established in step 5.1; after the first three Dense Blocks, a feature map of size 14 × 14 × 512 is obtained as the local feature map f_local(I1).
In step 6, the specific method of fusing the global features obtained in step 3 and the local features obtained in step 5 into the final features of the garment is: the global feature map f_global(I0) passes through a global average pooling layer to highlight all clothing features; the local feature map f_local(I1) passes through a global max pooling layer to highlight the main clothing features; the two pooled features are then fused by weighting to obtain the final feature f_last(I), where I denotes the weighted fusion of the images I0 and I1.
In step 7, the method for obtaining the clothing style classification label from the final features through the classifier is specifically: the final feature f_last(I) passes through two fully connected layers to obtain 4096-dimensional features, which enter the softmax layer of the classifier for the final output, yielding the label of the clothing image.
In step 8, the method for traversing the offline clothing database with the clothing classification label to obtain clothing recommendations of the same style is specifically: the offline clothing database is traversed with the clothing classification label, and a top-k method is adopted, i.e., the top two matching garments are selected as the final recommendation result.
The invention has the following beneficial effects: the clothing style recognition recommendation method based on multi-network fusion extracts features from the whole human body image and from the clothing contour of the segmented human body image, and fuses them to obtain deeper, more discriminative clothing features, thereby solving the problem of accurately identifying clothing of different styles in complex environments and supporting the more intelligent development of offline stores.
Drawings
FIG. 1 is a flowchart of the clothing style identification recommendation method based on multi-network fusion according to the present invention;
FIG. 2 is an example image captured by a camera;
FIG. 3 shows the result of Mask-RCNN segmentation of FIG. 2;
FIG. 4 shows the classification result of FIG. 2 obtained with the multi-network fusion (MNF) model;
FIG. 5 shows the first result of the label matching recommendation of FIG. 4;
FIG. 6 shows the second result of the label matching recommendation of FIG. 4.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention, a clothing style recognition recommendation method based on multi-network fusion, is described below in a PyTorch development environment, taking the MNF-model classification prediction and recommendation for a specific customer image as an embodiment; as shown in FIGS. 1-6, it is implemented according to the following steps:
step 1: training and saving an MNF network model;
the step 1 is implemented according to the following steps:
step 1.1: establishing a clothing style database, dividing the clothing style indexes of the clothing style recommendation system into 13 categories through online and offline data collection; the data set contains 55,000 images, and the 13 clothing style categories and their corresponding indexes are listed in Table 1; the data set can be loaded using torchvision.
TABLE 1 garment Style Classification
step 1.2: defining the MNF network model with torch; defining the loss function, namely adopting the cross-entropy loss function; defining the optimizer, namely using SGD;
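The cross-entropy loss named in step 1.2 can be sketched in plain Python (a minimal illustration of the mathematics only; the embodiment itself would use PyTorch's built-in loss, which combines the same softmax and negative log-likelihood):

```python
import math

def cross_entropy(logits, target):
    """Cross-entropy loss for one sample: softmax over the raw scores,
    then negative log-likelihood of the true class index."""
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -math.log(probs[target])

# Example: 13 raw scores (one per clothing style), true style index 2.
logits = [0.1] * 13
logits[2] = 2.0
loss = cross_entropy(logits, 2)
```

A confident correct prediction yields a small loss, a confident wrong one a large loss, which is what drives the SGD updates of step 1.6.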
step 1.3: setting a group of input variables and inputting the data, where each image is represented by a 13-dimensional feature vector X = [x1, x2, …, x13], with X denoting all the clothing style feature vectors and x1, x2, …, x13 the feature vectors of the 13 clothing style categories respectively;
step 1.4: initializing the MNF network parameters, with the weights w^(l) set to random values between 0 and 1, where l denotes the network layer of the MNF;
step 1.5: inputting the training set samples D = {(X^(1), y^(1)), (X^(2), y^(2)), …, (X^(N), y^(N))} into the MNF network, where N denotes the number of samples in the training set, X^(N) denotes all the clothing style feature vectors of the Nth sample image, and y^(N) denotes the true clothing style label of the Nth sample image;
step 1.6: updating the weights W and biases b; forward propagation computes the net input z^(l) and the activation value a^(l) of each layer up to the last layer, and back propagation computes the error δ^(l) of each layer, where l denotes the network layer of the MNF; the derivatives of the parameters of each layer are ∂L(y^(N), ŷ^(N))/∂W^(l) = δ^(l)(a^(l−1))^T and ∂L(y^(N), ŷ^(N))/∂b^(l) = δ^(l), where ŷ^(N) denotes the predicted clothing style label of the Nth sample image, L(·) denotes the error function between y^(N) and ŷ^(N), W^(l) denotes the weights of layer l, b^(l) denotes the biases of layer l, and T denotes the transpose of the vector; the parameters are updated as W^(l) ← W^(l) − α(δ^(l)(a^(l−1))^T + λW^(l)) and b^(l) ← b^(l) − αδ^(l), where λ denotes the regularization coefficient and α denotes the learning rate; the trained MNF model and parameters are saved once the network converges.
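The update rule of step 1.6 can be sketched for a single layer in pure Python; the layer sizes, learning rate, and regularization coefficient below are illustrative assumptions, not values from the patent:

```python
def sgd_update(W, b, delta, a_prev, alpha=0.01, lam=0.001):
    """One step of the rule from step 1.6:
    W <- W - alpha * (delta (a_prev)^T + lambda * W)
    b <- b - alpha * delta
    W is a list of rows; delta and a_prev are plain lists."""
    grad_W = [[d * a for a in a_prev] for d in delta]   # outer product delta (a_prev)^T
    new_W = [[w - alpha * (g + lam * w) for w, g in zip(w_row, g_row)]
             for w_row, g_row in zip(W, grad_W)]
    new_b = [bi - alpha * d for bi, d in zip(b, delta)]
    return new_W, new_b

# Example: a 2-unit layer fed by 3 activations from the previous layer.
W = [[0.5, -0.2, 0.1],
     [0.3,  0.8, -0.4]]
b = [0.0, 0.1]
delta = [0.2, -0.1]        # back-propagated error of this layer
a_prev = [1.0, 0.5, -1.0]  # activations a^(l-1)
W, b = sgd_update(W, b, delta, a_prev)
```

The λW term implements the L2 regularization mentioned in the update formula; setting lam=0 recovers plain SGD.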
Step 2: calling an MNF network model, acquiring a human body image by using a camera, and preprocessing the human body image to obtain an original image;
the step 2 of preprocessing the human body image specifically comprises the following steps: the image collected by the camera is transmitted to a computer for preprocessing, and is converted into an original image I of RGB with the format of jpg and the size of 224 x 3 through DCT (discrete cosine transformation)0. As illustrated in fig. 2.
step 3: pass the original image through the VGG16 network to obtain the global features of the garment;
the step 3 specifically comprises the following steps: extracting original image I by using VGG16 network0Clothes ofAnd the characteristic is set, and the background of the original image is complex and has more interference information, so that the characteristic can be used as the global characteristic of the image. The VGG16 network model has 13 convolution layers and 3 full-connection layers, the first 13 convolution layers are adopted in the invention, and the feature map with the size of 14 × 512 is output, namely the original image I is input0The image-to-global feature map f can be obtained through the front 13 layers of the VGG16 networkglobal(I0)。
step 4: obtain a human body segmentation image by segmenting the original image;
the specific method for segmenting the human body image in the step 4 comprises the following steps: the Mask-RCNN is adopted to segment the original image I0Obtaining a background-free human body segmentation image I1The Mask-RCNN is a pre-trained network model that can be directly used for segmenting the clothing outline, as shown in fig. 3, clothing of different clothing styles can be separated from a complex background according to the clothing outline.
step 5: pass the human body segmentation image through DenseNet to obtain the local features of the garment;
in step 5, the method for obtaining the local characteristics of the clothing by the human body segmentation image through the DenseNet network specifically comprises the following steps:
step 5.1: constructing the DenseNet network: the DenseNet model combines layers in series: x_l = H_l([x_0, x_1, …, x_(l−1)]), where l denotes the network layer of the DenseNet and H_l(·) is a hybrid function combining three operations, namely BN > ReLU > Conv(3 × 3), with BN denoting the batch normalization algorithm, ReLU the activation function, and Conv(3 × 3) a 3 × 3 convolutional layer; x_l denotes the feature map of layer l obtained through the hybrid function, and [x_0, x_1, …, x_(l−1)] denotes the channel-wise concatenation of the feature maps output by layers 0 to l − 1; the series (concatenation) operation requires the feature maps of different layers to have consistent sizes, and the pooling layer can change the feature map size; therefore, a transition module of BN > Conv(1 × 1) > GAP, where GAP denotes a global average pooling layer, is added between the Dense Blocks of the DenseNet network, and the first three Dense Blocks are selected as the network framework;
step 5.2: acquiring a local feature map: inputting human body segmentation image I through DenseNet established in step 5.11After the first three Dense blocks, a feature map with the size of 14 × 512 is obtained as a local feature map flocal(I1)。
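The dense connectivity rule of step 5.1, x_l = H_l([x_0, …, x_(l−1)]), can be sketched with toy one-dimensional "feature maps"; the hybrid function below is a simplified stand-in, not the actual BN > ReLU > Conv(3 × 3) sequence:

```python
def dense_block(x0, layers, hybrid_fn):
    """Dense connectivity: every layer receives the concatenation of ALL
    earlier feature maps, so the input width grows with each layer."""
    features = [x0]
    for _ in range(layers):
        concat = [v for fmap in features for v in fmap]  # channel concatenation
        features.append(hybrid_fn(concat))
    return features

def toy_h(concat, k=2):
    """Toy stand-in for H_l: ReLU then a fixed-width aggregation,
    with k playing the role of the growth rate."""
    s = sum(max(v, 0.0) for v in concat)
    return [s] * k

maps = dense_block([1.0, -1.0], layers=3, hybrid_fn=toy_h)
```

Each layer's input concatenates every earlier output, which is exactly why the feature map sizes must agree and why transition modules are needed between Dense Blocks.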
Step 6: fusing the global features obtained in the step 3 and the local features obtained in the step 5 to obtain final features of the garment;
in step 6, the specific method for obtaining the final characteristics of the garment by fusing the global characteristics obtained in step 3 and the local characteristics obtained in step 5 comprises the following steps: as shown in fig. 4, a global feature map fglobal(I0) Highlighting all garment features through a global average pooling layer (GAP); local feature map flocal(I1) Highlighting main characteristics of the garment through a global maximum pooling layer (GMP), and performing weighted characteristic fusion on the main characteristics and the main characteristics to obtain a final characteristic flast(I) Wherein I represents I0And I1And (5) weighting and fusing the images.
step 7: obtain a clothing style classification label from the final features through the classifier;
in step 7, the method for obtaining the clothing style classification label through the classifier based on the final characteristics specifically comprises the following steps: final characteristic flast(I) 4096-dimensional features obtained through the two full-connection layers enter a classifier softmax layer to be finally output, and a label of the clothing image is obtained. The image classification label result obtained by the trained MNF network model is shown in fig. 4.
step 8: traverse the offline clothing database with the clothing classification label to obtain clothing recommendations of the same style;
in step 8, the method for obtaining the clothing recommendation result with the same style from the clothing database under the clothing classification label traverse line specifically comprises the following steps: and traversing the clothing database under the clothing classification label line, and adopting a top-k method, namely selecting the clothing recommendation result of the first two as a final recommendation result. Fig. 5-6 show a set of athletic style garments obtained from traversing an offline garment database.
Claims (9)
1. A clothing style identification recommendation method based on multi-network fusion is characterized by comprising the following steps:
step 1: training and saving an MNF network model;
step 2: calling an MNF network model, acquiring a human body image by using a camera, and preprocessing the human body image to obtain an original image;
step 3: passing the original image through a VGG16 network to obtain the global features of the garment;
step 4: obtaining a human body segmentation image by segmenting the original image;
step 5: passing the human body segmentation image through DenseNet to obtain the local features of the garment;
step 6: fusing the global features obtained in step 3 and the local features obtained in step 5 to obtain the final features of the garment;
step 7: obtaining a clothing style classification label from the final features through the classifier;
step 8: traversing the offline clothing database with the clothing classification label to obtain clothing recommendations of the same style.
2. The clothing style recognition recommendation method based on multi-network fusion as claimed in claim 1, wherein the step 1 is implemented according to the following steps:
step 1.1: establishing a clothing style database, dividing the clothing style indexes of the clothing style recommendation system into a plurality of categories through online and offline data collection, and giving the clothing style categories and their corresponding indexes;
step 1.2: defining the MNF network model with torch; defining the loss function, namely adopting the cross-entropy loss function; defining the optimizer, namely using SGD;
step 1.3: setting a group of input variables and inputting the data, where each image is represented by a 13-dimensional feature vector X = [x1, x2, …, x13], with X denoting all the clothing style feature vectors and x1, x2, …, x13 the feature vectors of the 13 clothing style categories respectively;
step 1.4: initializing the MNF network parameters, with the weights w^(l) set to random values between 0 and 1, where l denotes the network layer of the MNF;
step 1.5: inputting the training set samples D = {(X^(1), y^(1)), (X^(2), y^(2)), …, (X^(N), y^(N))} into the MNF network, where N denotes the number of samples in the training set, X^(N) denotes all the clothing style feature vectors of the Nth sample image, and y^(N) denotes the true clothing style label of the Nth sample image;
step 1.6: updating the weights W and biases b; forward propagation computes the net input z^(l) and the activation value a^(l) of each layer up to the last layer, and back propagation computes the error δ^(l) of each layer, where l denotes the network layer of the MNF; the derivatives of the parameters of each layer are ∂L(y^(N), ŷ^(N))/∂W^(l) = δ^(l)(a^(l−1))^T and ∂L(y^(N), ŷ^(N))/∂b^(l) = δ^(l), where ŷ^(N) denotes the predicted clothing style label of the Nth sample image, L(·) denotes the error function between y^(N) and ŷ^(N), W^(l) denotes the weights of layer l, b^(l) denotes the biases of layer l, and T denotes the transpose of the vector; the parameters are updated as W^(l) ← W^(l) − α(δ^(l)(a^(l−1))^T + λW^(l)) and b^(l) ← b^(l) − αδ^(l), where λ denotes the regularization coefficient and α denotes the learning rate; the trained MNF model and parameters are saved once the network converges.
3. The clothing style recognition recommendation method based on multi-network fusion as claimed in claim 1, wherein the preprocessing of the human body image in step 2 specifically comprises: the image collected by the camera is transmitted to a computer for preprocessing and converted through DCT (discrete cosine transform) into an RGB original image I0 in jpg format with size 224 × 224 × 3.
4. The clothing style recognition recommendation method based on multi-network fusion as claimed in claim 1, wherein step 3 specifically comprises: the VGG16 network model has 13 convolutional layers and 3 fully connected layers; the first 13 convolutional layers are adopted and output a feature map of size 14 × 14 × 512; that is, inputting the original image I0 into the first 13 convolutional layers of the VGG16 network yields the global feature map f_global(I0).
5. The clothing style recognition recommendation method based on multi-network fusion as claimed in claim 1, wherein the specific method for segmenting the human body image in step 4 is: Mask-RCNN is adopted to segment the original image I0, obtaining a background-free human body segmentation image I1.
6. The clothing style recognition recommendation method based on multi-network fusion as claimed in claim 1, wherein in step 5, the method for obtaining the local features of clothing from the human body segmentation image through the DenseNet network specifically comprises:
step 5.1: constructing the DenseNet network: the DenseNet model combines layers in series: x_l = H_l([x_0, x_1, …, x_(l−1)]), where l denotes the network layer of the DenseNet and H_l(·) is a hybrid function combining three operations, namely BN > ReLU > Conv(3 × 3), with BN denoting the batch normalization algorithm, ReLU the activation function, and Conv(3 × 3) a 3 × 3 convolutional layer; x_l denotes the feature map of layer l obtained through the hybrid function, and [x_0, x_1, …, x_(l−1)] denotes the channel-wise concatenation of the feature maps output by layers 0 to l − 1; accordingly, a transition module of BN > Conv(1 × 1) > GAP, where GAP denotes a global average pooling layer, is added between the Dense Blocks of the DenseNet network, and the first three Dense Blocks are selected as the network framework;
step 5.2: acquiring the local feature map: the human body segmentation image I1 is input into the DenseNet established in step 5.1; after the first three Dense Blocks, a feature map of size 14 × 14 × 512 is obtained as the local feature map f_local(I1).
7. The clothing style recognition recommendation method based on multi-network fusion as claimed in claim 1, wherein in step 6, the specific method of fusing the global features obtained in step 3 and the local features obtained in step 5 into the final features of the garment is: the global feature map f_global(I0) passes through a global average pooling layer to highlight all clothing features; the local feature map f_local(I1) passes through a global max pooling layer to highlight the main clothing features; the two pooled features are then fused by weighting to obtain the final feature f_last(I), where I denotes the weighted fusion of the images I0 and I1.
8. The clothing style recognition recommendation method based on multi-network fusion as claimed in claim 1, wherein in step 7, the method for obtaining the clothing style classification label from the final features through the classifier is specifically: the final feature f_last(I) passes through two fully connected layers to obtain 4096-dimensional features, which enter the softmax layer of the classifier for the final output, yielding the label of the clothing image.
9. The clothing style recognition and recommendation method based on multi-network fusion as claimed in claim 1, wherein in step 8, the method for traversing the offline clothing database with the clothing classification label to obtain clothing recommendations of the same style is specifically: the offline clothing database is traversed with the clothing classification label, and a top-k method is adopted, i.e., the top two matching garments are selected as the final recommendation result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010661708.7A CN111967930A (en) | 2020-07-10 | 2020-07-10 | Clothing style recognition recommendation method based on multi-network fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010661708.7A CN111967930A (en) | 2020-07-10 | 2020-07-10 | Clothing style recognition recommendation method based on multi-network fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111967930A true CN111967930A (en) | 2020-11-20 |
Family
ID=73361769
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010661708.7A Pending CN111967930A (en) | 2020-07-10 | 2020-07-10 | Clothing style recognition recommendation method based on multi-network fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111967930A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465567A (en) * | 2020-12-14 | 2021-03-09 | 武汉纺织大学 | Clothing style fashion prediction system and method |
CN112528979A (en) * | 2021-02-10 | 2021-03-19 | 成都信息工程大学 | Transformer substation inspection robot obstacle distinguishing method and system |
CN113159826A (en) * | 2020-12-28 | 2021-07-23 | 武汉纺织大学 | Garment fashion element prediction system and method based on deep learning |
CN113160033A (en) * | 2020-12-28 | 2021-07-23 | 武汉纺织大学 | Garment style migration system and method |
CN114821202A (en) * | 2022-06-29 | 2022-07-29 | 武汉纺织大学 | Clothing recommendation method based on user preference |
Application Events
- 2020-07-10: Application CN202010661708.7A filed in China (CN); published as CN111967930A; legal status: Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120117072A1 (en) * | 2010-11-10 | 2012-05-10 | Google Inc. | Automated Product Attribute Selection |
WO2017215669A1 (en) * | 2016-06-17 | 2017-12-21 | 北京市商汤科技开发有限公司 | Method and device for object recognition, data processing device, and computing device |
WO2018165753A1 (en) * | 2017-03-14 | 2018-09-20 | University Of Manitoba | Structure defect detection using machine learning algorithms |
US20190114511A1 (en) * | 2017-10-16 | 2019-04-18 | Illumina, Inc. | Deep Learning-Based Techniques for Training Deep Convolutional Neural Networks |
US20190252073A1 (en) * | 2018-02-12 | 2019-08-15 | Ai.Skopy, Inc. | System and method for diagnosing gastrointestinal neoplasm |
CN108875754A (en) * | 2018-05-07 | 2018-11-23 | 华侨大学 | Vehicle re-identification method based on multi-depth-feature fusion network |
US10671878B1 (en) * | 2019-01-11 | 2020-06-02 | Capital One Services, Llc | Systems and methods for text localization and recognition in an image of a document |
CN110197200A (en) * | 2019-04-23 | 2019-09-03 | 东华大学 | Machine-vision-based clothing electronic tag generation method |
CN111160356A (en) * | 2020-01-02 | 2020-05-15 | 博奥生物集团有限公司 | Image segmentation and classification method and device |
Non-Patent Citations (4)
Title |
---|
Zhang Qian; Liu Li; Fu Xiaodong; Liu Lijun; Huang Qingsong: "Clothing image retrieval combining label optimization and semantic segmentation", Journal of Computer-Aided Design & Computer Graphics * |
Dong Hongyi (ed.): "Deep Learning with PyTorch: Object Detection in Practice" (《深度学习之PyTorch物体检测实战》), China Machine Press * |
Yuan Peisen; Li Wei; Ren Shougang; Xu Huanliang: "Recognition of chrysanthemum flower types and cultivars based on convolutional neural network", Transactions of the Chinese Society of Agricultural Engineering * |
Jia Xiaojun; Ye Lihua; Deng Hongtao; Liu Zihao; Lu Fengjie: "Classification of blue calico pattern primitives based on convolutional neural network", Journal of Textile Research * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465567A (en) * | 2020-12-14 | 2021-03-09 | 武汉纺织大学 | Clothing style fashion prediction system and method |
CN112465567B (en) * | 2020-12-14 | 2022-10-04 | 武汉纺织大学 | Clothing style fashion prediction system and method |
CN113159826A (en) * | 2020-12-28 | 2021-07-23 | 武汉纺织大学 | Garment fashion element prediction system and method based on deep learning |
CN113160033A (en) * | 2020-12-28 | 2021-07-23 | 武汉纺织大学 | Garment style migration system and method |
CN113159826B (en) * | 2020-12-28 | 2022-10-18 | 武汉纺织大学 | Garment fashion element prediction system and method based on deep learning |
CN112528979A (en) * | 2021-02-10 | 2021-03-19 | 成都信息工程大学 | Transformer substation inspection robot obstacle distinguishing method and system |
CN114821202A (en) * | 2022-06-29 | 2022-07-29 | 武汉纺织大学 | Clothing recommendation method based on user preference |
CN114821202B (en) * | 2022-06-29 | 2022-10-04 | 武汉纺织大学 | Clothing recommendation method based on user preference |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111967930A (en) | Clothing style recognition recommendation method based on multi-network fusion | |
CN109325952B (en) | Fashionable garment image segmentation method based on deep learning | |
US11080918B2 (en) | Method and system for predicting garment attributes using deep learning | |
CN108182441B (en) | Parallel multichannel convolutional neural network, construction method and image feature extraction method | |
CN109598268B (en) | RGB-D salient object detection method based on single-stream deep network | |
CN105809672B (en) | Image multi-target collaborative segmentation method based on superpixel and structured constraints | |
CN110033007B (en) | Pedestrian clothing attribute identification method based on depth attitude estimation and multi-feature fusion | |
CN108090472B (en) | Pedestrian re-identification method and system based on multi-channel consistency characteristics | |
EP3300002A1 (en) | Method for determining the similarity of digital images | |
KR101992986B1 (en) | A recommending learning methods of apparel materials using image retrieval | |
CN111340123A (en) | Image score label prediction method based on deep convolutional neural network | |
CN108009560B (en) | Commodity image similarity category judgment method and device | |
CN108319633B (en) | Image processing method and device, server, system and storage medium | |
CN110458178B (en) | Multi-modal multi-fusion RGB-D salient object detection method | |
Huynh et al. | Craft: Complementary recommendation by adversarial feature transform | |
CN112200818A (en) | Image-based dressing area segmentation and dressing replacement method, device and equipment | |
CN114581456B (en) | Multi-image segmentation model construction method, image detection method and device | |
CN110516512B (en) | Training method of pedestrian attribute analysis model, pedestrian attribute identification method and device | |
Liu et al. | Cbl: A clothing brand logo dataset and a new method for clothing brand recognition | |
CN106407281B (en) | Image retrieval method and device | |
CN107622071B (en) | Clothes image retrieval system and method under non-source-retrieval condition through indirect correlation feedback | |
CN112699261A (en) | Automatic clothing image generation system and method | |
CN112508114A (en) | Intelligent clothing recommendation system and method | |
CN108446605A (en) | Two-person interaction behavior recognition method under complex backgrounds | |
KR102057837B1 (en) | Apparatus and method for fabric pattern generation based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||