CN112256895A - Fabric image retrieval method based on multi-task learning - Google Patents

Fabric image retrieval method based on multi-task learning

Info

Publication number
CN112256895A
CN112256895A (application CN202011108362.4A)
Authority
CN
China
Prior art keywords
fabric
fabric image
network
image retrieval
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011108362.4A
Other languages
Chinese (zh)
Inventor
潘如如
向军
张宁
周建
高卫东
王蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN202011108362.4A priority Critical patent/CN112256895A/en
Publication of CN112256895A publication Critical patent/CN112256895A/en
Pending legal-status Critical Current

Classifications

    • G06F16/51 Information retrieval of still image data: indexing; data structures therefor; storage structures
    • G06F16/55 Information retrieval of still image data: clustering; classification
    • G06F16/583 Retrieval of still image data using metadata automatically derived from the content
    • G06N3/045 Neural network architectures: combinations of networks
    • G06N3/047 Neural network architectures: probabilistic or stochastic networks
    • G06N3/084 Learning methods: backpropagation, e.g. using gradient descent


Abstract

The invention discloses a fabric image retrieval method based on multi-task learning, which comprises the following steps: 1) acquire clear fabric images under a stable light source at a fixed resolution; 2) manually label part of the collected fabric images according to a designed knowledge system to establish a data set for training a multi-task learning model; 3) build a multi-task learning model with hard parameter sharing, in which a homoscedastic uncertainty loss weights the losses of the different tasks for back propagation; 4) extract high-dimensional features of all fabric images; 5) train a deep hash coding model with part of the extracted high-dimensional features; 6) encode all extracted high-dimensional features with the trained coding model and store the codes and the corresponding picture link addresses in a database; 7) build a fabric image retrieval system. The invention targets fabric databases with many varieties and a large data scale; users can adjust the feature weights as needed to retrieve more fabrics meeting their requirements, so the method has a good application prospect.

Description

Fabric image retrieval method based on multi-task learning
Technical Field
The invention relates to the field of fabric image retrieval methods, in particular to a fabric image retrieval method based on multi-task learning.
Background
As living standards improve, consumers' demands on textile products are no longer limited to practical performance but tend toward aesthetics and diversity. Product styles change daily and vary widely, so small-batch, multi-variety production is increasingly becoming the new mode for many textile enterprises. When designing a sample, its process parameters must be analyzed manually, and the warehouse and historical production data must be searched for identical or similar products to guide subsequent design and production. Completing this work manually is time-consuming and labor-intensive, its precision and efficiency are low, the product's production cycle becomes too long, and the enterprise's historical production data is underused. The retrieval method currently common in textile enterprises is keyword-based: product images are stored and manually labeled, and related products are found through the labeled keywords. Although such queries are fast, this approach cannot eliminate the subjectivity of manual labeling, offers only a single retrieval mode, and struggles to satisfy different query requirements.
Some studies have applied content-based image retrieval to fabric retrieval; the key to content-based image retrieval is the characterization of image content. Most existing methods for fabric image retrieval represent image content with manually designed (low-order) features, which are not robust, suit only specific classes of fabric, and perform poorly on large-scale fabric image retrieval. Methods that represent images with deep neural networks generally express only a single dimension of the fabric (color or pattern), and for fabrics with fine textures a single dimension is often insufficient to describe the image. In existing content-based fabric retrieval technology, therefore, the image characterization does not describe the fabric comprehensively enough, so fabric images cannot be retrieved accurately.
Disclosure of Invention
In view of the above, the present invention provides a fast, efficient, and accurate fabric image retrieval method based on multi-task learning, which quickly finds related products in a database so that textile enterprises can respond rapidly in production.
Based on the above purpose, the invention provides a fabric image retrieval method based on multi-task learning, which comprises the following steps:
S1: acquire clear fabric images with stable image acquisition equipment under a fixed light source and shooting resolution; a scanner is recommended;
S2: label the acquired images along four dimensions (coarse texture, fine texture, style, and pattern forming mode) to construct a data set for training the image representation model;
S3: build a multi-task learning convolutional neural network framework with the hard parameter sharing method;
S4: train the multi-task fabric representation model on the constructed fabric image data set and extract high-dimensional features;
S5: train a deep hash network model with part of the high-dimensional features extracted in step S4;
S6: encode the extracted high-dimensional features with the deep hash network and store the resulting hash codes in a database;
S7: build a fabric image retrieval system.
In step S3, Resnet is adopted as the backbone of the multi-task learning convolutional neural network framework, each branch task uses fully connected layers, and the loss function of the whole network is the homoscedastic uncertainty loss shown below.
$$\mathcal{L}(W,\sigma_1,\sigma_2,\sigma_3,\sigma_4)=\sum_{i=1}^{4}\frac{1}{\sigma_i^{2}}L_i(W)+\log\sigma_i$$
Where W represents the weights and biases of the designed multi-task learning deep network; σ1, σ2, σ3, σ4 are the weight parameters of the individual task loss functions, which are trained, i.e. learned; and L1, L2, L3, L4 are the Softmax loss functions of the respective tasks.
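The homoscedastic uncertainty weighting described above can be sketched in a few lines. This is an illustrative reconstruction: the patent shows the formula only as an image, so the 1/σ² · Lᵢ + log σᵢ form and the function name `homoscedastic_total_loss` are assumptions patterned on the standard homoscedastic-uncertainty technique.

```python
import math

def homoscedastic_total_loss(task_losses, sigmas):
    """Weight per-task losses by learned uncertainty parameters sigma_i.

    Illustrative sketch only: sums L_i / sigma_i^2 + log(sigma_i) over
    the four tasks, the usual homoscedastic-uncertainty weighting; the
    patent's exact formula is not published in the text.
    """
    total = 0.0
    for L_i, s_i in zip(task_losses, sigmas):
        total += L_i / (s_i ** 2) + math.log(s_i)
    return total
```

With all σᵢ = 1 the weighted total reduces to the plain sum of the task losses; a larger σᵢ down-weights a noisier task while the log σᵢ term keeps σᵢ from growing without bound.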
Further, the multi-task fabric representation model in step S4 has four classification tasks, which guide the learning of the fabric image representation.
Further, in step S2, the labeling standard is that each subdivided class contains at least 1000 samples.
Further, in step S6, the high-order features of the fabric come from the last hidden layer of each of the four task branches, so the fabric is ultimately characterized along the dimensions corresponding to the four tasks.
Further, the deep hash network is a 5-layer fully connected neural network whose activation functions all use ReLU, expressed as:
$$\mathrm{ReLU}(x)=\max(0,x)$$
Further, the final output of the deep hash network is converted into a binary code by the following function:
$$\operatorname{sgn}(v)=\begin{cases}1,&v\ge 0\\-1,&v<0\end{cases}$$
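A minimal sketch of this binarization step. The threshold-at-zero, {-1, +1} convention is an assumption (common in deep hashing; some systems map to {0, 1} bits instead), since the patent shows the coding function only as an image.

```python
import numpy as np

def binarize(h):
    """Elementwise sgn: +1 where the hash-layer output is >= 0, else -1.

    Illustrative sketch; the resulting codes in {-1, +1} can be mapped
    to {0, 1} bit strings for compact storage in a database.
    """
    return np.where(np.asarray(h) >= 0, 1, -1)
```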
Further, the hash algorithm is unsupervised, i.e. training is guided without labeled data; the network learns by itself according to the designed loss function, shown below:
$$\min_{W,c}\ J=L_0+\alpha L_1+\beta L_2+\gamma L_3$$
$$L_0=\tfrac{1}{2}\lVert B-H^{5}\rVert_F^{2}$$
$$L_1=-\tfrac{1}{2N}\operatorname{tr}\!\left(H^{5}(H^{5})^{\mathsf T}\right)$$
$$L_2=\sum_{l}\lVert W^{l}(W^{l})^{\mathsf T}-I\rVert_F^{2}$$
$$L_3=\lVert W^{5}\rVert_F^{2}+\lVert c^{5}\rVert_2^{2}$$
(α, β, γ are coefficients balancing the terms.)
Where W and c represent the weights and biases of the deep hash network to be optimized; L0 is the quantization loss between the generated binary code B and the last-layer output H5 of the hash network; L1 aims to maximize the variance between the generated binary codes so as to balance the different codes; L2 adds an orthogonality constraint to each mapping matrix W to maximize the independence of the mappings, with I the identity matrix; and L3 is a scale regularizer controlling the fifth-layer weight parameter W5 and bias parameter c5 of the deep hash network. The optimization problem is a convex optimization problem.
Further, the deep hash network is optimized with a stochastic gradient descent algorithm and finally outputs a string of binary codes, i.e. hash codes.
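The four loss terms described above can be sketched as follows. The 1/2 scalings and the trace form of the variance term are assumptions patterned on standard unsupervised deep-hashing objectives; the patent publishes the formulas only as images.

```python
import numpy as np

def hash_losses(B, H5, Ws, W5, c5):
    """Sketch of the four unsupervised deep-hashing loss terms as the
    text describes them; exact scaling constants are assumptions.

    B:  N x K binary codes; H5: N x K last-layer outputs;
    Ws: list of per-layer weight matrices; W5, c5: last-layer params.
    """
    N = B.shape[0]
    L0 = 0.5 * np.sum((B - H5) ** 2)              # quantization loss between B and H5
    L1 = -np.trace(H5.T @ H5) / (2 * N)           # maximize code variance (balance bits)
    L2 = sum(np.sum((W @ W.T - np.eye(W.shape[0])) ** 2) for W in Ws)  # near-orthogonal maps
    L3 = np.sum(W5 ** 2) + np.sum(c5 ** 2)        # scale regularizer on layer 5
    return L0, L1, L2, L3
```

When B equals H5 exactly, the quantization term vanishes; the variance term is most negative (best) when the code components are large and balanced.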
Further, the fabric image retrieval system is divided into an offline part and an online part: the offline part covers the construction of the database and the warehousing of new data, while the online part is the process in which a user inputs a fabric image for retrieval.
The invention has the following beneficial effects. The invention targets accurate retrieval of fabric images with fine textures: clear fabric images are acquired under stable illumination and a fixed resolution. Part of the collected fabric images are labeled according to a knowledge base for visually understanding fabric, with at least 1000 fabric images per subclass. The constructed hard parameter sharing multi-task learning framework uses a homoscedastic uncertainty loss, and the parameters of the multi-task deep network are optimized with an Adam optimizer. An unsupervised deep hash network reduces the dimensionality of the high-dimensional features extracted by the multi-task network. The system receives a query picture from the user, finds images with similar features in the database, and feeds them back to the user. The method suits various fabric images, in particular large-scale fabric databases with a wide variety.
Drawings
Fig. 1 is a flowchart of a fabric image retrieval method based on multi-task learning according to a preferred embodiment of the present invention.
Fig. 2 is an example of a captured fabric image.
FIG. 3 is an example of a fabric knowledge system classification.
FIG. 4 is a multi-task learning model framework for parameter hard sharing.
Fig. 5 is a diagram of four residual network structures.
Fig. 6 is a diagram of a deep hash network architecture.
FIG. 7 is a block diagram of an overall search system.
Detailed Description
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The embodiment of the invention provides a fabric image retrieval method based on multitask learning, which comprises the following steps:
s1: clear fabric images are collected by adopting stable image collecting equipment under a stable light environment and a fixed resolution ratio to construct an image database for fabric image retrieval, and 24-bit RGB color images of image positions are collected; the clear image means that the texture structure of the surface of the fabric can be judged by naked eyes, and the pattern edge on the surface of the fabric are clear; for the newly added pictures in the subsequent database, the pictures are also collected in the fixed environment.
S2: establishing a knowledge system of the visual cognitive fabric, namely a classification standard, wherein the classification dimension is from four layers: macro texture (coarse texture) of the surface of the fabric, fine texture (fine texture) of the surface of the fabric, fabric style and pattern forming mode of the surface of the fabric; the fabric style classification should have clear boundaries, and a marking person can distinguish the fabric style classification at a glance without mixing too many subjective components; the fabric surface pattern forming mode only aims at the fabric with patterns on the surface, and the classification can not be considered for pure-color or plain-color fabrics; labeling the partial images acquired in the step 1 according to the established classification system, under normal conditions, simultaneously labeling each fabric image by two labeling personnel, completing labeling of the image if the labeling results are the same, and submitting the image to a third person for re-labeling if the labeling results of the two persons are different, and taking the labeling result of the third person;
S3: build a multi-task learning convolutional neural network framework with the hard parameter sharing method. Hard parameter sharing means that all tasks share part of the parameters while each subtask keeps its own independent parameters; this structure reduces the risk of overfitting. The shared parameter layer uses Resnet as the backbone for extracting aggregated image features; the task-specific parameter layers have identical structures, all fully connected networks with ReLU activations. Each classification task uses a Softmax loss function, and the task losses are weighted and summed with a homoscedastic uncertainty loss, as follows:
$$\mathcal{L}(W,\sigma_1,\sigma_2,\sigma_3,\sigma_4)=\sum_{i=1}^{4}\frac{1}{\sigma_i^{2}}L_i(W)+\log\sigma_i$$
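The hard parameter sharing structure just described (one shared trunk feeding four task-specific branches) can be sketched with plain matrix products standing in for the ResNet trunk. All shapes and names here are illustrative, not the patent's implementation.

```python
import numpy as np

def hard_sharing_forward(x, shared_W, branch_Ws):
    """Minimal sketch of hard parameter sharing: a single shared trunk
    produces one representation that four task-specific fully
    connected branches consume.  A real system would put a ResNet
    trunk here; a matrix product with ReLU stands in for it.
    """
    relu = lambda v: np.maximum(v, 0)
    h = relu(shared_W @ x)                        # shared representation (trunk)
    return [relu(Wb @ h) for Wb in branch_Ws]     # one output per task branch
```

Because every branch reads the same `h`, gradients from all four tasks flow into `shared_W`, which is the regularizing effect the text attributes to hard sharing.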
S4: train and optimize the multi-task learning model built in step S3 with the data set labeled in step S2, updating the network parameters with an Adam optimizer during training. The training state of the model is judged by observing the loss curve: when the curve tends to converge, the model is well trained. The trained multi-task learning model then extracts the four-dimensional features of all images collected in step S1; the output of the last hidden layer of each independent task network is the feature to be extracted, i.e. the fabric's representation vector in the corresponding dimension. All extracted features are stored in a database for later use.
S5: training a deep hash network by using the part of the data extracted in the step 4; the deep hash network is a fully-connected neural network with 5 hidden layers; the output of this neural network is converted into a binary code B by a coding function sgn (v); the Hash algorithm is an unsupervised method, namely training is guided without using labeled data, and the Hash algorithm is obtained by self learning according to a designed loss function, wherein the designed loss function is shown as the following formula:
$$\min_{W,c}\ J=L_0+\alpha L_1+\beta L_2+\gamma L_3$$
$$L_0=\tfrac{1}{2}\lVert B-H^{5}\rVert_F^{2}$$
$$L_1=-\tfrac{1}{2N}\operatorname{tr}\!\left(H^{5}(H^{5})^{\mathsf T}\right)$$
$$L_2=\sum_{l}\lVert W^{l}(W^{l})^{\mathsf T}-I\rVert_F^{2}$$
$$L_3=\lVert W^{5}\rVert_F^{2}+\lVert c^{5}\rVert_2^{2}$$
(α, β, γ are coefficients balancing the terms.)
Where W and c represent the weights and biases of the deep hash network to be optimized; L0 is the quantization loss between the generated binary code B and the last-layer output H5 of the hash network; L1 aims to maximize the variance between the generated binary codes so as to balance the different codes; L2 adds an orthogonality constraint to each mapping matrix W to maximize the independence of the mappings, with I the identity matrix; and L3 is a scale regularizer controlling the fifth-layer weight parameter W5 and bias parameter c5 of the deep hash network. The optimization problem is a convex optimization problem and is optimized with a stochastic gradient descent algorithm.
S6: encode all the features extracted in step S4 with the optimized deep hash network; each fabric image thus corresponds to four strings of binary hash codes, one per dimension. The four strings of hash codes serve as the indexes of the corresponding image for retrieval. The extracted binary hash codes are stored in a database and associated with the links of the corresponding fabric images.
S7: building a fabric image retrieval system, wherein the retrieval system comprises two parts: the method comprises the following steps that 1) operation of the offline part comprises integrating an image feature extraction model 2) extracting features of images in an image database to form a feature database and associating the feature database with corresponding images 3) new data warehousing process design, the online part mainly comprises the steps that a system receives images input by a user, the design process extracts indexes of the input images, and the most similar images in the database are inquired according to the indexes and fed back to the user; the similarity is quantified by the hamming distance between different hash codes.
To illustrate a specific embodiment of the present invention, a scanner was used to acquire a total of 80,000 images of different fabrics at the factory. As a preferred embodiment, a classification system for cognizing fabric was designed, see fig. 3. Fig. 1 is a flowchart of the fabric image retrieval method based on multi-task learning according to a preferred embodiment of the present invention.
The method of the embodiment comprises the following steps:
s1: a scanner is used as an image acquisition device to acquire a fabric image, see fig. 2.
S2: and (3) designing a classification system of the cognitive fabric according to the characteristics of the collected image, referring to fig. 3, and manually labeling the 30,000 pictures acquired in the step (1) according to the classification system.
S3: a multi-task learning model is built by using a parameter hard sharing method, the building process is based on a deep learning framework, all tasks are classified tasks, each subtask uses a softmax loss function, and an activation function is Relu.
In this step, the multi-task learning model refers to fig. 4. The shared layer uses Resnet50 as the main feature aggregation network and takes RGB images of size 224 × 224 as input. The convolutional layer with kernel size 7 × 7, depth 64, and stride 2 outputs a 112 × 112 feature map; the following 3 × 3 max pooling layer with stride 2 outputs a 56 × 56 feature map. After the 3 residual structures shown in fig. 5(a) the output feature map is 28 × 28; after the 4 residual structures shown in fig. 5(b) it is 14 × 14; after the 6 residual structures shown in fig. 5(c) it is 7 × 7; and after the 3 residual structures shown in fig. 5(d) the aggregated output feature has size 2048 × 1, which serves as the input to each task branch's fully connected network. The node counts of each layer of the four branch networks are {2048, 2048, 1024, 1024}, all with ReLU activations; the losses of the four branches are weighted with the homoscedastic uncertainty loss.
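The feature-map sizes quoted above follow from the standard convolution arithmetic floor((n + 2p - k)/s) + 1. The padding values below are assumptions (the usual ResNet-50 choices) picked to reproduce the quoted sizes; the patent does not state them.

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size of a conv/pool layer:
    floor((size + 2*padding - kernel) / stride) + 1.
    Padding values used with it below are assumed, not from the patent.
    """
    return (size + 2 * padding - kernel) // stride + 1

# 224x224 input -> 7x7 conv, stride 2, padding 3 -> 112x112
# 112x112      -> 3x3 max pool, stride 2, padding 1 -> 56x56
```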
S4: train the multi-task learning model built in step S3 with the image data labeled in step S2, optimizing the model parameters with Adam during back propagation. The training state is judged by observing the loss curves of the model on the training and test sets; when the curves tend to converge to a certain value, the model is well trained. The trained multi-task learning model extracts the four-dimensional aggregated features of all images collected in step S1; the output of the last hidden layer of each independent task network is the feature to be extracted, with feature dimension 1024, i.e. the fabric's representation vector in the corresponding dimension. All extracted features are stored for later use.
S5: train a deep hash network with the features partially extracted in step S4; for the structure of the deep hash network refer to fig. 6. The hash network in this embodiment is a five-layer fully connected neural network with {1024, 512, 512, 256, 128} neurons per layer; the output of the last layer is a 128-dimensional vector, which the coding function sgn(v) converts into a 128-dimensional binary code B. The hash algorithm is completely unsupervised, i.e. training is guided without labeled data; the network optimizes itself according to the designed loss function, shown below:
$$\min_{W,c}\ J=\tfrac{1}{2}\lVert B-H^{5}\rVert_F^{2}-\tfrac{\alpha}{2N}\operatorname{tr}\!\left(H^{5}(H^{5})^{\mathsf T}\right)+\beta\sum_{l}\lVert W^{l}(W^{l})^{\mathsf T}-I\rVert_F^{2}+\gamma\left(\lVert W^{5}\rVert_F^{2}+\lVert c^{5}\rVert_2^{2}\right)$$
The optimization problem is a convex optimization problem; it is optimized with a stochastic gradient descent algorithm at a learning rate of 0.0001, and the loss of the deep hash model converges after 300 optimization iterations.
S6: encode all the high-dimensional features extracted in step S4 with the deep hash model trained in step S5; each picture yields four 128-dimensional hash codes. The four strings of codes and the link address of the corresponding picture are stored together in a sqlite database for retrieval.
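Storing the four per-image codes together with the image link in sqlite, as this step describes, could look like the following. The table name, column names, and the choice of storing codes as bit strings are assumptions; the patent gives no schema.

```python
import sqlite3

def store_codes(db_path, rows):
    """Store four per-image hash codes alongside the image link in a
    sqlite database, per the embodiment's description.

    rows: iterable of (link, code1, code2, code3, code4) tuples;
    the `fabric_index` table layout is illustrative, not the patent's.
    """
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS fabric_index ("
        "link TEXT PRIMARY KEY, coarse TEXT, fine TEXT, style TEXT, pattern TEXT)"
    )
    con.executemany("INSERT OR REPLACE INTO fabric_index VALUES (?,?,?,?,?)", rows)
    con.commit()
    return con
```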
S7: a fabric image retrieval system is built, and a system framework built in the embodiment refers to FIG. 7; the retrieval system is two parts: the online part receives images input by a user, extracts indexes of the images, queries the most similar images in the database according to the indexes and feeds the most similar images back to the user according to Hamming distances.

Claims (8)

1. A fabric image retrieval method based on multi-task learning, characterized by comprising the following steps:
S1: acquire clear fabric images with stable image acquisition equipment under a fixed light source and shooting resolution; a scanner is recommended;
S2: label the acquired images along four dimensions (coarse texture, fine texture, style, and pattern forming mode) to construct a data set for training the image representation model;
S3: build a multi-task learning convolutional neural network framework with the hard parameter sharing method;
S4: train the multi-task fabric representation model on the constructed fabric image data set and extract high-dimensional features;
S5: train a deep hash network model with part of the high-dimensional features extracted in step S4;
S6: encode the extracted high-dimensional features with the deep hash network and store the resulting hash codes in a database;
S7: build a fabric image retrieval system;
wherein in step S3, Resnet is adopted as the backbone of the multi-task learning convolutional neural network framework, each branch task uses fully connected layers, and the loss function of the whole network is the homoscedastic uncertainty loss shown below;
$$\mathcal{L}(W,\sigma_1,\sigma_2,\sigma_3,\sigma_4)=\sum_{i=1}^{4}\frac{1}{\sigma_i^{2}}L_i(W)+\log\sigma_i$$
Where W represents the weights and biases of the designed multi-task learning deep network; σ1, σ2, σ3, σ4 are the weight parameters of the individual task loss functions, which are trained, i.e. learned; and L1, L2, L3, L4 are the Softmax loss functions of the respective tasks.
2. The fabric image retrieval method based on multi-task learning as claimed in claim 1, characterized in that the deep hash network is a 5-layer fully connected neural network whose activation functions all use ReLU, expressed as:
$$\mathrm{ReLU}(x)=\max(0,x)$$
the final output of the deep hash network is converted into binary code by the following function:
$$\operatorname{sgn}(v)=\begin{cases}1,&v\ge 0\\-1,&v<0\end{cases}$$
the hash algorithm of the deep hash network is an unsupervised method, namely, the training is guided without using labeled data, and the hash algorithm is obtained by self-learning according to a designed loss function, wherein the designed loss function is shown as the following formula:
$$\min_{W,c}\ J=L_0+\alpha L_1+\beta L_2+\gamma L_3$$
$$L_0=\tfrac{1}{2}\lVert B-H^{5}\rVert_F^{2}$$
$$L_1=-\tfrac{1}{2N}\operatorname{tr}\!\left(H^{5}(H^{5})^{\mathsf T}\right)$$
$$L_2=\sum_{l}\lVert W^{l}(W^{l})^{\mathsf T}-I\rVert_F^{2}$$
$$L_3=\lVert W^{5}\rVert_F^{2}+\lVert c^{5}\rVert_2^{2}$$
(α, β, γ are coefficients balancing the terms.)
Where W and c represent the weights and biases of the deep hash network to be optimized; L0 is the quantization loss between the generated binary code B and the last-layer output H5 of the hash network; L1 aims to maximize the variance between the generated binary codes so as to balance the different codes; L2 adds an orthogonality constraint to each mapping matrix W to maximize the independence of the mappings, with I the identity matrix; and L3 is a scale regularizer controlling the fifth-layer weight parameter W5 and bias parameter c5 of the deep hash network. The optimization problem is a convex optimization problem; the deep hash network is optimized with a stochastic gradient descent algorithm and finally outputs a string of binary codes, i.e. hash codes.
3. The fabric image retrieval method based on multi-task learning according to claim 1 or 2, characterized in that in step S2 the labeling standard is that each subdivided class contains at least 1000 samples; and the multi-task fabric representation model in step S4 has four classification tasks, which guide the learning of the fabric image representation.
4. The fabric image retrieval method based on multi-task learning as claimed in claim 1 or 2, characterized in that in step S6 the high-order features of the fabric come from the last hidden layer of each of the four task branches, so the fabric is ultimately characterized along the dimensions corresponding to the four tasks.
5. The fabric image retrieval method based on multi-task learning as claimed in claim 3, characterized in that in step S6 the high-order features of the fabric come from the last hidden layer of each of the four task branches, so the fabric is ultimately characterized along the dimensions corresponding to the four tasks.
6. The fabric image retrieval method based on multi-task learning as claimed in claim 1, 2 or 5, characterized in that the fabric image retrieval system is divided into an offline part and an online part: the offline part covers the construction of the database and the warehousing of new data, while the online part is the process in which a user inputs a fabric image for retrieval.
7. The fabric image retrieval method based on multi-task learning as claimed in claim 3, characterized in that the fabric image retrieval system is divided into an offline part and an online part: the offline part covers the construction of the database and the warehousing of new data, while the online part is the process in which a user inputs a fabric image for retrieval.
8. The fabric image retrieval method based on multi-task learning as claimed in claim 4, characterized in that the fabric image retrieval system is divided into an offline part and an online part: the offline part covers the construction of the database and the warehousing of new data, while the online part is the process in which a user inputs a fabric image for retrieval.
CN202011108362.4A 2020-10-16 2020-10-16 Fabric image retrieval method based on multi-task learning Pending CN112256895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011108362.4A CN112256895A (en) 2020-10-16 2020-10-16 Fabric image retrieval method based on multi-task learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011108362.4A CN112256895A (en) 2020-10-16 2020-10-16 Fabric image retrieval method based on multi-task learning

Publications (1)

Publication Number Publication Date
CN112256895A 2021-01-22

Family

ID=74244310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011108362.4A Pending CN112256895A (en) 2020-10-16 2020-10-16 Fabric image retrieval method based on multi-task learning

Country Status (1)

Country Link
CN (1) CN112256895A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112800260A (en) * 2021-04-09 2021-05-14 北京邮电大学 Multi-label image retrieval method and device based on deep hash energy model
WO2022256962A1 (en) * 2021-06-07 2022-12-15 浙江大学 Freestyle acquisition method for high-dimensional material

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108984642A (en) * 2018-06-22 2018-12-11 西安工程大学 A kind of PRINTED FABRIC image search method based on Hash coding
CN110188227A (en) * 2019-05-05 2019-08-30 华南理工大学 A kind of hashing image search method based on deep learning and low-rank matrix optimization
CN111125397A (en) * 2019-11-28 2020-05-08 苏州正雄企业发展有限公司 Cloth image retrieval method based on convolutional neural network
WO2020182019A1 (en) * 2019-03-08 2020-09-17 苏州大学 Image search method, apparatus, device, and computer-readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JUN XIANG et al.: "Fabric Image Retrieval System Using Hierarchical Search Based on Deep Convolutional Neural Network", IEEE ACCESS, vol. 7, pages 35405 - 35417, XP011716330, DOI: 10.1109/ACCESS.2019.2898906 *

Similar Documents

Publication Publication Date Title
CN104834748B (en) It is a kind of to utilize the image search method based on deep semantic sequence Hash coding
CN108595636A (en) The image search method of cartographical sketching based on depth cross-module state correlation study
Gaur Neural networks in data mining
CN108932314A (en) A kind of chrysanthemum image content retrieval method based on the study of depth Hash
CN110309195B (en) FWDL (full Width Domain analysis) model based content recommendation method
CN110175235A (en) Intelligence commodity tax sorting code number method and system neural network based
CN108984642A (en) A kind of PRINTED FABRIC image search method based on Hash coding
CN112256895A (en) Fabric image retrieval method based on multi-task learning
CN111563770A (en) Click rate estimation method based on feature differentiation learning
CN114693397A (en) Multi-view multi-modal commodity recommendation method based on attention neural network
WO2020233245A1 (en) Method for bias tensor factorization with context feature auto-encoding based on regression tree
CN112733602A (en) Relation-guided pedestrian attribute identification method
Nawaz et al. Automatic categorization of traditional clothing using convolutional neural network
CN110569761A (en) Method for retrieving remote sensing image by hand-drawn sketch based on counterstudy
CN116244484B (en) Federal cross-modal retrieval method and system for unbalanced data
Yang Clothing design style recommendation using decision tree algorithm combined with deep learning
Wang et al. A convolutional neural network image classification based on extreme learning machine
CN116662532A (en) Neural time gate self-adaptive fusion session recommendation method based on graph neural network
CN116258938A (en) Image retrieval and identification method based on autonomous evolution loss
Zheng et al. An end-to-end image retrieval system based on gravitational field deep learning
CN112667919A (en) Personalized community correction scheme recommendation system based on text data and working method thereof
CN114254199A (en) Course recommendation method based on bipartite graph projection and node2vec
CN114741590A (en) Multi-interest recommendation method based on self-attention routing and Transformer
CN114564594A (en) Knowledge graph user preference entity recall method based on double-tower model
CN112307288A (en) User clustering method for multiple channels

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination