CN116307446B - Clothing supply chain management system - Google Patents

Info

Publication number
CN116307446B
CN116307446B (application CN202211546069.5A)
Authority
CN
China
Prior art keywords
fabric
feature
demand
image
purchasing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211546069.5A
Other languages
Chinese (zh)
Other versions
CN116307446A (en)
Inventor
梁家明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Xingxiang Network Technology Co ltd
Original Assignee
Zhejiang Xingxiang Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Xingxiang Network Technology Co ltd filed Critical Zhejiang Xingxiang Network Technology Co ltd
Priority to CN202211546069.5A priority Critical patent/CN116307446B/en
Publication of CN116307446A publication Critical patent/CN116307446A/en
Application granted granted Critical
Publication of CN116307446B publication Critical patent/CN116307446B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/04Manufacturing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Multimedia (AREA)
  • Strategic Management (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Operations Research (AREA)
  • Biomedical Technology (AREA)
  • Manufacturing & Machinery (AREA)
  • Educational Administration (AREA)
  • Primary Health Care (AREA)
  • Development Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of supply chain management, and particularly discloses a clothing supply chain management system. The system first acquires a fabric purchasing requirement and a detection image of the purchased fabric to be evaluated; it then maps the fabric purchasing requirement and the detection image into a high-dimensional feature space through a Clip model and a convolutional neural network model, respectively, and judges whether the purchased fabric to be evaluated meets the fabric purchasing requirement by computing the difference between the two feature distributions in that space. In this way, a fitness analysis between the fabric purchasing requirement and the purchased fabric to be evaluated is performed in the high-dimensional feature space.

Description

Clothing supply chain management system
Technical Field
The present application relates to the field of supply chain management technology, and more particularly, to a garment supply chain management system.
Background
The clothing supply chain is a functional network chain structure centered on a core enterprise (the clothing brand): it starts from the designer's draft, produces sample garments and final bulk goods by integrating fabrics, trims and garment components, delivers the clothing to consumers through a sales network, and connects suppliers (of fabrics and trims), clothing factories, brand agencies and distributors, down to the final consumer, into a whole. The quality of an industry-level clothing supply chain system often directly determines the comprehensive strength and competitiveness of enterprises and the industry.
In clothing supply chain management, and especially in quality management, fabric detection is a very important link. Existing fabric detection evaluates whether the purchased fabric matches the preset fabric performance through manual quality inspection, which is inefficient and prone to subjective bias.
Accordingly, an optimized garment supply chain management system is desired.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems. The embodiment of the application provides a clothing supply chain management system that first acquires a fabric purchasing requirement and a detection image of the purchased fabric to be evaluated; it then maps the fabric purchasing requirement and the detection image into a high-dimensional feature space through a Clip model and a convolutional neural network model, respectively, and judges whether the purchased fabric to be evaluated meets the fabric purchasing requirement by computing the difference between the two feature distributions in that space, thereby performing a fitness analysis between the fabric purchasing requirement and the purchased fabric to be evaluated in the high-dimensional feature space.
According to one aspect of the present application, there is provided a garment supply chain management system comprising:
a fabric purchasing demand retrieving module, configured to acquire a fabric purchasing requirement, where the fabric purchasing requirement comprises a demand text description and a display picture;
a purchasing demand encoding module, configured to pass the fabric purchasing requirement through a Clip model to obtain a fabric demand image feature matrix;
a purchasing fabric image acquisition module, configured to acquire a detection image of the purchased fabric to be evaluated;
a purchasing fabric image feature extraction module, configured to pass the detection image through a convolutional neural network model serving as a feature extractor to obtain a detection feature matrix;
a difference module, configured to calculate a differential feature matrix between the detection feature matrix and the fabric demand image feature matrix;
a small-scale optimization module, configured to perform feature distribution optimization on the differential feature matrix to obtain an optimized differential feature matrix; and
a management result generation module, configured to pass the optimized differential feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether the purchased fabric to be evaluated meets the fabric purchasing requirement.
In the above clothing supply chain management system, the purchasing demand encoding module includes:
a display picture coding unit, configured to encode the display picture using an image encoder of the Clip model to obtain an image feature vector;
a demand text description coding unit, configured to semantically encode the demand text description using a sequence encoder of the Clip model to obtain a demand description semantic feature vector; and
a joint optimization coding unit, configured to optimize the coding of the image feature vector based on the demand description semantic feature vector using a joint coding module of the Clip model to obtain the fabric demand image feature matrix.
In the above clothing supply chain management system, the display picture coding unit is further configured to: each layer of the image encoder of the Clip model performs convolution processing, pooling processing and nonlinear activation processing on the input data in the forward pass of the layers, so that the last layer of the image encoder of the Clip model outputs the image feature vector.
In the above clothing supply chain management system, the demand text description encoding unit includes:
a word segmentation processing subunit, configured to perform word segmentation on the demand text description to obtain a word sequence;
an embedding vectorization subunit, configured to map each word in the word sequence into a word embedding vector using an embedding layer of the sequence encoder of the Clip model to obtain a sequence of word embedding vectors;
a context coding subunit, configured to perform global context semantic coding on the sequence of word embedding vectors using a transformer-based BERT model of the sequence encoder of the Clip model to obtain a plurality of feature vectors; and
a cascading subunit, configured to concatenate the plurality of feature vectors to obtain the demand description semantic feature vector.
In the above clothing supply chain management system, the joint optimization coding unit is further configured to: based on the demand description semantic feature vector, perform image attribute coding optimization on the image feature vector with the following formula to obtain the fabric demand image feature matrix:

M_b = V_2^T ⊗ V_1

wherein M_b is the fabric demand image feature matrix, V_1 is the demand description semantic feature vector, V_2 is the image feature vector, and ⊗ denotes matrix multiplication.
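As a minimal numeric sketch of this joint coding step (assuming, per the description later in this document, that the fabric demand image feature matrix is the matrix product of the transposed image feature vector and the demand description semantic feature vector, i.e. an outer product of the two vectors; the vector values below are illustrative only):

```python
import numpy as np

def fabric_demand_image_feature_matrix(image_vec, demand_vec):
    """Outer product of the image feature vector (transposed) and the
    demand description semantic feature vector, so the image coding is
    modulated along the direction of the demand semantics."""
    return np.outer(image_vec, demand_vec)

v_image = np.array([1.0, 2.0, 3.0])    # V_2: image feature vector (toy values)
v_demand = np.array([0.5, -1.0])       # V_1: demand semantic vector (toy values)
m_b = fabric_demand_image_feature_matrix(v_image, v_demand)
print(m_b.shape)  # (3, 2)
```

Each row of the resulting matrix is the image feature scaled by the demand semantics, which is one simple way to realize "changing the coding of the image along the specific direction of the demand description semantic feature vector".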
In the above clothing supply chain management system, the purchasing fabric image feature extraction module is further configured to: each layer of the convolutional neural network model performs the following steps on the input data in the forward pass of the layer:
performing convolution processing on the input data based on a two-dimensional convolution kernel to obtain a convolution feature map;
performing pooling processing based on a local feature matrix on the convolution feature map to obtain a pooled feature map; and
performing nonlinear activation on the pooled feature map to obtain an activation feature map;
wherein the input of the first layer of the convolutional neural network model is the detection image, the input of each of the second to last layers is the output of the previous layer, and the output of the last layer of the convolutional neural network model is the detection feature matrix.
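The conv / pool / activation loop described above can be sketched as follows. This is a toy single-channel NumPy version with random, untrained kernels, not the patent's actual network:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution (single channel) with a square kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over local patches."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0.0)

def cnn_forward(image, kernels):
    """Stack of conv -> pool -> activation layers; the final output plays
    the role of the detection feature matrix."""
    feat = image
    for k in kernels:
        feat = relu(max_pool(conv2d(feat, k)))
    return feat

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))            # stand-in detection image
kernels = [rng.standard_normal((3, 3)) for _ in range(2)]
detection_feature_matrix = cnn_forward(image, kernels)
print(detection_feature_matrix.shape)  # (6, 6)
```

Each iteration of the loop is one layer: the input of the first layer is the image, each later layer consumes the previous layer's output, exactly as the module describes.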
In the above clothing supply chain management system, the difference module is further configured to: calculate the differential feature matrix between the detection feature matrix and the fabric demand image feature matrix according to the following formula:

M_c = M_a ⊖ M_b

wherein M_a denotes the detection feature matrix, M_b denotes the fabric demand image feature matrix, M_c denotes the differential feature matrix, and ⊖ denotes position-wise difference.
In the above clothing supply chain management system, the small-scale optimization module includes:
an optimization weight matrix generation unit, configured to calculate the small-scale local derivative feature matrix of the detection feature matrix and the fabric demand image feature matrix; and
a weighted optimization unit, configured to multiply the differential feature matrix position-wise by the small-scale local derivative feature matrix serving as a weight matrix to obtain the optimized differential feature matrix.
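The two units above can be sketched as follows. Since the patent's exact small-scale local derivative formula is not recoverable from this text, the weight below is an illustrative stand-in (a position-wise ratio of feature magnitudes), not the patented expression:

```python
import numpy as np

def small_scale_weight(m1, m2, eps=1e-6):
    """Illustrative stand-in for the small-scale local derivative feature
    matrix: a position-wise ratio built from the feature values of M1 and
    M2 (NOT the patent's exact formula)."""
    return np.abs(m1) / (np.abs(m2) + eps)

def optimize_differential(detection, demand):
    diff = detection - demand                    # differential feature matrix M_c
    weight = small_scale_weight(detection, demand)
    return weight * diff                         # position-wise (dot) multiplication

rng = np.random.default_rng(1)
m_a = rng.standard_normal((4, 4))                # toy detection feature matrix
m_b = rng.standard_normal((4, 4))                # toy fabric demand image feature matrix
optimized = optimize_differential(m_a, m_b)
print(optimized.shape)  # (4, 4)
```

Whatever the concrete weight formula, the structure is the same: derive a weight matrix from the two source matrices, then weight the differential feature matrix value by value.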
In the above clothing supply chain management system, the optimization weight matrix generation unit is further configured to: calculate the small-scale local derivative feature matrix of the detection feature matrix and the fabric demand image feature matrix, wherein m^a_(i,j) is the feature value of the (i,j)-th position of the detection feature matrix, m^b_(i,j) is the feature value of the (i,j)-th position of the fabric demand image feature matrix, and m^w_(i,j) is the feature value of the (i,j)-th position of the small-scale local derivative feature matrix.
In the above clothing supply chain management system, the management result generation module includes:
a classification feature vector acquisition unit, configured to perform full-connection coding on the optimized differential feature matrix using a fully connected layer of the classifier to obtain a classification feature vector;
a probability value calculation unit, configured to pass the classification feature vector through a Softmax classification function of the classifier to obtain a first probability that the purchased fabric to be evaluated meets the fabric purchasing requirement and a second probability that it does not; and
a result determination unit, configured to determine the classification result based on a comparison between the first probability and the second probability.
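A minimal sketch of the classifier described by these three units, with random untrained weights standing in for the learned fully connected layer:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(optimized_diff, W, b):
    """Full-connection coding of the flattened optimized differential
    feature matrix, followed by a two-class softmax: index 0 = fabric
    meets the purchasing requirement, index 1 = it does not."""
    v = optimized_diff.ravel()         # classification feature vector
    p = softmax(W @ v + b)             # [first_probability, second_probability]
    label = "meets requirement" if p[0] >= p[1] else "does not meet requirement"
    return label, p

rng = np.random.default_rng(2)
m = rng.standard_normal((4, 4))        # toy optimized differential feature matrix
W = rng.standard_normal((2, 16))       # untrained weights, for shape only
b = np.zeros(2)
label, probs = classify(m, W, b)
print(label, probs)
```

The result determination unit then reduces to the comparison `p[0] >= p[1]` between the two probabilities.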
Compared with the prior art, the clothing supply chain management system provided by the present application first acquires a fabric purchasing requirement and a detection image of the purchased fabric to be evaluated; it then maps the fabric purchasing requirement and the detection image into a high-dimensional feature space through a Clip model and a convolutional neural network model, respectively, and judges whether the purchased fabric to be evaluated meets the fabric purchasing requirement by computing the difference between the two feature distributions in that space. A fitness analysis between the fabric purchasing requirement and the purchased fabric to be evaluated is thus performed in the high-dimensional feature space.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of embodiments of the present application with reference to the accompanying drawings. The accompanying drawings provide a further understanding of the embodiments of the application and constitute a part of this specification; together with the embodiments, they serve to explain the application and do not constitute a limitation of it. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a block diagram schematic of a garment supply chain management system according to an embodiment of the application.
FIG. 2 is a block diagram of a purchasing demand encoding module in a garment supply chain management system according to an embodiment of the application.
FIG. 3 is a block diagram of a demand text description encoding unit in a garment supply chain management system according to an embodiment of the present application.
Fig. 4 is a block diagram of a management result generation module in a clothing supply chain management system according to an embodiment of the present application.
FIG. 5 is a flowchart of a method of clothing supply chain management according to an embodiment of the application.
FIG. 6 is a schematic diagram of a system architecture of a method for managing a clothing supply chain according to an embodiment of the application.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Scene overview
As described above, fabric detection is a very important link in garment supply chain management, especially in quality management. Existing fabric detection evaluates whether the purchased fabric matches the preset fabric performance through manual quality inspection, which is inefficient and prone to subjective bias. Accordingly, an optimized garment supply chain management system is desired.
In this technical scheme, it is considered that when fabric purchasing is carried out, a fabric purchasing requirement is first submitted in the clothing supply chain system. The fabric purchasing requirement comprises a demand text description and a display picture, wherein the demand text description includes the type, texture and purchasing purpose of the fabric to be purchased, and the display picture is image data of a related fabric that meets the requirement. Meanwhile, after the corresponding fabric is purchased, a detection image of the purchased fabric is photographed and stored in the clothing supply chain system. Therefore, in the technical scheme of the present application, the fabric purchasing requirement and the purchased fabric to be evaluated can be analyzed to determine whether the purchased fabric to be evaluated meets the fabric purchasing requirement.
Specifically, considering that the fabric purchasing requirement contains multi-modal data (the demand text description is text data and the display picture is image data), and in order to fully mine the information within each modality and the associations among the modalities, the fabric purchasing requirement is passed through a Clip model to obtain the fabric demand image feature matrix.
In the technical scheme of the present application, the Clip model comprises an image encoder and a sequence encoder in parallel, together with a joint coding module cascaded with both. In the coding process of the Clip model, the image encoder first encodes the display picture to obtain an image feature vector; at the same time, the sequence encoder of the Clip model semantically encodes the demand text description to obtain a demand description semantic feature vector; then the joint coding module of the Clip model optimizes the coding of the image feature vector based on the demand description semantic feature vector to obtain the fabric demand image feature matrix.
More specifically, in one specific example of the present application, the image encoder of the Clip model uses a deep convolutional neural network model as the image feature extraction network. Those skilled in the art will appreciate that, compared with traditional image feature extraction algorithms based on statistical models or feature engineering, the deep convolutional neural network model performs excellently in image feature extraction, has stronger generalization and characterization capabilities, and is free of the limitations of artificial experience and knowledge. In one specific example, the deep convolutional neural network model is a deep residual network model, e.g., ResNet 50, ResNet 150, etc.
In a specific example of the present application, the sequence encoder of the Clip model is a context encoder comprising a word embedding layer. During encoding, word segmentation is first performed on the demand text description to obtain a word sequence; the word embedding layer then maps each word in the word sequence into a word embedding vector to obtain a sequence of word embedding vectors; finally, the context encoder performs global context semantic encoding on the sequence of word embedding vectors using a transformer-based semantic encoder (e.g., a transformer-based BERT model) to obtain the demand description semantic feature vector.
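The segment / embed / context-encode pipeline can be sketched as below. Everything here is a stand-in: a whitespace tokenizer instead of a real word segmenter, random cached vectors instead of a learned embedding table, and a single self-attention mixing step instead of a BERT model:

```python
import numpy as np

def segment(text):
    """Toy word segmentation: whitespace split (real systems would use a
    proper word segmenter or subword tokenizer)."""
    return text.lower().split()

def embed(words, table, dim=8):
    """Toy embedding layer: random vectors cached per word, standing in
    for a learned embedding table."""
    vecs = []
    for w in words:
        if w not in table:
            rng = np.random.default_rng(abs(hash(w)) % (2 ** 32))
            table[w] = rng.standard_normal(dim)
        vecs.append(table[w])
    return np.stack(vecs)

def context_encode(embeddings):
    """Stand-in for the transformer-based context encoder: one
    self-attention mixing step producing a context-aware vector per word."""
    scores = embeddings @ embeddings.T / np.sqrt(embeddings.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ embeddings

table = {}
words = segment("soft breathable cotton fabric for summer shirts")
features = context_encode(embed(words, table))
semantic_vector = features.ravel()     # cascade (concatenate) the per-word features
print(semantic_vector.shape)
```

The final `ravel` mirrors the cascading subunit: the per-word feature vectors are concatenated into one demand description semantic feature vector.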
Furthermore, the joint coding module of the Clip model optimizes the coding of the image feature vector based on the demand description semantic feature vector to obtain the fabric demand image feature matrix. In one specific example of the present application, the joint coding module does so by calculating the product between the transposed image feature vector and the demand description semantic feature vector: the coding of the image is thereby changed along the specific direction of the demand description semantic feature vector, so that the corresponding attribute of the image is encoded and the fabric demand image feature matrix is obtained.
For the detection image of the purchased fabric to be evaluated, the detection image is passed through a convolutional neural network model serving as a feature extractor to obtain the detection feature matrix. That is, a convolutional neural network model is likewise used as the image feature extraction model to capture the local high-dimensional implicit image features of the detection image of the purchased fabric to be evaluated.
After the fabric demand image feature matrix and the detection feature matrix are obtained, the differential feature matrix between them is calculated; it represents the feature distribution difference of the detection feature matrix and the fabric demand image feature matrix in the high-dimensional feature space.
In the technical scheme of the present application, the differential feature matrix is obtained by calculating the position-wise difference between the detection feature matrix and the fabric demand image feature matrix. However, because the fabric demand image feature matrix is obtained by constraining the image feature semantics of the display picture with the demand text description through the Clip model, while the detection feature matrix is obtained by directly extracting image association features from the detection image, the differential feature matrix may insufficiently express the small-scale correlation between the detection feature matrix and the fabric demand image feature matrix, which affects the accuracy of the classification result obtained by passing the differential feature matrix through the classifier.
Thus, the small-scale local derivative feature matrix of the detection feature matrix M_1 and the fabric demand image feature matrix M_2 is calculated and used as a weighted feature matrix M_w, where m^1_(i,j), m^2_(i,j) and m^w_(i,j) denote the feature values of the (i,j)-th position of the detection feature matrix M_1, the fabric demand image feature matrix M_2 and the weighted feature matrix M_w, respectively.
Here, calculating the small-scale local derivative features between the detection feature matrix M_1 and the fabric demand image feature matrix M_2 can, based on the geometric approximation of corresponding positions between the two feature matrices, mimic the physical properties of mutual expression between high-dimensional features, thereby enhancing the local nonlinear dependence of cross-feature-domain positions through position-by-position regression between the feature matrices. Thus, by dot-multiplying the differential feature matrix by the weighted feature matrix M_w to weight its feature values, the expression capability of the differential feature matrix for the small-scale associated features between the detection feature matrix and the fabric demand image feature matrix can be improved, and the accuracy of the classification result of the differential feature matrix is thereby improved.
That is, in the technical scheme of the present application, feature distribution optimization is performed on the differential feature matrix based on the small-scale local derivative feature matrix to obtain the optimized differential feature matrix. The optimized differential feature matrix is then passed through the classifier to obtain a classification result indicating whether the purchased fabric to be evaluated meets the fabric purchasing requirement. In this way, the clothing supply chain management system performs a fitness analysis between the fabric purchasing requirement and the purchased fabric to be evaluated in a high-dimensional feature space to determine whether the purchased fabric meets the fabric purchasing requirement.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
FIG. 1 is a schematic block diagram of a garment supply chain management system according to an embodiment of the application. As shown in FIG. 1, the clothing supply chain management system 100 according to an embodiment of the present application includes: a fabric purchase demand retrieving module 110, configured to obtain a fabric purchasing requirement, where the fabric purchasing requirement includes a demand text description and a display picture; a purchasing demand encoding module 120, configured to pass the fabric purchasing requirement through a Clip model to obtain a fabric demand image feature matrix; a purchasing fabric image acquisition module 130, configured to acquire a detection image of the purchased fabric to be evaluated; a purchasing fabric image feature extraction module 140, configured to pass the detection image through a convolutional neural network model serving as a feature extractor to obtain a detection feature matrix; a difference module 150, configured to calculate a differential feature matrix between the detection feature matrix and the fabric demand image feature matrix; a small-scale optimization module 160, configured to perform feature distribution optimization on the differential feature matrix to obtain an optimized differential feature matrix; and a management result generation module 170, configured to pass the optimized differential feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether the purchased fabric to be evaluated meets the fabric purchasing requirement.
In the embodiment of the present application, the fabric purchase demand retrieving module 110 is configured to obtain a fabric purchasing requirement, where the fabric purchasing requirement includes a demand text description and a display picture. As described above, fabric detection is a very important link in garment supply chain management, especially in quality management. Existing fabric detection evaluates whether the purchased fabric matches the preset fabric performance through manual quality inspection, which is inefficient and prone to subjective bias. Accordingly, an optimized garment supply chain management system is desired. In this technical scheme, when fabric purchasing is carried out, a fabric purchasing requirement is first submitted in the clothing supply chain system; it comprises a demand text description and a display picture, wherein the demand text description includes the type, texture and purchasing purpose of the fabric to be purchased, and the display picture is image data of a related fabric that meets the requirement. Meanwhile, after the corresponding fabric is purchased, a detection image of the purchased fabric is photographed and stored in the clothing supply chain system. Therefore, in the technical scheme of the present application, the fabric purchasing requirement and the purchased fabric to be evaluated can be analyzed to determine whether the purchased fabric to be evaluated meets the fabric purchasing requirement.
In the embodiment of the present application, the purchase demand encoding module 120 is configured to pass the fabric purchase demand through a Clip model to obtain a fabric demand image feature matrix. It should be understood that the fabric purchase demand consists of multi-modal data: the demand text description is text data, and the display picture is image data. Therefore, in order to fully mine the information within each modality and the associations between modalities, the fabric purchase demand is passed through the Clip model to obtain the fabric demand image feature matrix. In the technical scheme of the present application, the Clip model includes an image encoder and a sequence encoder arranged in parallel, and a joint coding module cascaded with both the image encoder and the sequence encoder.
FIG. 2 is a block diagram of a purchasing demand encoding module in a garment supply chain management system according to an embodiment of the application. As shown in fig. 2, in a specific example of the present application, the purchase demand encoding module 120 includes: a display picture coding unit 121, configured to code the display image by using an image encoder of the Clip model to obtain an image feature vector; a requirement text description encoding unit 122, configured to use a sequence encoder of the Clip model to perform semantic encoding on the requirement text description to obtain a requirement description semantic feature vector; and a joint optimization coding unit 123, configured to optimize coding of the image feature vector based on the demand description semantic feature vector by using a joint coding module of the Clip model to obtain the fabric demand image feature matrix.
In a specific example of the present application, the presentation picture coding unit is further configured to: each layer of the image encoder using the Clip model performs convolution processing, pooling processing, and nonlinear activation processing on input data in forward transfer of layers, respectively, to output the image feature vector by the last layer of the image encoder of the Clip model. It should be appreciated that the deep convolutional neural network model has excellent performance in terms of image feature extraction and has stronger generalization and characterization capabilities, while being free of limitations of human experience and knowledge, compared to conventional statistical model-based or feature engineering-based image feature extraction algorithms.
More specifically, in one embodiment of the present application, at each layer of the image encoder of the Clip model, first, a convolution kernel slides over the input data and performs a multiply-accumulate calculation at each position to extract high-dimensional local implicit features of the input data, so as to obtain a convolution feature map. Then, average pooling or maximum pooling based on a local feature matrix is performed on the convolution feature map to obtain a pooled feature map; the pooling extracts the main features while reducing the number of parameters and the risk of overfitting. Next, an activation function, such as a Sigmoid activation function, is applied to the pooled feature map to obtain an activation feature map; the activation function introduces a nonlinear factor so as to increase the characterization capability of the whole network. Finally, the activation feature map is output to the next layer as an output feature map. The input of the first layer of the image encoder of the Clip model is the display image, the input of each of the second to last layers of the image encoder of the Clip model is the output of the previous layer, and the output of the last layer of the image encoder of the Clip model is the image feature vector.
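A minimal sketch of one such layer (convolution, pooling, Sigmoid activation), written in plain numpy under the assumption of a single-channel input and valid-mode convolution; it illustrates the per-layer forward pass described above, not the patent's actual encoder.

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2-D convolution: slide kernel k over x, multiply-accumulate."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s maximum pooling (truncates ragged edges)."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.arange(36, dtype=float).reshape(6, 6)   # toy single-channel input
k = np.ones((3, 3)) / 9.0                      # 3x3 averaging kernel
feature_map = sigmoid(max_pool(conv2d(x, k)))  # convolution -> pooling -> activation
```

A 6x6 input with a 3x3 kernel yields a 4x4 convolution map, which 2x2 pooling reduces to 2x2 before activation.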
In another specific example of the present application, the deep convolutional neural network model is a deep residual network model. Considering that a deep convolutional neural network with many layers suffers from gradient dispersion, which increases both the training error rate and the test error rate, a deep residual network can further be used for feature extraction. The deep residual network adds a residual module to the conventional convolutional neural network: the residual module takes the output feature map from two layers earlier, adds it to the output feature map of the previous layer to obtain a new feature map, and uses the new feature map as the input of the current layer. More specifically, when the output feature map from two layers earlier differs in dimension from the input feature map of the current layer, a 1x1 convolution is used to raise the dimension so as to unify the dimensions. More specifically, the deep residual network may be ResNet-50, ResNet-101, or ResNet-152.
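The residual connection with a 1x1 projection for dimension matching can be sketched as follows; treating a 1x1 convolution as a per-position matrix multiplication over the channel axis is a standard simplification, and the two convolution layers of the block are abstracted into a single function `f` here (all names are illustrative).

```python
import numpy as np

def residual_block(x, f, proj=None):
    """y = f(x) + x; a 1x1 projection unifies channel widths when they differ."""
    identity = x if proj is None else x @ proj  # 1x1 conv == per-position channel matmul
    return f(x) + identity

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 4, 8))              # H x W x C_in feature map
W1 = rng.standard_normal((8, 16)) * 0.1         # the block's conv layers, abstracted
f = lambda t: np.maximum(t @ W1, 0.0)           # conv + ReLU stand-in, widening C 8 -> 16
proj = rng.standard_normal((8, 16)) * 0.1       # 1x1 conv raising the identity path 8 -> 16
y = residual_block(x, f, proj)
```

With `proj=None` and an identity-shaped `f`, the block reduces to the plain skip connection described in the text.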
FIG. 3 is a block diagram of a demand text description encoding unit in a garment supply chain management system according to an embodiment of the present application. As shown in fig. 3, in a specific example of the present application, the requirement text description encoding unit 122 includes: a word segmentation processing subunit 1221, configured to perform word segmentation processing on the demand text description to obtain a word sequence; an embedding vectorization subunit 1222, configured to map each word in the word sequence into a word embedding vector by using an embedding layer of the sequence encoder of the Clip model to obtain a sequence of word embedding vectors; a context encoding subunit 1223, configured to perform global context semantic encoding on the sequence of word embedding vectors using a transformer-based Bert model of the sequence encoder of the Clip model to obtain a plurality of feature vectors; and a concatenation subunit 1224, configured to concatenate the plurality of feature vectors to obtain the demand description semantic feature vector.
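The four subunits (segmentation, embedding, global context encoding, concatenation) can be mimicked in a few lines of numpy; a single self-attention pass stands in for the Bert model, and the toy vocabulary, dimensions, and function name are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab = {"soft": 0, "cotton": 1, "twill": 2}   # toy vocabulary (assumption)
E = rng.standard_normal((len(vocab), 4))        # embedding layer, dimension 4

def encode_description(text):
    words = text.split()                        # word segmentation (whitespace stand-in)
    X = E[[vocab[w] for w in words]]            # sequence of word embedding vectors
    scores = X @ X.T                            # global, all-pairs attention scores
    scores -= scores.max(axis=1, keepdims=True)
    attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    ctx = attn @ X                              # context-aware feature vectors (Bert stand-in)
    return ctx.reshape(-1)                      # concatenate into one semantic feature vector

v = encode_description("soft cotton twill")
```

Three words with dimension-4 embeddings concatenate into a 12-dimensional demand description semantic feature vector.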
In a specific example of the present application, the joint optimization coding unit is further configured to: based on the demand description semantic feature vector, performing image attribute coding optimization on the image feature vector by using the following formula to obtain the fabric demand image feature matrix;
wherein, the formula is:

M_b = V_1 ⊗ V_2

wherein M_b is the fabric demand image feature matrix, V_1 is the demand description semantic feature vector, V_2 is the image feature vector, and ⊗ represents matrix multiplication, i.e., the product of the column vector V_1 and the row vector V_2^T. That is, in this manner, the encoding of the image is changed along a particular direction of the demand description semantic feature vector to encode the corresponding attribute of the image, so as to obtain the fabric demand image feature matrix.
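The source formula graphic is not preserved in this text; reading the described "matrix multiplication" of the two vectors as an outer product (column vector times row vector) is one consistent interpretation, sketched below with toy values.

```python
import numpy as np

V1 = np.array([1.0, 0.5, -0.5])       # demand description semantic feature vector (toy)
V2 = np.array([2.0, 0.0, 1.0, -1.0])  # image feature vector (toy)
M_b = np.outer(V1, V2)                # fabric demand image feature matrix: row i is V1[i] * V2
```

Each row of `M_b` is the image code rescaled along one direction of the demand vector, matching the "changed along a particular direction" description.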
In the embodiment of the present application, the purchasing fabric image acquisition module 130 is configured to acquire a detection image of the purchasing fabric to be evaluated. Specifically, a detected image of the purchased fabric is captured by a camera and stored to the garment supply chain system.
In the embodiment of the present application, the procurement fabric image feature extraction module 140 is configured to pass the detection image through a convolutional neural network model serving as a feature extractor to obtain a detection feature matrix. It should be appreciated that, given the significant advantages of the convolutional neural network model in terms of image feature extraction, the detected image is passed through the convolutional neural network model as a feature extractor to obtain a detected feature matrix containing locally implicit features of the detected image.
In a specific example of the present application, the purchasing fabric image feature extraction module is further configured to: use each layer of the convolutional neural network model to perform, in the forward transfer of the layer, the following steps on the input data: performing convolution processing on the input data based on a two-dimensional convolution kernel to obtain a convolution feature map; performing pooling processing based on a local feature matrix on the convolution feature map to obtain a pooled feature map; and performing nonlinear activation on the pooled feature map to obtain an activation feature map; wherein the input of the first layer of the convolutional neural network model is the detection image, the input of each of the second to last layers of the convolutional neural network model is the output of the previous layer, and the output of the last layer of the convolutional neural network model is the detection feature matrix.
In the embodiment of the present application, the difference module 150 is configured to calculate a differential feature matrix between the detection feature matrix and the fabric demand image feature matrix. That is, by calculating the differential feature matrix between the detection feature matrix and the fabric demand image feature matrix, the difference between the feature distributions of the fabric purchasing demand and the purchasing fabric to be evaluated in the high-dimensional feature space is measured.
In a specific example of the present application, the differential module is further configured to: calculate the differential feature matrix between the detection feature matrix and the fabric demand image feature matrix according to the following formula;

wherein, the formula is:

M_c = M_a ⊖ M_b

wherein M_a represents the detection feature matrix, M_b represents the fabric demand image feature matrix, M_c represents the differential feature matrix, and ⊖ represents position-wise difference.
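The position-wise difference is a plain element-wise subtraction; the matrices below carry toy values purely for illustration.

```python
import numpy as np

M_a = np.array([[0.9, 0.1], [0.4, 0.7]])  # detection feature matrix (toy values)
M_b = np.array([[1.0, 0.0], [0.5, 0.5]])  # fabric demand image feature matrix (toy values)
M_c = M_a - M_b                           # differential feature matrix: position-wise difference
```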
In the embodiment of the present application, the small-scale optimization module 160 is configured to perform feature distribution optimization on the differential feature matrix to obtain an optimized differential feature matrix. In the technical scheme of the present application, the differential feature matrix is obtained by calculating the position-wise difference between the detection feature matrix and the fabric demand image feature matrix. However, because the fabric demand image feature matrix is obtained by the Clip model constraining the image feature semantics of the display picture based on the demand text description of the fabric purchasing demand, while the detection feature matrix is obtained by directly extracting image association features from the detection image, the differential feature matrix may insufficiently express the small-scale correlation between the detection feature matrix and the fabric demand image feature matrix, which affects the accuracy of the classification result obtained by passing the differential feature matrix through the classifier. Thus, a small-scale local derivative feature matrix between the detection feature matrix M_1 and the fabric demand image feature matrix M_2 is calculated as a weighting feature matrix.
In a specific example of the application, the small-scale optimization module includes: an optimization weight matrix generation unit, configured to calculate a small-scale local derivative feature matrix of the detection feature matrix and the fabric demand image feature matrix; and a weighted optimization unit, configured to multiply the differential feature matrix position-wise by the small-scale local derivative feature matrix serving as a weight matrix to obtain the optimized differential feature matrix.
In a specific example of the present application, the optimization weight matrix generating unit is further configured to: calculating small-scale local derivative feature matrixes of the detection feature matrix and the fabric demand image feature matrix according to the following formula;
wherein, the formula is given in the original filing as a figure that is not preserved in this text; in the formula, m^1_(i,j) is the feature value of the (i, j)-th position of the detection feature matrix, m^2_(i,j) is the feature value of the (i, j)-th position of the fabric demand image feature matrix, and m^w_(i,j) is the feature value of the (i, j)-th position of the small-scale local derivative feature matrix.
Here, the small-scale local derivative features between the detection feature matrix M_1 and the fabric demand image feature matrix M_2, based on the geometric approximation of corresponding positions between the two feature matrices, mimic the physical properties of mutual expression between high-dimensional features, thereby enhancing the local nonlinear dependence across feature-domain positions through position-wise regression between the feature matrices. Thus, by performing point multiplication on the differential feature matrix with the weighting feature matrix M_w to weight its feature values, the expression capability of the differential feature matrix for the small-scale associated features between the detection feature matrix and the fabric demand image feature matrix can be improved, thereby improving the accuracy of the classification result of the differential feature matrix.
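The weighting step itself is a position-wise (Hadamard) product; the weight values below are illustrative stand-ins, since the derivation of the small-scale local derivative feature matrix appears only as a formula graphic in the original filing.

```python
import numpy as np

M_c = np.array([[-0.1, 0.1], [-0.1, 0.2]])  # differential feature matrix (toy values)
M_w = np.array([[0.5, 2.0], [1.0, 1.5]])    # small-scale local derivative feature matrix
                                            # (illustrative values, not the patent's formula)
M_opt = M_w * M_c                           # position-wise (Hadamard) weighting
```

Positions with larger weights contribute more strongly to the downstream classification.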
In the embodiment of the present application, the management result generating module 170 is configured to pass the optimized differential feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether the purchased fabric to be evaluated meets the fabric purchasing requirement. That is, after feature distribution optimization is performed on the differential feature matrix based on the small-scale local derivative feature matrix to obtain the optimized differential feature matrix, the optimized differential feature matrix is passed through the classifier to obtain the classification result.
Fig. 4 is a block diagram of a management result generation module in a clothing supply chain management system according to an embodiment of the present application. As shown in fig. 4, in a specific example of the present application, the management result generating module 170 includes: a classification feature vector obtaining unit 171, configured to perform full-connection encoding on the optimized differential feature matrix by using a full-connection layer of the classifier to obtain a classification feature vector; the probability value calculation unit 172 is configured to pass the classification feature vector through a Softmax classification function of the classifier to obtain a first probability that the to-be-evaluated purchasing fabric meets the fabric purchasing requirement and a second probability that the to-be-evaluated purchasing fabric does not meet the fabric purchasing requirement; and a result determination unit 173 for determining the classification result based on a comparison between the first probability and the second probability.
That is, the full-connection layer of the classifier performs full-connection encoding on the optimized differential feature matrix so as to make full use of the information at each position of the optimized differential feature matrix to obtain the classification feature vector. Then, the Softmax function value of the one-dimensional classification feature vector is calculated to obtain a first probability that the purchased fabric to be evaluated meets the fabric purchasing demand and a second probability that the purchased fabric to be evaluated does not meet the fabric purchasing demand. Finally, the classification result is determined based on the comparison between the first probability and the second probability: when the first probability is greater than the second probability, the classification result is that the purchased fabric to be evaluated meets the fabric purchasing requirement; and when the first probability is less than the second probability, the classification result is that the purchased fabric to be evaluated does not meet the fabric purchasing requirement.
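The three units (full-connection encoding, Softmax, probability comparison) can be sketched as follows; the weights are random stand-ins for the trained classifier, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
M_opt = rng.standard_normal((2, 2))    # optimized differential feature matrix (toy size)
W = rng.standard_normal((4, 2)) * 0.1  # full-connection layer: 4 inputs -> 2 classes
b = np.zeros(2)

z = M_opt.reshape(-1) @ W + b          # full-connection encoding of the flattened matrix
z = z - z.max()                        # numerical stability before Softmax
p = np.exp(z) / np.exp(z).sum()        # [first probability, second probability]
meets_demand = bool(p[0] > p[1])       # greater first probability -> demand is met
```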
In summary, the clothing supply chain management system according to the embodiment of the present application first obtains the fabric purchasing requirement and the detection image of the purchased fabric to be evaluated; then maps the fabric purchasing requirement and the detection image into a high-dimensional feature space through the Clip model and the convolutional neural network model, respectively; and, by calculating the difference between their feature distributions in the high-dimensional feature space, performs adaptation-degree analysis between the fabric purchasing requirement and the purchased fabric to be evaluated so as to determine whether the purchased fabric to be evaluated meets the fabric purchasing requirement.
As described above, the clothing supply chain management system 100 according to an embodiment of the present application may be implemented in various terminal devices, for example, a server or the like where clothing supply chain management algorithms are deployed. In one example, the garment supply chain management system 100 may be integrated into the terminal device as a software module and/or hardware module. For example, the garment supply chain management system 100 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the garment supply chain management system 100 could equally be one of a number of hardware modules of the terminal device.
Alternatively, in another example, the garment supply chain management system 100 and the terminal device may be separate devices, and the garment supply chain management system 100 may be connected to the terminal device via a wired and/or wireless network and transmit the interactive information in an agreed data format.
Exemplary method
FIG. 5 is a flowchart of a method of clothing supply chain management according to an embodiment of the application. As shown in fig. 5, the clothing supply chain management method according to an embodiment of the present application includes: S110, acquiring a fabric purchasing requirement, wherein the fabric purchasing requirement includes a demand text description and a display picture; S120, passing the fabric purchasing demand through a Clip model to obtain a fabric demand image feature matrix; S130, acquiring a detection image of the purchased fabric to be evaluated; S140, passing the detection image through a convolutional neural network model serving as a feature extractor to obtain a detection feature matrix; S150, calculating a differential feature matrix between the detection feature matrix and the fabric demand image feature matrix; S160, performing feature distribution optimization on the differential feature matrix to obtain an optimized differential feature matrix; and S170, passing the optimized differential feature matrix through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the purchased fabric to be evaluated meets the fabric purchasing requirement.
FIG. 6 is a schematic diagram of a system architecture of a method for managing a clothing supply chain according to an embodiment of the application. In the embodiment of the present application, as shown in fig. 6, first, a display picture is obtained, and the display picture is passed through an image encoder of a Clip model to obtain an image feature vector. Meanwhile, a demand text description is acquired, and the demand text description is passed through a sequence encoder of the Clip model to obtain a demand description semantic feature vector. Then, a joint coding module of the Clip model is used to optimize the coding of the image feature vector based on the demand description semantic feature vector to obtain the fabric demand image feature matrix. Meanwhile, a detection image of the purchased fabric to be evaluated is obtained and passed through a convolutional neural network model serving as a feature extractor to obtain a detection feature matrix. Then, a differential feature matrix between the detection feature matrix and the fabric demand image feature matrix is calculated, and feature distribution optimization is performed on the differential feature matrix to obtain an optimized differential feature matrix. Finally, the optimized differential feature matrix is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the purchased fabric to be evaluated meets the fabric purchasing requirement.
In a specific example of the present application, the step of passing the fabric purchasing requirement through a Clip model to obtain a fabric requirement image feature matrix includes: encoding the display image by using an image encoder of the Clip model to obtain an image feature vector; using a sequence encoder of the Clip model to carry out semantic encoding on the demand text description so as to obtain a demand description semantic feature vector; and optimizing the coding of the image feature vector based on the demand description semantic feature vector by using a joint coding module of the Clip model to obtain the fabric demand image feature matrix.
In a specific example of the present application, the encoding the presentation image by the image encoder using the Clip model to obtain an image feature vector includes: each layer of the image encoder using the Clip model performs convolution processing, pooling processing, and nonlinear activation processing on input data in forward transfer of layers, respectively, to output the image feature vector by the last layer of the image encoder of the Clip model.
In a specific example of the present application, the semantic encoding of the demand text description by the sequence encoder of the Clip model to obtain a demand description semantic feature vector includes: performing word segmentation processing on the demand text description to obtain a word sequence; mapping each word in the word sequence into a word embedding vector by using an embedding layer of the sequence encoder of the Clip model to obtain a sequence of word embedding vectors; performing global context semantic encoding on the sequence of word embedding vectors by using a transformer-based Bert model of the sequence encoder of the Clip model to obtain a plurality of feature vectors; and concatenating the plurality of feature vectors to obtain the demand description semantic feature vector.
In a specific example of the present application, the optimizing the coding of the image feature vector by the joint coding module of the Clip model based on the demand description semantic feature vector includes: performing image attribute coding optimization on the image feature vector by using the following formula to obtain the fabric demand image feature matrix;
wherein, the formula is:

M_b = V_1 ⊗ V_2

wherein M_b is the fabric demand image feature matrix, V_1 is the demand description semantic feature vector, V_2 is the image feature vector, and ⊗ represents matrix multiplication, i.e., the product of the column vector V_1 and the row vector V_2^T.
In a specific example of the present application, the passing the detection image through a convolutional neural network model serving as a feature extractor to obtain a detection feature matrix includes: using each layer of the convolutional neural network model to perform, in the forward transfer of the layer, the following steps on the input data: performing convolution processing on the input data based on a two-dimensional convolution kernel to obtain a convolution feature map; performing pooling processing based on a local feature matrix on the convolution feature map to obtain a pooled feature map; and performing nonlinear activation on the pooled feature map to obtain an activation feature map; wherein the input of the first layer of the convolutional neural network model is the detection image, the input of each of the second to last layers of the convolutional neural network model is the output of the previous layer, and the output of the last layer of the convolutional neural network model is the detection feature matrix.
In a specific example of the present application, the calculating a differential feature matrix between the detection feature matrix and the fabric demand image feature matrix includes: calculating a differential feature matrix between the detection feature matrix and the fabric demand image feature matrix according to the following formula;
wherein, the formula is:

M_c = M_a ⊖ M_b

wherein M_a represents the detection feature matrix, M_b represents the fabric demand image feature matrix, M_c represents the differential feature matrix, and ⊖ represents position-wise difference.
In a specific example of the application, performing feature distribution optimization on the differential feature matrix to obtain the optimized differential feature matrix includes: calculating a small-scale local derivative feature matrix of the detection feature matrix and the fabric demand image feature matrix; and multiplying the differential feature matrix position-wise by the small-scale local derivative feature matrix serving as a weight matrix to obtain the optimized differential feature matrix.
In a specific example of the present application, the calculating the small-scale local derivative feature matrix of the detection feature matrix and the fabric demand image feature matrix includes: calculating small-scale local derivative feature matrixes of the detection feature matrix and the fabric demand image feature matrix according to the following formula;
wherein, the formula is given in the original filing as a figure that is not preserved in this text; in the formula, m^1_(i,j) is the feature value of the (i, j)-th position of the detection feature matrix, m^2_(i,j) is the feature value of the (i, j)-th position of the fabric demand image feature matrix, and m^w_(i,j) is the feature value of the (i, j)-th position of the small-scale local derivative feature matrix.
In a specific example of the present application, the passing the optimized differential feature matrix through a classifier to obtain a classification result includes: performing full-connection encoding on the optimized differential feature matrix by using a full-connection layer of the classifier to obtain a classification feature vector; passing the classification feature vector through a Softmax classification function of the classifier to obtain a first probability that the purchased fabric to be evaluated meets the fabric purchasing demand and a second probability that the purchased fabric to be evaluated does not meet the fabric purchasing demand; and determining the classification result based on a comparison between the first probability and the second probability.
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described clothing supply chain management method have been described in detail in the above description of the clothing supply chain management system with reference to fig. 1 to 4, and thus, repetitive descriptions thereof will be omitted.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 7.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the application.
As shown in fig. 7, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium that can be executed by the processor 11 to perform the garment supply chain management and/or other desired functions of the various embodiments of the application described above. Various content such as demand text descriptions, presentation pictures, and detected images of the purchased face fabric to be evaluated may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 may output various information to the outside, including a classification result indicating whether the purchased fabric to be evaluated meets the fabric purchasing demand, and the like. The output means 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of a garment supply chain management method according to various embodiments of the application described in the "exemplary methods" section of this specification.
The computer program product may write program code for performing operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps of a garment supply chain management method according to various embodiments of the present application described in the "exemplary methods" section of the present specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments. It should be noted, however, that the advantages, benefits, and effects mentioned in the present application are merely examples, not limitations, and should not be considered essential to the various embodiments of the present application. Furthermore, the specific details disclosed above are provided only for purposes of illustration and ease of understanding, and the application is not limited to practice with these specific details.
The block diagrams of the devices, apparatuses, and systems referred to in the present application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," and "having" are open-ended and mean "including but not limited to," and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that, in the apparatuses, devices, and methods of the present application, components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be considered equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (8)

1. A garment supply chain management system, comprising:
the fabric purchasing demand invoking module is used for acquiring fabric purchasing demands, wherein the fabric purchasing demands comprise demand text description and display pictures;
the purchasing demand encoding module is used for enabling the fabric purchasing demand to pass through a Clip model to obtain a fabric demand image feature matrix;
the purchasing fabric image acquisition module is used for acquiring a detection image of the purchasing fabric to be evaluated;
the purchasing fabric image feature extraction module is used for enabling the detection image to pass through a convolutional neural network model serving as a feature extractor to obtain a detection feature matrix;
the difference module is used for calculating a difference feature matrix between the detection feature matrix and the fabric demand image feature matrix;
the small-scale optimization module is used for carrying out feature distribution optimization on the differential feature matrix to obtain an optimized differential feature matrix; and
the management result generation module is used for passing the optimized differential feature matrix through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the purchasing fabric to be evaluated meets the fabric purchasing demand;
The small-scale optimization module comprises:
the optimized weight matrix generation unit is used for calculating a small-scale local derivative feature matrix from the detection feature matrix and the fabric demand image feature matrix; and
the weighted optimization unit is used for multiplying the differential feature matrix position-by-position by the small-scale local derivative feature matrix, taken as a weight matrix, to obtain the optimized differential feature matrix;
the optimized weight matrix generation unit is further configured to: calculate the small-scale local derivative feature matrix of the detection feature matrix and the fabric demand image feature matrix according to a formula that is reproduced only as an image in the original publication, in which the feature value at each (i, j)-th position of the small-scale local derivative feature matrix is defined in terms of the feature value at the (i, j)-th position of the detection feature matrix and the feature value at the (i, j)-th position of the fabric demand image feature matrix.
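The weighted optimization of claim 1 — a weight matrix multiplied position-by-position with the differential feature matrix — can be sketched in NumPy. The patent's exact small-scale local derivative formula is published only as an image, so `toy_weight` below (the element-wise product of the two feature maps) is a hypothetical stand-in; only the position-wise (Hadamard) weighting itself follows the claim.

```python
import numpy as np

def positionwise_weighted_difference(m_a, m_b, weight_fn):
    """Differential feature matrix, re-weighted position-by-position."""
    m_c = m_a - m_b                 # differential feature matrix
    w = weight_fn(m_a, m_b)         # small-scale weight matrix (stand-in)
    return w * m_c                  # position-wise (Hadamard) product

# Stand-in weight function: element-wise product of the two feature maps.
# This is NOT the patent's local-derivative formula (published only as an image).
toy_weight = lambda a, b: a * b

m_a = np.array([[1.0, 2.0], [3.0, 4.0]])   # detection feature matrix (toy values)
m_b = np.array([[0.5, 1.0], [1.0, 2.0]])   # fabric demand image feature matrix (toy values)
opt = positionwise_weighted_difference(m_a, m_b, toy_weight)
```

Multiplying by a per-position weight lets positions where the two feature maps differ most strongly dominate the optimized differential matrix fed to the classifier.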
2. The garment supply chain management system of claim 1, wherein the procurement requirements encoding module comprises:
a display picture coding unit, configured to encode the display picture by using an image encoder of the Clip model to obtain an image feature vector;
the demand text description coding unit is used for carrying out semantic coding on the demand text description by using a sequence encoder of the Clip model to obtain a demand description semantic feature vector; and
the joint optimization coding unit is used for optimizing the coding of the image feature vector based on the demand description semantic feature vector by using a joint coding module of the Clip model to obtain the fabric demand image feature matrix.
3. The garment supply chain management system of claim 2, wherein the display picture coding unit is further configured to: use each layer of the image encoder of the Clip model to perform convolution processing, pooling processing, and nonlinear activation processing on the input data in the forward transfer of the layers, so that the last layer of the image encoder of the Clip model outputs the image feature vector.
4. The garment supply chain management system of claim 3, wherein the demand text description encoding unit comprises:
the word segmentation processing subunit is used for carrying out word segmentation processing on the required text description to obtain a word sequence;
an embedding vectorization subunit, configured to map each word in the word sequence into a word embedding vector by using an embedding layer of a sequence encoder of the Clip model to obtain a sequence of word embedding vectors;
a context coding subunit, configured to perform global context semantic coding on the sequence of word embedding vectors using a transformer-based Bert model of the sequence encoder of the Clip model to obtain a plurality of feature vectors; and
a cascading subunit, configured to cascade the plurality of feature vectors to obtain the demand description semantic feature vector.
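A minimal NumPy sketch of the text-encoding pipeline of claim 4: whitespace tokenisation stands in for the word segmentation, a lookup table for the embedding layer, and a single self-attention head for the Bert encoder; the vocabulary, dimensions, and example text are all hypothetical.

```python
import numpy as np

def encode_text(text, vocab, emb, w_q, w_k, w_v):
    # Word segmentation (whitespace tokenisation as a stand-in).
    tokens = text.lower().split()
    # Embedding-layer lookup: each word -> a word embedding vector.
    x = np.stack([emb[vocab[t]] for t in tokens])          # (seq, d)
    # Global context semantic coding: one self-attention head as a
    # stand-in for the transformer-based Bert encoder.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    ctx = attn @ v                                         # (seq, d)
    # Cascade (concatenate) the per-token feature vectors into one vector.
    return ctx.reshape(-1)

d = 4
vocab = {"soft": 0, "cotton": 1, "twill": 2}               # hypothetical vocabulary
rng = np.random.default_rng(1)
emb = rng.standard_normal((len(vocab), d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
vec = encode_text("soft cotton twill", vocab, emb, w_q, w_k, w_v)
```

The final concatenation means the demand description semantic feature vector grows with the token count (here 3 tokens × 4 dimensions = 12 entries).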
5. The garment supply chain management system of claim 4, wherein the joint optimization coding unit is further configured to: based on the demand description semantic feature vector, perform image attribute coding optimization on the image feature vector to obtain the fabric demand image feature matrix according to a formula that is reproduced only as an image in the original publication, in which M_b is the fabric demand image feature matrix, V_1 is the demand description semantic feature vector, V_2 is the image feature vector, and the operator represents a matrix multiplication.
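Claim 5's joint coding produces a matrix M_b from the two vectors V_1 and V_2 via a matrix multiplication whose exact form is published only as an image. One plausible reading — an assumption, not the patent's formula — is an outer product of the two vectors:

```python
import numpy as np

# Hypothetical toy dimensions; the patent does not state them.
v1 = np.array([1.0, 2.0, 3.0])   # demand description semantic feature vector V_1
v2 = np.array([0.5, -1.0])       # image feature vector V_2

# Candidate fabric demand image feature matrix M_b: every pairing of a
# text feature with an image feature, shape (len(v1), len(v2)).
m_b = np.outer(v1, v2)
```

An outer product is the simplest matrix multiplication that turns two vectors into a matrix, which matches the claim's input/output shapes; the patent's actual operator may differ.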
6. The garment supply chain management system of claim 5, wherein the purchasing fabric image feature extraction module is further configured to: perform, with each layer of the convolutional neural network model, the following steps on the input data in the forward transfer of the layer:
performing convolution processing on the input data based on a two-dimensional convolution kernel to obtain a convolution feature map;
carrying out pooling processing based on a local feature matrix on the convolution feature map to obtain a pooled feature map; and
carrying out nonlinear activation on the pooled feature map to obtain an activation feature map;
wherein the input of the first layer of the convolutional neural network model is the detection image, the input of each layer from the second layer to the last layer is the output of the previous layer, and the output of the last layer of the convolutional neural network model is the detection feature matrix.
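The per-layer convolution → pooling → activation sequence of claim 6 can be sketched in plain NumPy (a single kernel, no padding or stride; the image size, kernel, and values are all hypothetical toy data, not the patent's network):

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D cross-correlation with a single kernel (no padding/stride)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over local size x size blocks."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def relu(x):
    """Nonlinear activation."""
    return np.maximum(x, 0.0)

def layer_forward(x, kernel):
    # One layer's forward transfer: convolution -> pooling -> activation.
    return relu(max_pool(conv2d(x, kernel)))

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))    # toy stand-in for the detection image
kernel = rng.standard_normal((3, 3))   # toy two-dimensional convolution kernel
feat = layer_forward(image, kernel)    # 8x8 -> conv 6x6 -> pool 3x3
```

Stacking such layers, with each layer consuming the previous layer's output, yields the detection feature matrix from the last layer as the claim describes.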
7. The garment supply chain management system of claim 6, wherein the difference module is further configured to: calculate the differential feature matrix between the detection feature matrix and the fabric demand image feature matrix according to the following formula:

M_c = M_a ⊖ M_b

wherein M_a represents the detection feature matrix, ⊖ represents a position-wise difference, M_b represents the fabric demand image feature matrix, and M_c represents the differential feature matrix.
8. The garment supply chain management system of claim 7, wherein the management result generation module comprises:
The classification feature vector acquisition unit is used for carrying out full-connection coding on the optimized differential feature matrix by using a full-connection layer of the classifier so as to obtain classification feature vectors;
the probability value calculation unit is used for passing the classification feature vector through a Softmax classification function of the classifier to obtain a first probability that the purchasing fabric to be evaluated meets the fabric purchasing demand and a second probability that the purchasing fabric to be evaluated does not meet the fabric purchasing demand; and
and a result determining unit configured to determine the classification result based on a comparison between the first probability and the second probability.
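The classification head of claim 8 — full-connection coding of the flattened optimized differential feature matrix, a two-way Softmax, then a comparison of the two probabilities — can be sketched as follows (the weights and matrix here are random toy values, not a trained classifier):

```python
import numpy as np

def classify(m_opt, w, b):
    """Fully connected layer + Softmax over {meets, fails} + comparison."""
    v = m_opt.reshape(-1)              # flatten matrix for full-connection coding
    logits = v @ w + b                 # fully connected layer -> 2 logits
    e = np.exp(logits - logits.max())  # numerically stable Softmax
    p = e / e.sum()                    # p[0]: meets demand, p[1]: does not
    label = "meets" if p[0] > p[1] else "fails"
    return label, p

rng = np.random.default_rng(2)
m_opt = rng.standard_normal((4, 4))    # toy optimized differential feature matrix
w = rng.standard_normal((16, 2))       # toy fully-connected weights
b = np.zeros(2)
label, probs = classify(m_opt, w, b)
```

The result determining unit of the claim is just the final comparison: whichever of the two Softmax probabilities is larger decides whether the purchasing fabric is accepted.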
CN202211546069.5A 2022-12-05 2022-12-05 Clothing supply chain management system Active CN116307446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211546069.5A CN116307446B (en) 2022-12-05 2022-12-05 Clothing supply chain management system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211546069.5A CN116307446B (en) 2022-12-05 2022-12-05 Clothing supply chain management system

Publications (2)

Publication Number Publication Date
CN116307446A CN116307446A (en) 2023-06-23
CN116307446B true CN116307446B (en) 2023-10-27

Family

ID=86782192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211546069.5A Active CN116307446B (en) 2022-12-05 2022-12-05 Clothing supply chain management system

Country Status (1)

Country Link
CN (1) CN116307446B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308102A (en) * 2019-08-01 2021-02-02 北京易真学思教育科技有限公司 Image similarity calculation method, calculation device, and storage medium
CN113449135A (en) * 2021-08-31 2021-09-28 阿里巴巴达摩院(杭州)科技有限公司 Image generation system and method
CN115203380A (en) * 2022-09-19 2022-10-18 山东鼹鼠人才知果数据科技有限公司 Text processing system and method based on multi-mode data fusion
CN115330381A (en) * 2022-08-23 2022-11-11 陕西省君凯电子科技有限公司 Intelligent payment management method and system for goods and method thereof

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP3790680B2 (en) * 2001-05-25 2006-06-28 株式会社東芝 Image processing system and driving support system using the same


Also Published As

Publication number Publication date
CN116307446A (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN115203380B (en) Text processing system and method based on multi-mode data fusion
CN111950638B (en) Image classification method and device based on model distillation and electronic equipment
CN108959482B (en) Single-round dialogue data classification method and device based on deep learning and electronic equipment
CN112100401B (en) Knowledge graph construction method, device, equipment and storage medium for science and technology services
CN114973222B (en) Scene text recognition method based on explicit supervision attention mechanism
CN111797589A (en) Text processing network, neural network training method and related equipment
CN116015837A (en) Intrusion detection method and system for computer network information security
CN116308754B (en) Bank credit risk early warning system and method thereof
CN115827257B (en) CPU capacity prediction method and system for processor system
US20240135174A1 (en) Data processing method, and neural network model training method and apparatus
CN116797533B (en) Appearance defect detection method and system for power adapter
CN116151845A (en) Product full life cycle management system and method based on industrial Internet of things technology
CN108268629B (en) Image description method and device based on keywords, equipment and medium
CN111507288A (en) Image detection method, image detection device, computer equipment and storage medium
CN110659701B (en) Information processing method, information processing apparatus, electronic device, and medium
CN116307446B (en) Clothing supply chain management system
CN115906863B (en) Emotion analysis method, device, equipment and storage medium based on contrast learning
CN113806747B (en) Trojan horse picture detection method and system and computer readable storage medium
CN116977021B (en) Automatic pushing method for system butt joint based on big data
JP2019139626A (en) Similarity assessment program
CN115617986B (en) Intelligent bid-recruiting management system and management method thereof
CN116187294B (en) Method and system for rapidly generating electronic file of informationized detection laboratory
CN116703687B (en) Image generation model processing, image generation method, image generation device and computer equipment
CN117055507A (en) Automatic packaging paper tube production line control system and method thereof
CN117874990A (en) Data identification system and method for intelligent basic simulation platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant