CN112270615A - Intelligent decomposition method for manufacturing BOM (Bill of Material) by complex equipment based on semantic calculation - Google Patents
- Publication number
- CN112270615A (application number CN202011153334.4A)
- Authority
- CN
- China
- Prior art keywords: bom, text, workpiece, vector, manufacturing
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06Q50/04 — ICT specially adapted for manufacturing business processes
- G06F16/35 — Information retrieval of unstructured textual data; clustering/classification
- G06F18/214 — Pattern recognition; generating training patterns (bootstrap methods, e.g. bagging or boosting)
- G06F18/24 — Pattern recognition; classification techniques
- G06N3/045 — Neural network architectures; combinations of networks
- Y02P90/30 — Climate change mitigation; computing systems specially adapted for manufacturing
Abstract
The BERT-TextCNN-based intelligent decomposition method for the manufacturing BOM of complex equipment comprises the following steps: first, the manufacturing BOM undergoes text preprocessing in three steps — category labeling, semantic expansion and stop-word removal; second, the preprocessed BOM text sentences are represented as vectors through an embedded marking method and the MLM and NSP mechanisms; finally, BERT-TextCNN performs feature extraction and classification on the vectorized BOM text sentences, intelligently decomposing the complex-equipment manufacturing BOM into the four classes of necessary self-made parts, necessary external cooperation parts, unnecessary self-made parts and unnecessary external cooperation parts. The method has the characteristics of low cost and high efficiency.
Description
Technical Field
The invention belongs to the technical field of equipment manufacturing, and particularly relates to an intelligent decomposition method, based on semantic computation, for the manufacturing BOM (bill of material) of complex equipment.
Background
With increasing competition in the equipment manufacturing industry, customized production has become the main direction of future development. Complex equipment comprises many parts, complex manufacturing processes and complex assembly flows, which poses great challenges to the production, operation and management of equipment manufacturing enterprises. Currently, most manufacturing companies store the parts, manufacturing processes and assembly processes a product requires from design through manufacturing in a bill of materials (BOM). As the concrete representation of the product structure, the BOM runs through the product's whole life cycle — process, purchasing, manufacturing, logistics, after-sale and service — and is core enterprise data.
Except for standard parts, the parts in a manufacturing BOM can be divided, according to whether the manufacturing enterprise processes them itself, into self-made parts processed within the factory and external cooperation parts whose processing is delegated to a partner factory because of factors such as cost and completion time. In actual production, however, some self-made and external cooperation parts can be dynamically adjusted under the influence of the enterprise's manufacturing resources: for example, limited processing time may mean that the enterprise cannot complete a workpiece's manufacturing task on schedule and would incur overdue cost, in which case some self-made parts are shifted to external cooperation processing. Other self-made and external cooperation parts are limited by processing capability and cannot be dynamically adjusted — some parts can only be manufactured in-house, and some only by the external cooperation factory. Therefore, quickly and accurately identifying, from the manufacturing BOM, the parts that must be processed inside the enterprise (necessary self-made parts), the parts that must be processed by the external cooperation factory (necessary external cooperation parts), the parts that need not be processed inside the enterprise (unnecessary self-made parts), and the parts that need not be processed by the external cooperation factory (unnecessary external cooperation parts) has become a key issue in the production and manufacturing link.
In terms of BOM decomposition, two methods currently exist: traditional manual decomposition and computer-aided decomposition. In traditional manual decomposition, after process design is finished, personnel from multiple departments — internal design, process, production and manufacturing — form a temporary BOM group that decomposes the designed BOM; the decomposed workpieces are then processed according to their different processes to form the manufacturing BOM. This method yields accurate results, but depends heavily on personnel expertise, wastes time and labor, and is error-prone. The core of the computer-aided decomposition method lies in constructing a mapping between the design BOM and the manufacturing BOM so as to generate the manufacturing BOM. For example, XU Hanchuan et al. first gave a strict formal description of BOM data using quadruples, then proposed an algorithm for converting a design BOM's process flow into a manufacturing BOM, and defined several rules for three key links arising in the conversion: product structure, production lead time and process route. LIU Xiaobing et al. proposed a feature-identification-based BOM mapping method that finds material feature differences by comparing the design BOM with the manufacturing BOM, distinguishes five material types (inherited, virtual, intermediate, outsourced and purchased components), and modifies the design structure with corresponding algorithms to form the manufacturing BOM. LU Huaijing et al. generated an assembly BOM from the assembly process route, then modified it to obtain an MBOM, and provided solutions to key problems such as middleware, multi-view data structures and assembly time.
XU Tianbao et al. proposed a bill-of-material mapping technique based on process management, starting from the root of the BOM mapping problem. XIE Bo studied a new BOM structure conversion algorithm: in design-BOM decomposition it combines B+-tree depth-first and breadth-first traversal and introduces cascade data coding and structure quantities, so that parent-child affiliations are stored, all node information can be traced, and bill-of-material data can be obtained quickly and accurately for integration, achieving the goal of design-BOM decomposition. In the existing BOM decomposition literature, the main research content focuses on the conversion from design BOM to manufacturing BOM; further identification and decomposition of the manufacturing BOM is rarely addressed.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide an intelligent decomposition method, based on semantic computation, for the manufacturing BOM of complex equipment; the method has the characteristics of low cost and high efficiency.
In order to achieve the purpose, the invention adopts the technical scheme that: the intelligent decomposition method for manufacturing the BOM by the complex equipment based on semantic computation comprises the following steps:
Step one: perform text preprocessing on the manufacturing BOM through category labeling, semantic expansion and stop-word removal, specifically as follows:
1) class labeling for manufacturing BOMs
The parts in the manufacturing BOM are divided into four classes — necessary self-made parts, necessary external cooperation parts, unnecessary self-made parts and unnecessary external cooperation parts — which correspond, according to self-making capability and necessity, to four workpiece types:
workpiece type 1 (necessary self-made part): the workpiece must be manufactured by the factory itself;
workpiece type 2 (necessary external cooperation part): the workpiece must be processed on the factory's behalf by an external cooperation factory;
workpiece type 3 (unnecessary external cooperation part): the workpiece is primarily intended to be processed by an external factory, but can also be self-made;
workpiece type 4 (unnecessary self-made part): the workpiece is primarily finished by self-making, but can be transferred to an external factory for processing;
2) semantic augmentation of manufacturing BOMs
Match the specific definitions of the workpiece names in the BOM table in reference works such as the "mechanical part noun term illustration dictionary (Chinese-English)" and a commonly used dictionary of mechanical term definitions; the resulting text data consists of three parts: category - workpiece name - name definition;
3) removing stop words from manufacturing BOM
Using a Chinese stop-word list, remove from the expanded manufacturing BOM the words and symbols that are meaningless for the subsequent semantic analysis and the mining target. Words that express a logical hierarchical relation in the description and definition of a workpiece name are not removed, while words that do little to reflect the workpiece definition can be deleted;
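The stop-word link of step one can be sketched in code. This is a hypothetical illustration only: the stop-word and keep-word lists and the record text below are stand-ins, not the patent's actual Chinese lists.

```python
# Hypothetical sketch of step-one stop-word removal over an expanded
# "category - workpiece name - definition" record. Lists are illustrative.

STOP_WORDS = {"is", "refers", "to", "the", "a", "an", "of"}  # assumed stop words
KEEP_WORDS = {"from", "and"}  # words carrying logical/hierarchical relations are kept

def remove_stop_words(text: str) -> str:
    """Drop stop words, except those expressing logical hierarchy."""
    tokens = text.split()
    kept = [t for t in tokens
            if t.lower() in KEEP_WORDS or t.lower() not in STOP_WORDS]
    return " ".join(kept)

record = "Class 1 spindle refers to a shaft that receives power from an engine and transmits it"
print(remove_stop_words(record))
```

As in the spindle example, words such as "from" and "and" survive because they express how the workpiece relates to other parts.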
step two, vectorizing and representing the preprocessed BOM text sentences through an embedded marking method and an MLM mechanism;
1) data input processing
Respectively embedding word information, sentence/segment information and position information to realize the segmentation of the preprocessed text;
the data input processing specifically comprises the following steps:
a. Word embedding: each word is embedded as a vector. Each sentence begins with the keyword [CLS], a classification embedding whose role is to represent the aggregate sequence for classification tasks; if there is no classification task it can be ignored;
b. Sentence/segment embedding: learned sentence embeddings are added to the input sentences, with two different sentences receiving embeddings A and B respectively (if there is only one sentence, only A is used); sentences are separated by the [SEP] token;
c. Position embedding: the position of each token is embedded; sequences up to length 512 are supported;
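The three embeddings above are summed element-wise to form the model input. A minimal, dependency-free sketch — the toy dimension and the hash-based stand-in for the learned token table are assumptions; real BERT uses learned 768-dimensional tables:

```python
# Toy sketch of BERT input processing: token + segment + position embeddings
# summed element-wise. Sizes and the token table are illustrative stand-ins.

DIM = 4          # toy dimension; the text representation layer uses 768
MAX_LEN = 512    # the longest supported sequence length

def token_embedding(token: str) -> list[float]:
    # deterministic-within-a-run stand-in for a learned token-embedding lookup
    return [((hash(token) >> (8 * i)) % 100) / 100.0 for i in range(DIM)]

def segment_embedding(seg_id: int) -> list[float]:
    return [float(seg_id)] * DIM     # sentence A -> 0, sentence B -> 1

def position_embedding(pos: int) -> list[float]:
    assert pos < MAX_LEN, "positions beyond 512 are unsupported"
    return [pos / MAX_LEN] * DIM

def bert_input(tokens: list[str], seg_ids: list[int]) -> list[list[float]]:
    """Element-wise sum of the three embeddings, as in the BERT input layer."""
    return [
        [t + s + p for t, s, p in zip(token_embedding(tok),
                                      segment_embedding(sid),
                                      position_embedding(i))]
        for i, (tok, sid) in enumerate(zip(tokens, seg_ids))
    ]

vectors = bert_input(["[CLS]", "spindle", "[SEP]"], [0, 0, 0])
print(len(vectors), len(vectors[0]))
```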
2) pre-training
The model is made to learn deeper sentence relations through MLM (masked language model) and NSP (next sentence prediction). MLM is a cloze mechanism: given a sentence, 15% of its words are randomly masked, and the BERT model is then trained on the result; the purpose of this mechanism is to give the BERT model a deeper bidirectional representation;
The NSP mechanism pre-trains a binarized next-sentence-prediction task at the sentence level; such a task can be generated from any monolingual corpus, and its purpose is to let the BERT model better learn the relations between sentences;
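As a sketch of how the binarized next-sentence task can be generated from any monolingual corpus — half true next-sentence pairs, half random pairs — under illustrative corpus lines:

```python
# Sketch of NSP pair generation: 50% (sentence, true next sentence, IsNext)
# and 50% (sentence, random sentence, NotNext). Corpus lines are illustrative;
# for brevity the random pick may occasionally be the true next sentence.

import random

def make_nsp_pairs(sentences, rng=random.Random(7)):
    pairs = []
    for i in range(len(sentences) - 1):
        if rng.random() < 0.5:
            pairs.append((sentences[i], sentences[i + 1], True))   # IsNext
        else:
            j = rng.randrange(len(sentences))
            pairs.append((sentences[i], sentences[j], False))      # NotNext
    return pairs

corpus = ["the spindle receives power", "it transmits power to other shafts",
          "the gearbox changes speed", "bearings reduce friction"]
pairs = make_nsp_pairs(corpus)
print(len(pairs))
```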
3) vectorization
The manufacturing BOM text is expressed as vectors; each sentence finally begins with the special symbol [CLS], and encoding yields the sentence's vector representation. Because the manufacturing BOM is expressed in Chinese, the text vectors are generated with the open-source Chinese version of bert-as-service, with the following result: each word vector has 768 dimensions, and the numerical type is 32-bit floating point;
Step three: perform feature extraction and classification on the vectorized manufacturing BOM text sentences with BERT-TextCNN;
The TextCNN feature-extraction layer is divided into convolution and pooling; its core function is to extract the vector features of the workpiece descriptions in the vectorized manufacturing BOM. The convolution operation uses local word-order information to extract primary features of the workpiece-description vectors from the input fixed-length vector sequence, and the pooling operation combines those primary features into high-level features;
The specific steps for feature extraction and classification of the manufacturing BOM text sentences are:
1) convolution and pooling operations for vectorized manufacturing of BOMs
Take the workpiece-description word vectors generated by vectorization as input, each word vector having 768 dimensions, and convolve them with convolution kernels of fixed size and number to extract the primary features of each workpiece-description text vector. During convolution, the kernel slides along the length of the sentence (vertically), yielding the convolved vector c_i; vertical convolution with kernels of different sizes can extract features between the words of the workpiece-description text. The vector values in a convolution kernel are randomly generated at the model's first iteration, and the neural network updates the weights according to the loss value as the iterations increase. The convolution operation can be expressed as:

c_i = f(w · A[i : i+h−1] + b),  i = 1, 2, …, s−h+1   (1)

where d denotes the kernel width and h its height; w is the kernel matrix, whose h × d parameters are updated; s is the sentence length; A ∈ R^{s×d} is the matrix obtained from the text representation layer, with A[i:j] denoting rows i through j of A; b is a bias term and f an activation function;
The pooling operation takes the maximum of the text features extracted by the different convolutions to generate higher-level features of the workpiece-description text vectors. Global max pooling extracts the largest feature of each vector, and after pooling the convolution results are concatenated into one vector that further represents the text semantics;
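The convolution of equation (1) followed by global max pooling can be sketched as follows — a pure-Python toy with small dimensions (the real model uses 768-dimensional vectors and many kernels), with ReLU assumed as the activation f:

```python
# Sketch of TextCNN feature extraction: slide a kernel of height h over the
# sentence matrix A (one row per word vector), c_i = f(w . A[i:i+h-1] + b),
# then global max pooling keeps the strongest feature per kernel.

import random

def conv1d(A, w, b, relu=lambda x: max(0.0, x)):
    """Vertical convolution of kernel w (h x d) over sentence matrix A (s x d)."""
    s, h = len(A), len(w)
    out = []
    for i in range(s - h + 1):
        z = sum(w[r][c] * A[i + r][c]
                for r in range(h) for c in range(len(w[0])))
        out.append(relu(z + b))          # c_i of equation (1), ReLU assumed as f
    return out

def global_max_pool(c):
    return max(c)                        # keep the strongest feature

random.seed(0)
s, d, h = 5, 3, 2                        # sentence length, vector dim, kernel height
A = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(s)]
w = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(h)]
c = conv1d(A, w, b=0.1)
print(len(c))                            # s - h + 1 primary features
```

Pooled results from kernels of several heights would then be concatenated into the semantic feature vector described above.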
2) vectorized manufacturing BOM classification
Softmax is adopted as the classifier for the vectorized manufacturing BOM. Let the training sample set be W = {(x_1, y_1), (x_2, y_2), …, (x_w, y_w)}, where x_i ∈ R^n is the text vector of the i-th training sample (dimension n, w samples in total) and y_i ∈ {1, 2, …, k} is the class of the i-th training sample, k being the number of classes. The BOM decomposition identifies four classes — necessary self-made parts, necessary external cooperation parts, unnecessary self-made parts and unnecessary external cooperation parts — so k = 4. The discriminant function of the Softmax regression model is:

h_θ(x_i) = [ p(y_i = 1 | x_i; θ), …, p(y_i = k | x_i; θ) ]^T = (1 / Σ_{j=1}^{k} exp(θ_j^T x_i)) · [ exp(θ_1^T x_i), …, exp(θ_k^T x_i) ]^T   (2)
where h_θ(x_i) is a k-dimensional vector whose elements p(y_i = j | x_i; θ) give the probability that the current input sample x_i belongs to class j; owing to normalization, the probabilities over all classes sum to 1. θ is the full parameter set of the model, each row of which holds the classifier parameters of one class:

θ = [θ_1, θ_2, …, θ_k]^T ∈ R^{k×n}
the parameter estimation of the Softmax regression model is solved by a maximum likelihood method, and the likelihood function is as follows:
The Softmax regression model θ is solved by minimizing the loss function, which can be expressed as:

J(θ) = −(1/w) · Σ_{i=1}^{w} Σ_{j=1}^{k} 1{y_i = j} · log( exp(θ_j^T x_i) / Σ_{l=1}^{k} exp(θ_l^T x_i) )   (4)
and solving the gradient of the loss function to obtain the parameter theta of the Softmax regression model.
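The discriminant function and the loss being minimized can be sketched in pure Python; the parameter values below are illustrative, not trained values from the model:

```python
# Sketch of the Softmax discriminant h_theta(x) and the negative
# log-likelihood loss for the four-class decomposition (k = 4).
# theta is a k x n matrix; numbers are illustrative only.

import math

def softmax_probs(theta, x):
    """h_theta(x): normalized probabilities over the k classes."""
    scores = [sum(t * xi for t, xi in zip(row, x)) for row in theta]
    m = max(scores)                          # stabilize the exponentials
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def nll_loss(theta, samples):
    """Negative log-likelihood J(theta) minimized during training."""
    return -sum(math.log(softmax_probs(theta, x)[y])
                for x, y in samples) / len(samples)

theta = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2], [0.0, 0.1]]  # k=4 classes, n=2
p = softmax_probs(theta, [1.0, 2.0])
print(round(sum(p), 6), p.index(max(p)))
```

Gradient descent on `nll_loss` would then recover the parameter θ, as described above.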
The invention has the beneficial effects that:
compared with a word2vec-TextCNN model through experiments, the model provided by the invention has better evaluation indexes such as accuracy, precision, recall rate, harmonic mean and the like than the word2vec-TextCNN model. When the iteration times are 120, the accuracy, precision, recall rate and harmonic mean of the model provided by the invention reach the maximum, and are 87.58%, 88.26%, 86.54% and 87.39% respectively; by analyzing the error classification, the model provided by the invention has higher accuracy in decomposing the self-made external cooperation mixed part into the self-made part and the external cooperation part in the BOM manufacturing process.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a diagram showing the relationship between the self-manufacturing capability and the necessity of manufacturing BOM parts according to the present invention.
FIG. 3 is a schematic diagram of the data input process for manufacturing a BOM according to the present invention.
FIG. 4 is a MLM mask diagram of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Referring to FIG. 1, the invention first performs text preprocessing on the manufacturing BOM in three steps: category labeling, semantic expansion and stop-word removal; second, it represents the preprocessed BOM text sentences as vectors through an embedded marking method and the MLM (Masked Language Model) and NSP (Next Sentence Prediction) mechanisms; finally, it performs feature extraction and classification on the vectorized manufacturing BOM text sentences with BERT-TextCNN (Bidirectional Encoder Representations from Transformers + Text Convolutional Neural Network, a joint model of bidirectional encoder representations and a text convolutional neural network), intelligently decomposing the complex-equipment manufacturing BOM into the four classes of necessary self-made parts, necessary external cooperation parts, unnecessary self-made parts and unnecessary external cooperation parts.
The overall BERT-TextCNN-based manufacturing BOM intelligent decomposition flow is shown in FIG. 1 and mainly comprises: 1) a text preprocessing layer, 2) a text representation layer, 3) a feature extraction layer, and 4) a recognition and classification layer;
text preprocessing layer: the method refers to an input form required by processing an existing text form into a certain mathematical model, and a pre-processed form is usually related to text quality and a mining target, so that an expected text form is obtained through a specific method.
Text representation layer: generates vectors from the preprocessed manufacturing BOM texts with the BERT word-vector model to facilitate subsequent intelligent computation.
Feature extraction layer: extracts the vector features of the vectorized manufacturing BOM text using the convolution and pooling functions of BERT-TextCNN.
Recognition and classification layer: based on the vectorized feature vectors of the manufacturing BOM, a Softmax classifier identifies the manufacturing BOM, finally yielding the four classification results of necessary self-made parts, necessary external cooperation parts, unnecessary self-made parts and unnecessary external cooperation parts.
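The four layers above can be sketched as a composed pipeline; every layer function below is a stub standing in for the real BERT-TextCNN components, so the resulting label is meaningless except as an illustration of the data flow:

```python
# Hypothetical skeleton of the four-layer decomposition flow
# (preprocess -> represent -> extract features -> classify). All stubs.

CLASSES = ["necessary self-made", "necessary external cooperation",
           "unnecessary self-made", "unnecessary external cooperation"]

def preprocess(text):       return text.lower().split()                # stub cleaning
def represent(tokens):      return [float(len(t)) for t in tokens]     # stub vectors
def extract_features(vec):  return [max(vec), sum(vec) / len(vec)]     # stub pooling
def classify(features):     return CLASSES[int(features[0]) % 4]       # stub Softmax

def decompose(text: str) -> str:
    """Run one BOM line through all four layers."""
    return classify(extract_features(represent(preprocess(text))))

label = decompose("Spindle receives power from an engine")
print(label in CLASSES)
```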
The intelligent decomposition method for manufacturing the BOM by the complex equipment based on semantic computation comprises the following steps:
Step one: perform text preprocessing on the manufacturing BOM through category labeling, semantic expansion and stop-word removal, specifically as follows. The original BOM text as it exists in an actual enterprise carries a certain randomness, because no standard template has been established and the descriptions come from many different technicians; for subsequent intelligent decomposition, the original BOM text must therefore be preprocessed through the three links of category labeling, semantic expansion and stop-word removal,
1) class labeling for manufacturing BOMs
This embodiment divides the parts in the manufacturing BOM into four classes: necessary self-made parts, necessary external cooperation parts, unnecessary self-made parts and unnecessary external cooperation parts. According to self-making capability and necessity, the four classes map to four workpiece types, as shown in FIG. 2, with the following specific meanings:
workpiece type 1 (necessary self-made part): the workpiece must be manufactured by the factory itself;
workpiece type 2 (necessary external cooperation part): the workpiece must be processed on the factory's behalf by an external cooperation factory;
workpiece type 3 (unnecessary external cooperation part): the workpiece is primarily intended to be processed by an external factory, but can also be self-made;
workpiece type 4 (unnecessary self-made part): the workpiece is primarily finished by self-making, but can be transferred to an external factory for processing;
2) semantic augmentation of manufacturing BOMs
Semantic expansion describes each workpiece name in detail. Because the workpiece name in the manufacturing BOM carries too little information about the workpiece, semantic computation from the name alone is difficult; expanding the workpiece name semantically according to a fixed standard is decisive for further decomposition of the whole manufacturing BOM. Generally the workpiece name is a term from the machine-manufacturing field, and such terms are explained accurately and thoroughly in reference works such as the "mechanical component term graphic dictionary (Chinese-English)" and general mechanical term dictionaries. The specific process of manufacturing-BOM semantic expansion is therefore: match the specific definitions of the workpiece names in the BOM table in these dictionaries, so that the resulting text data consists of the three parts category - workpiece name - name definition. Taking a spindle as an example, the expanded text is: class 1 - spindle - a spindle refers to a shaft that receives power from an engine or motor and transmits it to other parts;
3) removing stop words from manufacturing BOM
The expanded manufacturing BOM may contain words and symbols that are meaningless for the subsequent semantic analysis and the mining target, such as filler words and punctuation marks; removing these meaningless words reduces the feature dimension of the input text to a certain extent and improves the accuracy and efficiency of text classification. In text cleaning such words are called stop words, and every language has at least one stop-word list. When applying the Chinese stop-word list, words such as "from" that express a logical hierarchical relation in the description and definition of a workpiece name are not removed, while words such as "is" and "refers to" that do little to reflect the workpiece definition can be deleted. Taking the spindle as an example, the result after stop-word removal is: class 1 spindle receives power from an engine or motor and transmits power to other machine shafts.
The text form obtained through category labeling, semantic expansion and stop-word removal can thus be abstracted as category - workpiece name - workpiece description (stop words removed).
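A record in this abstracted form can be parsed into its three fields; the " - " separator and the example line below are assumptions for illustration, not the patent's actual data format:

```python
# Sketch of parsing one preprocessed line of the abstracted form
# "category - workpiece name - workpiece description" into a typed record.

from typing import NamedTuple

class BomRecord(NamedTuple):
    category: int
    name: str
    description: str

def parse_record(line: str, sep: str = " - ") -> BomRecord:
    # split at most twice so the description itself may contain the separator
    category, name, description = line.split(sep, 2)
    return BomRecord(int(category), name, description)

rec = parse_record("1 - spindle - receives power from engine and transmits to other shafts")
print(rec.category, rec.name)
```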
Step two, performing vectorization representation (BERT-TextCNN text representation layer) on the preprocessed BOM text sentences through an embedded marking method and an MLM mechanism;
the text presentation layer for manufacturing the BOM mainly carries out vectorization presentation on the preprocessed text through three stages of data input processing, pre-training and vectorization presentation;
1) data input processing
The data input processing for manufacturing the BOM realizes the segmentation of the preprocessed text by respectively embedding word information, sentence/paragraph information and position information, and mainly comprises the following three steps: a. the word embedding means that each word is embedded into a vector, the initial keyword of each sentence is [ CLS ], and the word embedding is a classification embedding and has the function of representing an aggregation sequence of classification tasks, if no classification task is ignored;
b. sentence/segment embedding: a learned segment embedding is added to each input sentence, with embeddings A and B distinguishing two different sentences; if there is only one sentence, only A is used, and sentences are separated by [SEP];
c. position embedding: a learned position embedding is added for each token, with sequences of up to 512 tokens supported; taking the spindle as an example, the BERT input procedure is shown in fig. 3;
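The three embedding steps above can be sketched as a minimal illustration of BERT's input layout; the function name and the character-level tokenization are assumptions for illustration, standing in for the real WordPiece pipeline:

```python
# Minimal sketch of BERT-style input construction: token, segment, and
# position ids for a (possibly paired) sentence. build_bert_inputs is a
# hypothetical helper, not the real BERT preprocessing code.

def build_bert_inputs(tokens_a, tokens_b=None, max_len=512):
    """Return (tokens, segment_ids, position_ids) in BERT's input layout:
    [CLS] A... [SEP] (B... [SEP]), with segment 0 for A and 1 for B."""
    tokens = ["[CLS]"] + list(tokens_a) + ["[SEP]"]
    segment_ids = [0] * len(tokens)
    if tokens_b is not None:
        tokens += list(tokens_b) + ["[SEP]"]
        segment_ids += [1] * (len(tokens_b) + 1)
    if len(tokens) > max_len:  # BERT supports at most 512 positions
        raise ValueError("sequence longer than the supported length")
    position_ids = list(range(len(tokens)))
    return tokens, segment_ids, position_ids
```

For a single sentence only segment A is produced, matching step b above.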
2) pre-training
The model is further made to learn deeper sentence relations through MLM (Masked Language Model) and NSP (Next Sentence Prediction). MLM is a cloze-style mechanism: given a sentence, 15% of its words are randomly masked and the BERT model is trained to recover them; the aim of this mechanism is to force the BERT model to build deeper bidirectional representations;
in MLM predictive training, for example, the token [MASK] randomly replaces words in the sentence "receives power from the engine or motor and transmits it to the other machine shafts", so that all information about the masked words, such as "transmits" and "and", is hidden during encoding, as shown in fig. 4. After model encoding, the final output predicts the masked words such as "transmits" and "and" at the [MASK] positions; MLM training typically follows these rules:
A. with 80% probability the word is replaced by the special token [MASK]; for example, "receives power from the engine or motor and transmits it to the other machine shafts" becomes "[MASK] power from the engine or motor and [MASK] it to the other [MASK] shafts";
B. with 10% probability the word is replaced by a random word from the vocabulary, e.g., a word of "receives power from the engine or motor and transmits it to the other machine shafts" is swapped for an unrelated word;
C. with 10% probability the word is left unchanged, so that "receives power from the engine or motor and transmits it to the other machine shafts" stays as it is.
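The 15% selection and the 80/10/10 replacement rules above can be sketched as follows; `mlm_mask`, its toy vocabulary, and the seeded RNG are illustrative assumptions, not the patent's implementation:

```python
import random

def mlm_mask(tokens, vocab, mask_rate=0.15, rng=None):
    """Apply BERT's masked-LM corruption: pick ~15% of positions; of those,
    80% become [MASK], 10% a random vocabulary word, 10% stay unchanged.
    Returns the corrupted tokens and the {position: original_token} labels."""
    rng = rng or random.Random()
    out = list(tokens)
    labels = {}
    n_pick = max(1, round(len(tokens) * mask_rate))
    for i in rng.sample(range(len(tokens)), n_pick):
        labels[i] = tokens[i]          # the model must predict the original
        r = rng.random()
        if r < 0.8:
            out[i] = "[MASK]"          # rule A
        elif r < 0.9:
            out[i] = rng.choice(vocab) # rule B
        # else: rule C - token left unchanged but still predicted
    return out, labels
```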
The NSP mechanism pre-trains a binary next-sentence prediction task at the sentence level, which can be generated from any monolingual corpus; its aim is to make the BERT model better learn the relations between sentences. Specifically, sentences A and B are selected as a pre-training sample: with 50% probability B is the actual next sentence of A, and with the remaining 50% probability B comes from elsewhere in the corpus; the output is binary, and [SEP] separates the two sentences. Taking "receives power from the engine or motor and transmits it to the other machine shafts." as an example: given "receives power from the engine or motor", the pair whose next sentence is "and transmits it to the other machine shafts" returns TRUE, while the pair whose next sentence is "is a component of the impulse turbine rotor" returns FALSE.
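The NSP sample construction described above can be sketched as below, under the assumption of a simple list of consecutive corpus sentences; `make_nsp_pairs` is a hypothetical helper:

```python
import random

def make_nsp_pairs(sentences, rng=None):
    """Build next-sentence-prediction samples: for each adjacent pair (A, B),
    keep B with 50% probability (label True) or swap in a random sentence
    from the corpus that is not the true next sentence (label False)."""
    rng = rng or random.Random()
    pairs = []
    for a, b in zip(sentences, sentences[1:]):
        if rng.random() < 0.5:
            pairs.append((a, b, True))      # real next sentence
        else:
            other = rng.choice(sentences)
            while other is b:               # avoid picking the true next sentence
                other = rng.choice(sentences)
            pairs.append((a, other, False)) # random sentence from the corpus
    return pairs
```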
3) Vectorization
Vectorizing the manufacturing BOM text finally yields, for each sentence beginning with the special token [CLS], an encoded vector representation of the sentence. Since the manufacturing BOM is written in Chinese, this embodiment generates vectors with the open-source Chinese version of "bert-as-service", with the following result: each word vector has 768 dimensions (dim) and a 32-bit floating-point numeric type (datatype). The word-vector results generated for "spindle: receives power from the engine or motor and transmits it to the other machine shafts." are shown in table 1.
Table 1 Example of "spindle" word vectors generated by BERT
Step three, performing feature extraction and classification on the vectorized manufacturing BOM text sentences using BERT-TextCNN (the BERT-TextCNN feature-extraction and classification layers);
the TextCNN feature-extraction layer consists of convolution and pooling; its core function is to extract features from the vectorized workpiece descriptions in the manufacturing BOM. The convolution operation uses local word-order information to extract primary features from the input fixed-length sequence of workpiece-description vectors, and the pooling operation combines these primary features into high-level features;
1) convolution and pooling operations for vectorized manufacturing of BOMs
Taking the workpiece-description word vectors generated by the representation layer as input, each word vector having 768 dimensions, convolution is performed with convolution kernels of fixed sizes and numbers to extract the primary features of each workpiece-description text vector; this embodiment uses convolution kernels of heights 2 and 3 for feature extraction. During convolution, the kernel slides along the length of the sentence (vertically), yielding a convolved vector c_i; vertical convolution with kernels of different sizes extracts features between the words of the workpiece-description text. The vector values in a kernel are randomly generated at the model's first iteration, and the neural network updates the weights according to the loss value as the iteration count grows. Suppose a convolution kernel is a matrix w of width d and height h; then the h×d parameters of w must be updated. For a sentence s, the text representation layer yields a matrix A ∈ R^(s×d), where A[i:j] denotes rows i through j of A; adding a bias term b and an activation function f, the convolution operation can be expressed as:

c_i = f(w·A[i:i+h−1] + b),  i = 1, 2, …, s−h+1   (1)
the pooling operation extracts the maximum of the text features produced by the different convolution operations to generate higher-level features of the workpiece-description text vectors. This embodiment uses global max pooling to extract the maximum feature of each vector; after global max pooling, the convolution results are concatenated into a single vector that further represents the text semantics;
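Equation (1) combined with global max pooling can be sketched in NumPy as follows; the kernel heights (2 and 3) follow the embodiment, while the 8-dimensional toy vectors stand in for the 768-dimensional BERT vectors and the helper name is an assumption:

```python
import numpy as np

def text_conv_maxpool(A, kernels, b=0.0, f=np.tanh):
    """1-D convolution over a sentence matrix A (s x d word vectors)
    followed by global max pooling, as in eq. (1):
    c_i = f(w . A[i:i+h-1] + b), i = 1..s-h+1.
    `kernels` is a list of h x d filter matrices; returns the pooled vector."""
    s, d = A.shape
    pooled = []
    for w in kernels:
        h = w.shape[0]
        # slide the kernel vertically along the sentence
        c = np.array([f(np.sum(w * A[i:i + h]) + b) for i in range(s - h + 1)])
        pooled.append(c.max())  # global max pooling per filter
    return np.array(pooled)
```

The concatenated maxima form the fixed-length semantic vector fed to the classifier.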
2) vectorized manufacturing BOM classification
Softmax is adopted as the classifier for the vectorized manufacturing BOM. Let the training sample set be W = {(x_1, y_1), (x_2, y_2), …, (x_w, y_w)}, where x_i ∈ R^n is the text vector of the i-th training sample, n is its dimensionality, w is the number of training samples, y_i ∈ {1, 2, …, k} is the category of the i-th sample, and k is the number of categories. As stated above, BOM decomposition identifies four categories, namely necessary self-made, necessary outsourced, unnecessary self-made and unnecessary outsourced, so the category number k is 4. The discriminant function of the Softmax regression model is as follows:
where h_θ(x_i) is a k-dimensional vector whose elements p(y_i = j | x_i; θ) give the probability that the current input sample x_i belongs to class j; owing to normalization, the probabilities over all classes sum to 1. θ is the overall parameter of the model, each row of which holds the classifier parameters of one class, with the following relationship:
the parameter estimation of the Softmax regression model is solved by a maximum likelihood method, and the likelihood function is as follows:
solving the Softmax regression model θ by minimizing a loss function, which can be expressed as:
and solving the gradient of the loss function to obtain the parameter theta of the Softmax regression model.
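As a reconstruction of the formulas referenced in the three steps above (which do not survive in this text), the standard softmax-regression forms in the notation of this section (w samples, k classes, per-class parameter vectors θ_j) are:

```latex
h_\theta(x_i) = \frac{1}{\sum_{j=1}^{k} e^{\theta_j^{\mathsf T} x_i}}
\begin{bmatrix} e^{\theta_1^{\mathsf T} x_i} \\ \vdots \\ e^{\theta_k^{\mathsf T} x_i} \end{bmatrix},
\qquad
L(\theta) = \prod_{i=1}^{w} \prod_{j=1}^{k}
  p\left(y_i = j \mid x_i;\theta\right)^{\mathbf 1\{y_i = j\}},
\qquad
J(\theta) = -\frac{1}{w} \sum_{i=1}^{w} \sum_{j=1}^{k}
  \mathbf 1\{y_i = j\}\,
  \log \frac{e^{\theta_j^{\mathsf T} x_i}}{\sum_{l=1}^{k} e^{\theta_l^{\mathsf T} x_i}}
```

Minimizing J(θ) via its gradient yields the parameter θ, consistent with the gradient-based solution described above.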
Example 2
Manufacturing a BOM data set:
in the experiment, decrypted manufacturing BOM data from a large manufacturing enterprise in Shaanxi are used as the experimental data set. The data comprise 63 manufacturing BOMs of various kinds, with a total of 4372 workpieces: 2734 standard parts (62%); 662 self-made parts (15%), of which 329 are necessary and 333 are not; and 1014 outsourced parts (23%), of which 487 are necessary and 527 are not.
The experimental environment is as follows:
the experiment runs under Windows 10, using the PyCharm programming platform; the model is built with the Keras deep-learning framework, and the "bert-as-service" service is called. Keras in this experiment uses TensorFlow as its backend. See table 2 for the detailed experimental environment.
TABLE 2 Experimental Environment and configuration
Model training:
for each category in the data set, 80% is selected as the training set and 20% as the test set, and 20% of the training set is then held out as the validation set. During training, the model is first trained on the training set with a set of hyper-parameters, which are tuned by observing the model's performance on the validation set, so as to obtain the optimal model; the optimal model is then evaluated on the test set. The BERT-TextCNN parameter settings used in training are shown in Table 3.
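The per-category 80/20 split with a further 20% validation hold-out can be sketched as below; `stratified_split` is a hypothetical helper, not the embodiment's code:

```python
import random
from collections import defaultdict

def stratified_split(samples, rng=None):
    """Per-category split as described above: 80% train / 20% test, then 20%
    of the training portion held out as validation. `samples` is a list of
    (text, label) pairs; returns (train, val, test) lists."""
    rng = rng or random.Random()
    by_label = defaultdict(list)
    for s in samples:
        by_label[s[1]].append(s)
    train, val, test = [], [], []
    for group in by_label.values():           # split each category separately
        rng.shuffle(group)
        n_test = round(len(group) * 0.2)
        test += group[:n_test]
        rest = group[n_test:]
        n_val = round(len(rest) * 0.2)        # 20% of the training portion
        val += rest[:n_val]
        train += rest[n_val:]
    return train, val, test
```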
TABLE 3 model parameter setting Table
The experimental results are as follows:
The BERT-TextCNN (Bidirectional Encoder Representations from Transformers Text Convolutional Neural Network) classification model is evaluated by the precision, recall, and harmonic mean of the self-made/outsourced component classification. Precision is the percentage of the number t of components correctly assigned to a class by the BERT-TextCNN classification model out of the total number m the classifier assigned to that class (P = t/m × 100%). Recall is the percentage of t out of the total number n of components that truly belong to the class (R = t/n × 100%). The harmonic mean combines precision and recall (F = 2PR/(P + R) × 100%). To verify the effectiveness of the method, a word2vec-TextCNN classification model is selected for a comparison experiment. The accuracy, precision, recall, and harmonic mean obtained after 40, 60, and 120 iterations of the two models are shown in table 4.
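The three per-class measures can be computed directly from their definitions (t, m, n as defined above; the helper name is an assumption):

```python
def precision_recall_f1(t, m, n):
    """Metrics as defined above: t = items correctly assigned to a class,
    m = items the classifier assigned to the class, n = items that truly
    belong to the class. Returns (P, R, F) as fractions."""
    P = t / m                  # precision
    R = t / n                  # recall
    F = 2 * P * R / (P + R)    # harmonic mean of P and R
    return P, R, F
```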
TABLE 4 BERT-TextCNN model Classification results
As can be seen from table 4, at 40, 60, and 120 iterations the accuracy, precision, recall, and harmonic mean of the proposed decomposition model are all higher than those of the word2vec-TextCNN classification model; notably, the proposed model after 40 iterations already matches the classification performance of word2vec-TextCNN after 60 iterations, indicating that the proposed model is the stronger of the two. In addition, as the number of iterations increases, the indexes of both models rise, but model efficiency falls. At 120 iterations the proposed model reaches its maximum accuracy, precision, recall, and harmonic mean of 87.58%, 88.26%, 86.54%, and 87.39%, respectively.
For the 120-iteration run, the misclassified workpieces are counted over the categories of necessary self-made, necessary outsourced, unnecessary self-made and unnecessary outsourced, as shown in table 5. The vertical axis gives the actual workpiece category, and the horizontal axis the category assigned by the model.
TABLE 5 workpiece class error Classification distribution Table
| | Necessary self-made | Necessary outsourced | Unnecessary self-made | Unnecessary outsourced |
| --- | --- | --- | --- | --- |
| Necessary self-made | — | 0.0237 | 0.1966 | 0.0296 |
| Necessary outsourced | 0.0461 | — | 0.0454 | 0.1585 |
| Unnecessary self-made | 0.1830 | 0.0411 | — | 0.0262 |
| Unnecessary outsourced | 0.0309 | 0.1859 | 0.0330 | — |
As can be seen from table 5, the misclassification concentrates on workpieces that should be classed as necessary self-made or necessary outsourced being assigned to the unnecessary self-made or unnecessary outsourced classes; that is, the factor limiting model accuracy is the make-or-buy dimension of the workpiece. The cause is that when decomposing a BOM workpiece manually, the factory considers not only the semantics of the workpiece name but also its internal manufacturing resources, which change over time, whereas this work analyzes only from the perspective of workpiece semantics, so accuracy is affected.
Claims (4)
1. The BOM intelligent decomposition method for manufacturing the complex equipment based on semantic computation is characterized by comprising the following steps of:
step one, performing text preprocessing on the manufacturing BOM through category labeling, semantic augmentation and stop-word removal;
step two, vectorizing the preprocessed manufacturing BOM text sentences through the embedding method and the MLM mechanism;
step three, performing feature extraction and classification on the vectorized manufacturing BOM text sentences using BERT-TextCNN;
the TextCNN feature-extraction layer consists of convolution and pooling; its core function is to extract features from the vectorized workpiece descriptions in the manufacturing BOM; the convolution operation uses local word-order information to extract primary features from the input fixed-length sequence of workpiece-description vectors, and the pooling operation combines these primary features into high-level features.
2. The intelligent decomposition method for BOM manufacturing based on semantic computation of claim 1, wherein the first step is specifically:
1) class labeling for manufacturing BOMs
The workpieces in the manufacturing BOM are divided into four types: necessary self-made, necessary outsourced, unnecessary self-made and unnecessary outsourced; according to whether a workpiece is self-made and whether that assignment is necessary, the four types correspond to four grades:
workpiece type 1: necessary self-made, a workpiece that must be manufactured by the factory itself;
workpiece type 2: necessary outsourced, a workpiece that must be processed by another factory;
workpiece type 3: a workpiece preliminarily assigned to processing by another factory, but which could also be manufactured in-house;
workpiece type 4: a workpiece preliminarily assigned to in-house manufacture, but which could be transferred to another factory for processing;
2) semantic augmentation of manufacturing BOMs
The specific definitions of the workpiece names in the BOM tables are matched against a graphic dictionary of mechanical-part terminology and a corpus of common mechanical term definitions, the resulting text data consisting of three parts: category - workpiece name - name definition;
3) removing stop words from manufacturing BOM
Using a Chinese stop-word list, the words and symbols in the augmented manufacturing BOM that are meaningless for subsequent semantic analysis and mining are removed; words in the workpiece-name descriptions and definitions that express logical or hierarchical relations are retained, while words that do not help characterize the workpiece are deleted.
3. The BOM intelligent decomposition method based on semantic computation of complex equipment manufacturing according to claim 1, wherein the second step is specifically:
1) data input processing
Respectively embedding word information, sentence/segment information and position information to realize the segmentation of the preprocessed text;
the data input processing specifically comprises the following steps:
a. word embedding: each word is embedded as a vector; every sentence begins with the special token [CLS], which serves as a classification embedding representing the aggregate sequence for classification tasks and is ignored when there is no classification task;
b. sentence/segment embedding: a learned segment embedding is added to each input sentence, with embeddings A and B distinguishing two different sentences; if there is only one sentence, only A is used, and sentences are separated by [SEP];
c. position embedding: a learned position embedding is added for each token, with sequences of up to 512 tokens supported;
2) pre-training
The model is further made to learn deeper sentence relations through MLM and NSP; MLM is a cloze-style mechanism in which 15% of the words of a given sentence are randomly masked and the BERT model is trained to recover them, the aim being to force the BERT model to build deeper bidirectional representations;
the NSP mechanism is to pre-train a binarization next sentence prediction task from the perspective of sentences, the task can be generated from any monolingual corpus, and the purpose is to enable the BERT model to better learn the relation between sentences;
3) vectorization
Vectorizing the manufacturing BOM text yields, for each sentence beginning with the special token [CLS], an encoded vector representation of the sentence; since the manufacturing BOM is written in Chinese, the vectors are generated with the open-source Chinese version of bert-as-service, with the following result: each word vector has 768 dimensions and a 32-bit floating-point numeric type.
4. The intelligent decomposition method for BOM manufacturing based on semantic computation of claim 1, wherein in the third step, the BOM text sentence is subjected to feature extraction and classification, specifically comprising the following steps:
1) convolution and pooling operations for vectorized manufacturing of BOMs
Taking the workpiece-description word vectors generated by the representation layer as input, each word vector having 768 dimensions, convolution is performed with convolution kernels of fixed sizes and numbers to extract the primary features of each workpiece-description text vector; during convolution, the kernel slides along the length of the sentence, yielding a convolved vector c_i, and vertical convolution with kernels of different sizes extracts features between the words of the workpiece-description text; the vector values in a kernel are randomly generated at the model's first iteration, and the neural network updates the weights according to the loss value as the iteration count grows; the convolution operation can be expressed as:

c_i = f(w·A[i:i+h−1] + b),  i = 1, 2, …, s−h+1   (1)
where d is the kernel width, h the kernel height, and w the kernel matrix, whose h×d parameters are to be updated; s is the sentence, which the text representation layer maps to a matrix A ∈ R^(s×d), with A[i:j] denoting rows i through j of A; b is a bias term and f an activation function;
the pooling operation extracts the maximum of the text features produced by the different convolution operations to generate higher-level features of the workpiece-description text vectors; global max pooling extracts the maximum feature of each vector, and after global max pooling the convolution results are concatenated into one vector that further represents the text semantics;
2) vectorized manufacturing BOM classification
Softmax is adopted as the classifier for the vectorized manufacturing BOM. Let the training sample set be W = {(x_1, y_1), (x_2, y_2), …, (x_w, y_w)}, where x_i ∈ R^n is the text vector of the i-th training sample, n is its dimensionality, w is the number of training samples, y_i ∈ {1, 2, …, k} is the category of the i-th sample, and k is the number of categories; BOM decomposition identifies four categories, namely necessary self-made, necessary outsourced, unnecessary self-made and unnecessary outsourced, so the category number k is 4, and the discriminant function of the Softmax regression model is as follows:
where h_θ(x_i) is a k-dimensional vector whose elements p(y_i = j | x_i; θ) give the probability that the current input sample x_i belongs to class j; owing to normalization, the probabilities over all classes sum to 1; θ is the overall parameter of the model, each row of which holds the classifier parameters of one class, with the following relationship:
the parameter estimation of the Softmax regression model is solved by a maximum likelihood method, and the likelihood function is as follows:
solving the Softmax regression model θ by minimizing a loss function, which can be expressed as:
and solving the gradient of the loss function to obtain the parameter theta of the Softmax regression model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011153334.4A CN112270615A (en) | 2020-10-26 | 2020-10-26 | Intelligent decomposition method for manufacturing BOM (Bill of Material) by complex equipment based on semantic calculation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011153334.4A CN112270615A (en) | 2020-10-26 | 2020-10-26 | Intelligent decomposition method for manufacturing BOM (Bill of Material) by complex equipment based on semantic calculation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112270615A true CN112270615A (en) | 2021-01-26 |
Family
ID=74342862
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011153334.4A Pending CN112270615A (en) | 2020-10-26 | 2020-10-26 | Intelligent decomposition method for manufacturing BOM (Bill of Material) by complex equipment based on semantic calculation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112270615A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112507120A (en) * | 2021-02-07 | 2021-03-16 | 上海二三四五网络科技有限公司 | Prediction method and device for keeping classification consistency |
CN113221531A (en) * | 2021-06-04 | 2021-08-06 | 西安邮电大学 | Multi-model dynamic collaborative semantic matching method |
CN113221548A (en) * | 2021-04-01 | 2021-08-06 | 深圳市猎芯科技有限公司 | BOM table identification method and device based on machine learning, computer equipment and medium |
CN113706074A (en) * | 2021-08-06 | 2021-11-26 | 岚图汽车科技有限公司 | Super BOM resolving method, device and equipment and readable storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101957954A (en) * | 2010-10-18 | 2011-01-26 | 上海电机学院 | Management and control optimizing method of discrete manufacture production |
CN104391942A (en) * | 2014-11-25 | 2015-03-04 | 中国科学院自动化研究所 | Short text characteristic expanding method based on semantic atlas |
CN108427775A (en) * | 2018-06-04 | 2018-08-21 | 成都市大匠通科技有限公司 | A kind of project cost inventory sorting technique based on multinomial Bayes |
CN108595643A (en) * | 2018-04-26 | 2018-09-28 | 重庆邮电大学 | Text character extraction and sorting technique based on more class node convolution loop networks |
CN109918497A (en) * | 2018-12-21 | 2019-06-21 | 厦门市美亚柏科信息股份有限公司 | A kind of file classification method, device and storage medium based on improvement textCNN model |
CN110134786A (en) * | 2019-05-14 | 2019-08-16 | 南京大学 | A kind of short text classification method based on theme term vector and convolutional neural networks |
CN110134961A (en) * | 2019-05-17 | 2019-08-16 | 北京邮电大学 | Processing method, device and the storage medium of text |
CN110334210A (en) * | 2019-05-30 | 2019-10-15 | 哈尔滨理工大学 | A kind of Chinese sentiment analysis method merged based on BERT with LSTM, CNN |
CN111339750A (en) * | 2020-02-24 | 2020-06-26 | 网经科技(苏州)有限公司 | Spoken language text processing method for removing stop words and predicting sentence boundaries |
CN111652489A (en) * | 2020-05-26 | 2020-09-11 | 浙江师范大学 | BOM-driven intelligent manufacturing service task decomposition method and system |
- 2020-10-26 CN CN202011153334.4A patent/CN112270615A/en active Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101957954A (en) * | 2010-10-18 | 2011-01-26 | 上海电机学院 | Management and control optimizing method of discrete manufacture production |
CN104391942A (en) * | 2014-11-25 | 2015-03-04 | 中国科学院自动化研究所 | Short text characteristic expanding method based on semantic atlas |
CN108595643A (en) * | 2018-04-26 | 2018-09-28 | 重庆邮电大学 | Text character extraction and sorting technique based on more class node convolution loop networks |
CN108427775A (en) * | 2018-06-04 | 2018-08-21 | 成都市大匠通科技有限公司 | A kind of project cost inventory sorting technique based on multinomial Bayes |
CN109918497A (en) * | 2018-12-21 | 2019-06-21 | 厦门市美亚柏科信息股份有限公司 | A kind of file classification method, device and storage medium based on improvement textCNN model |
CN110134786A (en) * | 2019-05-14 | 2019-08-16 | 南京大学 | A kind of short text classification method based on theme term vector and convolutional neural networks |
CN110134961A (en) * | 2019-05-17 | 2019-08-16 | 北京邮电大学 | Processing method, device and the storage medium of text |
CN110334210A (en) * | 2019-05-30 | 2019-10-15 | 哈尔滨理工大学 | A kind of Chinese sentiment analysis method merged based on BERT with LSTM, CNN |
CN111339750A (en) * | 2020-02-24 | 2020-06-26 | 网经科技(苏州)有限公司 | Spoken language text processing method for removing stop words and predicting sentence boundaries |
CN111652489A (en) * | 2020-05-26 | 2020-09-11 | 浙江师范大学 | BOM-driven intelligent manufacturing service task decomposition method and system |
Non-Patent Citations (6)
Title |
---|
Shi Zhenjie et al., "Sentiment analysis of e-commerce reviews based on BERT-CNN", Intelligent Computer and Applications, vol. 10, no. 2, 1 February 2020 (2020-02-01), pages 7-11 *
Kaikeba group et al., "Deep Learning for Natural Language Processing in Practice", vol. 1, 31 August 2020, pages 165-168 *
Xu Derong et al., "Fast and efficient feature learning with sparse autoencoding and Softmax regression", Transducer and Microsystem Technologies, vol. 36, no. 5, 20 May 2017 (2017-05-20), pages 55-58 *
Zhu Lizhen, "Research on intelligent customer identification technology based on machine learning", China Masters' Theses Full-text Database, Information Science and Technology series, no. 7, 15 July 2020 (2020-07-15), pages 138-429 *
Luan Zhaodong, "Research on collaborative task allocation optimization for the SY machine tool group in a cloud manufacturing environment", China Masters' Theses Full-text Database, Economics and Management Science series, no. 8, 15 August 2019 (2019-08-15), pages 150-283 *
Guo Zhichao, "Research and application of manufacturing BOM for aircraft assembly", China Masters' Theses Full-text Database, Engineering Science and Technology II series, no. 7, 15 July 2013 (2013-07-15), pages 8-22 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112507120A (en) * | 2021-02-07 | 2021-03-16 | 上海二三四五网络科技有限公司 | Prediction method and device for keeping classification consistency |
CN113221548A (en) * | 2021-04-01 | 2021-08-06 | 深圳市猎芯科技有限公司 | BOM table identification method and device based on machine learning, computer equipment and medium |
CN113221531A (en) * | 2021-06-04 | 2021-08-06 | 西安邮电大学 | Multi-model dynamic collaborative semantic matching method |
CN113706074A (en) * | 2021-08-06 | 2021-11-26 | 岚图汽车科技有限公司 | Super BOM resolving method, device and equipment and readable storage medium |
CN113706074B (en) * | 2021-08-06 | 2024-03-05 | 岚图汽车科技有限公司 | Super BOM (binary object model) resolving method, device, equipment and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110597735B (en) | Software defect prediction method for open-source software defect feature deep learning | |
CN112270615A (en) | Intelligent decomposition method for manufacturing BOM (Bill of Material) by complex equipment based on semantic calculation | |
CN110287320B (en) | Deep learning multi-classification emotion analysis model combining attention mechanism | |
CN108614875B (en) | Chinese emotion tendency classification method based on global average pooling convolutional neural network | |
CN107273913B (en) | Short text similarity calculation method based on multi-feature fusion | |
CN110046356B (en) | Label-embedded microblog text emotion multi-label classification method | |
CN101661462A (en) | Four-layer structure Chinese text regularized system and realization thereof | |
CN111274817A (en) | Intelligent software cost measurement method based on natural language processing technology | |
CN111259153B (en) | Attribute-level emotion analysis method of complete attention mechanism | |
CN111767398A (en) | Secondary equipment fault short text data classification method based on convolutional neural network | |
CN110472245B (en) | Multi-label emotion intensity prediction method based on hierarchical convolutional neural network | |
CN109783637A (en) | Electric power overhaul text mining method based on deep neural network | |
CN111860981B (en) | Enterprise national industry category prediction method and system based on LSTM deep learning | |
CN111966825A (en) | Power grid equipment defect text classification method based on machine learning | |
CN113434688B (en) | Data processing method and device for public opinion classification model training | |
CN115526236A (en) | Text network graph classification method based on multi-modal comparative learning | |
CN114880468A (en) | Building specification examination method and system based on BilSTM and knowledge graph | |
CN109947936A (en) | A method of based on machine learning dynamic detection spam | |
CN113360654A (en) | Text classification method and device, electronic equipment and readable storage medium | |
CN113627969A (en) | Product problem analysis method and system based on E-commerce platform user comments | |
CN115906842A (en) | Policy information identification method | |
CN113379432B (en) | Sales system customer matching method based on machine learning | |
CN113360647B (en) | 5G mobile service complaint source-tracing analysis method based on clustering | |
CN113868422A (en) | Multi-label inspection work order problem traceability identification method and device | |
CN113159831A (en) | Comment text sentiment analysis method based on improved capsule network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||