CN110688722B - Automatic generation method of part attribute matrix based on deep learning - Google Patents


Info

Publication number
CN110688722B
CN110688722B (application CN201910986705.8A)
Authority
CN
China
Prior art keywords
parts
matrix
attribute
neurons
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910986705.8A
Other languages
Chinese (zh)
Other versions
CN110688722A (en)
Inventor
马腾
马佳
支含绪
邓森洋
陈雨晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Technology Suzhou Co ltd
Original Assignee
Shenzhen Technology Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Technology Suzhou Co ltd filed Critical Shenzhen Technology Suzhou Co ltd
Priority to CN201910986705.8A priority Critical patent/CN110688722B/en
Publication of CN110688722A publication Critical patent/CN110688722A/en
Application granted granted Critical
Publication of CN110688722B publication Critical patent/CN110688722B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a deep-learning-based method for automatically generating a part attribute matrix, comprising the following steps: A. acquiring the part information of sample products and creating a part dictionary; B. creating a numerical mapping for the parts; C. defining the size of the attribute matrix E; D. building a part sequence model according to the structure of the sample products; E. setting a fixed sliding window and dividing the part sequence model to form a training sample set D; F. constructing a neural network structure and determining the input layer, hidden layer, and output layer of the network; G. training on the resulting samples; H. using the attribute matrix E. With this method the attribute matrix is obtained automatically, without manually labeling a large number of attributes for a large number of parts one by one.

Description

Automatic generation method of part attribute matrix based on deep learning
Technical Field
The invention relates to the technical field of intelligent manufacturing, in particular to an automatic generation method of a part attribute matrix based on deep learning.
Background
In the current intelligent manufacturing field, when parts are analyzed by means of data mining, artificial intelligence, and similar techniques, the correlation between parts is easily lost. Enhancing this correlation in data mining makes it possible to assess the similarity between parts very effectively. During product design, accurate and effective similarity calculation enables useful recommendations for part selection, improving the efficiency of intelligent product design. During machining process design, it greatly improves the reusability of process information such as process resources and process parameters, and thus the efficiency of intelligent process design. During assembly process design, it greatly improves the reusability of component assembly processes and assembly resources, and thus the efficiency of intelligent assembly route design. During simulation analysis, similarity-based recommendation allows previously created meshes, loads, and the like to be reused effectively, greatly improving the efficiency of intelligent simulation analysis. Enhancing part correlation in data mining therefore benefits many links of the manufacturing chain and is an extremely important element of intelligent manufacturing oriented toward large-scale customization.
At present, when parts are analyzed by data mining or artificial intelligence, one-hot vectors are often adopted to numericalize the parts. By the nature of one-hot encoding, the inner product of the one-hot vectors of any two different parts in the part dictionary is 0, so the correlation between parts is lost. Moreover, because the dimension of a one-hot vector equals the dictionary length, the related computation tends to grow steeply as the dictionary, and hence the vector dimension, grows.
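The orthogonality problem described above can be demonstrated in a few lines. This sketch is illustrative only (not part of the original disclosure); the dictionary size and part indices are made up:

```python
import numpy as np

# Two distinct parts in a dictionary of N = 5 parts, as one-hot vectors.
N = 5
p1 = np.eye(N)[0]  # the part at dictionary index 0
p2 = np.eye(N)[3]  # the part at dictionary index 3

# The inner product of any two different one-hot vectors is 0,
# so one-hot encoding carries no similarity information between parts.
print(np.dot(p1, p2))  # 0.0
print(np.dot(p1, p1))  # 1.0 (a vector is only "similar" to itself)
```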
To improve the correlation between parts, the attribute information of the parts (such as aperture, outer contour dimensions, hole features, outer cylinder features, etc.) is often used to vectorize them. The attribute values of all parts form an m×n matrix (m is the number of parts in the part dictionary and n is the number of attributes), namely the attribute matrix. Representing a part through the attribute matrix yields a vector of lower dimension than its one-hot vector that also contains the part's attribute information. Because attribute information is included, part correlation is greatly improved, and calculations such as part similarity become more effective.
However, labeling attribute information normally requires a large amount of manual work; manually marking and assigning the attribute information of parts is a very resource-consuming task.
Disclosure of Invention
The invention aims to provide an automatic generation method of a part attribute matrix based on deep learning, which aims to solve the problems in the background technology.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the automatic generation method of the component attribute matrix based on deep learning is characterized by comprising the following steps of:
A. acquiring part information of a product serving as a sample, and creating a part dictionary;
B. creating a numerical map of the part;
C. defining the size of an attribute matrix E;
D. according to the structure of the sample product, a part sequence model is established;
E. setting a fixed sliding window, dividing a part sequence model, and forming a training sample set D;
F. constructing a neural network structure, and determining the input layer, hidden layer, and output layer of the network;
G. training the relevant samples;
H. an attribute matrix E is used.
As a further aspect of the invention: the first step is specifically as follows: acquire all product structures serving as samples and create a part dictionary from the part information in those structures. Different parts are distinguished by their serial numbers, and each distinct part is entered into the part dictionary; the number of parts in the dictionary is N.
As a further aspect of the invention: the second step is specifically as follows: a numerical mapping is created for each part in the part dictionary, the content of which is defined as an N-dimensional vector. As shown in fig. 1, the dimension of the vector is determined by the size of the part dictionary (i.e., the dimension is N), and the vector is defined as the one-hot vector of the part.
As a further aspect of the invention: the third step is specifically as follows: the attribute matrix E is defined as an N x M matrix, wherein the abscissa of the matrix E represents N parts in the dictionary, and the ordinate of the matrix represents M pieces of common feature information of the parts.
As a further aspect of the invention: the fourth step is specifically as follows: acquire all product structures serving as samples, and serialize the parts in each product structure according to the product's structure tree.
As a further aspect of the invention: the fifth step is specifically as follows: set a fixed sliding window covering n parts (assume n=3) and determine the input and output of each training sample from the three parts inside the window and the part that follows it. In the part sequence model, the three parts P1, P2, and P3 inside the window are the input of a training sample and the next part P4 is its output; P1, P2, P3, and P4 are added to the training sample set as one sample. The window is then slid rightward by one part, so the input becomes parts P2, P3, and P4 and the output becomes the next part P5; P2, P3, P4, and P5 are added as another sample. Sliding the window continuously in this way over the divided part sequence models finally yields the training sample set D.
As a further aspect of the invention: the sixth step is specifically as follows: create the numbers of input-layer and output-layer neurons of the neural network from the one-hot vector dimension N of the parts. Because the input of a training sample is the three parts inside the sliding window and the output is the part that follows them, the input layer consists of the one-hot vectors of three parts, namely N×3 input-layer neurons, and the output layer consists of the one-hot vector of one part, namely N×1 output-layer neurons. The number of hidden-layer neurons is determined by the N×M attribute matrix E defined in the third step. The input layer and the hidden layer are not fully connected: the attribute matrix E is regarded as the input-to-hidden weight matrix and is multiplied by the one-hot vectors of the three input parts, yielding their three embedded vectors, each of dimension M. The hidden layer therefore consists of the embedded vectors of the three parts, namely M×3 hidden-layer neurons.
As a further aspect of the invention: the seventh step is specifically as follows: convert the training sample set D into a sample matrix, feed the sample matrix into the constructed neural network, select a suitable activation function, and obtain through training the network structure that meets expectations together with all optimal weight parameters, including the weight matrix from the input layer to the hidden layer, namely the attribute matrix E.
As a further aspect of the invention: the eighth step is specifically as follows: multiply the one-hot vector of a given part by the trained attribute matrix E to obtain the attribute vector corresponding to that part. Given the attribute vectors of two parts, their similarity can then be calculated with related algorithms such as the cosine angle or a neural network.
Compared with the prior art, the invention has the following beneficial effect: it provides a deep-learning-based method for automatically generating a part attribute matrix, by which the attribute matrix can be obtained automatically without manually labeling a large number of attributes for a large number of parts one by one.
Drawings
FIG. 1 is a one-hot vector diagram of an ith part in a part dictionary.
Fig. 2 is a schematic diagram of a manually labeled attribute matrix E.
FIG. 3 is a schematic diagram of a component sequence model.
Fig. 4 is a schematic diagram of a training sample set D.
Fig. 5 is a schematic diagram of training an attribute matrix based on deep learning.
FIG. 6 is a schematic diagram of a one-hot vector dimension reduction calculation process.
Fig. 7 is a schematic diagram of a component dictionary.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1-7, example 1: in the embodiment of the invention, the method for automatically generating the attribute matrix of the part based on deep learning comprises the following steps:
1. the part information of the product as a sample is acquired, and a part dictionary is created.
All product structures serving as samples are obtained, and a part dictionary is created from the part information in those structures. Different parts are distinguished by their serial numbers, and each distinct part is entered into the part dictionary; the number of parts in the dictionary is N.
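As a non-limiting illustration (not part of the original disclosure), step 1 can be sketched in Python; the product structures and serial numbers below are assumptions:

```python
# Hypothetical sample products, each listed as a sequence of part
# serial numbers; repeated serials are the same part reused.
sample_products = [
    ["P1", "P2", "P3", "P2", "P4"],  # parts of sample product 1
    ["P2", "P5", "P1"],              # parts of sample product 2
]

# Build the part dictionary: each distinct part gets one entry,
# keyed by serial number, with a running dictionary index 0..N-1.
part_dictionary = {}
for product in sample_products:
    for serial in product:
        if serial not in part_dictionary:
            part_dictionary[serial] = len(part_dictionary)

N = len(part_dictionary)  # number of distinct parts in the dictionary
```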
2. A numerical map of the part is created.
A numerical mapping is created for each part in the part dictionary, the content of which is defined as an N-dimensional vector. As shown in fig. 1, the dimension of the vector is determined by the size of the part dictionary (i.e., the dimension is N), and the vector is defined as the one-hot vector of the part.
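The one-hot mapping of fig. 1 can be written as a small helper. This is an illustrative sketch, not part of the original disclosure:

```python
import numpy as np

def one_hot(index, N):
    """N-dimensional one-hot vector for the part at the given dictionary index."""
    v = np.zeros(N)
    v[index] = 1.0
    return v

# e.g. the part at dictionary index 2 in a dictionary of N = 5 parts
print(one_hot(2, 5))  # [0. 0. 1. 0. 0.]
```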
3. The size of the attribute matrix E is defined.
The attribute matrix E is defined as an N×M matrix, where the abscissa of matrix E represents the N parts in the dictionary and the ordinate represents M pieces of common feature information of the parts, as shown in fig. 2.
4. And building a part sequence model according to the product structure in the sample.
All product structures serving as samples are obtained, and the parts in each product structure are serialized according to the product's structure tree, such as the parts shown in fig. 3.
5. And setting a fixed sliding window, and dividing a part sequence model to form a training sample set D.
A fixed sliding window covering n parts is set, with n=3, and the input and output of each training sample are determined from the three parts inside the window and the part that follows it. As shown in fig. 4, in the part sequence model the three parts P1, P2, P3 inside the fixed sliding window are the input of a training sample and the next part P4 is its output; P1, P2, P3, and P4 are added to the training sample set as one sample. The window is then slid rightward by one part, so the input becomes parts P2, P3, and P4 and the output becomes the next part P5; P2, P3, P4, and P5 are added as another sample. When one part sequence model has been fully divided, the sequence model of the next product is divided in the same way, and the continuously sliding window finally yields the training sample set D.
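The window-sliding procedure of fig. 4 can be sketched as follows (an illustration, not the original implementation; the part sequence is made up):

```python
def make_training_samples(part_sequence, n=3):
    """Slide a fixed window of n parts over the sequence; the n parts
    inside the window are the sample input, the next part its output."""
    samples = []
    for i in range(len(part_sequence) - n):
        inputs = part_sequence[i:i + n]  # e.g. P1, P2, P3
        output = part_sequence[i + n]    # e.g. the next part P4
        samples.append((inputs, output))
    return samples

# One part sequence model; a full training set D would concatenate
# the samples of every sample product's sequence model.
sequence = ["P1", "P2", "P3", "P4", "P5"]
D = make_training_samples(sequence, n=3)
# [(['P1', 'P2', 'P3'], 'P4'), (['P2', 'P3', 'P4'], 'P5')]
```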
6. A neural network structure is constructed, and the input layer, hidden layer, and output layer of the network are determined.
The numbers of input-layer and output-layer neurons of the neural network are created according to the one-hot vector dimension N of the parts. Because the input of a training sample is the three parts inside the fixed sliding window and the output is the part that follows them, the input layer consists of the one-hot vectors of three parts, i.e., N×3 input-layer neurons; the output layer consists of the one-hot vector of one part, i.e., N×1 output-layer neurons.
The number of hidden-layer neurons of the neural network can be deduced from the N×M attribute matrix E defined in step 3. As shown in fig. 5, the input layer and the hidden layer are not fully connected: the attribute matrix E is regarded as the input-to-hidden weight matrix and is multiplied by the one-hot vectors of the three input parts, yielding their three embedded vectors, each of dimension M. The hidden layer therefore consists of the embedded vectors of the three parts, i.e., M×3 hidden-layer neurons.
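The layer sizes of fig. 5 can be checked numerically. This sketch assumes small example values of N, M, and the three part indices; a random E stands in for the matrix that training would produce:

```python
import numpy as np

N, M = 6, 4                  # dictionary size, number of attributes
rng = np.random.default_rng(0)
E = rng.normal(size=(N, M))  # attribute matrix = input-to-hidden weights

# One-hot inputs for three parts (assumed indices 0, 2, 5):
# the input layer has N*3 neurons.
x = np.concatenate([np.eye(N)[i] for i in (0, 2, 5)])
assert x.shape == (N * 3,)

# Multiplying a one-hot vector by E just selects one row of E, giving
# that part's M-dimensional embedded vector; the hidden layer is the
# three embedded vectors concatenated, i.e. M*3 neurons.
hidden = np.concatenate([np.eye(N)[i] @ E for i in (0, 2, 5)])
assert hidden.shape == (M * 3,)
```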
7. The relevant samples are trained.
The training sample set D is converted into a sample matrix (the number of rows is the number of samples produced by dividing the part sequence models; the number of columns is the dimension of three part one-hot vectors, i.e., N×3). The sample matrix is fed into the constructed neural network, a suitable activation function is selected (such as tanh, sigmoid, ReLU, or softmax), and through training (e.g., by backpropagation) the network structure that meets expectations and all optimal weight parameters are finally obtained, including the weight matrix from the input layer to the hidden layer, namely the attribute matrix E.
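A minimal training sketch under assumed toy dimensions follows; it is not the patent's implementation. It uses plain stochastic gradient descent with a softmax output and cross-entropy loss, which is one common choice consistent with the described architecture; the sample indices and hyperparameters are made up:

```python
import numpy as np

rng = np.random.default_rng(42)
N, M, n = 5, 3, 3                    # dictionary size, attributes, window size
E = rng.normal(0, 0.1, (N, M))       # input-to-hidden weights = attribute matrix
W = rng.normal(0, 0.1, (M * n, N))   # hidden-to-output weights

# Toy training set: each sample is (three context part indices, next part index).
samples = [((0, 1, 2), 3), ((1, 2, 3), 4), ((2, 3, 4), 0)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.5
for epoch in range(300):
    for ctx, target in samples:
        h = np.concatenate([E[i] for i in ctx])  # hidden layer: M*n values
        y = softmax(h @ W)                       # output layer: N values
        grad_out = y.copy()
        grad_out[target] -= 1.0                  # d(cross-entropy)/d(logits)
        grad_h = W @ grad_out                    # backprop to the hidden layer
        W -= lr * np.outer(h, grad_out)
        for k, i in enumerate(ctx):              # backprop into the rows of E
            E[i] -= lr * grad_h[k * M:(k + 1) * M]

# After training, E holds the learned part attribute (embedding) matrix.
```

The point of the exercise is the side effect: once the network predicts the next part well, the rows of E encode attribute-like features of each part.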
8. An attribute matrix E is used.
Example 2: on the basis of embodiment 1, in the eighth step the one-hot vector of a given part is multiplied by the trained attribute matrix E to obtain the attribute vector corresponding to that part. Given the attribute vectors of two parts, their similarity can be calculated with related algorithms such as the cosine angle or a neural network.
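The usage in Example 2 can be sketched as follows. The matrix values here are invented for illustration; a real E would come from training as in step 7:

```python
import numpy as np

# Assumed trained attribute matrix E (N = 4 parts, M = 3 attributes).
E = np.array([[1.0, 0.2, 0.0],
              [0.9, 0.3, 0.1],
              [0.0, 1.0, 0.8],
              [0.1, 0.9, 0.9]])

def attribute_vector(part_index, E):
    """one-hot(part) @ E simply selects the part's row of E."""
    N = E.shape[0]
    return np.eye(N)[part_index] @ E

def cosine_similarity(a, b):
    """Cosine of the angle between two attribute vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

v0 = attribute_vector(0, E)
v1 = attribute_vector(1, E)
v2 = attribute_vector(2, E)
print(cosine_similarity(v0, v1))  # high: parts 0 and 1 have similar attributes
print(cosine_similarity(v0, v2))  # low: parts 0 and 2 differ
```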
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single technical solution; this manner of description is adopted merely for clarity. Those skilled in the art should treat the specification as a whole, and the technical solutions of the embodiments may be combined as appropriate to form other implementations that will be apparent to those skilled in the art.

Claims (3)

1. The automatic generation method of the component attribute matrix based on deep learning is characterized by comprising the following steps of:
step one, acquiring part information of a product serving as a sample, and creating a part dictionary; the first step is specifically as follows: acquiring all product structures serving as samples, and creating a part dictionary for part information in the product structures; different parts are distinguished through the serial numbers of the parts, and each different part is clustered and put into a part dictionary, wherein the number of the parts in the part dictionary is N;
step two, creating a numerical mapping of the parts; the second step is specifically as follows: creating a numerical mapping for each part in the part dictionary, the content of the mapping being defined as an N-dimensional vector whose dimension is determined by the size of the part dictionary, i.e., the dimension of the vector is N, and defining it as the one-hot vector of the part;
step three, defining the size of an attribute matrix E; the third step is specifically as follows: defining an attribute matrix E as an N multiplied by M matrix, wherein the abscissa of the matrix E represents N parts in the dictionary, and the ordinate of the matrix represents M pieces of common characteristic information of the parts;
step four, building a part sequence model according to the structure of the sample product; the fourth step is specifically as follows: acquiring all product structures serving as samples, and processing the serialization of the parts in the product structures according to the structure tree of the products;
step five, setting a fixed sliding window, dividing the part sequence model, and forming a training sample set D; the fifth step is specifically as follows: setting a fixed sliding window covering n parts, with n=3, and determining the input and output of each training sample from the three parts inside the window and the part that follows it; in the part sequence model, the three parts P1, P2, and P3 inside the fixed sliding window are the input of a training sample and the next part P4 is its output, and P1, P2, P3, and P4 are added to the training sample set as one sample; the fixed sliding window is then slid rightwards by one part, so that the input becomes parts P2, P3, and P4 and the output becomes the next part P5, and P2, P3, P4, and P5 are added to the training sample set as another sample; and so on, when one part sequence model has been divided, the sequence model of the next product is divided according to the same process, and the training sample set D is finally formed through the continuously sliding window;
step six, constructing a neural network structure, and determining the input layer, hidden layer, and output layer of the network; the sixth step is specifically as follows: creating the numbers of input-layer and output-layer neurons of the neural network according to the one-hot vector dimension N of the parts; because the input of a training sample is the three parts inside the fixed sliding window and the output is the part that follows them, the input layer consists of the one-hot vectors of three parts, namely N×3 input-layer neurons; the output layer consists of the one-hot vector of one part, namely N×1 output-layer neurons; the number of hidden-layer neurons of the neural network is determined by the N×M attribute matrix E defined in the third step; the input layer and the hidden layer are not fully connected: the attribute matrix E is set as the weight matrix from the input layer to the hidden layer and is multiplied by the one-hot vectors of the three parts respectively to obtain their embedded vectors, the dimensions of which equal M, so that the hidden layer consists of the embedded vectors of the three parts, namely M×3 hidden-layer neurons;
step seven, training related samples;
and step eight, using an attribute matrix E.
2. The automatic generation method of the component attribute matrix based on deep learning according to claim 1, wherein step seven specifically comprises: converting the training sample set D into a sample matrix, feeding the sample matrix into the constructed neural network, selecting a suitable activation function, and finally obtaining through training the network structure that meets expectations and all optimal weight parameters, including the weight matrix from the input layer to the hidden layer, namely the attribute matrix E.
3. The automatic generation method of the component attribute matrix based on deep learning according to claim 1, wherein step eight specifically comprises: multiplying the one-hot vector of a given part by the trained attribute matrix E to obtain the attribute vector corresponding to that part; given the attribute vectors of two parts, the similarity can be calculated by applying related algorithms such as the cosine angle or a neural network.
CN201910986705.8A 2019-10-17 2019-10-17 Automatic generation method of part attribute matrix based on deep learning Active CN110688722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910986705.8A CN110688722B (en) 2019-10-17 2019-10-17 Automatic generation method of part attribute matrix based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910986705.8A CN110688722B (en) 2019-10-17 2019-10-17 Automatic generation method of part attribute matrix based on deep learning

Publications (2)

Publication Number Publication Date
CN110688722A CN110688722A (en) 2020-01-14
CN110688722B true CN110688722B (en) 2023-08-08

Family

ID=69113315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910986705.8A Active CN110688722B (en) 2019-10-17 2019-10-17 Automatic generation method of part attribute matrix based on deep learning

Country Status (1)

Country Link
CN (1) CN110688722B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625912A (en) * 2020-06-04 2020-09-04 深制科技(苏州)有限公司 Deep learning oriented Bom structure and creation method thereof

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105405060A (en) * 2015-12-01 2016-03-16 中国计量学院 Customized product similarity calculation method based on structure editing operation
CN105607288A (en) * 2015-12-29 2016-05-25 大连楼兰科技股份有限公司 Intelligent glasses omnibearing vehicle part completeness detection method based on acoustic detection assistance
CN106372732A (en) * 2016-08-22 2017-02-01 中国北方车辆研究所 Hybrid decision method for maintainability and general characteristics of armored vehicle
CN106874914A (en) * 2017-01-12 2017-06-20 华南理工大学 A kind of industrial machinery arm visual spatial attention method based on depth convolutional neural networks
CN107153877A (en) * 2017-05-24 2017-09-12 艾凯克斯(嘉兴)信息科技有限公司 A kind of machine learning method of rule-based matrix multiway tree
CN107204043A (en) * 2017-06-23 2017-09-26 艾凯克斯(嘉兴)信息科技有限公司 A kind of distributed design approach of feature based mapping
CN108009527A (en) * 2017-12-26 2018-05-08 东北大学 A kind of intelligent characteristic recognition methods towards STEP-NC2.5D manufacturing features
CN108197702A (en) * 2018-02-09 2018-06-22 艾凯克斯(嘉兴)信息科技有限公司 A kind of method of the product design based on evaluation network and Recognition with Recurrent Neural Network
CN108230121A (en) * 2018-02-09 2018-06-29 艾凯克斯(嘉兴)信息科技有限公司 A kind of product design method based on Recognition with Recurrent Neural Network
CN108280057A (en) * 2017-12-26 2018-07-13 厦门大学 A kind of microblogging rumour detection method based on BLSTM
CN108280746A (en) * 2018-02-09 2018-07-13 艾凯克斯(嘉兴)信息科技有限公司 A kind of product design method based on bidirectional circulating neural network
CN108388651A (en) * 2018-02-28 2018-08-10 北京理工大学 A kind of file classification method based on the kernel of graph and convolutional neural networks
CN108563863A (en) * 2018-04-11 2018-09-21 北京交通大学 The energy consumption calculation and dispatching method of City Rail Transit System
CN108596327A (en) * 2018-03-27 2018-09-28 中国地质大学(武汉) A kind of seismic velocity spectrum artificial intelligence pick-up method based on deep learning
CN108763445A (en) * 2018-05-25 2018-11-06 厦门智融合科技有限公司 Construction method, device, computer equipment and the storage medium in patent knowledge library
WO2018204410A1 (en) * 2017-05-04 2018-11-08 Minds Mechanical, Llc Metrology system for machine learning-based manufacturing error predictions
CN109344405A (en) * 2018-09-25 2019-02-15 艾凯克斯(嘉兴)信息科技有限公司 A kind of similarity processing method based on TF-IDF thought and neural network
CN109740536A (en) * 2018-06-12 2019-05-10 北京理工大学 A kind of relatives' recognition methods based on Fusion Features neural network
CN109977972A (en) * 2019-03-29 2019-07-05 东北大学 A kind of intelligent characteristic recognition methods based on STEP

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170091615A1 (en) * 2015-09-28 2017-03-30 Siemens Aktiengesellschaft System and method for predicting power plant operational parameters utilizing artificial neural network deep learning methodologies

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105405060A (en) * 2015-12-01 2016-03-16 中国计量学院 Customized product similarity calculation method based on structure editing operations
CN105607288A (en) * 2015-12-29 2016-05-25 大连楼兰科技股份有限公司 Acoustic-detection-assisted omnidirectional vehicle part completeness detection method for smart glasses
CN106372732A (en) * 2016-08-22 2017-02-01 中国北方车辆研究所 Hybrid decision method for maintainability and general characteristics of armored vehicles
CN106874914A (en) * 2017-01-12 2017-06-20 华南理工大学 Visual control method for industrial robotic arms based on deep convolutional neural networks
WO2018204410A1 (en) * 2017-05-04 2018-11-08 Minds Mechanical, Llc Metrology system for machine learning-based manufacturing error predictions
CN107153877A (en) * 2017-05-24 2017-09-12 艾凯克斯(嘉兴)信息科技有限公司 Machine learning method based on a rule-matrix multiway tree
CN107204043A (en) * 2017-06-23 2017-09-26 艾凯克斯(嘉兴)信息科技有限公司 Distributed design method based on feature mapping
CN108280057A (en) * 2017-12-26 2018-07-13 厦门大学 Microblog rumor detection method based on BLSTM
CN108009527A (en) * 2017-12-26 2018-05-08 东北大学 Intelligent feature recognition method for STEP-NC 2.5D manufacturing features
CN108230121A (en) * 2018-02-09 2018-06-29 艾凯克斯(嘉兴)信息科技有限公司 Product design method based on recurrent neural networks
CN108280746A (en) * 2018-02-09 2018-07-13 艾凯克斯(嘉兴)信息科技有限公司 Product design method based on bidirectional recurrent neural networks
CN108197702A (en) * 2018-02-09 2018-06-22 艾凯克斯(嘉兴)信息科技有限公司 Product design method based on evaluation networks and recurrent neural networks
CN108388651A (en) * 2018-02-28 2018-08-10 北京理工大学 Text classification method based on graph kernels and convolutional neural networks
CN108596327A (en) * 2018-03-27 2018-09-28 中国地质大学(武汉) Deep-learning-based artificial intelligence picking method for seismic velocity spectra
CN108563863A (en) * 2018-04-11 2018-09-21 北京交通大学 Energy consumption calculation and scheduling method for urban rail transit systems
CN108763445A (en) * 2018-05-25 2018-11-06 厦门智融合科技有限公司 Construction method and apparatus for a patent knowledge base, computer device, and storage medium
CN109740536A (en) * 2018-06-12 2019-05-10 北京理工大学 Kinship recognition method based on a feature-fusion neural network
CN109344405A (en) * 2018-09-25 2019-02-15 艾凯克斯(嘉兴)信息科技有限公司 Similarity processing method based on TF-IDF and neural networks
CN109977972A (en) * 2019-03-29 2019-07-05 东北大学 Intelligent feature recognition method based on STEP

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Jingjing. Research on the configuration of product technical characteristics and part characteristics and its optimization technology. China Master's Theses Full-text Database. 2011, (No. 12), C028-27. *

Also Published As

Publication number Publication date
CN110688722A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN109960759B (en) Recommendation system click rate prediction method based on deep neural network
CN106503035A (en) A kind of data processing method of knowledge mapping and device
CN112685504B (en) Production process-oriented distributed migration chart learning method
CN112860904B (en) External knowledge-integrated biomedical relation extraction method
CN114861890A (en) Method and device for constructing neural network, computing equipment and storage medium
CN113255844A (en) Recommendation method and system based on graph convolution neural network interaction
CN110688722B (en) Automatic generation method of part attribute matrix based on deep learning
CN114548591A (en) Time sequence data prediction method and system based on hybrid deep learning model and Stacking
CN110289987B (en) Multi-agent system network anti-attack capability assessment method based on characterization learning
CN113449878B (en) Data distributed incremental learning method, system, equipment and storage medium
CN116302088B (en) Code clone detection method, storage medium and equipment
CN111797135A (en) Structured data processing method based on entity embedding
CN116561885A (en) Data-driven supercritical airfoil and mapping method and system of flow field of data-driven supercritical airfoil
CN112149826B (en) Profile graph-based optimization method in deep neural network inference calculation
CN116263849A (en) Injection molding process parameter processing method and device and computing equipment
CN115544307A (en) Directed graph data feature extraction and expression method and system based on incidence matrix
CN114254199A (en) Course recommendation method based on bipartite graph projection and node2vec
CN106909649A (en) Big data profile query processing method based on recurrent neural networks
CN110705650B (en) Sheet metal layout method based on deep learning
Martino et al. Semantic techniques for discovering architectural patterns in building information models
CN112990618A (en) Prediction method based on machine learning method in industrial Internet of things
JPH09326100A (en) Unit and method for traffic control
Navin et al. Modeling of random variable with digital probability hyper digraph: data-oriented approach
CN115544876B (en) Design method for intelligent manufacturing software of productivity center
CN113722951B (en) Scatterer three-dimensional finite element grid optimization method based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant