NL2032523B1 - Flowsheet digitization with computer vision, automatic simulation, and flowsheet (auto)completion with machine learning - Google Patents
- Publication number
- NL2032523B1
- Authority
- NL
- Netherlands
- Prior art keywords
- chemical
- digitized
- node
- directed graph
- setup
- Prior art date
Classifications
- G—PHYSICS › G05—CONTROLLING; REGULATING › G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B17/02 — Systems involving the use of models or simulators of said systems: electric
- G05B2219/32104 — Data extraction from geometric models for process planning
- G05B2219/35203 — Parametric modelling, variant programming, process planning
Abstract
The present invention is in the field of physical processes, chemical processes, biological processes, and microbiological processes in general, apparatuses for such processes, such as for boiling, for separation, for mixing, for dissolving, for reacting, for controlling, and in particular a process comprising a plurality of such apparatuses and processes or process steps, as well as the interaction between said apparatuses and processes or process steps, such as in terms of flows of chemicals between apparatuses. To indicate such general flow aspects a process flow diagram may be used. The process flow diagram displays the relationship between major equipment of a plant facility and does not show minor details.
Description
P100801NL00
Flowsheet digitization with computer vision, automatic simulation, and flowsheet (auto)completion with machine learning
The present invention is in the field of physical processes, chemical processes, biological processes, and microbiological processes in general, apparatuses for such processes, such as for boiling, for separation, for mixing, for dissolving, for reacting, for controlling, and in particular a process comprising a plurality of such apparatuses and processes or process steps, as well as the interaction between said apparatuses and processes or process steps, such as in terms of flows of chemicals between apparatuses. To indicate such general flow aspects a process flow diagram may be used. The process flow diagram is intended to visually display the relationship between the major equipment of a plant facility and does not show minor details.
In the representation of physical processes, chemical processes, biological processes, and microbiological processes in general, apparatuses for such processes, and the interaction between said apparatuses and processes or process steps, process flow diagrams may be used. A first step leading to the construction of a process plant and its use in the manufacture of a product is typically the conception of a process, typically involving process steps. The process concept may then be visualized by a process flow diagram representing the process steps and their main details, or likewise a method of producing. Process design can then proceed on the basis of the process flow diagram chosen. Therein also physical properties of the apparatuses are incorporated. Fig. 1 shows some typical elements and symbols used. The elements of such flow diagrams, as well as aspects thereof, such as implementation, typically comply with one or more of the following standards: ISO 15519-1:2010(en): Specification for diagrams for process industry — Part 1: General rules; ISO 15519-2:2015(en): Specifications for diagrams for process industry — Part 2: Measurement and control; ISO 10628-1:2014(en): Diagrams for the chemical and petrochemical industry — Part 1: Specification of diagrams; ISO 10628-2:2012(en): Diagrams for the chemical and petrochemical industry — Part 2: Graphical symbols; ANSI Y32.11: Graphical Symbols For Process Flow Diagrams (withdrawn 2003); and SAA AS 1109: Graphical Symbols For Process Flow Diagrams For The Food Industry. These process flow diagrams may be used to perform steady-state and non-steady-state heat and mass balancing, sizing, and costing calculations, such as for a chemical process. This is considered an essential and core part of process design. Nowadays a computer or the like is used therein, in particular for supporting the calculations, and hence process design. Typical steps in process design are an initial step, which may be referred to as synthesis; a step for optimizing the process design, which may involve heat and material balances, sizing of process equipment, and cost calculations; a control step for assessing topics such as safety and operability; and a final step, wherein the process design or parts thereof are further optimized in view of a previous step. In optimization, structural [physical] elements of the process design can be optimized, as well as particular settings in the process, such as parameters, e.g. temperature, pressure, flow rate, density, etc., in particular in view of the interaction between the process steps and apparatuses involved. Initially one could change the selection of the apparatus(es) involved, and then one could change the values of parameters, such as temperature and pressure. Parameter optimization is considered a more advanced stage. As mentioned, process flow diagrams play an important role in process design.
Typically, process flow diagrams of a process may include various elements, such as operational parameter data (see above), references to a mass balance, major equipment items, connections with other systems, identifications, such as process stream names, process piping, and major bypass and recirculation (recycle) streams. They typically do not include minor elements, such as minor bypass lines, instrumentation and details thereof, controllers like level or flow controllers, pipe classes or piping line numbers, isolation and shutoff valves, maintenance vents and drains, relief and safety valves, and flanges, though this is not a general rule. Process flow diagrams of multiple process units within a large industrial plant may, as a consequence of their size and complexity, usually contain less detail.
Nowadays a process flow diagram can be computer generated, such as from process simulators, using CAD packages, or using flow chart software with a library of chemical engineering symbols. Rules and symbols are available from standardization organizations such as DIN, ISO or ANSI, as mentioned above. In view of the complexity of a typical process, process flow diagrams may be produced on large sheets of paper. However, many non-digitized versions of process flow diagrams still exist, and often these are used in valuable and critical processes. Process flow diagrams of many commercial processes can be found in the literature, specifically in encyclopedias of chemical technology, although some might be outdated. More recent ones can be found on-line. Typically these process flow diagrams relate to a pixel-oriented diagram, that is, wherein the diagram is present as an image as such, without the details of the image being incorporated as separate items or the like. In other words, the meaning of, or information relating to, various elements in the image in the real world does not form part of the image; as mentioned, often the diagrams are not even digitized at all.
Digitization of small elements in such diagrams may also pose a problem. Although promising results have been reported from previous studies, some shortcomings of prior research also become apparent. Firstly, machine learning (ML) models in the literature are typically trained on data sets from a single source, mostly a company cooperating with researchers, or even on synthesized data sets. Unsurprisingly, the accuracy of such models is near perfection, as the data exhibits little variation. It needs to be acknowledged that retrieving piping and instrumentation diagrams (P&IDs) is not trivial, as companies naturally rarely publish their documentation. It is however doubtful that such models would generalize well to other data distributions, for instance diagrams generated with other CAD editors, making the developed digitization approaches very isolated niche solutions. Secondly, most symbol data sets consist of only a few categories, not reflecting the variety of equipment used in process industries.
As a consequence of single-source data sets, few different symbols are categorized, leading to a lack of a complete symbol categorization. Thirdly, the amount of data used for training does not reflect the data-driven nature of deep learning (DL) models. DL models are commonly trained on big data. Many DL approaches for P&IDs however rely on very little data, with fewer than a hundred diagrams. Again, a possible explanation for this issue is the lack of publicly available data, combined with the time-consuming nature of labeling such diagrams.
Lastly, while considerable effort has been made towards the task of digitizing P&IDs, to the best of our knowledge DL-powered digitization approaches have not been applied to process flow diagrams (PFDs).
Thus, analyzing process flow diagrams in terms of, e.g., functionality, digitally communicating process flow diagrams, and making flow diagrams all appear to be at a stage wherein there is room for improvement.
The present invention relates to an improved system and method for analyzing a (chemical) process and providing a digitized set-up which overcomes one or more of the above disadvantages, without jeopardizing functionality and advantages.
The present invention relates in a first aspect to a system for analyzing a chemical process, which system in principle can be used for any process, comprising a computer memory provided with a digital representation of a directed graph representation of the chemical process, the graph representation comprising elements selected from apparatuses, flow modifiers, devices, process steps, flows, pipelines, signal lines, pressure regulators, temperature regulators, concentration regulators, chemical species regulators, controllers, and combinations thereof, and interactions between these elements, and a data processor provided with a computer program which, when running on the data processor, provides trained machine learning, which is trained using a selection of a training dataset comprising directed graph representations of chemical processes and/or string representations of the directed graphs and resulting directed graphs and nodes representing elements, and annotated versions thereof (as this typically is the training data set of the object detection algorithm, it typically includes the location of the objects on the image, e.g. through a bounding box or a pixel-based mask, and the type of equipment); provides the digital representation, which may be regarded as an image, of the directed graph representation of the chemical process in the computer memory as input to the trained machine learning; with the trained machine learning providing in the computer memory the chemical process as a directed graph with nodes and edges, which may be considered interconnections between nodes, defining the elements.
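The directed graph with typed nodes and edges described above can be sketched in a few lines of Python (the programming environment the description itself suggests). This is a minimal illustrative data structure, not the patented implementation; the node identifiers and equipment types are hypothetical examples.

```python
# Minimal sketch: a flowsheet as a directed graph with typed nodes and
# stream edges (illustrative only; node names are invented examples).
from dataclasses import dataclass, field

@dataclass
class FlowsheetGraph:
    nodes: dict = field(default_factory=dict)   # node id -> equipment type
    edges: list = field(default_factory=list)   # (source, target) streams

    def add_node(self, node_id, equipment_type):
        self.nodes[node_id] = equipment_type

    def add_edge(self, src, dst):
        # a directed edge models a stream flowing from src to dst
        self.edges.append((src, dst))

    def successors(self, node_id):
        return [dst for src, dst in self.edges if src == node_id]

# Hypothetical flowsheet: feed -> reactor -> distillation column -> product
g = FlowsheetGraph()
g.add_node("feed-1", "raw material")
g.add_node("r-1", "reactor")
g.add_node("dist-1", "distillation column")
g.add_node("prod-1", "product")
g.add_edge("feed-1", "r-1")
g.add_edge("r-1", "dist-1")
g.add_edge("dist-1", "prod-1")
```

Such a graph object can then be traversed or serialized as input for a process simulator.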
Basically, a bounding box may be considered a box, while the mask may be considered a flexible form based on pixels, so that objects can be cut out accurately. In particular, an object detection architecture, object detection performance metrics, and skeletonization are used. Therewith a system is provided which solves one or more of the above disadvantages. The present system, and likewise method, detect unit operations and their connectivity in process flowsheets, such as chemical process flows. A directed graph is made therefrom.
Therewith a full digitization is provided. The graph can be read automatically into a process simulation, such as process simulation software. A model of the graph can be created automatically. The graph may be considered a knowledge graph. In the process of making the graph certain elements may be cut out, such as by using a mask, in particular for cutting out unit operations. A neural network or the like may be used, in particular for learning. In addition, auto-completion of graphs to be made, such as of chemical flowsheets, is provided.
Therein reinforcement learning and graph representation may be used. A suitable programming environment is Python. No graphical user interface is required. The graph results are found to be more accurate compared to prior art methods, and also more meaningful, that is, representing the real environment better. The approach is also found to scale better.
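One of the object detection performance metrics referred to above is conventionally intersection over union (IoU), which scores how well a predicted bounding box matches an annotated one. A short sketch, assuming the common (x0, y0, x1, y1) corner convention for boxes:

```python
# Intersection over union (IoU) for two axis-aligned boxes (x0, y0, x1, y1).
# Illustrative sketch of a standard detection metric, not the patented code.
def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)  # zero if boxes are disjoint
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # partially overlapping boxes -> 1/7
```

A detection is typically counted as correct when its IoU with a ground-truth symbol annotation exceeds a threshold such as 0.5.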
The contribution of this invention is considered manifold. Firstly, the inventors developed an extensive catalogue of unit operations in PFDs. As PFDs are only loosely based on a common illustration convention, the inventors categorized symbols for unit operations based on their functionality as well as their appearance. Secondly, the inventors collected and annotated a large PFD dataset, mining over 1,000 flowsheets from various sources including scientific literature. Thirdly, the inventors developed object detection models that can identify unit operations in PFDs. The present system may be based on a state-of-the-art Faster R-CNN architecture, or a Mask R-CNN architecture. The present results show that the proposed system has competitive performance on the diverse data set. Lastly, the inventors adapted a pixel-based search algorithm to the specifics of PFD illustrations, such as different stream intersection illustrations and text in unit operations.
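The idea behind a pixel-based stream search of the kind mentioned above can be illustrated as follows: starting from a pixel on a drawn line, connected "ink" pixels are followed through the binarized image until the bounding box of another detected symbol is reached. The grid encoding, 4-connectivity, and box convention here are illustrative assumptions, not the patented algorithm.

```python
# Hedged sketch of a pixel-based stream search in a binary image grid
# (1 = line pixel, 0 = background). Follows connected pixels from `start`
# and reports whether the target symbol's bounding box is reached.
def trace_stream(grid, start, target_box):
    rows, cols = len(grid), len(grid[0])
    r0, c0, r1, c1 = target_box            # inclusive box of the target symbol
    stack, seen = [start], {start}
    while stack:
        r, c = stack.pop()
        if r0 <= r <= r1 and c0 <= c <= c1:
            return True                    # the line reaches the target symbol
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 1 and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

# Toy image: a horizontal stream on row 1; the target symbol occupies
# rows 0-2, columns 4-5.
grid = [
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
print(trace_stream(grid, (1, 0), (0, 4, 2, 5)))  # True
```

In a full system, such traces establish the directed edges between detected unit operations; handling intersections and in-symbol text requires the additional specializations the description mentions.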
In a second aspect the present invention relates to a method of providing a digitized process set-up, the digitized process set-up with a sequence of at least two process steps, which sequence may be a linear sequence, a circular sequence, multiple cycles, or a combination thereof, wherein the at least two process steps are selected from a chemical process step, a physical process step, a biological process step, and a micro-biological process step, in particular wherein process steps are selected from heating, cooling, flowing, reacting, mixing, contacting, depositing, annealing, separating, adding, removing, filtering, crystallizing, phase-separating, distilling, oxidizing, reducing, hydrogenating, de-hydrogenating, polymerizing, poly-condensing, esterifying, alkylating, de-alkylating, aminating, halogenating, sulfonating, nitrifying, de-hydrating, hydrolysing, and melting, comprising optically reading an image of a process set-up, digitizing said optically read process set-up forming a digitized image, which typically comprises pixels, using artificial intelligence, making a directed graph of the digitized image of the process set-up, the directed graph comprising a plurality of unique nodes and at least one [biological-]physical-chemical interaction between each first node and each second node of the plurality of nodes, and optionally at least one direction of said interaction, such as shown in figs. 2a-2d, wherein each node individually is selected from an end node, an intermediate node, and an intersection node, using artificial intelligence, identifying at least one physical object for each node in the directed graph, using artificial intelligence, identifying at least one process path, which may be referred to as an interaction, edge, or connection, between each first node and each second node of the plurality of nodes, and using rule-based ontology, in particular rule-based ontology obtained from a data model, such as ONTOCAPE, supplementing (also referred to as enriching) the directed graph of the digitized process set-up with the at least one process path and identified objects, or vice versa, in particular wherein the process is a chemical process.
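The rule-based ontology enrichment step named above can be sketched as a lookup of each detected node class in a rule table. The tiny hand-written table below merely stands in for a real ontology such as OntoCAPE (which is far richer); the class labels and attributes are illustrative assumptions.

```python
# Illustrative sketch of rule-based enrichment: detected node classes are
# looked up in a rule table (a stand-in for an ontology-derived data model)
# and the graph nodes are supplemented with the matched attributes.
RULES = {
    "distillation column": {"category": "separation",    "phases": ("vapour", "liquid")},
    "reactor":             {"category": "reaction",      "phases": ("liquid",)},
    "heat exchanger":      {"category": "heat transfer", "phases": ("liquid",)},
}

def enrich(nodes):
    # nodes: mapping of node id -> detected class label
    enriched = {}
    for node_id, label in nodes.items():
        attrs = dict(RULES.get(label, {"category": "unknown"}))
        attrs["label"] = label
        enriched[node_id] = attrs
    return enriched

result = enrich({"n1": "reactor", "n2": "distillation column"})
```

The same mechanism extends naturally to supplementing nodes with process paths, parameters, and further actors.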
In a third aspect the present invention relates to a use of the digitized process set-up for optimizing the process set-up, for forming a digital twin of the process set-up, for linking the process set-up to operational data, or for building a model of the process set-up.
In a further aspect the present system may comprise instructions for carrying out the present method.
Thereby the present invention provides a solution to one or more of the above-mentioned problems.
The present invention is also the topic of to-be-published scientific papers, entitled "Digitization of chemical process flowsheets using computer vision on big data" and "LEARNING FROM FLOWSHEETS: A GENERATIVE TRANSFORMER MODEL FOR FLOWSHEET COMPLETION", the contents of which are incorporated by reference.
Advantages of the present description are detailed throughout the description. References to the figures are not limiting, and are only intended to guide the person skilled in the art through details of the present invention.
The present invention relates in a first aspect to a system for analyzing a chemical process.
In an exemplary embodiment of the present method in the process set-up objects are localized and classified, such as a unit operation, an arrow, an intersection, a control unit, and text. An example of such a process flow is given in fig. 3.
In an exemplary embodiment of the present method the artificial intelligence is based on a convolutional neural network or a neural network with a transformer architecture. An example of such a process flow is given in fig. 3.
In an exemplary embodiment of the present method the artificial intelligence is trained on labelled data. An example of such a process flow is given in fig. 3. In an exemplary embodiment of the present method each node is provided with supplementary actors, wherein the supplementary actors are selected from chemical species, pressure, temperature, flow, concentration, controls, reactant, catalyst, product, pH values, composition, physical or chemical states, enzyme, biological species, and nucleic acid sequence or part thereof. An example is given in fig. 4.
In an exemplary embodiment of the present method, based on the obtained directed graph or supplemented directed graph, a layout of the process set-up is made comprising physical objects, the physical objects selected from apparatuses, in particular wherein apparatuses are selected from a tank, a column, a reflux, a reboiler, a boiler, a controller, a valve, a cooler, a mixer, a heater, a heat exchanger, a furnace, a filter, a splitter, a phase separator, an absorber, a flash unit, a reactor, a pump, a flow controller, a compressor, and a vessel. An example is given in fig. 5.
In an exemplary embodiment of the present method, based on the obtained directed graph or supplemented directed graph, a layout of the process set-up is made comprising chemical objects, the chemical objects selected from chemical species, catalysts, solvents, inert species, reactants, carriers, stabilizers, buffers, intermediate products, non-reactants, oxidants, and reductants. An example is given in fig. 6.
In an exemplary embodiment of the present method the directed graph is supplemented with a standard process model. An example is given in fig. 7.
In an exemplary embodiment of the present method nodes or actors are auto-completed. An example is given in fig. 8.
A novel method is provided to learn from (chemical) process flowsheets and provide flowsheet structure recommendations, such as for engineers performing process synthesis. In this respect the inventors created two data sets, the first one consisting of synthetically generated flowsheets and the second one consisting of real flowsheets in graph format. Using the conversion algorithm for the automated conversion between flowsheet graphs and SFILES 2.0 strings, the inventors automatically generated the corresponding text-based SFILES 2.0 data sets. The present inventors pre-trained a generative Transformer language model on the data set of synthetically generated flowsheets and fine-tuned it on the data set of real flowsheets. The trained generative Transformer model is capable of learning the grammatical structure of the SFILES 2.0 language and the patterns contained in the flowsheet topologies. Consequently, the results demonstrate that using the trained model for causal language modelling is a strategy to auto-complete flowsheet topologies. Using beam search as the decoding strategy yields the highest-probability flowsheet completion. On the other hand, if more diverse flowsheet recommendations are preferred, the top-p sampling decoding strategy is a promising addition to beam search.
The invention is further detailed by the accompanying figures and examples, which are exemplary and explanatory in nature and do not limit the scope of the invention. To the person skilled in the art it may be clear that many variants, obvious or not, may be conceivable that fall within the scope of protection defined by the present claims.
Figures 1, 2a-d, and 3-22 show aspects of the present invention.
Figure 1 shows examples of symbols typically used in process flow diagrams.
Fig. 2a shows a non-limitative example of a process flow diagram, in which to a certain extent arbitrary elements are shown. Figure 2b shows a graph representing the process flow diagram of fig. 2a. Figure 2c shows a fully digitized process flow diagram, according to the graph of fig. 2b and the process flow diagram of fig. 2a. Figure 2d shows schematically the method of providing a digitized process set-up, the digitized process set-up with a sequence of at least two events, wherein the at least two events are selected from a chemical event, a physical event, a biological event, and a micro-biological event. The process starts with the process flow diagram of fig. 2a, which is digitized. In the process of digitization, objects represented in the process flow diagram of fig. 2a are detected. Further a flow path is explored, such that a graph can be made, in particular the graph of fig. 2b. Then the graph of fig. 2b is supplemented or enriched with the elements of fig. 2a, and optional further elements, wherein the elements are selected from physical-chemical interactions between each first node and each second node of the plurality of nodes, from objects within figure 2a, parameters, etc., as is explained throughout the description and claims.
Figure 3 shows an exemplary further process flow diagram. It is an objective of the present invention to localize and classify objects in flow diagrams, such as unit operations, arrows, intersections, and text, to use a deep learning model, which may be based on convolutional neural networks, to further supplement the process flow diagram, and to use a supervised learning approach, such as wherein the model is trained on labeled data, such as that of figure 3.
Figure 4 shows a complex process flow diagram which is digitized through computer vision, according to the invention.
Figures 5-6 show a use of an advanced model with a mask, in addition to the present method or system. The advanced model can identify a pixel-based mask for each object detected. The advanced model may be based on a Mask R-CNN architecture. In this way unit operations are essentially cut out more accurately than in a bounding-box approach. Also, such a model typically learns better, for instance with less data.
Figure 7 shows automatic generation of UniSim models from process flow diagrams.
Figure 8 shows auto-completion of an exemplary process flow diagram. Starting with chemical species H₂ and CO₂, which are in a first step mixed (dashed oval), the present system provides suggestions for the addition of a next step, apparatus, parameters, etc. (dashed-dotted oval). In addition thereto, or as an alternative, typically used elements are provided as optional selections at the right-hand side of the screen (dashed dark oval). A user may select items from the pictograms on the right.
The invention, although described in a detailed explanatory context, may be best understood in conjunction with the accompanying figures.
Experiment
The below is an example of how the invention could be implemented in practice.
Fig. 2a shows an example flowsheet of a proposed cumene production plant. The illustration was slightly altered; the flow structure, however, is kept. Via a procedure known as information extraction, inventors automatically retrieved information of the chemical process representation in structured formats from unstructured data through several different methods.
Specifically, an introduction is given to object detection architectures, object detection performance measurement, and skeletonization.
For object detection a distinction can be made between one-stage and two-stage detectors. Two-stage detectors contain a model that determines regions of interest with high probabilities of containing objects and a second model that classifies the found regions of interest. On the other hand, one-stage detectors consist of a single network model that simultaneously predicts bounding boxes and classifications. Transfer learning refers to the improvement of model learning in one task by transferring knowledge from a related, previously learned task. With transfer learning, a model can initiate the training process on new data distributions with pre-trained weights, shortening training time and possibly leading to superior performance due to convergence to better optima. Backbone models in detection models are usually pre-trained on large datasets such as the ImageNet classification challenge dataset or the Common Objects in Context (COCO) dataset, and during transfer training parts of the network are frozen, meaning their parameters are not updated during training. Data augmentation methods are techniques used to increase the size of a limited dataset by adding modified copies of the data. Many augmentation techniques have been applied to image datasets in the literature, such as geometric transformations (e.g., stretching, skewing), flipping, color changes, cropping, rotation, translation, noise injection, random erasing, blurring, and more. Not all data augmentation techniques may apply to every dataset in every domain. Augmentations could reflect real varieties found in a data distribution. Feature pyramid networks (FPN) are a set of deep CNNs which construct features at different scales while keeping computation feasible. Feature pyramids are an important component in detection systems that facilitate the recognition of objects at different scales.
The main objective of feature pyramids in a model is to allow a neural network to learn high to low-level features and independently make predictions at each level.
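Two of the augmentation techniques listed above, flipping and noise injection, can be sketched on a binary symbol image as follows; this is a minimal illustration under assumed function names, not the augmentation pipeline actually used.

```python
import random

def hflip(img):
    """Horizontal flip: a label-preserving augmentation for flowsheet
    symbols (a mirrored pump symbol is still a pump symbol)."""
    return [row[::-1] for row in img]

def add_noise(img, p, rng):
    """Salt-and-pepper noise injection: flip each pixel with probability p."""
    return [[(1 - v) if rng.random() < p else v for v in row] for row in img]
```

Note that not every geometric transform is label-preserving here; for instance, heavy skewing could make a valve symbol resemble a different equipment class, which is why the applicable augmentations were identified per dataset.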
The objective of the object detection model is to localize and classify objects within images. Thus, two performances are typically evaluated, the placement of the bounding box around the object, and the classification accuracy of said bounding box.
The most common performance evaluation metrics used herein are the Average Precision (AP) and Mean Average Precision (mAP), both of which consider correct, missed, and false predictions in their respective calculation. The mAP is the primary metric used to measure a detector's accuracy over all the object categories in a dataset. The mAP depends on the Intersection over Union (IoU) threshold chosen, since that threshold determines when a prediction is considered correct. The Pascal VOC AP metric, also known as AP50, is the mAP calculated at an IoU threshold of 0.5. The COCO mAP metric, known simply as mAP, is the average of mAPs with IoU thresholds in the range of [0.5:0.05:0.95]. Comparing the AP50 to the COCO mAP provides valuable insights into the performances of the classification and bounding box placement tasks individually, as a high AP50 and a low mAP suggest that objects are correctly but imprecisely detected.
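The IoU underlying these metrics can be computed as follows for axis-aligned boxes; a minimal sketch with an assumed box format (x1, y1, x2, y2).

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # clamp to zero so disjoint boxes yield an empty intersection
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Under the AP50 convention a prediction would count as correct when `iou(pred, truth) >= 0.5` and the class label matches.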
Skeletonization produces a compact representation of objects in images by reducing them to their medial axis, effectively transforming shapes to curves of a 1-pixel thickness while preserving their connectivity. Figure 9 presents an example of distillation column skeletonizations. Imperfections in skeletonization can be observed when applying it to unit operations. In the digitization of PFDs, skeletonization facilitates the application of a graph search algorithm through a rule-based approach.
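A classical way to obtain such a medial-axis reduction is Zhang-Suen thinning; the pure-Python sketch below is illustrative and not necessarily the skeletonization routine used by the inventors.

```python
def zhang_suen(img):
    """Thin a binary image (list of 0/1 rows, with a zero border) to
    roughly 1-pixel-wide curves while preserving connectivity."""
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])

    def neighbors(y, x):
        # P2..P9, clockwise starting from the pixel directly above
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    n = neighbors(y, x)
                    b = sum(n)  # number of foreground neighbors
                    # a = number of 0->1 transitions around the pixel
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:
                        cond = n[0]*n[2]*n[4] == 0 and n[2]*n[4]*n[6] == 0
                    else:
                        cond = n[0]*n[2]*n[6] == 0 and n[0]*n[4]*n[6] == 0
                    if cond:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y][x] = 0
                changed = True
    return img
```

The imperfections mentioned above show up here as well: thick symbols can leave short spurs on the skeleton, which is one reason the bounding boxes of detected unit operations are filled in before thinning.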
In the development of efficient ML algorithms through supervised learning methods, large amounts of valuable and diverse data for training, testing, and validation were used. As flowsheet digitization represents a gap in current literature, inventors further introduce a novel categorization based on visual and functional features with examples. Process flow diagrams were retrieved by applying the flowsheet recognition algorithm. The algorithm downloads all full-text papers from a given source and extracts all images from said source. Then, a CNN classifier decides whether each figure is a flowsheet or not. Inventors applied the algorithm to diverse sources, such as a number of journals and process engineering education books, and retrieved about one thousand flowsheets. Very few figures were wrongly classified as flowsheets, which is in accordance with the high accuracy of the algorithm. The diversity in data is found imperative, as ML models regularly fail to extrapolate outside their trained data distribution, meaning the object detection algorithm would fail to properly detect unseen ways of illustrating unit operations.
Inventors defined main unit operations in chemical processes, and extended further to incorporate equipment types and different illustrations. Additionally, class decomposition within unit operation types was utilized to increase model performance and to create a more consistent dataset. Class decomposition describes the method of splitting classes into different, more homogeneous sub-classes, decomposing the detection problem into a larger group of separate classes with similar topological characteristics. Such a technique can serve many benefits to supervised learning models by improving the class-to-instance association. Each sub-class exhibits more similar patterns within itself and more distinguishable patterns to other classes. In the context of PFD digitization, the class decomposition reasoning was based on two observations. Firstly, many classes contain clearly identifiable sub-classes of very different illustrations for the same equipment. As an example, the category pump was sub-divided into different categories. Another observation made on the flowsheets was that sub-classes could allow for more detailed information to be extracted from the data. For example, the unit operation categorization proposed in literature was a single valve, while inventors found a large variety of valves with different functionalities, such as control valves or check valves. Thus, further decomposing provided more information about used equipment. The mined flowsheets, comprising actors, objects, nodes, and interactions, were labeled using domain expertise and contextual information. The open-source graphical annotation tool LabelImg was utilized. The quality of data provided to the object detection model is found to directly impact the predicting performance of the model. Thus, correct and consistent annotation of objects in the data is found important. In order to accelerate the annotation process, a semi-automation was employed. With a first batch of data, a preliminary model was trained and used for inference on unannotated data to create annotations. These were then corrected and used for further training of the model. Inventors found that this approach greatly accelerates the process of annotation, as the model quickly learns to detect the most common unit operations and human correction is only rarely necessary for more uncommon objects.
The used digitization approach may involve several distinct steps from an image to a graph representation. First, an object detection model is used to detect unit operations, such as those of figure 1. Text as well as arrowheads indicating stream directions may be detected by a second object detection model. The found bounding boxes of arrowheads and unit operations are filled in before skeletonization is applied, in order to facilitate skeletonization. With the skeletonized image and the locations of unit operations known, connectivity among unit operations is explored. In the following, inventors will discuss the steps of unit operation detection and stream recognition in more detail.
Various information is encoded in flowsheets. Apart from unit operations, there may be important information contained in text and arrows as well. In total, inventors trained two separate object detection models for different tasks: (1) detection of unit operations and unknown units, (2) detection of arrows, path intersections, and text. For object detection, the Faster R-CNN architecture was used. The choice of a backbone model is hereby one of the most crucial decisions for performance. Inventors used three different backbone models, which mostly differ in their architecture depth. Pretraining the backbone model, even though on an unrelated dataset, typically increases model performance, as the backbone model will learn to extract distinct features. This will help convergence on a flowsheet dataset even with a limited number of flowsheets. To account for imbalance among categories in the dataset, repeat factor sampling is applied. Repeat factor sampling allows training images with underrepresented categories more often to account for slower learning effects. Repeat factor sampling is especially important for our dataset as some unit operations are seldom found in literature, while others, such as heat exchangers or pumps, are naturally often present. Hence, without repeat factor sampling, an imbalance in performance can occur. Furthermore, to increase generalization, several augmentation techniques are applied during training. Thus, a set of applicable augmentation methods was identified, and the effect of data augmentation on the object detection model performance was investigated. Specifically, the techniques of flipping, adding noise, blurring, and repetition of rare objects were applied and studied.
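Repeat factor sampling can be sketched as follows, using the commonly cited formulation r(c) = max(1, sqrt(t / f(c))) with category frequency f(c) and rarity threshold t; this formula and the threshold value are assumptions for illustration, as the text does not specify them.

```python
import math

def repeat_factors(image_categories, t=0.01):
    """Per-image repeat factors: images containing rare categories get a
    factor > 1 and are therefore sampled more often during training.

    image_categories: list of category-name lists, one list per image.
    """
    n = len(image_categories)
    freq = {}
    for cats in image_categories:
        for c in set(cats):
            freq[c] = freq.get(c, 0) + 1
    # category-level factor: 1 for common categories, > 1 for rare ones
    cat_rf = {c: max(1.0, math.sqrt(t / (f / n))) for c, f in freq.items()}
    # image-level factor: the rarest category in the image dominates
    return [max(cat_rf[c] for c in set(cats)) for cats in image_categories]
```

In this dataset a frequent symbol such as a heat exchanger would receive a factor of 1, while an image containing a seldom-seen unit operation would be repeated proportionally more often.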
The detection of unit operations is the first step in the digitization scheme. After unit operations have been successfully detected, their bounding boxes are processed.
Bounding boxes with significant overlap, measured in intersection over union, are compared, and the one with the lower confidence score is removed. This is necessary as, in rare cases, the object detection algorithm detects objects twice with different categorization. Afterwards, detected unit operations with a confidence score lower than a threshold are converted to a category X, indicating a low confidence of the model. The flowsheet image is binarized and then reduced to one-pixel-thin layers of objects, allowing stream recognition. Once the PFD has gone through the first stage, the skeletonized flowsheet is prepared for the graph search algorithm. First, the skeletonized image is represented as a graph in which each pixel is a node. In this graph, each node has a maximum of 8 edges corresponding to the 8 neighboring pixels. Additionally, each node in the graph contains information on its color and whether it is inside an object bounding box or not. Starting from a unit operation, the program checks for white pixel neighbors along the bounding box border, identifying possible paths. For each path, the algorithm traverses the graph along neighboring white pixels and continues the search. A graphical representation of this procedure is shown in Figure 10. A connection between two objects is established when the algorithm reaches a pixel belonging to a new unit operation. If the exploration reaches a dead end, it creates an "In/Out" stream object, indicating an incoming or outgoing stream of the process. Once all the outgoing paths from a unit operation are explored, the algorithm moves to the next unit and repeats the search, storing information about all detected connections. After the graph search, information is saved on the connections between unit operations. Finally, the graph representation of the flowsheet is constructed using the NetworkX open-source Python package. A graph is created with each unit operation as a node and the streams between them as directed edges. Each edge and node in the graph allows for adding attributes, such as associated text and operating conditions, and can be handled for further processing.
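The pixel-level graph search described above can be sketched as a breadth-first traversal over an 8-connected grid; the grid layout, box format (y1, x1, y2, x2), and function name below are illustrative assumptions, not the inventors' implementation.

```python
from collections import deque

def trace_stream(grid, start, boxes):
    """Follow line pixels (value 1) with 8-connectivity from `start`
    until a pixel inside one of the unit-operation `boxes` is reached.
    Returns the index of the box reached, or None for a dead end
    (which would become an "In/Out" stream object in the flowsheet)."""
    def inside(y, x, b):
        y1, x1, y2, x2 = b
        return y1 <= y <= y2 and x1 <= x <= x2

    h, w = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        y, x = queue.popleft()
        for i, b in enumerate(boxes):
            if inside(y, x, b):
                return i  # connection to unit operation i established
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 1
                        and (ny, nx) not in seen):
                    seen.add((ny, nx))
                    queue.append((ny, nx))
    return None
```

Repeating this trace for every exit path of every unit operation yields the connection list from which the directed flowsheet graph is then built.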
For auto-completion the following example is given. It is noted that the subject matter of the present system and method and the auto-completion may overlap, and therefore that elements of these embodiments may be combined.
The present inventors make use of a transformer-model architecture and decoding strategies used for text generation in natural language processing (NLP). Furthermore, this section recaps the used flowsheet representations, namely flowsheet graphs and the SFILES 2.0 notation. The latter is used to represent the flowsheet data in a text-based manner in order to enable the use of NLP models. Transformer-based models increased the performance in several benchmark tasks and also show successful applications beyond the human language. Text may be processed as a sequence of tokens, whereby the tokens are either words or other chunks of the input sequence. Tokenization is typically the first text processing step in NLP and follows a tokenization strategy. After tokenizing the input sequence, each token is converted to a vector by using a learned numerical embedding. Putting together all inputs' vectors yields a matrix, called the input embedding in the following, which can be processed by the NLP model. In a further example, the original Transformer architecture is a neural sequence translation model consisting of an encoder stack of N = 6 identical layers and a decoder stack of N = 6 identical layers in sequence. The decoder uses the encoder's output and the previously generated outputs to compute the output probabilities for the next token. Each encoder layer contains two sub-layers with subsequent layer normalization. Each decoder layer contains three sub-layers with subsequent layer normalization. Since recurrent components are completely removed in the Transformer architecture, before input and output embeddings are passed to the encoder and decoder, respectively, positional encoding is applied. Positional encoding ensures that the information on the order of tokens in the sequence is taken into account. The core components of the Transformer architecture are the attention sub-layers. The calculation of attention takes a query vector q, key vector k, and value vector v for each input token and compares all queries against all keys, resulting in scores for query-key compatibility. The compatibility scores are then used as weights to calculate the attention output as a weighted sum of the values.
In practice, the attention is computed for all inputs of an input sequence in parallel, putting together all query, key, and value vectors in the query matrix Q, key matrix K, and value matrix V. This finally yields a matrix as attention output. In the original architecture, multi-head attention is used as self-attention layers in the encoder, as masked self-attention in the decoder, and as encoder-decoder attention to combine the vector embedding of the encoder with the previous decoder outputs. Hereby, self-attention means that query, key, and value matrices are calculated from the same input sequence. Therefore, the computed attention represents each token and its meaning in the sequence. Self-attention in the encoder considers both the left and right context of each token (bidirectional). On the contrary, in the case of masked self-attention in the decoder, only the left context is used, meaning that subsequent positions of each token are masked out (unidirectional). As decoder-only architecture for causal language modeling, a GPT-2-like model architecture only containing a decoder stack is used. Each decoder layer consists of a masked multi-head self-attention sub-layer and a feed-forward sub-layer. Since the encoder is left out, the encoder-decoder attention sub-layer is left out, too. Several decoding strategies may be used.
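The masked (causal) self-attention described above can be sketched in plain Python for a single head; the weight matrices and inputs here are toy values, and a real implementation would use batched tensor operations rather than nested lists.

```python
import math

def causal_self_attention(X, Wq, Wk, Wv):
    """Single-head masked self-attention (decoder-style): each position
    attends only to itself and earlier positions (its left context)."""
    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    d = len(Q[0])
    out = []
    for i, q in enumerate(Q):
        # compatibility scores against keys at positions <= i only;
        # later positions are masked out entirely
        scores = [sum(a * b for a, b in zip(q, K[j])) / math.sqrt(d)
                  for j in range(i + 1)]
        m = max(scores)
        exp = [math.exp(s - m) for s in scores]
        z = sum(exp)
        weights = [e / z for e in exp]  # softmax over the left context
        out.append([sum(w * V[j][k] for j, w in enumerate(weights))
                    for k in range(len(V[0]))])
    return out
```

Restricting the scores to positions up to i is exactly the unidirectional masking that makes the decoder-only model usable for causal language modeling, and hence for auto-completing a partial SFILES 2.0 sequence.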
For auto-completion the following example is given in Figure 11, relating to a simple chemical process flowsheet with branchings, a recycle stream, and different mass trains. With the above method figure 12 is obtained, being a graph representation of the flowsheet in Figure 11. Two consecutive unit operations in the string imply a normal stream connection. In the case of a branching, such as after a distillation column, all but the last branch are noted in brackets. Recycles are noted by using a number # to reference the recycle start node and <# to reference the recycle end node. Furthermore, tags in braces are used to indicate whether the branch is a top or bottom product. In the case of converging branches, the second branch is inserted in the string, surrounded by <&| and &|. Multi-stream heat exchangers are separated into one node per stream compartment and marked with a number in braces, capturing which streams are heat integrated. In an example inventors subdivided flowsheets into the following sub-process categories: Initialization: feed(s); Reaction; Thermal separation (distillation, rectification); Countercurrent separation (absorption, extraction); Filtration (gas, liquid); Centrifugation; and End: Purification.
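A minimal tokenizer for SFILES-2.0-like strings can be sketched from the notation just described (unit names in parentheses, branch brackets, brace tags, recycle markers # and <#, and converging-branch markers <&| and &|); the exact token inventory and regular expression are illustrative assumptions, not the published grammar.

```python
import re

# Alternation order matters: multi-character markers such as <&| and <1
# must be tried before the bare-digit recycle-start token.
TOKEN = re.compile(r"\(.*?\)|\{.*?\}|<&\||&\||<\d+|\d+|\[|\]")

def tokenize(sfiles):
    """Split an SFILES-2.0-style string into notation tokens."""
    return TOKEN.findall(sfiles)
```

Such a token stream is also the natural input granularity for the Transformer model, which operates on sequences of tokens rather than raw characters.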
As illustrated in Figure 13, the last three blocks relate to a procedure for multiple branches. The blocks represent, from left to right: initialize graph with feed(s); first subprocess category + pattern in category; next subprocess category for each stream + pattern in category; and purification of stream, optionally with random heat integration or recycle. After initializing the flowsheet graph with raw materials, including feed pre-processing, the selection of the first sub-process, excluding purification, is a Markov transition with fixed probabilities (transition probabilities do not depend on previous unit operations). Within each sub-process, we further sample from a set of patterns (not shown here) specifying how the inlet and outlet stream(s) are processed, e.g., with additional temperature or pressure change unit operations. Also, we include design heuristics such as adding recycles, performing heat integration in the reaction sub-process, or adding reactants. In general, the sub-processes lead to several outlet streams, in the following referred to as branches. For each branch, we transition to the "Next sub-process" state followed by a Markov transition to the next sub-process. This selection differs from the first sub-process selection by the additional purification sub-process.
Note that once a branch reaches the purification step, it is determined to end as a product. After each branch has ended in the purification step, the flowsheet graph generation is complete.
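The Markov-transition generation described above can be sketched as follows for a single branch; the sub-process categories echo the ones listed earlier, but the transition probabilities are made-up placeholders, not the ones actually used to build the synthetic data set.

```python
import random

# Fixed transition probabilities (a Markov chain: the next sub-process
# depends only on the current one). Values are illustrative placeholders.
TRANSITIONS = {
    "feed":       [("reaction", 1.0)],
    "reaction":   [("thermal_sep", 0.5), ("countercurrent_sep", 0.3),
                   ("filtration", 0.2)],
    "thermal_sep":        [("purification", 0.6), ("thermal_sep", 0.4)],
    "countercurrent_sep": [("purification", 0.7), ("filtration", 0.3)],
    "filtration":         [("purification", 1.0)],
}

def sample_flowsheet(rng):
    """Sample one branch of a synthetic flowsheet; every branch is
    guaranteed to terminate in the purification sub-process."""
    state, path = "feed", ["feed"]
    while state != "purification":
        states, probs = zip(*TRANSITIONS[state])
        state = rng.choices(states, weights=probs)[0]
        path.append(state)
    return path
```

Because purification is absorbing, each sampled branch ends as a product, mirroring the termination rule stated above.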
Figures 14-15 show a completed flowsheet using beam search. Figure 16 schematically illustrates the auto-completion of flowsheets using the Generative Flowsheet Transformer. Inventors achieve this by specifying an input sequence in SFILES 2.0 that represents the incomplete flowsheet and passing it to the Generative Flowsheet Transformer, which auto-completes the sequence in SFILES 2.0 language. The completed flowsheets correspond to the completed SFILES 2.0 sequences with the Generative Flowsheet Transformer. Figures 17-21 show completed flowsheets using top-p sampling.
Table 1/Fig. 22 shows exemplary Unit operations and abbreviations in SFILES 2.0.
It should be appreciated that for commercial application it may be preferable to use one or more variations of the present system, which would be similar to the ones disclosed in the present application and are within the spirit of the invention.
For the purpose of searching the next section is added, of which the subsequent section represents a translation into Dutch.
1. A system for analyzing a chemical process, comprising: a computer memory provided with a digital representation of a directed graph representation of the chemical process, the graph representation comprising elements selected from apparatuses, flow modifiers, devices, process steps, flows, pipelines, signal lines, pressure regulators, temperature regulators, concentration regulators, chemical species regulators, controllers, and elements thereof, and combinations thereof, and interactions between these elements, and a data processor provided with a computer program which, when running on the data processor,
- provides trained machine learning, which is trained using a selection of a training dataset comprising directed graph representations of chemical processes and/or string representations of the directed graphs and resulting directed graphs and nodes representing elements, and annotated versions;
- provides the digital representation of the directed graph representation of the chemical process in the computer memory as input to the trained machine learning, and
- the trained machine learning providing in the computer memory the chemical process as a directed graph with nodes and edges defining the elements.
2. A method of providing a digitized process set-up, the digitized process set-up with a sequence of at least two process steps, wherein the at least two process steps are selected from a chemical process step, a physical process step, a biological process step, and a micro-biological process step, in particular wherein process steps are selected from heating, cooling, flowing, reacting, mixing, contacting, depositing, annealing, separating, adding, removing, filtering, crystallizing, phase-separating, distilling, oxidizing, reducing, hydrogenating, de-hydrogenating, polymerizing, poly-condensing, esterifying, alkylating, de-alkylating, aminating, halogenating, sulfonating, nitrifying, de-hydrating, hydrolysing, and melting, comprising
optically reading an image of a process set-up,
digitizing said optically read process set-up forming a digitized image,
using artificial intelligence, making a directed graph of the digitized image of the process set-up, the directed graph comprising a plurality of unique nodes and at least one [biological-]physical-chemical interaction between each first node and each second node of the plurality of nodes, and optionally at least one direction of said interaction, wherein each node individually is selected from an end node, an intermediate node, and an intersection node,
using artificial intelligence, identifying at least one physical object to each node in the directed graph,
using artificial intelligence, identifying at least one process path between each first node and each second node of the plurality of nodes, and
using rule-based ontology, in particular rule-based ontology obtained from a data model, supplementing the directed graph of the digitized process set-up with the at least one process path and identified objects, or vice versa,
in particular wherein the process is a chemical process.
3. The method of providing a digitized process set-up according to embodiment 2, wherein in the process set-up objects are localized and classified, such as a unit operation, an arrow, an intersection, a control unit, and text.
4. The method of providing a digitized process set-up according to any of embodiments 2-3, wherein artificial intelligence is based on a convolutional neural network or a neural network with a transformer architecture.
5. The method of providing a digitized process set-up according to any of embodiments 2-4, wherein artificial intelligence is trained on labelled data.
6. The method of providing a digitized process set-up according to any of embodiments 2-5, wherein each node is provided with supplementary actors, wherein the supplementary actors are selected from chemical species, pressure, temperature, flow, concentration, controls, reactant, catalyst, product, pH values, composition, physical or chemical states, enzyme, biological species, and nucleic acid sequence or part thereof.
7. The method of providing a digitized process set-up according to any of embodiments 2-6, wherein based on the obtained directed graph or supplemented directed graph a layout of the process set-up is made comprising physical objects, the physical objects selected from apparatuses, in particular wherein apparatuses are selected from a tank, a column, a reflux, a reboiler, a boiler, a controller, a valve, a cooler, a mixer, a heater, a heat exchanger, a furnace, a filter, a splitter, a phase separator, an absorber, a flash unit, a reactor, a pump, a flow controller, a compressor, and a vessel.
8. The method of providing a digitized process set-up according to any of embodiments 2-7, wherein based on the obtained directed graph or supplemented directed graph a layout of the process set-up is made comprising chemical objects, the chemical objects selected from chemical species, catalysts, solvents, inert species, reactants, carriers, stabilizers, buffers, intermediate products, non-reactants, oxidants, and reductants.
9. The method of providing a digitized process set-up according to any of embodiments 2-8, wherein the directed graph is supplemented with a standard process model.
10. The method of providing a digitized process set-up according to any of embodiments 2-9, wherein nodes or actors are auto-completed.
11. Use of the digitized process set-up for optimizing the process set-up, for forming a digital twin of the process set-up, for linking the process set-up to operational data, or for building a model of the process set-ups.
12. The system according to embodiment 1, comprising instructions for carrying out the method of any of embodiments 2-10.
13. The system according to embodiment 1 and/or the method according to any of embodiments 2-10, further comprising one or more elements according to the description, in particular according to the examples, more in particular using one or more of object detection architecture, object detection performance metrics, skeletonization, processing a bounding box, processing a mask, using a diverse variety of data sources, using data categorization, using data annotation, using labeling of objects, using labeling of actors, repeating one or more steps, unit operation detection, stream recognition, factor sampling, augmentation of objects and/or actors, using pixels, using artificial intelligence-assisted process synthesis, using a transformer-model architecture, using natural language processing, using decoding, tokenization, and numerical embedding.
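To illustrate the directed-graph representation recited in the embodiments above, a minimal attributed digraph can be sketched as follows; the class and the two-node example (a mixer feeding a reactor) are hypothetical, and in practice a graph library such as NetworkX would be used, as mentioned in the experiment.

```python
class FlowsheetGraph:
    """Minimal directed graph for a digitized process set-up: unique
    nodes with attribute dicts, and directed edges carrying attributes
    such as associated text or operating conditions."""

    def __init__(self):
        self.nodes = {}   # node name -> attribute dict
        self.edges = {}   # (src, dst) -> attribute dict

    def add_node(self, name, **attrs):
        self.nodes[name] = attrs

    def add_edge(self, src, dst, **attrs):
        self.edges[(src, dst)] = attrs

    def successors(self, name):
        """Nodes reachable from `name` along one directed edge."""
        return [d for (s, d) in self.edges if s == name]
```

The attribute dicts correspond to the supplementary actors of embodiment 6 (chemical species, pressure, temperature, and so on) being attached to nodes and edges.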
Claims (13)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL2032523A NL2032523B1 (en) | 2022-07-18 | 2022-07-18 | Flowsheet digitization with computer vision, automatic simulation, and flowsheet (auto)completion with machine learning |
PCT/NL2023/050385 WO2024019617A2 (en) | 2022-07-18 | 2023-07-17 | Flowsheet digitization with computer vision, automatic simulation, and flowsheet (auto)completion with machine learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL2032523A NL2032523B1 (en) | 2022-07-18 | 2022-07-18 | Flowsheet digitization with computer vision, automatic simulation, and flowsheet (auto)completion with machine learning |
Publications (1)
Publication Number | Publication Date |
---|---|
NL2032523B1 true NL2032523B1 (en) | 2024-01-26 |
Family
ID=84330923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
NL2032523A NL2032523B1 (en) | 2022-07-18 | 2022-07-18 | Flowsheet digitization with computer vision, automatic simulation, and flowsheet (auto)completion with machine learning |
Country Status (2)
Country | Link |
---|---|
NL (1) | NL2032523B1 (en) |
WO (1) | WO2024019617A2 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170153633A1 (en) * | 2015-01-26 | 2017-06-01 | Fisher-Rosemount Systems, Inc. | Commissioning Field Devices in a Process Control System Supported by Big Data |
WO2020198249A1 (en) * | 2019-03-25 | 2020-10-01 | Schneider Electric Systems Usa, Inc. | Automatic extraction of assets data from engineering data sources |
EP3726442A1 (en) * | 2019-04-18 | 2020-10-21 | Siemens Industry Software Ltd. | Semantic modeling and machine learning-based generation of conceptual plans for manufacturing assemblies |
WO2021145138A1 (en) * | 2020-01-14 | 2021-07-22 | NTT Communications Corporation | Display device, display method, and display program |
- 2022-07-18: NL application NL2032523A, patent NL2032523B1 (en), status active
- 2023-07-17: WO application PCT/NL2023/050385, publication WO2024019617A2 (en), status unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170153633A1 (en) * | 2015-01-26 | 2017-06-01 | Fisher-Rosemount Systems, Inc. | Commissioning Field Devices in a Process Control System Supported by Big Data |
WO2020198249A1 (en) * | 2019-03-25 | 2020-10-01 | Schneider Electric Systems Usa, Inc. | Automatic extraction of assets data from engineering data sources |
EP3726442A1 (en) * | 2019-04-18 | 2020-10-21 | Siemens Industry Software Ltd. | Semantic modeling and machine learning-based generation of conceptual plans for manufacturing assemblies |
WO2021145138A1 (en) * | 2020-01-14 | 2021-07-22 | NTT Communications Corporation | Display device, display method, and display program |
US20220350942A1 (en) * | 2020-01-14 | 2022-11-03 | Ntt Communications Corporation | Display device, display method, and display program |
Also Published As
Publication number | Publication date |
---|---|
WO2024019617A3 (en) | 2024-02-29 |
WO2024019617A2 (en) | 2024-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Baek et al. | LncRNAnet: long non-coding RNA identification using deep learning | |
Liu et al. | Aircraft engine remaining useful life estimation via a double attention-based data-driven architecture | |
CN110909125B (en) | Detection method of media rumor of news-level society | |
CN113838536B (en) | Translation model construction method, product prediction model construction method and prediction method | |
Ma et al. | Consistency regularization auto-encoder network for semi-supervised process fault diagnosis | |
CN112561383A (en) | Real-time anomaly detection method based on generation countermeasure network | |
CN113393370A (en) | Method, system and intelligent terminal for migrating Chinese calligraphy character and image styles | |
JP2023106037A (en) | Driving assist system, driving assist method, and program | |
CN116340726A (en) | Energy economy big data cleaning method, system, equipment and storage medium | |
CN112163429A (en) | Sentence relevancy obtaining method, system and medium combining cycle network and BERT | |
CN113591093A (en) | Industrial software vulnerability detection method based on self-attention mechanism | |
Yang et al. | Foundation models meet visualizations: Challenges and opportunities | |
Osman et al. | Soft sensor modeling of key effluent parameters in wastewater treatment process based on SAE-NN | |
NL2032523B1 (en) | Flowsheet digitization with computer vision, automatic simulation, and flowsheet (auto)completion with machine learning | |
Duan et al. | Learning numeracy: a simple yet effective number embedding approach using knowledge graph | |
CA3189344A1 (en) | Explaining machine learning output in industrial applications | |
Askarian et al. | Data-based fault detection in chemical processes: Managing records with operator intervention and uncertain labels | |
CN115713970A (en) | Transcription factor identification method based on Transformer-Encoder and multi-scale convolutional neural network | |
Du et al. | Unsupervised domain adaptation with unified joint distribution alignment | |
CN113297385A (en) | Multi-label text classification model and classification method based on improved GraphRNN | |
Raisi et al. | Investigation of Deep Learning Optimization Algorithms in Scene Text Detection | |
US20220091594A1 (en) | Method and system to generate control logic for performing industrial processes | |
Xie et al. | Goal-Driven Context-Aware Next Service Recommendation for Mashup Composition | |
EP2565799A1 (en) | Method and device for generating a fuzzy rule base for classifying logical structure features of printed documents | |
CN116612482A (en) | Handwriting formula recognition system and method |