WO2021247831A1 - Automated knowledge extraction and representation for complex engineering systems - Google Patents

Automated knowledge extraction and representation for complex engineering systems

Info

Publication number
WO2021247831A1
Authority
WO
WIPO (PCT)
Prior art keywords
design
engineering
recited
user
representation
Prior art date
Application number
PCT/US2021/035655
Other languages
English (en)
Inventor
Arun Ramamurthy
Sanjeev SRIVASTAVA
Lucia MIRABELLA
Thomas Gruenewald
Hyunjee JIN
Woongje Sung
Olivia PINON-FISCHER
Original Assignee
Siemens Corporation
Georgia Tech Research Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Corporation, Georgia Tech Research Corporation filed Critical Siemens Corporation
Publication of WO2021247831A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • G06N5/025Extracting rules from data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • Engineering design can be generally characterized as a series of decisions that lead to a final prototype. In some cases, throughout the design process, engineers make decisions concerning the type of model(s) to be used, the appropriate parameter settings, the system architecture, etc. These decisions, which are made at different levels of abstraction, are often undertaken with a desired goal. For example, the decisions can define an exploratory search to identify a feasible and viable concept architecture, or a detailed analysis for the purpose of estimating performance metrics.
  • an engineering computing system that includes a design application can train or otherwise help engineers in place of an expert.
  • the engineering systems described herein can improve design cycle times, among other technical improvements.
  • an engineering computing system includes one or more processors and a memory having a plurality of application modules stored thereon.
  • the modules can include a knowledge refiner configured to monitor an engineering design application.
  • the knowledge refiner can be further configured to extract data from the engineering design application.
  • the data can indicate a plurality of states of the engineering design application and a plurality of actions associated with the plurality of states.
  • the modules can further include a representation learner configured to, based on the data extracted from the engineering design application, generate a vectorized representation of the plurality of states and actions of the design application.
  • the engineering computing system can also define a knowledge utilization module configured to, based on the vectorized representation, predict an action for a user of the design application to take so as to define a recommended action associated with a design.
  • the recommended action can be provided to the engineering design application, such that the engineering design application can display the recommended action to the user.
  • the engineering computing system further includes an adaptive schema learning module configured to generate a schema for each state of the plurality of states that is unique, so as to generate a plurality of schemas.
  • the representation learner can be further configured to generate the vectorized representation from the plurality of schemas.
  • the knowledge utilization module can be further configured to train a decision behavior learning model based on the vectorized representation, such that the decision behavior learning model learns mappings between the plurality of states and the plurality of actions.
  • FIG. 1 is a block diagram of an example engineering computing system according to an example embodiment.
  • FIG. 2 depicts a requirements-based contextualization of an example design, in accordance with an example embodiment.
  • FIG. 3 illustrates an example hierarchical decision behavior learning model that is included as part of the engineering computing system depicted in FIG. 1, in accordance with an example embodiment.
  • FIG. 4 is a call flow that depicts example operations that can be performed by a knowledge extraction and representation module of the engineering computing system depicted in FIG. 1, in accordance with an example embodiment.
  • FIG. 5 is a call flow that depicts example operations that can be performed by a knowledge utilization module of the engineering computing system depicted in FIG. 1, in accordance with an example embodiment.
  • FIG. 6 shows an example of a computing environment within which embodiments of this disclosure may be implemented.
  • the task of decision making can be mathematically represented by a modification to the Markov Decision Process (MDP) as (S, A, Pa, Ra, c).
  • S corresponds to the set of states that a design system can take at any instant of time
  • A corresponds to the set of actions that can be performed
  • Pa represents the state transition probability matrix
  • Ra is the reward that is perceived by the engineer resulting from the transition
  • the new term c represents the context (e.g., design requirements) associated with the design process.
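  • As a minimal, illustrative sketch (field types and names are assumptions, not the patent's implementation), the modified MDP tuple described above could be held in a simple data structure:

```python
# Hypothetical container for the contextual MDP (S, A, Pa, Ra, c) described above.
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple

State = Any    # e.g., a vectorized design-application state
Action = Any   # e.g., an action type together with its parameters

@dataclass
class ContextualMDP:
    states: List[State]                                  # S: states the design system can take
    actions: List[Action]                                # A: actions that can be performed
    transition_probs: Dict[Tuple[int, int, int], float]  # Pa: P(s' | s, a), indexed by state/action ids
    rewards: Dict[Tuple[int, int, int], float]           # Ra: reward perceived for a transition
    context: Any                                         # c: design requirements / design context
```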
  • the states of the MDP and the actions can be extracted through intelligent automation. It is recognized herein that reward can be a measure of the conformance of a designed part (or portion of a design system) to the specified requirements, but it can be hard to measure as it is often implicitly assessed by the engineer.
  • a technical challenge in predicting design actions and states is determining the set of possible states a priori, as states can be perceived during run time from a set of past designs.
  • the set of states perceived from a set of past designs might not capture all possible variations of the design.
  • the set of possible actions associated with a design tool or application may be enumerable during the design of the system, although changes to the system (e.g., version updates) can alter that set.
  • Example technical problems include, among others: extracting or perceiving the state of a design system in such a manner that it can be consumed by machine learning algorithms; evaluating context associated with a design; identifying actions carried out by a user; and generating state-to-state transition modules that can generate appropriate recommendations.
  • an engineering computing system 100 can be configured to extract and represent knowledge from an engineering design system, such that an autonomous agent can learn decision behavior of one or more designers and, based on its learning, generate contextual recommendations that are integrated within the engineering design system.
  • the engineering computing system 100 can include one or more processors and memory having stored thereon applications, agents, and computer program modules including, for example, an engineering design system or application 102 and an autonomous agent module 104.
  • the autonomous agent module 104 can include one or more processors and memory having stored thereon applications, agents, and computer program modules including, for example, a knowledge extraction and representation module 106 and a knowledge utilization module 108.
  • the program modules, applications, computer-executable instructions, code, or the like depicted in FIG. 1 are merely illustrative and not exhaustive, and processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module.
  • various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 1 and/or additional or alternate functionality.
  • functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 1 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module.
  • program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth.
  • any of the functionality described as being supported by any of the program modules depicted in FIG. 1 may be implemented, at least partially, in hardware and/or firmware across any number of devices.
  • the autonomous agent module 104 can be adapted to any black-box or open-source design system or application 102, which can be extended with custom knowledge refinement plugins.
  • the design system 102 can define various design applications such as Simcenter 3D, HEEDS, NX CAM, Amesim, Solidworks, CATIA, ModelCenter, iSight, ANSYS Mechanical, ANSYS FLUENT, FreeCAD, OpenSCAD, MeshLAB, Slic3r, or the like.
  • the autonomous agent module 104 can use reinforcement learning to learn decision behavior of a human designer. Further, in an example, the autonomous agent module 104 can interface with external databases, such as a knowledge database 110, to abstract learned decision policies across multiple designers.
  • the knowledge extraction and representation module 106 can be configured to extract representations of the engineering design system 102.
  • the knowledge extraction and representation module 106 can extract representations of application states and user actions associated with the design system 102.
  • the knowledge extraction and representation module 106 can also convert the representations into a form suitable for machine learning.
  • the knowledge extraction and representation module 106 can monitor the design application 102 for user actions and associated changes to the state of the design application 102, if any.
  • design system 102 and design application 102 can be used interchangeably, without limitation.
  • the knowledge extraction and representation module 106 can compile the state of the system 102 by inspecting the instantaneous state of the system 100 in the form specific to the application under consideration. For example, if the design system 102 defines a computer-aided design (CAD) application, the knowledge extraction and representation module 106 can inspect a design representation tree of the CAD application. Furthermore, during the extraction, the knowledge extraction and representation module 106 can retrieve information associated with the action performed by a user (or an automated agent) that results in the system state. The knowledge extraction and representation module 106 can retrieve such information by inspecting the design application 102, for instance using a signal event from the design application 102, or by inspecting an application log file 101 from the design application 102.
  • autonomous agent 104 can define a plugin of the design application 102, and the signal event can be triggered by the design application 102.
  • the design application 102 can inform the autonomous agent 104 when there is any change to the state of the system.
  • a change to the state occurs when the user performs an operation that changes any parameter within the design application 102.
  • the application log file 101 can define an application-specific representation of the associated action.
  • the application log file 101 can define a hash-table representation of the associated action.
  • the knowledge extraction and representation module 106 can extract a state-action pair that can be inserted into a process graph that represents transitions that have been made by a user in reaching the current state of the system 102.
  • the knowledge extraction and representation module 106 can include a knowledge refiner 112 configured to generate the process graphs. The knowledge refiner 112 can insert state-action pairs into a process graph so as to represent the transitions made by a user in reaching the current state of the system 102.
  • the knowledge refiner 112 is specific to the design application (e.g., design application 102) under consideration, while also being configured to be applicable to any design process within the application.
  • the knowledge refiner 112 can generate the design representation tree that represents the feature tree of the CAD model, the hash-table representation, and the process graph.
  • the process graph that is generated can be modeled as a Markov Decision process in which the nodes represent the states encountered by the design application 102 and the edges capture the associated actions that result in the given system state.
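  • For illustration only, the process graph described above can be sketched with a general-purpose graph library; the node and edge attribute names below are assumptions rather than the patent's schema:

```python
# Sketch: insert a state-action pair into a directed process graph.
import networkx as nx

def record_transition(graph: nx.DiGraph, prev_state_id, new_state_id,
                      state_vector, context_vector, action):
    """Add the new state as a node; the action that produced it becomes the edge."""
    graph.add_node(new_state_id, vector=state_vector, context=context_vector)
    if prev_state_id is not None:
        graph.add_edge(prev_state_id, new_state_id, action=action)
    return graph

process_graph = nx.DiGraph()  # nodes: encountered states; edges: actions taken
```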
  • extracting data from the design system 102 begins when the knowledge refiner 112 parses the state of the system in any supported format, for example, a state tree.
  • the state extracted by the knowledge refiner 112 can be handled by a data management service on the agent module 104 to create an entry in long-term storage, for instance the knowledge database 110, which can store the raw, large-file representation of the state.
  • Example context information can define design requirements in natural language.
  • context information can indicate that a UAV has to cover a distance of 100 meters in two minutes, or that an impeller must have 10 blades and an outer diameter of 120 mm.
  • design requirements can be pre-specified and fixed for a given design process.
  • the knowledge from a design process can be represented in various state, action, and process representations, as described above.
  • the knowledge extraction and representation module 106 can also generate representations that can be used by machine learning algorithms, for instance machine learning performed by the knowledge utilization module 108.
  • the knowledge extraction and representation module 106 can further include an adaptive schema learning module 114 and a manifold or representation learner 116 coupled with the adaptive schema learning module 114.
  • the adaptive schema learning module 114 and the representation learner 116 can be configured to learn and generate vectorized representations of the knowledge extracted from the design application 102.
  • differences between the various states of the design system 102, coupled with the design context, drive the accuracy of the relationships that are learned.
  • learning of the relationships between the state of the design application 102 and the actions performed by an engineer can be affected by the various states of the design application 102 and the design requirements.
  • an encoding model can converge to a static manifold that is unique to each design application, for instance the design application 102.
  • while the final state of the system (for instance, the output of a CAD model) might not reveal why particular parameter values were chosen, the requirements may indicate why certain parameter values were set to 30 (as an example) instead of 50.
  • the adaptive schema learning module 114 can learn an adaptive schema such that the minimum amount of information required to uniquely describe the design application state is retained for each unique state of the system 102. Because the states of the system 102 can be parametrized by a varying number of parameters, the schema learning module 114 can be adapted to learn different representations. In some examples, even when the same action set is utilized, the resultant states can be different, such that an extracted state’s schema representation may need to be updated when new information about the design process is gathered. In an example, the result of the schema learning performed by the adaptive schema learning module 114 is that the agent 104 learns a representation for the design application 102, such that each state of the design application 102 can be uniquely described with the minimum amount of information.
  • the adaptive schema learning module 114 can implement a real-time tree differencing algorithm to identify the schema coupled with an automated abstraction to generate the minimum schema for the states.
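  • A rough sketch of such a tree-differencing step, assuming states are represented as nested dictionaries (an assumption made for illustration), is shown below; only fields that differ from a baseline state are retained for the schema:

```python
# Sketch: recursive tree difference that keeps the minimum information
# needed to describe a state relative to a baseline state.
def tree_diff(baseline: dict, state: dict) -> dict:
    diff = {}
    for key, value in state.items():
        base_value = baseline.get(key)
        if isinstance(value, dict) and isinstance(base_value, dict):
            sub = tree_diff(base_value, value)
            if sub:
                diff[key] = sub
        elif value != base_value:
            diff[key] = value
    return diff
```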
  • the representation learner 116 can perform a manifold learning that utilizes instantiations of the states stored in the process graph (generated by the knowledge refiner 112) and the corresponding schemas (generated by the adaptive schema learning module 114) such that a vectorized representation can be realized for these states. As new knowledge is gathered that updates previously learned states, the vectorization can also adapt.
  • any updates to states, and thus to the vectorized representation of the states, can be provided to the knowledge utilization module 108, as the knowledge utilization module 108 can operate in parallel with the knowledge extraction routine performed by the knowledge extraction and representation module 106.
  • the vectorization of the states can be performed using various implementations.
  • the data management service of the agent module 104 can extract a schema out of the generated state representation using the adaptive schema learning module 114.
  • the extracted schema can be stored on a schema storage, for instance within the knowledge database 110, wherein the data associated with the states can be serialized.
  • a threaded assessment of the state’s vectorized representation can be carried out based on the extracted schema for each datapoint using the representation learner 116.
  • the representation learner 116 can adapt the manifold of the state in an online fashion, such that the observed design states can be encoded in the common latent manifold.
  • the knowledge extraction and representation module 106 can identify actions carried out by an engineer, such that state and action pairs can also be added to the semantic storage, where the respective design process graph can be updated with a new node and edge.
  • the above-described process can repeat when an engineer executes an action in the design system 102 with additional data populating the appropriate storage, for instance the knowledge database 110.
  • context information associated with a given design is predetermined or prespecified in terms of a vector of requirements associated with the design. The vector can be stored with each node of a given design, as illustrated in FIG. 2.
  • requirements 202 can be inserted into a design process 200.
  • different states 204 can be reached during the design process 200, and different actions and parameters 206 can be implemented so as to reach the different states 204, and ultimately a final state 204a of the system.
  • the representation learner 116 can generate a natural language representation from the schema that is generated by the adaptive schema learning module 114.
  • the natural language representation can represent the state of the system 102 as a document.
  • the tree representation of the state can be transformed to a natural language representation for the purpose of vectorization.
  • the document can then be vectorized using natural language processing. Due to the domain specific nature of the document, in some cases, pre-trained models are fine-tuned to learn appropriate manifold embeddings for the specific design application.
  • the representation learner 116 can perform a fine-tuning process such that the manifold learning algorithm can find embeddings that are valid and relevant to the design application under consideration.
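  • For illustration, a simplified stand-in for this document vectorization is shown below using TF-IDF followed by a truncated SVD projection (rather than a fine-tuned pre-trained model); the documents and dimensions are hypothetical:

```python
# Sketch: vectorize state "documents" into a low-dimensional embedding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline

state_docs = [
    "extrude length 30 union body1",
    "extrude length 50 union body1 fillet radius 2",
]
embedder = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
state_vectors = embedder.fit_transform(state_docs)  # one dense vector per state
```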
  • the representation learner 116 can utilize information propagation using a tree-LSTM encoder. Similar to the natural language encoder, the tree-LSTM can be trained in parallel to the encoding discovery in order to account for the discovery of new states and the potential of change in the design application manifold.
  • each knowledge state that is extracted can be fed to the adaptive schema learning module 114 and the representation (manifold) learner 116 of the module 106.
  • the representation learner 116 can train a manifold learning algorithm online for a fixed number of iterations (e.g., 10).
  • the representation learner 116 can generate a vectorized representation that is associated with each state in the associated process graph, such that a machine learning system or algorithm, for instance a machine learning algorithm performed by the knowledge utilization module 108, can retrieve the information during knowledge utilization.
  • an action defines an action type and a set of action parameters.
  • the action type can be indexed using an integer representation according to the sequence of observation.
  • Each integer index can be further attributed with the set of action parameters, which can be adaptively learned based on the operations performed.
  • the design application 102 might not log any information about draft angles. Consequently, the agent 104 can be unaware of the presence of the draft angle as a possible parameter that can be specified by the designer.
  • each action type’s schema can be adapted based on real-time knowledge gathered by the knowledge refiner 112, with updates made to any previously observed instances of the action type under consideration. Default values such as, for example, NaN or infeasible values, can be utilized for such updates.
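  • A minimal sketch of this schema adaptation, assuming actions are stored as parameter dictionaries (an assumption made for illustration), could back-fill a newly discovered parameter, such as a draft angle, with NaN for previously observed instances:

```python
# Sketch: extend an action type's schema when a new parameter is first observed.
import math

def adapt_action_schema(schema: set, instances: list, new_instance: dict):
    new_params = set(new_instance) - schema
    schema |= new_params
    for inst in instances:                       # back-fill older observations with defaults
        for p in new_params:
            inst.setdefault(p, math.nan)
    instances.append({p: new_instance.get(p, math.nan) for p in schema})
    return schema, instances
```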
  • each vectorized action can also be associated with the corresponding entry in the process graph to be used later by the knowledge utilization module 108.
  • the knowledge extraction and representation module 106 can also extract and consider design context.
  • the design context can be represented by requirements that are pre-specified by the user.
  • the requirements are in natural language.
  • the knowledge extraction and representation module 106 applies natural language encoding (e.g., using models such as LSTM, BERT or a TF-IDF embedder) to generate the vectorized representation associated with the specified requirements.
  • These vectorized representations can be appended to each state of the design process in the process chain, so as to incorporate information regarding the context of the design.
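  • As an illustrative sketch (the embedder choice and vector sizes are assumptions), the requirement embedding could be appended to every state vector as follows:

```python
# Sketch: encode requirements and append the context vector to each state vector.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

requirements = ["impeller must have 10 blades and an outer diameter of 120 mm"]
context_vec = TfidfVectorizer().fit_transform(requirements).toarray()[0]

def contextualize(state_vectors: np.ndarray, context: np.ndarray) -> np.ndarray:
    tiled = np.tile(context, (state_vectors.shape[0], 1))
    return np.concatenate([state_vectors, tiled], axis=1)
```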
  • the knowledge utilization module 108 can leverage the extracted and vectorized knowledge from the knowledge extraction and representation module 106, so as to train a mapping between the states (which can be augmented with design context) of the design application 102 and the actions performed by the design engineer.
  • the knowledge utilization module 108 can provide real time contextual recommendations to the design engineer.
  • the real-time context recommendations can be provided by request, or in response to certain time or event triggers.
  • the autonomous agent 104 detecting that the designer is deviating from a traditional design process may define an event trigger.
  • the real-time context recommendations can be provided in the design application 102, so as to define in-product recommendations.
  • the knowledge utilization module 108 can perform or utilize reinforcement learning, imitation learning, and the like to learn contextual relationships between actions and states so as to provide accurate recommendations to a design engineer, thereby enabling the transfer of knowledge from one person to another in an automated and streamlined manner. In some cases, the knowledge transfer is realized through the recommendations that are provided by the autonomous agent 104.
  • hierarchical decision behavior learning models, for instance a hierarchical model 300, can be trained to learn mappings between design application states and recommended actions.
  • the knowledge utilization module 108 can include a decision behavior learner 120 configured to learn and generate recommendations, based on the vectorized representations for the states and actions provided by the knowledge extraction and representation module 106.
  • the example hierarchical model 300 is a conceptual illustration of a probabilistic model that can be used. In some cases, the hierarchical model 300 defines two levels of hierarchy, although it will be understood that models can be generated so as to define an alternative number of levels, and all such models are contemplated as being within the scope of this disclosure. In an example, the hierarchical model 300 can define a first level 302 that predicts action types performed by the user. Based on the predicted action type, the decision behavior learner 120 can use a second level 304 of the model 300 to predict one or more parameters associated with the action to be performed.
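  • A simplified sketch of such a two-level recommender is shown below (the model choices, names, and data layout are assumptions standing in for the trained hierarchical model): level one classifies the next action type from the state vector, and level two regresses the parameters for the predicted type:

```python
# Sketch: two-level prediction of action type, then action parameters.
from sklearn.linear_model import LogisticRegression, Ridge

class HierarchicalRecommender:
    def __init__(self):
        self.type_clf = LogisticRegression(max_iter=1000)  # level 1: action type
        self.param_models = {}                             # level 2: per-type parameter regressors

    def fit(self, states, action_types, params_by_type):
        self.type_clf.fit(states, action_types)
        for a_type, (X, Y) in params_by_type.items():
            self.param_models[a_type] = Ridge().fit(X, Y)

    def recommend(self, state):
        a_type = self.type_clf.predict(state.reshape(1, -1))[0]
        params = self.param_models[a_type].predict(state.reshape(1, -1))[0]
        return a_type, params
```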
  • parameter prediction models, such as those of the example model described above, can also be trained.
  • the decision behavior learner 120 can define an action type prediction model that is trained using imitation learning in which the contextual state-action pair is extracted from the process graph that is constructed via the knowledge extraction process performed by the knowledge extraction and representation module 106.
  • the models are updated in an online manner such that each time new information is gathered, an updated action policy and parameter mapping is learned.
  • the decision behavior learner 120 can update a context-sensitive model.
  • the model can be built specific to a particular type of design component.
  • a model can learn all of the bolts within a given system, so as to generate recommendations specific to the design of bolts.
  • the decision behavior learner 120 generates recommendations to users.
  • a model can learn design components across any users, for instance all users or a subset of users, that publish data to the database 110.
  • the knowledge utilization module 108 can further include a user preference learner 122 configured to learn and generate user preferences, based on the vectorized representations for the states and actions provided by the knowledge extraction and representation module 106.
  • the user preference learner 122 can be trained in parallel with the decision behavior learner 120.
  • the user preference learner 122 can update the model based on the recommendation provided to the user and the corresponding action taken by the user.
  • the user preference learner 122 defines a Bayesian user preference learning model that is trained based on the design system states to capture the unobserved and unquantifiable preferences of design engineers.
  • Example user preferences include, without limitation, colors of various objects, positioning of various elements, and the like.
  • the user preference learner automatically applies user preferences to improve the user experience in utilization of the design application 102.
  • a sequential Bayesian learning model that represents user preferences as a latent factor can be trained based on a set of non-essential parameters in the design application 102.
  • the non-essential parameters can refer to those parameters that do not affect the performance of the design.
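  • A minimal sketch of a sequential Bayesian update for one non-essential parameter is shown below; the Gaussian conjugate model is an assumption made purely for illustration and stands in for the patent's preference learning model:

```python
# Sketch: sequential Bayesian estimate of a user's preferred value for a
# non-essential parameter, updated each time the user's actual choice is observed.
class GaussianPreference:
    def __init__(self, prior_mean=0.0, prior_var=1.0, obs_var=0.25):
        self.mean, self.var, self.obs_var = prior_mean, prior_var, obs_var

    def update(self, observed_value: float) -> None:
        precision = 1.0 / self.var + 1.0 / self.obs_var
        self.mean = (self.mean / self.var + observed_value / self.obs_var) / precision
        self.var = 1.0 / precision

    def recommend(self) -> float:
        return self.mean  # current preference estimate, applied automatically
```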
  • the knowledge utilization module 108 is integrated as a recommendation application within the design application 102, such that the utilization module 108 can request the extraction of the instantaneous state of the design application 102 to generate a vectorized representation.
  • the vectorized representation can then be used by the trained machine learning models in order to generate recommendations of the best possible actions that yield the desired outcome as dictated by the specified requirements.
  • recommendations are triggered when a user requests a recommendation during a design, for instance during an interactive mode of execution. Based on a request, the agent 104 can automatically generate designs in batch or can automate executions.
  • example operations 400 that can be performed by the knowledge extraction and representation module 106 are shown.
  • a user is tailed or monitored for updates to the state of the design system, for instance the design application 102.
  • the interaction can be monitored, and information associated with the interaction can be extracted.
  • design requirements can be parsed or otherwise obtained from the file system of the design application 102, so as to identify the context associated with the particular design process.
  • an infinite tail can launch the design system 102 with configurations for the data management service that can be used to communicate with a server that defines the autonomous agent module 104.
  • the infinite tailing thread can poll the design system 102 for its state periodically, for instance every second, so that the knowledge refiner 112 can create a representation of the state of the system 102 that is processed by the knowledge extraction and representation module 106.
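  • A minimal sketch of such a polling (tailing) thread follows; poll_state and handle_state are hypothetical callbacks standing in for the design-system query and the knowledge refiner, respectively:

```python
# Sketch: poll the design application periodically and forward state changes.
import threading
import time

def tail_design_system(poll_state, handle_state, interval: float = 1.0):
    def loop():
        last = None
        while True:
            state = poll_state()       # e.g., read the design tree or the log file
            if state != last:
                handle_state(state)    # hand off to the knowledge refiner
                last = state
            time.sleep(interval)
    threading.Thread(target=loop, daemon=True).start()
```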
  • when the design system 102 defines product lifecycle management (PLM) software (e.g., Siemens NX), the design system 102 can update its log file each time the engineer performs an action.
  • an interaction can occur on a client machine that includes the design application 102, and the associated output can include a serialized state of the system with related design context.
  • the serialized state can be stored in the long-term storage, for instance the knowledge database 110, for retrieval of state and object information.
  • the serialized state can be passed to the adaptive schema learning module 114 and representation learner 116.
  • the adaptive schema learning module 114 can perform a schema extraction routine that leverages a threaded implementation of a tree-isomorphic state difference calculation, so as to compute incremental differences between each object type within the new state and existing instances in the database 130.
  • the schema can be updated with these differences so as to define an updated schema of the constituent objects that can be stored in the schema (meta-data) storage database, for instance the knowledge database 110.
  • these object schemas can be assembled along the tree to generate the schema for the entire state, which can also be stored in the same database (e.g., database 110) under a different collection.
  • the representation learner 116 can initiate the update to the vectorization of the state, in a separate thread, by loading pre-saved models or by creating new encoding models for each object type (e.g., origin, parameters) associated with a state of the system 102. In some cases, the representation learner 116 can execute the update to the vectorized representation using a single batch and a single epoch over a prioritized set of samples.
  • the representation learner 116 can update the design process graph with a new node and edge, at 416, so as to complete the vectorization of a state.
  • the node can be populated with the vector representation of the state along with the related contextual information.
  • the representation learner 116 can generate an embedded representation of the action by computing the difference between two adjacent states. In some cases, this is carried out for each triple and a clustering algorithm is initiated. A sequential clustering algorithm, for example, can be executed so as to identify a number of clusters.
  • the edges of the graph can be updated with the action indices.
  • the knowledge extraction and representation module 106 can execute an action schema extraction in which the parameters of the actions are identified by determining a list of parameters that vary for the cluster of actions, which can be stored in the schema storage database (e.g., knowledge database 110).
  • example operations 500 that can be performed by the knowledge utilization module 108 are shown.
  • the user learner 122 can generate reward metrics for each edge in the design process graph so as to construct a dataset for the decision learner 120.
  • the action parameters can be obtained from the knowledge extraction and representation module 106, for instance via a context memory 124 of the agent module 104, to train individual action models.
  • the decision learner 120 can perform a behavior learning algorithm that relies on prioritized sampling of the stored data to train an imitation learning model in an incremental and continuous manner. For each action type, the decision learner 120 can also train a regression model in parallel.
  • the knowledge utilization module 108 can receive a recommendation request from a user, via the design application 102. Based on the recommendation request, at 508, the decision learner 120 can retrieve one or more models from the knowledge database 110. At 510, the decision learner 120 can compute or generate a recommendation, for instance based on one or more action parameter models. In particular, based on the predicted action type, the associated parameters can be predicted and returned to the design system 102 as a recommendation. At 512, the knowledge utilization module 108 can provide the recommendation to the design application 102. At 514, the design application 102 can display the recommendation to a user, for instance a design engineer.
  • the user learner 122 can receive the user preference, compute sample preferences, and update a behavior model.
  • user learner 122 can take as input the recommendations generated and the action taken by the user. Based on those two parameters, for example, an active update to the models is applied.
  • the updated model can be stored in the knowledge database 110 for future use.
  • the knowledge extraction and representation module 106 can identify a schema for the state of the system such that a vectorized representation can be generated for each state.
  • a tree-based representation of the design state can be generated, but alternate state representations are possible due to the extensible nature of the framework, and all such representations are contemplated as being within the scope of this disclosure.
  • a schema can be automatically composed by extracting schemas for the branches of the state tree.
  • a recursive branch processing algorithm can be employed until all the branches of the state tree are processed, employing a minimum edit formulation at each level.
  • the use of the minimum edit identifies the bare minimum information that has to be stored as part of the schema to reconstruct the state uniquely over any given baseline state. Thus, it can minimize the amount of information that has to be stored, enabling the development of a scalable solution.
  • the schema for each branch can be constructed by computing a tree difference between branches of a similar type in another state.
  • a branch hashing algorithm can be performed in order to determine the isomorphic equality of the branch, thereby enabling rapid computation of the differences.
  • an encoded representation of the state can be generated by composing the encoded representation of the branches.
  • variational autoencoders are trained incrementally and online to generate the encoded representations of the branches.
  • Single-batch updates on the VAEs can be executed to ensure availability of real-time encodings.
  • the states attain a stable encoding state when trained incrementally provided there is sufficient diversity of the samples.
  • a weighted sampling routine is performed to ensure diversity in the branches.
  • the weighting can be dictated by the reconstruction error akin to that of prioritized experience replay.
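  • The sketch below illustrates a single-batch, online VAE update whose per-sample reconstruction error can be reused as a sampling weight, in the spirit of prioritized experience replay; the network sizes, loss weighting, and class names are assumptions, not the patent's implementation:

```python
# Sketch: one online gradient step for a branch encoder (VAE).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchVAE(nn.Module):
    def __init__(self, in_dim: int, latent_dim: int = 16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 64)
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def single_batch_update(vae, optimizer, batch, weights):
    """One incremental step; `weights` up-weight poorly reconstructed samples."""
    recon, mu, logvar = vae(batch)
    recon_err = F.mse_loss(recon, batch, reduction="none").mean(dim=1)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    loss = (weights * (recon_err + 1e-3 * kl)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return recon_err.detach()  # reusable as the next round's sampling priorities
```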
  • the generated encoding of the branches can be accumulated to generate the parent branch’s encoded representation.
  • the generated encoding for the state can be appended to the vectorized representation of the requirement.
  • an action discovery routine can be implemented so as to automatically extract actions performed by a user from a sequence of states.
  • action discovery can be characterized by encoding generation; cluster identification; and a cluster schema extraction.
  • the action encoding can be realized in a similar manner as the state encoding.
  • An action can be the cause for the transition of the design system from one state to another.
  • the action can be identified.
  • the encodings can be accumulated along the difference tree to generate the action encoding.
  • a k-means clustering algorithm can be executed to identify the number of clusters, so as to identify the number of actions, thereby automatically discovering the actions performed by the user.
  • a sequential clustering can be executed with different numbers of clusters to compute the associated DBI score for each, selecting the cluster count with the lowest score as the number of actions.
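  • A sketch of that cluster-count selection, using k-means and the Davies-Bouldin index (DBI) with illustrative parameter ranges, is shown below:

```python
# Sketch: pick the number of action clusters by minimizing the DBI score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def discover_action_count(action_encodings: np.ndarray, k_max: int = 10) -> int:
    best_k, best_score = 2, np.inf
    for k in range(2, min(k_max, len(action_encodings) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(action_encodings)
        score = davies_bouldin_score(action_encodings, labels)  # lower is better
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```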
  • Actions can include an action type and the parameters of the action, which can differentiate each of the action types.
  • an extrusion operation can be parameterized by the length of the extrusion, the sweep angle, and the resultant body operation (union, intersection, etc.).
  • the recommendation module (e.g., the knowledge utilization module 108) can predict not only the type of action that is to be performed, but also its parameters.
  • the attributes of the actions within a cluster that change in value across the different instances of the action can be evaluated, so as to result in a flat list of attributes that are used as the parameters of the action.
  • the model can define a first level that predicts the type of action to be performed, and a second level that predicts the parameters of the action.
  • the number of levels of the hierarchy and the complexity of the model can change with different model extensions, and all such models are contemplated as being within the scope of this disclosure.
  • the action type predictor can utilize an imitation learning portion of the Deep Q-Learning from Demonstration algorithm.
  • a feed forward regression model that predicts the parameters of the actions can be trained.
  • the mean-squared error loss metric can be used to train each parameter predictor.
  • the reward can be formulated by processing the structure of the stored design process graph.
  • transitions that result in a backtrack of state can be penalized with a reward of -1, and others are given a reward of +1.
  • Such rewards can minimize the number of actions that are to be taken to reach the final state.
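  • A sketch of this reward assignment over the process graph follows (the graph layout matches the earlier networkx sketch and is an assumption): edges that return to a previously visited state receive -1, all others +1:

```python
# Sketch: annotate process-graph edges with backtrack-penalizing rewards.
def assign_rewards(graph, visit_order):
    """`visit_order` lists state ids in the order the designer reached them."""
    seen = set()
    for prev, curr in zip(visit_order, visit_order[1:]):
        seen.add(prev)
        reward = -1.0 if curr in seen else 1.0   # revisiting a state is a backtrack
        if graph.has_edge(prev, curr):
            graph.edges[prev, curr]["reward"] = reward
    return graph
```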
  • both the imitation learning agent and the parameter prediction model can be trained online and incrementally as data is populated in the database. Prioritized sampling can be used to prevent bias in the trained models.
  • the user learner 122 can provide an active learning perspective to the trained models.
  • when a recommendation is provided to the engineer, the associated decision made by the engineer is tracked. If the designer performs an operation that is different from the generated recommendation, an immediate update to the model can be enforced.
  • Active learning loss coupled with reinforced sampling can be leveraged to guide the model toward the engineer’s most recent choice.
  • the sampling weights can be updated automatically due to the higher reconstruction error for the new sample.
  • This active learning routine can be applied to the imitation learning agent and the parameter prediction models of the decision learner 120, and thus to the knowledge utilization module 108.
  • FIG. 6 illustrates an example of a computing environment within which embodiments of the present disclosure may be implemented.
  • a computing environment 600 includes a computer system 610 that may include a communication mechanism such as a system bus 621 or other communication mechanism for communicating information within the computer system 610.
  • the computer system 610 further includes one or more processors 620 coupled with the system bus 621 for processing the information.
  • the processors 620 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of, hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device.
  • a processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer.
  • a processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth.
  • processor(s) 620 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like.
  • the microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets.
  • a processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between.
  • a user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof.
  • a user interface comprises one or more display images enabling user interaction with a processor or other device.
  • the system bus 621 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 610.
  • the system bus 621 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth.
  • the system bus 621 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.
  • the computer system 610 may also include a system memory 630 coupled to the system bus 621 for storing information and instructions to be executed by processors 620.
  • the system memory 630 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 631 and/or random access memory (RAM) 632.
  • the RAM 632 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM).
  • the ROM 631 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM).
  • system memory 630 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 620.
  • a basic input/output system 633 (BIOS) containing the basic routines that help to transfer information between elements within computer system 610, such as during start-up, may be stored in the ROM 631.
  • RAM 632 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 620.
  • System memory 630 may additionally include, for example, operating system 634, application modules 635, and other program modules 636.
  • Application modules 635 may include aforementioned modules described for FIG. 1 and may also include a user portal for development of the application program, allowing input parameters to be entered and modified as necessary.
  • the operating system 634 may be loaded into the memory 630 and may provide an interface between other application software executing on the computer system 610 and hardware resources of the computer system 610. More specifically, the operating system 634 may include a set of computer-executable instructions for managing hardware resources of the computer system 610 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 634 may control execution of one or more of the program modules depicted as being stored in the data storage 640.
  • the operating system 634 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.
  • the computer system 610 may also include a disk/media controller 643 coupled to the system bus 621 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 641 and/or a removable media drive 642 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive).
  • Storage devices 640 may be added to the computer system 610 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
  • Storage devices 641, 642 may be external to the computer system 610.
  • the computer system 610 may include a user input interface or graphical user interface (GUI) 661, which may comprise one or more input devices, such as a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 620.
  • the computer system 610 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 620 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 630. Such instructions may be read into the system memory 630 from another computer readable medium of storage 640, such as the magnetic hard disk 641 or the removable media drive 642.
  • the magnetic hard disk 641 and/or removable media drive 642 may contain one or more data stores and data files used by embodiments of the present disclosure.
  • the data store 640 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. Data store contents and data files may be encrypted to improve security.
  • the processors 620 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 630.
  • hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • the computer system 610 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein.
  • the term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 620 for execution.
  • a computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media.
  • Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 641 or removable media drive 642.
  • Non-limiting examples of volatile media include dynamic memory, such as system memory 630.
  • Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 621.
  • Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
  • Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • the computing environment 600 may further include the computer system 610 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 680.
  • the network interface 670 may enable communication, for example, with other remote devices 680 or systems and/or the storage devices 641, 642 via the network 671.
  • Remote computing device 680 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 610.
  • computer system 610 may include modem 672 for establishing communications over a network 671, such as the Internet. Modem 672 may be connected to system bus 621 via user network interface 670, or via another appropriate mechanism.
  • Network 671 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 610 and other computers (e.g., remote computing device 680).
  • the network 671 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art.
  • Wireless connections may be implemented using Wi-Fi, WiMAX, and Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 671.
  • program modules, applications, computer-executable instructions, code, or the like depicted in FIG. 6 as being stored in the system memory 630 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module.
  • various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the computer system 610, the remote device 680, and/or hosted on other computing device(s) accessible via one or more of the network(s) 671 may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 6 and/or additional or alternate functionality.
  • functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 6 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module.
  • program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth.
  • any of the functionality described as being supported by any of the program modules depicted in FIG. 6 may be implemented, at least partially, in hardware and/or firmware across any number of devices.
  • The computer system 610 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computer system 610 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in the system memory 630, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality. This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality.
  • Functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments such modules may be provided as independent modules or as sub-modules of other modules.
  • Any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.”
  • Each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • In some alternative implementations, the functions noted in a block may occur out of the order noted in the figures.
  • For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Engineering design is a complex and time-consuming process that can be characterized as a series of decisions. An engineering computing system that includes a design application can train or otherwise assist engineers in place of an expert. The systems described herein can improve design cycle times, among other technical improvements. In particular, system state schemas are generated dynamically, and the manifolds defining the designs are learned adaptively through online refinement of state embedding models. Design decision models can be extracted to automate design decisions and to predict the next decisions an engineer may take. Such predictions can be obtained by generating a feature-based vectorization of designs and processes, which makes the gathered knowledge usable for imitation learning. Furthermore, the learning process can be contextualized by encoding the requirements associated with the engineering process, enabling the generation of product-specific, contextual decision recommendations in real time.
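The workflow sketched in the abstract (vectorize design states together with encoded requirements, then learn to predict an engineer's next decision from recorded sessions) can be illustrated with a minimal, hedged example. The feature names, the toy action set, and the use of scikit-learn's LogisticRegression as a stand-in behavioral-cloning model are illustrative assumptions only; the publication does not prescribe these choices.

# Minimal sketch (illustrative only, not the claimed method): feature-based
# vectorization of design states plus a behavioral-cloning "next decision" model.
# Feature names, the action set, and the model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

ACTIONS = ["refine_mesh", "run_low_fidelity_sim", "change_material"]  # toy action set

def vectorize(state: dict, requirements: dict) -> np.ndarray:
    """Concatenate numeric design features with an encoding of the requirements
    that contextualize the engineering process."""
    return np.array([
        state["mass_kg"],
        state["max_stress_mpa"],
        float(state["analysis_converged"]),
        requirements["target_mass_kg"],
        requirements["allowable_stress_mpa"],
    ], dtype=float)

# Recorded (design state, requirements, engineer's chosen action) triples.
reqs = {"target_mass_kg": 10.0, "allowable_stress_mpa": 250.0}
history = [
    ({"mass_kg": 12.0, "max_stress_mpa": 310.0, "analysis_converged": False}, reqs, "refine_mesh"),
    ({"mass_kg": 11.5, "max_stress_mpa": 240.0, "analysis_converged": True}, reqs, "change_material"),
    ({"mass_kg": 9.8, "max_stress_mpa": 200.0, "analysis_converged": True}, reqs, "run_low_fidelity_sim"),
]

X = np.stack([vectorize(s, r) for s, r, _ in history])
y = np.array([ACTIONS.index(a) for _, _, a in history])

# Behavioral cloning: fit a classifier that maps state vectors to the actions
# engineers actually took, so it can recommend the next design decision.
policy = LogisticRegression(max_iter=1000).fit(X, y)

new_state = {"mass_kg": 11.0, "max_stress_mpa": 300.0, "analysis_converged": False}
recommended = ACTIONS[int(policy.predict(vectorize(new_state, reqs).reshape(1, -1))[0])]
print("Recommended next design decision:", recommended)

Trained this way, the classifier acts as a simple imitation-learning policy: given the current design state and its requirements context, it recommends the decision engineers most often took in comparable situations.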
PCT/US2021/035655 2020-06-03 2021-06-03 Extraction et représentation automatisées de connaissances pour systèmes complexes d'ingénierie WO2021247831A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063033867P 2020-06-03 2020-06-03
US63/033,867 2020-06-03

Publications (1)

Publication Number Publication Date
WO2021247831A1 true WO2021247831A1 (fr) 2021-12-09

Family

ID=76641860

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/035655 WO2021247831A1 (fr) 2020-06-03 2021-06-03 Extraction et représentation automatisées de connaissances pour systèmes complexes d'ingénierie

Country Status (1)

Country Link
WO (1) WO2021247831A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018183275A1 (fr) * 2017-03-27 2018-10-04 Siemens Aktiengesellschaft Système de synthèse de conception générative automatisée utilisant des données provenant d'outils de conception et des connaissances provenant d'un graphe à jumeaux numériques

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018183275A1 (fr) * 2017-03-27 2018-10-04 Siemens Aktiengesellschaft Système de synthèse de conception générative automatisée utilisant des données provenant d'outils de conception et des connaissances provenant d'un graphe à jumeaux numériques

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AHMED HUSSEIN ET AL: "Imitation Learning", ACM COMPUTING SURVEYS, ACM, NEW YORK, NY, US, US, vol. 50, no. 2, 6 April 2017 (2017-04-06), pages 1 - 35, XP058327577, ISSN: 0360-0300, DOI: 10.1145/3054912 *
CHHABRA JASKANWAL P ET AL: "A method for model selection using reinforcement learning when viewing design as a sequential decision process", STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION, SPRINGER BERLIN HEIDELBERG, BERLIN/HEIDELBERG, vol. 59, no. 5, 15 December 2018 (2018-12-15), pages 1521 - 1542, XP036757724, ISSN: 1615-147X, [retrieved on 20181215], DOI: 10.1007/S00158-018-2145-6 *
RAMAMURTHY ARUN: "A Reinforcement Learning Framework for the Automation of Engineering Decisions in Complex Systems", 15 January 2019 (2019-01-15), XP055855480, Retrieved from the Internet <URL:https://smartech.gatech.edu/bitstream/handle/1853/62626/RAMAMURTHY-DISSERTATION-2019.pdf> [retrieved on 20211027] *

Similar Documents

Publication Publication Date Title
US20220374719A1 (en) Application Development Platform and Software Development Kits that Provide Comprehensive Machine Learning Services
JP7440420B2 (ja) 包括的機械学習サービスを提供するアプリケーション開発プラットフォームおよびソフトウェア開発キット
US11544604B2 (en) Adaptive model insights visualization engine for complex machine learning models
CN114207635A (zh) 使用元建模对机器学习和深度学习模型进行快速准确的超参数优化
US8935136B2 (en) Multi-component model engineering
US11861469B2 (en) Code generation for Auto-AI
US20200167660A1 (en) Automated heuristic deep learning-based modelling
US20200241878A1 (en) Generating and providing proposed digital actions in high-dimensional action spaces using reinforcement learning models
US20210141779A1 (en) System and method for facilitating an objective-oriented data structure and an objective via the data structure
US11886779B2 (en) Accelerated simulation setup process using prior knowledge extraction for problem matching
CN115427968A (zh) 边缘计算设备中的鲁棒人工智能推理
US20190228297A1 (en) Artificial Intelligence Modelling Engine
US20220036232A1 (en) Technology for optimizing artificial intelligence pipelines
CN112036563A (zh) 使用起源数据的深度学习模型洞察
JP2024516656A (ja) 産業特定機械学習アプリケーション
US11720846B2 (en) Artificial intelligence-based use case model recommendation methods and systems
US11620550B2 (en) Automated data table discovery for automated machine learning
US20230186117A1 (en) Automated cloud data and technology solution delivery using dynamic minibot squad engine machine learning and artificial intelligence modeling
US20210149793A1 (en) Weighted code coverage
WO2020055659A1 (fr) Génération et utilisation de modèles à auto-amélioration pilotés par des données avec simulation sélective de conception d'objets 3d
WO2021247831A1 (fr) Extraction et représentation automatisées de connaissances pour systèmes complexes d'ingénierie
JP6648828B2 (ja) 情報処理システム、情報処理方法、及び、プログラム
US20220300821A1 (en) Hybrid model and architecture search for automated machine learning systems
WO2020056107A1 (fr) Pipeline de simulation automatisée pour conception assistée par ordinateur générée par simulation rapide
CN117151247B (zh) 机器学习任务建模的方法、装置、计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21735548

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21735548

Country of ref document: EP

Kind code of ref document: A1