EP2425331A1 - Method for creating a user guide - Google Patents
Method for creating a user guide
- Publication number
- EP2425331A1 (application EP10719262A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- module
- analysis
- knowledge
- application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/12—Use of codes for handling textual entities
- G06F40/14—Tree-structured documents
- G06F40/143—Markup, e.g. Standard Generalized Markup Language [SGML] or Document Type Definition [DTD]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/103—Formatting, i.e. changing of presentation of documents
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
- G06F40/186—Templates
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/34—Graphical or visual programming
Definitions
- the invention relates to a method for generating at least one application description, with the method step generating the at least one application description with a plurality of application modules.
- the present invention relates to the field of generative programming, whereby the method generates an application description.
- the method is available as an application description generator.
- the application description generator is designed as a computer program
- WO 97/15882 discloses a method for generating an application description and an application description generator.
- the application description generator has an image editor on which a human user can select program definitions, data definitions and field definitions from a plurality of entered event elements. Different operator guidance questions are retrieved from a help file of the image editor when content, programs, sequences, files and records are defined by the user
- From WO 99/14651, a method for generating an application description, namely for the computer-aided production of database management software, is known
- An application editor is used here for generating an application description, wherein the application description represents the target software application and is created by the dialogue with a human user.
- the application description is stored in a database - here referred to as a dictionary.
- the application editor allows the user to specify application designs hierarchically, so that as-yet-undefined application blocks can be referenced from higher-level application blocks
- An application description generator is known from DE 195 23 036 A1.
- an application description in the form of source and object code is created automatically as the user works through the process of creating a physical file, a screen file, a form file and a program file
- the generating program is composed of 34 different application modules.
- Each of the 34 application modules represents a source program in partially completed state and consists of an individual component B, which is changed for each application, and a basic component A, which is not changed.
- the application modules can be roughly classified into the following categories
- system input generation, system query generation, main warning generation, query window generation, print document generation, and change generation
- the application description generators known in the prior art and the methods for generating application descriptions are not yet optimally designed.
- the known methods or application description generators require a multiplicity of user inputs and special user knowledge about field definitions, form elements and the like.
- the known application description generators are inflexible and each limited to a specific type of work processes
- the invention is therefore based on the object of designing and refining a method, and an application description generator for carrying out the method, in such a way that a user without specific IT expertise can use the application description generator and that different underlying work processes can be mapped largely automatically with the application description generator or the method
- This object is achieved by the following steps: reading in at least one base document; analyzing the at least one base document, wherein a knowledge base with knowledge elements is built up during the analysis, wherein at least one data field and/or at least one component are recognized as knowledge elements and the knowledge elements are preferably at least partially identified as assumptions; and determining at least one contradiction-free knowledge partition, each knowledge partition having a set of contradiction-free assumptions, wherein the at least one application description is generated from the at least one knowledge partition with the application building blocks
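The claimed sequence of steps — reading in base documents, building a knowledge base of facts and assumptions, determining contradiction-free knowledge partitions, and generating application descriptions — can be illustrated by a minimal sketch. All names and the toy recognition rule are hypothetical; the patent does not prescribe any concrete implementation.

```python
# Hypothetical sketch of the claimed pipeline: read base documents,
# build a knowledge base of facts and assumptions, determine
# contradiction-free knowledge partitions, generate application descriptions.

def analyze(base_documents):
    """Build a toy knowledge base: facts plus labelled assumptions."""
    facts, assumptions = [], []
    for doc in base_documents:
        for token in doc.split():
            if token.isupper():          # toy rule: upper-case tokens -> data fields (facts)
                facts.append(("data_field", token))
            else:                        # everything else is only assumed
                assumptions.append(("component", token))
    return facts, assumptions

def partitions(facts, assumptions, contradicts):
    """Yield a contradiction-free assumption set plus all facts."""
    chosen = []
    for a in assumptions:
        if all(not contradicts(a, b) for b in chosen):
            chosen.append(a)
    yield facts + chosen                 # a single greedy partition, for brevity

def generate(partition):
    """Turn a knowledge partition into a toy 'application description'."""
    return {"modules": [kind for kind, _ in partition]}

facts, assumptions = analyze(["NAME street CITY"])
descriptions = [generate(p) for p in partitions(facts, assumptions,
                                                lambda a, b: False)]
```

The real method would run many specialized analysis modules instead of the single toy rule, and would enumerate several competing partitions rather than one.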
- This method, and in its implementation as a computer program the corresponding application description generator, enables an application description to be generated for execution on a computer on the basis of electronically available base documents and data source objects such as external databases, without the intervention of a programmer or a human developer with IT expertise
- the method in the form of a computer program may be referred to as a designer or application designer, since the method creates the application description using the base documents and thus designs the underlying application
- the present method offers a number of advantages
- the generated application description is, in principle, independent of a specific operating system. It can be ported to any operating system.
- the generated application description is available after execution of the method in digital form, for example in the form of one or more files
- In the method, the computer performs a series of tasks previously reserved for humans.
- the method and the application description generator provide a generally applicable way of solving problems
- the application description generator is not limited to individual work processes such as accounting, order processing, production control and the like
- the application description generator can therefore be applied not only to the aforementioned work processes, but also to other work processes.
- Because the method described here does not presuppose the nature of the work process, and therefore does not presuppose a specific configuration of the knowledge elements, the method requires no expertise on the human side and is universally applicable
- a work process is understood to mean a sequence of activities that can in principle also be carried out by a computer.
- the work process is described by the basic documents.
- the work process does not have to be explicitly but only implicitly described by the base documents. It is sufficient that the base documents are designed in form and content so that a person without a detailed introduction to the work process would, on the basis of the base documents, be able to complete the work process with the base documents in electronic form or in paper form
- the method receives as input a set of basic documents, in particular at least one base document.
- Several basic documents can be read in.
- the method can receive as input a data source object which is also read in. Reading in means the provision of the basic documents and, if necessary, the data source objects
- Databases and/or other external "data sources", such as interfaces to other programs, may in particular serve as data source objects
- the method generates an application description suitable for implementation on a computer system.
- the application description forms a formal representation of the work process.
- the application description unambiguously and completely describes all parts of the application with the aid of application components.
- the application description represents a complete construction plan of the application.
- the computer program is designed as a runtime environment comparable to an interpreter
- the computer program is designed comparable to a compiler.
- the application description can also be present in machine language as a computer program and can thus be executed directly on a computer
- the basic documents and, if necessary, the data source objects are automatically analyzed by the method.
- the process extracts the necessary knowledge from the basic documents and the data source objects to execute the depicted work process.
- a knowledge base with knowledge elements is built up during the analysis
- During the analysis, at least one data field, in particular several data fields, and/or at least one component are detected
- a component is defined as a set of data fields and the structure that these data fields form together.
- Each component has at least one data field. Preferably, other knowledge elements are also recognized, for example formulas, conditions, relationships between knowledge elements, data sources and data source fields to represent the data source objects, and examples given in the base documents
- the recognized knowledge elements are at least partially characterized as assumptions.
- It is also conceivable that the knowledge elements are not marked as assumptions; the single knowledge partition then consists of the set of all facts.
- Preferably, the knowledge elements can be identified as facts and assumptions. Assumptions are plausible but uncertain and may contradict each other. Facts must not contradict each other. Assumptions must not contradict the facts.
- the knowledge elements identified in this first sub-step are then analyzed in further sub-steps, whereby further knowledge elements and assumptions are formed. The analysis can be continued until no further assumptions and knowledge elements can be formed anymore and no further analysis is possible
- the method determines at least one non-contradictory knowledge partition.
- the knowledge partition (s) have all the facts and a closed set of contradiction-free assumptions.
- the knowledge partitions consist in particular of the facts and the closed sets of contradiction-free assumptions.
- the knowledge partitions are consistent and preferably closed. A set of assumptions is contradiction-free if no two assumptions in it contradict each other
- the set of assumptions of a knowledge partition is preferably closed
- the set of assumptions is closed when no further assumption can be added without violating freedom from contradiction
- Preferably, a partition plausibility is assigned to each knowledge partition.
- the application description belonging to the respective knowledge partition is only generated if the partition plausibility is greater than a certain minimum plausibility. If the plausibility of the partition is large enough, the method generates a possible application description
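One possible reading of these steps (a hypothetical sketch; the patent leaves the concrete algorithm open) is to enumerate the maximal, i.e. closed, contradiction-free subsets of the assumptions and to keep only those partitions whose plausibility exceeds a threshold:

```python
# Hypothetical enumeration of closed, contradiction-free knowledge partitions.
from itertools import combinations

def contradiction_free(subset, contradicts):
    return all(not contradicts(a, b) for a, b in combinations(subset, 2))

def closed_partitions(assumptions, contradicts):
    """All maximal ('closed') contradiction-free assumption sets."""
    candidates = [set(s) for r in range(len(assumptions) + 1)
                  for s in combinations(assumptions, r)
                  if contradiction_free(s, contradicts)]
    return [s for s in candidates
            if not any(s < t for t in candidates)]   # keep only maximal sets

def plausible(partition, plausibility, threshold=0.5):
    """Partition plausibility: here simply the mean assumption plausibility."""
    return sum(plausibility[a] for a in partition) / len(partition) >= threshold

# Toy example: assumptions 'a' and 'b' contradict each other.
contradicts = lambda x, y: {x, y} == {"a", "b"}
parts = closed_partitions(["a", "b", "c"], contradicts)
```

The exhaustive enumeration is exponential and only meant to illustrate the definitions of "contradiction-free" and "closed"; a practical implementation would prune.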
- the application description is essentially composed by different application modules.
- the at least one application description is created from the at least one knowledge partition with the application modules.
- the application modules used describe the function of the work process, from the general view down to the detail. Essentially, two types of application modules can be distinguished, namely
- a first type of application blocks that realize data and data processing of the application description, such as data field blocks, action blocks, formula blocks and document template blocks, and
- a second type of application blocks that implement the presentation and execution of the application description, such as state blocks, form element blocks, task blocks, and condition blocks
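The two families of application modules might be modelled as a small class hierarchy; the class and field names below are illustrative only and do not appear in the patent:

```python
# Hypothetical sketch of the two families of application modules:
# a data/processing family and a presentation/execution family.
from dataclasses import dataclass, field

class DataModule:            # first type: data and data processing
    pass

class PresentationModule:    # second type: presentation and execution
    pass

@dataclass
class DataFieldModule(DataModule):
    name: str
    data_types: list

@dataclass
class FormulaModule(DataModule):
    result_field: str
    compute: callable        # the mapping rule bound to a data field

@dataclass
class StateModule(PresentationModule):
    name: str
    form_elements: list = field(default_factory=list)

price = DataFieldModule("price", ["decimal"])
total = FormulaModule("total", lambda qty, unit_price: qty * unit_price)
main = StateModule("main", form_elements=["price"])
```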
- a data field block can have any data type. Data field blocks can be linked to a data source object, such as a database or a hardware interface, via a data source block, i.e. they can load data from the data object and/or save data to it. Action modules or actions are triggered by user inputs or system events.
- An action module forms a sequence of
- Formula blocks implement calculations or data manipulations.
- Formula blocks can be bound to data field blocks or called by action blocks.
- Document instances of the base documents can be created from document template blocks using data from the data field blocks
- the visible part of an application described by the application description is described by the state structures.
- the flow logic of the application description is structured by the state structures.
- a state block can essentially correspond to a form or a screen window.
- the states of the visible part of an application, which are described by the state structures, have a set of form elements or form element blocks
- a form element block can have different functions, such as input and / or presentation of data from the data field blocks, triggering an action block or a task block, changes to a condition with a condition block
- Task modules are action modules which trigger an input and / or output and are started directly by the user or represent these processes.
- Condition modules represent logical expressions which are dependent on data fields or data field modules, in particular their values.
- Condition modules can be composed of elementary conditions. Elementary conditions map any comparison of a data field with a value or with another data field; the only requirement is that the comparison can be performed
- the state blocks each contain a set of form element blocks, action blocks and task blocks which are visible and available as long as the application is in the corresponding state.
- the transition between states is described by action blocks and condition blocks.
- Condition blocks can automatically transfer the application to a new state; in this case the new state block becomes active as soon as the conditions are fulfilled. Conversely, a condition block or a condition can also prevent a certain state change through action blocks or other condition blocks.
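The interplay of state blocks, action blocks and condition blocks described above resembles a guarded state machine. The following sketch is a hypothetical illustration, not the patent's implementation:

```python
# Minimal sketch of state blocks whose transitions are triggered by
# action blocks and guarded (allowed or prevented) by condition blocks.

class Application:
    def __init__(self, initial, transitions):
        # transitions: {(state, action): (target_state, condition)}
        self.state = initial
        self.transitions = transitions

    def trigger(self, action, data):
        key = (self.state, action)
        if key in self.transitions:
            target, condition = self.transitions[key]
            if condition(data):          # condition block guards the change
                self.state = target
        return self.state

app = Application("form", {
    # the 'submit' action only leads to 'done' if a name was entered
    ("form", "submit"): ("done", lambda d: d.get("name")),
})
```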
- From the application modules, the application description is generated
- This application description forms the implementation of the work process.
- one or more application descriptions based on respective knowledge partitions can be provided to the user of the application description generator or method.
- the user or human planner can select one of several application descriptions, postprocess it, and if necessary release it for further processing by one of the above-mentioned computer programs
- Preferably, the method has the possibility of asking the user questions
- Preferably, the human user can be presented with the possible answers, from which the user can select the solution that is right for him
- the method is particularly suitable for work processes involving a flow of input, processing and output of data that can in principle be represented without IT using base documents, e.g. forms. By using appropriately defined base documents, however, the method is in principle also suitable for any work processes.
- the method can be applied to base documents that describe the input, processing and output of technical control data, which are used to control a technical process.
- the application description represents the work process. The method automatically extracts the application description from the base documents by means of a computer system
- Application modules as well as all other data structures used are in digital form and are processed by the computer system.
- the method can be executed automatically on a computer system.
- the associated application description is executable as a computer program on the computer system and stored on a storage means
- FIG. 2 is a schematic representation of the method with analysis modules, generation modules and a coordination module, in a schematic representation the interaction between data, functionality and the sequence,
- a schematic representation of a second application description generated by the second knowledge partition; FIG. 28 is a schematic representation of a third application description generated by the third knowledge partition.
- 29 is a schematic representation of a fourth application description generated by the fourth knowledge partition.
- FIG. 30 is a schematic representation of another basic document in the form of a presentation with a column diagram
- FIG. 31 is a schematic representation of another basic document in the form of a presentation with a line diagram
- 32 is a schematic representation of an analysis module for analyzing comments with a core module and a submodule
- 33 is a schematic representation of the application manager with a user interface and with an interpretation module
- 34 is a schematic representation of the application manager in one
- 36 is a schematic representation of the interaction between screen elements, form fields and data fields, and
- 37 is a schematic representation of the interaction between data fields, formulas, conditions and actions.
- the invention relates to a method for generating at least one application description, with the method step
- At least one data field module and at least one status module and at least one action module are used as application components.
- the application description generator is stored or storable on a computer system with at least one memory means.
- the memory means can either be part of the computer system or be designed as a portable storage means such as a CD, DVD, USB stick, magnetic tape or the like. The application description generator, or parts of the application description generator, is stored on the storage means
- the application description generator is stored as a computer program.
- the method steps described can be executed by a processor unit of the computer system.
- the computer system has an input unit - in particular a keyboard - and an output unit - in particular a screen and/or a printer - and a processor unit with at least one processor
- Preferably, an application description is an exact blueprint for the execution of the work process on a computer.
- the application description may be implemented with an interpreter or runtime environment during execution of the application description in machine language.
- the runtime environment or interpreter are computer programs suitable for executing the application description on a computer system
- the application description may also designate an executable computer program
- Application modules are the units from which the application description is generated.
- the application description can consist in particular of the sum of all application modules used
- Work process: a work process is any sequence of activities, which may also repeat or branch, that is fundamentally computer-executable.
- the work process may consist of inputting, processing and outputting data.
- This data may, for example, be control data for a technical work process or data about another work process
- Basic documents are any files stored on a storage medium which can make sense to a human user in the context of a work process.
- Basic documents are here the electronic files which the user transfers as input to the process
- a document template is a template derived from a base document that, in the context of the procedure, serves as a template for document instances that can be filled with data values
- a document instance is a document that can be generated by the application description. It is generated from a basic document using a template and filled with data from the application description
- the application description contains the required application modules that describe how the document templates are created and how the document templates are filled with the corresponding data values to form document instances
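The chain from document template to filled document instance can be sketched as follows; the `<field>` placeholder syntax is a hypothetical choice for illustration only:

```python
# Hypothetical sketch: derive a template from a base document and fill
# it with data values to produce a document instance.
import re

def make_template(base_document):
    """Derive a template; this toy rule keeps <field> markers as placeholders."""
    return base_document  # the base document already carries the markers here

def fill_instance(template, values):
    """Create a document instance by filling placeholders with data values."""
    return re.sub(r"<(\w+)>", lambda m: str(values[m.group(1)]), template)

template = make_template("Invoice for <customer>, total <total> EUR")
instance = fill_instance(template, {"customer": "ACME", "total": 99})
```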
- Facts are knowledge elements that are considered valid within the scope of the method. Facts are certain components of every knowledge partition and therefore of every solution. Facts must not contradict each other.
- Assumptions are knowledge elements that are not considered certain and are therefore labeled as assumptions. An assumption is obtained from the base documents subject to reservation and is subjected to a plausibility check in the course of the further method. Assumptions can be contradictory, i.e. two or more assumptions can be mutually exclusive
- a knowledge partition or solution is a set of knowledge elements that includes all the facts and a consistent set of assumptions. This knowledge partition forms a solution and the basis for generating the application description.
- the application description is generated on the basis of a particular solution or knowledge partition. Since the method can yield multiple knowledge partitions or solutions for a set of base documents, several application descriptions can accordingly be generated, from which the user can then select one or more suitable application descriptions
- the knowledge base is defined as all information about the work process that the procedure collects in the course of the analysis, which ultimately forms the basis for the generation of knowledge partitions and the corresponding application descriptions.
- the knowledge base is structured and formalized by the classification into knowledge elements
- a knowledge element is an amount of information that forms a unit in the method and that plays an independent role during the analysis and the generation of the application description
- a coordination module is provided, wherein the coordination module provides a user interface for a human planner or user.
- the coordination module receives and communicates the inputs, ie the basic documents and the data sources to be used
- the user interface enables the selection of basic documents and, if required, data sources.
- the coordination module coordinates the analysis and manages, in a knowledge base, the knowledge elements that result from the individual analysis steps. More specifically, the coordination module can provide a number of functions for the further analysis of the knowledge elements. Further, the coordination module can select knowledge partitions and coordinate the generation of the application descriptions resulting from the knowledge partitions
- the analysis is preferably carried out by analysis modules
- the analysis modules preferably deliver results in the form of knowledge elements, assumptions and generation proposals to the knowledge base
- the knowledge base is, as mentioned, managed by the coordination module. One category of analysis modules specializes in the analysis of a particular document type. For example, these analysis modules can specialize in the analysis of MS Word, MS Excel, HTML or similar document types. Further analysis modules can be specialized in the further analysis of existing knowledge elements
- the coordination module determines the knowledge partition
- the application description is generated on the basis of the knowledge partitions by, in particular, independent generation modules.
- the generation modules are also controlled by the coordination module.
- the generation modules generate the application modules.
- the application modules are combined in the application description
- the application description is also administered by the coordination module
- the method has in particular the following six aspects
- a set of knowledge partitions, and thus solutions for the work process, is determined; a set of generation proposals for generating the application description, the generation proposals being based on the assumptions; a set of analysis modules that build up the knowledge base, make assumptions and make generation proposals, wherein analysis modules are all modules that analyze something and provide intermediate results; a coordination module that coordinates the analysis, manages the knowledge base and/or generates knowledge partitions based on the set of assumptions; and a set of generation modules that generate the application modules of the application description from the knowledge partitions and the generation proposals, wherein generation modules are all modules that generate the application modules from analysis results
- The task of each of the analysis modules is to generate knowledge elements about the work process that is to be mapped by the application description to be generated. These knowledge elements relate to data, inputs, outputs, functions, structures and relationships of the work process.
- Knowledge elements can preferably be generated by the analysis modules and stored in the knowledge base
- At least one data field is identified in the base documents.
- a data field forms a placeholder, which can assume different values of the same data type in the course of the work process
- All data potentially used in a work process is represented as a data field during the analysis.
- the data structure of the work process is mapped onto the data fields.
- This data structure or data fields form the basis for the realization of input masks, outputs and database tables of the application description
- a data field describes a single "date" or data element including one or more possible data types and other properties
- a data field is created during the analysis when an analysis module in a base document recognizes an element or structure that can serve as placeholders for values.
- a data field essentially corresponds to a variable
- a data field is not to be confused with the concrete value of a variable
- a data field has the following properties:
- Preferably, each data field has as properties a name, a reference to the origin, i.e. from which base document and/or which component this data field comes, a list of possible data types, a list of the components to which it belongs and/or relationships to other knowledge elements
- a reference to the origin of the data field is an unambiguous reference to the element or structure that served as the basis for generating the data field. A reference must in particular contain the base document and/or the component of origin
- the list of data types assigned to the data field may also be empty
- the data field has as a further property a list of components to which the data field is added.
- the data field has references to relationships to other knowledge elements as a property
- Preferably, integers, decimal numbers, character strings, dates and Boolean values (true/false) are recognized as data types in the method. It is also conceivable that further data types are recognized in the analysis
- For example, an analysis module for analyzing graphics files can use further data types such as "point", "line" and "circle", which can then also be processed by a suitable generation module. It is possible for a specific data type or data field to be known only to specific analysis modules and generation modules and to be processable only by them
- the properties associated with the data fields may be unlimited in principle
- a property has at least a name and a value or set of values. Properties are used in the analysis to store information about a data field that is processed by other analysis modules or generation modules. In particular, analysis modules that specialize in the processing of particular data fields can use one or more properties to identify the data fields relevant to them
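A data field with the properties listed above might be represented as a simple record; the field names below are illustrative only and not prescribed by the patent:

```python
# Hypothetical representation of the 'data field' knowledge element with
# the properties listed above: name, origin, data types, components,
# relationships, and an open-ended property list.
from dataclasses import dataclass, field

@dataclass
class DataField:
    name: str
    origin: str                                      # base document / component of origin
    data_types: list = field(default_factory=list)   # the list may be empty
    components: list = field(default_factory=list)   # components it belongs to
    relationships: list = field(default_factory=list)
    properties: dict = field(default_factory=dict)   # property name -> value(s)

zip_code = DataField("zip", origin="order_form.xls")
zip_code.data_types.append("string")
zip_code.properties["max_length"] = 5
```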
- components may be considered as further knowledge elements.
- a component is defined as a set of, in particular, related data fields and the structure that these data fields form together
- the data structure of a work process is reflected in the structure of the basic documents.
- Components serve to evaluate this structure of the basic documents and to group data fields in the sense of a structure
- Preferably, each component is given at least a name, a list of data fields, a reference to the origin, i.e. from which document this component is derived, and/or possibly relationships to other knowledge elements.
- In the analysis, a component maps part of a base document or a complete base document. The mapped part of the base document, i.e. the component, is distinguished from the rest of the base document by the nature and arrangement of its data fields
- a component may be formed by a table or a list or an address field with name, street, city or the like.
- a component is essentially determined by its set of data fields, together with references to other knowledge elements by means of which the purpose of the component can be mapped.
- the component can in particular have the following properties
- the name of a component can be the name of a base document (for example, an Excel spreadsheet), and the name of that component is searched for in the other documents in the analysis
- a reference to the origin of the component ie a clear reference to the basic document and the part of the basic document that served as the basis for generating the component.
- the reference in particular contains the base document from which the component originated
- the component also has a list of all data fields belonging to the component.
- the list of data fields must not be empty.
- the component can have a list of further properties, which can also be empty
- the components may have references to relationships to other knowledge elements
- Analysis modules can be provided which are specialized in the processing of specific components. These specialized analysis modules can use one or more further properties to identify the components relevant to them.
- the properties may consist of a label and optionally of a value
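A component as characterized above — a name, an origin, and a non-empty list of data fields recognized from the arrangement of a base document — might be sketched as follows; the label-based detection rule is a toy assumption, not the patent's method:

```python
# Hypothetical analysis step: group data fields that appear as adjacent
# labelled lines of a base document into a component.

def detect_component(name, lines, origin):
    """Toy arrangement pattern: 'Label: value' lines form related data fields."""
    fields = [ln.split(":")[0].strip() for ln in lines if ":" in ln]
    if len(fields) >= 2:              # togetherness suggests a common purpose
        return {"name": name, "origin": origin,
                "data_fields": fields}   # the data field list must not be empty
    return None

address = detect_component(
    "address",
    ["Name: ____", "Street: ____", "City: ____"],
    origin="order_form.doc",
)
```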
- Components are created during the analysis when an analysis module considers a set of data fields to be related by the way they are arranged and suspects a purpose in this relatedness. Examples of the recognition of such arrangement patterns and relationships are given by way of example in a further part of the description. c) Formulas
- a formula is a mapping rule that determines a result from a set of inputs, in particular with the help of operators. Data fields and constant values can be used as inputs. The result of the mapping is stored in a data field; therefore, a formula is usually bound to a data field. "Mapping rule" here covers both calculations and non-numerical mappings of a set of inputs to an output.
- knowledge elements of the formula type are defined as follows
- an operand can again be a formula, a function, a data field or a constant value
- a function maps a set of input values to an output value, where a function is an encapsulated unit
- a “condition” is a special form of a formula that maps a set of inputs to one of the two values "true" or "false", in particular using comparison and/or logical operators. Conditions are, for example, bound to either data fields or components
- a "condition" has one or two operands of data type Boolean
- a condition may have a logical operator. If two operands are provided, an operator must also be provided
- Functions and data fields have the data type Boolean.
- An operand can be a condition, a function, a data field or a comparison
- A comparison consists of a comparison operator and two operands with the same data type. Operands can be functions, data fields and constant values
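The condition and comparison structure described above lends itself to a small recursive evaluator. The following sketch is hypothetical; the operator set and the tuple encoding of nodes are illustrative choices:

```python
# Hypothetical evaluator for the condition grammar sketched above:
# a condition is a logical node, a comparison applies an operator to
# two same-typed operands, and leaves are data-field names or constants.
import operator

OPS = {"==": operator.eq, "<": operator.lt, ">": operator.gt,
       "and": lambda a, b: a and b, "or": lambda a, b: a or b,
       "not": lambda a: not a}

def evaluate(node, fields):
    if isinstance(node, tuple):               # (operator, *operands)
        op, *operands = node
        return OPS[op](*(evaluate(o, fields) for o in operands))
    return fields.get(node, node)             # data field value or constant

cond = ("and", (">", "qty", 0), ("==", "status", "open"))
```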
- the knowledge element "relationship” depicts a connection between two or more knowledge elements.
- the analysis of the relationships between knowledge elements forms an important source for assumptions about the course of the work process and thus about the flow logic of the application description. Relationships can exist between all kinds of knowledge elements; a relationship consists of a set of knowledge elements and a relationship type
- the relationship type is defined by an analysis module and can be known by all or only one group of analysis and generation modules. Examples of relationships are explained in more detail later by means of an example
- the knowledge element "data source" represents a persistent data object outside the application description or the generated application, from which the application, i.e. the executed application description, retrieves data and/or to which the application description can supply data
- a working process is usually embedded in an IT and process landscape.
- the working process reads data from existing databases and receives data from external processes or interfaces.
- it stores data in existing databases and supplies data to other partners or interfaces.
- the knowledge element data source maps this communication. Furthermore, the knowledge element data source serves to create new data objects for storing data during the execution of the application description.
- the knowledge element data source allows the application, i.e. the completed application description, to exchange data with data objects outside of the application.
- Data objects can both store data, in particular files, databases, etc., and be generated by the application or the application description. Data objects can also be hardware interfaces for controlling an external device. Data objects exist outside the application description, which means that they are technically independent data objects which are also available to other programs.
- the method makes it possible to create a data source for all data objects whose existence is not limited in time
- the human user of the method can, in addition to basic documents, make data objects known as input. Such a data object then supplies data or can receive data.
- a data source knowledge element is assigned to the data object during the analysis.
- Each data source has a unique name as a property and, in particular, further properties such as the data structure, in particular fields and data types.
- During the analysis, relationships between already recognized data fields, components and data sources are analyzed, in particular on the basis of the names of the data fields and the data types used.
- For components detected during the analysis, new data sources can be created during the method. These newly generated data sources assigned to the components form data stores which, for example, can be created in the form of a database table
- the data source field knowledge element represents a single field in a data object.
- Data source fields allow data values to be exchanged with the data objects.
- a data source field is structurally similar to the data field knowledge element and can therefore be structured essentially analogously
- the knowledge element example is a data value existing in a basic document for a data field recognized during the analysis or a set of data values for the associated data fields of a single component.
- Knowledge elements of the example type are an important source of information about data types or special properties of data fields or components.
- assumptions can be derived from the data type of a data field.
- the analysis modules are designed in such a way that even more complex knowledge from the analysis of the knowledge elements can be used
- one of the analysis modules may be designed such that knowledge is derived from a set of value tuples.
- one of the analysis modules may be designed such that a formula is derived from an example with a value tuple
- the knowledge elements are used to represent the work process that is represented by the basic documents.
- the application description is composed of the application modules.
- the application description can essentially consist in particular of the application modules
- At least one data field module, at least one status module and at least one action module are used as application modules. This division is based on the basic idea that the business process rests on a triangular relationship between data, the process flow and functionality (see FIG.). The underlying data entered by the user controls the process flow or determines which execution options are available. The data is represented here with data field modules. The flow of the business process in turn enables the functionality or makes certain functionalities available to the user, and the provision of the functionality is realized by the use of state modules. The state modules can act on the data fields, for example by calculations, by changing the data, by displaying the data, etc. In the following, reference is made to FIG.
- the data field block is used for preserving and manipulating data
- the data field module is assigned to the knowledge element data field. Values or information can be stored in the data field module
- the process of the work process is represented by at least one state block
- the state blocks structure the work process
- the state modules can, in particular, provide input masks and functionality for the user
- the state blocks structure and summarize the interaction of the other application blocks
- Action modules implement the functionality of the work or business process
- Action modules represent the execution of functions; for example, action modules can be provided for calculation, for data manipulation, for the output of data, for the generation of documents, for loading data and for saving data
- Data field modules can represent individual values of a specific data type in the execution of the application description. Unlike the data field knowledge elements, the data field modules have a specific data type.
- the value can be the result of a formula module.
- the changes in the value of a data field block can trigger the recalculation of a formula or the execution of a formula module or a condition module or the execution of an action module
- the data field module can have links to formula modules in which the data field module occurs.
- the data field module can also be linked to form elements or form element modules. When a new data value is entered into the form element during the execution of the application description, the value of the data field module is then changed, for example
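The triggering behavior described above (a value change in a data field module causing linked formula modules to recalculate) is essentially an observer pattern; a minimal sketch, with assumed names:

```python
# Sketch of a data field module whose value change triggers linked
# formula modules, as described in the text (observer pattern; the API
# is an assumption, not the patent's actual interface).

class DataFieldModule:
    def __init__(self, name, value=None):
        self.name, self._value = name, value
        self.linked_formulas = []   # formula modules in which this field occurs

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        # A change of the value triggers recalculation of every linked formula.
        for formula in self.linked_formulas:
            formula.recalculate()

class FormulaModule:
    def __init__(self, target, compute, inputs):
        self.target, self.compute, self.inputs = target, compute, inputs
        for field in inputs:
            field.linked_formulas.append(self)

    def recalculate(self):
        # Write directly to the backing value to avoid re-triggering loops.
        self.target._value = self.compute(*(f.value for f in self.inputs))

a = DataFieldModule("a", 2)
b = DataFieldModule("b", 3)
total = DataFieldModule("total")
FormulaModule(total, lambda x, y: x + y, [a, b])
a.value = 10   # triggers recalculation of the linked formula
```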
- An action module has a sequence of commands which are executed one after the other, whereby jumps are possible depending on the execution
- A command is, analogous to a function in a programming language, the smallest functional unit of an application description
- a command is defined by a unique identification (ID), which distinguishes the command from all other commands.
- a command is characterized by the semantics of its execution or its meaning
- a command is defined by the parameters that the command receives (What does the command do?). Furthermore, the command is defined by the results of its execution (What is the result of the command?). Results of the execution of the command can be a value change in data field modules, a change in a data source (for example, values can be written to or deleted in the data source), the creation of a document, or a transition to another state of the application description.
- Each generation module has a repertoire of commands that it can build into an action module. There are commands that are known to all generation modules and those that are known only to one or a group of generation modules.
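The command and repertoire notions above can be sketched as follows: each command has a unique ID, named parameters and defined results, and an action module is a sequence of commands executed in order. The concrete commands are illustrative assumptions:

```python
# Sketch of the command notion: each command has a unique identification
# (ID), defined parameters and defined results; a generation module owns
# a repertoire of commands it can build into action modules. The two
# concrete commands shown are assumed examples.

class Command:
    def __init__(self, command_id, parameters, execute):
        self.command_id = command_id    # unique identification (ID)
        self.parameters = parameters    # names of the expected parameters
        self.execute = execute          # the semantics of the execution

def set_value(context, field, value):
    context[field] = value              # result: value change in a data field

def goto_state(context, state):
    context["__state__"] = state        # result: transition to another state

REPERTOIRE = {
    "SET_VALUE": Command("SET_VALUE", ["field", "value"], set_value),
    "GOTO_STATE": Command("GOTO_STATE", ["state"], goto_state),
}

# An action module is a sequence of commands executed one after the other.
def run_action_module(commands, context):
    for command_id, kwargs in commands:
        REPERTOIRE[command_id].execute(context, **kwargs)
    return context

ctx = run_action_module(
    [("SET_VALUE", {"field": "price", "value": 99}),
     ("GOTO_STATE", {"state": "checkout"})],
    {},
)
```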
- Commands are part of the execution of the application description and not part of the method for generating the application description
- a variety of examples are known of how value changes in data fields, changes in data sources or the transition from one state to another state can be realized by means of standard programming languages
- a formula module represents a computation rule, which consists of operators and operands, whereby the operands can be data values, data field modules or encapsulated function modules.
- Function modules are, similar to the action modules, defined by a unique identification (ID), their parameters, their semantics and their result or result type. However, the analysis modules are usually responsible for the selection of function modules.
- Again, there are function modules whose semantics are known to all analysis modules and those whose semantics are known only to one or a group of analysis modules
- a condition module can be considered as a special case of a formula module and can be implemented as a special case of a formula module.
- Condition modules are made up of comparative and / or logical operators and deliver as a result either "true” or "false”.
- a task module is technically an action module, but it can be executed by the user when executing the application description. Task modules are linked to one or more state modules, i.e. they are only available to the user if one of the state modules to which they are bound is currently active. When executing the application description, only one state module is ever active
- the option of selecting a task module can be controlled by condition modules. For example, the generation module that generates a task module for generating a document can generate a condition that is fulfilled when certain data field modules required for the document are filled. The condition module is then bound to the task module so that the task module is not released until the user has entered the appropriate data. For each task module, it can be specified that the task module must be executed before the current status module or the entire application description is ended. For each task module, it can also be determined whether the task module can be executed only once or several times
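The binding and release rules for task modules described above can be sketched as follows (state binding, a release condition, and a once-only flag; the interface is an assumption):

```python
# Sketch of a task module bound to state modules and released by a
# condition, as described in the text. Names are illustrative.

class TaskModule:
    def __init__(self, name, bound_states, condition=None, repeatable=False):
        self.name = name
        self.bound_states = set(bound_states)  # states in which the task exists
        self.condition = condition             # release condition (or None)
        self.repeatable = repeatable           # may it run more than once?
        self.executions = 0

    def available(self, active_state):
        # Only one state module is ever active; the task is offered only
        # there, only if its condition holds, and only if it may still run.
        if active_state not in self.bound_states:
            return False
        if self.condition is not None and not self.condition():
            return False
        return self.repeatable or self.executions == 0

    def execute(self):
        self.executions += 1

# Hypothetical document-generation task released once a required field is filled.
data = {"customer_name": None}
task = TaskModule("generate_document", ["order_entry"],
                  condition=lambda: data["customer_name"] is not None)
```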
- a state module is defined by the possibilities which it offers the user when executing the application description, namely, in particular: an input that the user can or must make; an output that is provided to the user; task modules that the user can or must call or that are triggered by the user's input; and changes to other state modules that are actively triggered by the user or automatically triggered by the fulfillment of specified conditions.
- State modules can have special form element modules. The form element modules provide the visible representation of the application description.
- a form element module is essentially the visible representative of the underlying data field module
- the user works through, in particular, the various state modules. A state module comprises a set of form element modules, action modules and task modules; the form elements, actions and tasks are visualized by the state module as long as the application is in the corresponding state.
- the transition between different states in the application description, i.e. the transition between different state modules, is controlled by the action modules and condition modules.
- Condition modules can, for example, trigger certain state changes automatically
- a condition module can also prevent a certain state change through action modules or other condition modules.
- the application description is preferably represented by the status modules in individual forms, which can correspond to screen windows displayed one after the other
- a document template module is generated from a basic document, whereby instances of the basic document filled with data can be created.
- a document template module consists of a prepared and empty copy of the basic document and an action module that is executed if an instance is to be created. This action module then generates the document instance and fills the document instance with the data.
- This is an example of commands that are only known to a specific generation module.
- For each class of basic documents for which document template modules can be created, there exists a special generation module with specific commands tailored to the respective class of basic documents
- Form element modules are components of state modules. A form element module represents a data field module and serves for communication with the user, i.e. for displaying and/or inputting the data value of the data field module
- Data source modules implement the exchange of data between the application description, or the executed application, and data objects outside of it
- Data objects exist independently of the application description and can both supply data to the application description and receive data from the application.
- Data objects thus serve both for the storage of data and for communication with other applications or technical features
- Data source modules form the interface to the data objects in the application description
- a proposal for generation is always directed to a particular generation module that implements the proposal and can generate appropriate application building blocks.
- a proposal for generation contains the following information: information about the generation module that will implement the proposal, and a set of knowledge elements that are to be implemented
- the proposal contains further information that describes the implementation of the knowledge element in more detail.
- the proposal also contains a list of application modules that the generation module is to create. For each application module, the proposal contains more detailed information which describes the application module or its generation in more detail
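The structure of a proposal for generation listed above can be sketched as a simple record; the field names are assumptions chosen for illustration:

```python
# Sketch of the structure of a proposal for generation: the target
# generation module, the knowledge elements to implement with detail
# information, the application modules to create, and the assumptions
# the proposal is based on. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class GenerationProposal:
    generation_module: str                                   # module that implements it
    knowledge_elements: dict = field(default_factory=dict)   # element -> details
    application_modules: list = field(default_factory=list)  # (module, details)
    assumptions: list = field(default_factory=list)          # implied assumptions

proposal = GenerationProposal(
    generation_module="FormGenerator",
    knowledge_elements={"data_field:price": {"data_type": "currency"}},
    application_modules=[("form_element", {"label": "Price"})],
    assumptions=["price_is_currency"],
)
```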
- the knowledge elements are at least partially identified as assumptions. Each knowledge element can be labeled either as a fact or as an assumption. A knowledge element is considered a fact if it is regarded as certain and does not contradict any other knowledge element. In contrast, assumptions are only plausible, and consequently assumptions may conflict with other assumptions
- the analysis modules also decide whether a knowledge element or an assumption contradicts already existing knowledge elements. If a contradiction to an existing knowledge element is recognized, then the further procedure depends on whether the knowledge elements involved are facts or assumptions. The following options indicate how a contradiction can be dealt with:
- whose plausibility can be set at 99%
- the existing knowledge element is a fact and the new knowledge element is an assumption
- the existing knowledge element is converted into an assumption.
- This assumption can in turn be assigned a plausibility of e.g.
- the new knowledge element is considered an assumption whose plausibility can be specified as 99%. However, this case can only occur if the existing knowledge element has been generated by another analysis module, since a contradiction between two knowledge elements generated by the same analysis module can only occur if the information in the basic documents is insufficient to make one of the knowledge elements appear certain and the other knowledge element appear implausible. Nothing changes here in the assumption about the already existing knowledge element
- the new knowledge element with the plausibility determined by the analysis module is regarded as an assumption.
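One possible reading of the contradiction-handling options above can be sketched as follows; the 99% figure follows the text, while the function layout and the exact case mapping are assumptions:

```python
# Sketch of the contradiction handling described in the text: when a new
# knowledge element contradicts an existing one, the outcome depends on
# whether each side is a fact or an assumption. This is one possible
# reading; the representation as dicts is an illustrative assumption.

def resolve_contradiction(existing, new, same_module=False):
    """Each element is a dict with 'kind' ('fact' or 'assumption') and
    optionally 'plausibility'. Returns the two elements after resolution."""
    if existing["kind"] == "fact" and new["kind"] == "assumption":
        # The existing fact is converted into an assumption with a high
        # plausibility (e.g. 99%).
        existing = {"kind": "assumption", "plausibility": 0.99}
    elif existing["kind"] == "assumption" and new["kind"] == "fact":
        # The new fact is treated as an assumption with plausibility 99%;
        # per the text this can only happen across different analysis modules.
        assert not same_module
        new = {"kind": "assumption", "plausibility": 0.99}
    # assumption vs. assumption: both keep the plausibility their
    # analysis modules determined.
    return existing, new

old, fresh = resolve_contradiction({"kind": "fact"},
                                   {"kind": "assumption", "plausibility": 0.8})
```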
- the method determines at least one non-contradictory knowledge partition, the at least one knowledge partition each having a set of consistent assumptions.
- the assumptions are in turn linked to knowledge elements or assigned to knowledge elements, so that a knowledge partition simultaneously defines a set of knowledge elements
- suggestions were made
- the determination of the knowledge partition is a preliminary stage on the way to the application description, which serves for the selection of suitable knowledge elements and suggestions for the generation
- the knowledge partition contains not only the assumptions, but in particular all the facts, i.e. all knowledge elements that are considered certain, and all proposals for generation that are based solely on facts, i.e. in the implementation of these proposals for generation only knowledge elements are needed that are facts
- the knowledge partition in particular has a complete, contradiction-free set of assumptions.
- a set of assumptions is consistent if there are no two assumptions in that set that conflict.
- the set of assumptions is complete when no further assumption can be added without violating the property of consistency
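The consistency and completeness properties just defined can be sketched directly; contradictions are modelled here as unordered pairs, which is an assumed representation:

```python
# Sketch of the two properties defined above: a set of assumptions is
# consistent if no two of its members contradict each other, and complete
# if no further assumption can be added without breaking consistency.

def consistent(selected, contradictions):
    # No contradicting pair may be fully contained in the selection.
    return not any({a, b} <= selected for a, b in contradictions)

def complete(selected, universe, contradictions):
    # Complete: every assumption not yet selected would introduce a conflict.
    return all(not consistent(selected | {extra}, contradictions)
               for extra in universe - selected)

universe = {"A1", "A2", "A3"}
contradictions = [("A1", "A2")]
```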
- the completeness ensures that the knowledge partitions differ from one another, since there can only be several knowledge partitions if contradictory assumptions exist. Since every complete knowledge partition has at least one differing assumption, the knowledge partitions also differ from one another. On the other hand, the number of possible knowledge partitions is kept low by the completeness of the set of assumptions, since otherwise, due to power-set formation over non-contradicting assumptions, the number of possible knowledge partitions would greatly increase. Omitting the completeness requirement would thus lead to increased computation time.

2.5 Coordination module
- the coordination module fulfills one or more of the following tasks
- with the coordination module, a user interface is made available to the user.
- the user interface enables communication with the user. For example, the user can select the base documents and the data sources.
- the coordination module preferably controls the analysis.
- the analysis modules are called by the coordination module. Preferably, the analysis modules do not communicate with each other directly, but via the coordination module
- the coordination module manages the assumptions
- the proposals for generation are preferably managed by the coordination module. Based on the determined knowledge partitions, the coordination module preferably coordinates the work of the generation modules.
- With the coordination module, the application description is generated by the generation modules; the application description is stored with the coordination module and released
- using the coordination module, the user can specify parameters for the method for analysis and generation.
- with the coordination module, the user of the method selects the basic documents for the analysis.
- preferably, questions can be asked of the user
- These questions can be sent by the analysis modules to the coordination module, where the coordination module issues the questions to the user.
- the analysis modules can provide the user with information via the coordination module, eg about faulty basic documents
- the coordination module gives the user the ability to test and release an application description. However, this is no longer part of the method, because the process of creating the application description is then complete
- the communication between the analysis modules is controlled via the coordination module by requests and feedback messages.
- a request can be sent to the coordination module by an analysis module at any time. Two types of requests are distinguished
- the analysis module informs the coordination module to which other analysis module the request is to be forwarded. In this case, the request is forwarded by the coordination module to the other analysis module
- in addition, an open request is provided. For an open request, the coordination module is informed of what information the analysis module is providing and what information the analysis module expects in response.
- depending on these two pieces of information, the coordination module selects one or more suitable analysis modules to which the open request is forwarded
- the analysis modules communicate to the coordination module which information they can process in a request and which information, in particular which knowledge elements they can then deliver
- if an analysis module can process the open request, the result of the request is returned to the coordination module, and a feedback message is forwarded by the coordination module to the originally requesting analysis module.
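The routing of an open request through the coordination module, as described above, can be sketched as follows. Modules register which information they accept and deliver; the matching rule and all names are assumed simplifications:

```python
# Sketch of open-request routing: an analysis module states what it
# provides and what it expects; the coordination module forwards the
# request to modules with matching registered capabilities and collects
# the feedback. All names are illustrative assumptions.

class CoordinationModule:
    def __init__(self):
        self.capabilities = {}   # module name -> (accepts, delivers)
        self.handlers = {}       # module name -> callable

    def register(self, name, accepts, delivers, handler):
        self.capabilities[name] = (accepts, delivers)
        self.handlers[name] = handler

    def open_request(self, provided, expected, payload):
        # Select suitable analysis modules based on the two pieces of
        # information, forward the open request, and collect feedback.
        feedback = {}
        for name, (accepts, delivers) in self.capabilities.items():
            if provided == accepts and expected == delivers:
                feedback[name] = self.handlers[name](payload)
        return feedback

coord = CoordinationModule()
coord.register("FormulaAnalysis", accepts="value_tuples", delivers="formula",
               handler=lambda values: {"formula": "sum", "inputs": len(values)})
reply = coord.open_request("value_tuples", "formula", [1, 2, 3])
```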
- alternatively, the analysis modules could also communicate directly with each other
- orders can also be sent to the coordination module. Orders differ from requests in that the results of orders only arrive at a later time, in particular after completion of the order
- Each analysis module is preferably a self-contained program unit. For example, each analysis module can be realized by a class, in particular a C++ class
- Each analysis module exclusively communicates with the coordination module.
- the analysis modules read and modify the knowledge elements of the knowledge base, create new knowledge elements and write them into the knowledge base. Assumptions are made and communicated to the coordination module
- the coordination module can provide the analysis modules with possibilities to manage assumptions
- the analysis modules use the possibilities of the coordination module to manage assumptions
- the analysis modules are used to make suggestions for the generation that can be implemented by the generation modules
- the analysis modules use a collection of heuristics to derive new knowledge elements from the base documents and / or the knowledge base and to identify these as facts or assumptions.
- the method can easily be extended and/or ported to other operating systems, since the analysis modules are essentially defined by their input and output behavior.
- in the following, analysis modules which play a special role in the method, and which should therefore also be considered in the preferred embodiment of the method, are presented in more detail
- Each document analysis module specializes in the analysis of a specific class of basic documents with a specific file format. Examples are basic documents with the MS Word file format, MS Excel, HTML, plain text files, MS PowerPoint presentations, Open Office text documents, or source code files for certain programming languages, e.g.
- the basic documents are sources of knowledge, whereby the method builds the knowledge base with the knowledge elements based on the analysis of the basic documents.
- the respective document analysis module knows the technical structure regarding the file format of its document class in order to analyze such a special basic document, extract found knowledge elements and store them in the knowledge base.
- Analysis modules provide the data fields, components, formulas, etc. contained in the basic documents.
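The specialization of document analysis modules by document class can be sketched as a registry that routes each basic document to the analyser for its file format; dispatching on the file extension is an assumed simplification:

```python
# Sketch of document analysis modules specialized by file format: each
# module registers the document class it understands, and a basic
# document is routed to the matching analyser, which extracts knowledge
# elements. The extension-based dispatch is an assumed simplification.

class DocumentAnalysisRegistry:
    def __init__(self):
        self.modules = {}

    def register(self, extension, analyser):
        self.modules[extension] = analyser

    def analyse(self, filename, content):
        extension = filename.rsplit(".", 1)[-1].lower()
        if extension not in self.modules:
            raise ValueError(f"no document analysis module for .{extension}")
        # The specialised module extracts knowledge elements from the
        # basic document; in the method these would go into the knowledge base.
        return self.modules[extension](content)

registry = DocumentAnalysisRegistry()
# Hypothetical plain-text analyser: each non-empty line becomes a data field.
registry.register("txt", lambda text: [{"type": "data_field", "name": line.strip()}
                                       for line in text.splitlines() if line.strip()])
elements = registry.analyse("order_form.txt", "customer\namount\n")
```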
- the document analysis module works as a closed unit; the implementation of the analysis steps of the document analysis module does not have to be known to the coordination module
- In addition to the public knowledge elements that the respective document analysis module adds to the knowledge base, it can generate private knowledge and bind it to the analyzed basic document. This private knowledge is required if a document template is generated from the basic document, from which in turn document instances are to be generated in the course of executing the application description. Essentially, the document analysis module generates knowledge about how the document template should be structured
- the document analysis can make use of various heuristics or approaches in order to analyze the knowledge elements of the basic documents
- a respective document analysis module can recognize knowledge elements from the structure of the basic document or from the structure of parts of the basic document. This heuristic examines the nature and arrangement of the content of the basic document. For example, lists and tables in Word documents can be recognized by their layout and, if applicable, their numbering
- the document analysis modules can evaluate comments in the base documents.
- the author of such a base document has the ability to write comments and deposit them in the base document.
- Such comments may be addressed to other readers of the basic document. By analyzing the comments by means of the document analysis modules, further information can be gained about the structure or arrangement to which the comment refers. If this structure or arrangement is converted into a knowledge element, the knowledge element can be equipped with further properties on the basis of the comment, for example in the case of a recognized data field in an MS Word document
- the knowledge element analysis module is specialized in further analysis of a particular class of knowledge elements.
- the knowledge element analysis modules may be provided for further analysis of formulas (see above) or data fields.
- the knowledge element analysis modules assist the document analysis modules in generating knowledge elements by processing aspects of knowledge elements which the document analysis modules do not recognize, or which can only sensibly be fully processed at a later point in the analysis.
- the component analysis module specializes in the analysis of a particular class of components.
- a class of components is defined by the presence or absence of certain properties of a component. Since a component can in principle be assigned any desired properties, analysis modules that assign component properties (in particular, these are the document analysis modules) and analysis modules that analyze components more closely must be coordinated or use the same property names.
- the relationship analysis module specializes in the analysis of certain relationships between knowledge elements.
- relationship analysis modules are provided for analyzing relationships between various components. From these relationships, preferably, the flow of the business process or the flow logic of the application description is created. This will be explained in more detail in the further course of the description (see Fig. 11) using an example.
- the determination of the at least one knowledge partition and thus the determination of possible variants of the application description may be considered again.
- the knowledge partitions are determined on the basis of the assumptions made, ie possible variants of the application description are selected.
- a knowledge partition, which can also be called a solution, is a maximal, consistent set of assumptions and facts.
- Preferably, the coordination module has criteria that evaluate some of the found knowledge partitions as unsuitable and sort them out, so that only knowledge partitions for suitable application descriptions are created. For example, such a criterion may be that a knowledge partition is complete and consistent but small compared to the other knowledge partitions, i.e. uses only a few knowledge elements. Another criterion could be that no input or output is provided for in the knowledge partition found. Such criteria can either be used by the coordination module to sort out knowledge partitions on its own, or the criteria can be made available to the user of the method to facilitate selection of the appropriate application description
- One possible method for determining the knowledge partitions is based on the representation of the relationship between the assumptions by a graph.
- the assumptions can contradict one another or be based on one another (so-called contradiction and prerequisite relationships), which can be mapped by a graph. This determination of the knowledge partitions is particularly preferred and is described below by way of example.
- a (assumption) graph is created during the determination.
- the assumption graph, or graph, preferably has directed and/or undirected edges.
- the assumption graph is generated in particular by the coordination module
- the graph is defined in a particularly preferred embodiment as follows
- Each node k is marked with a value pk, which corresponds to the absolute plausibility of the assumption for which the node stands.
- the assumption itself has a plausibility as a property, but this is the relative plausibility on the condition that all prerequisite assumptions are fulfilled
- a directed edge consists of an ordered pair (k1, k2) of nodes k1, k2, where a directed edge is created if and only if assumption k1 is a prerequisite for assumption k2
- An undirected edge consists of an unordered pair of nodes and is created if and only if the corresponding assumptions contradict each other
- first, the nodes and the edges are generated, in particular without evaluation.
- the absolute plausibilities pk of the assumptions are then calculated and added to the nodes. Different calculation methods can be used for the absolute plausibility. However, the calculation methods fulfill the following conditions:
- the absolute plausibility pk of an assumption is based exclusively on the absolute plausibilities pk of its prerequisite assumptions and on its own relative plausibility
- the absolute plausibility pk of an assumption must never be greater than its relative plausibility
- the relative plausibility expresses the plausibility under the assumption that all prerequisites are fulfilled.
- only assumptions are to be taken into account as prerequisites in the calculation of these plausibilities
- the plausibility is by definition less than 100%
- a requirement with the plausibility 100% would mean that the condition is a fact
- the absolute plausibilities pk are calculated accordingly. Calculation methods from the processing of uncertain (fuzzy) knowledge can serve as the basis for the formation of a suitable calculation method (e.g. fuzzy logic, probabilistic logic)
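One admissible calculation method satisfying the two conditions above is to multiply the assumption's relative plausibility by the absolute plausibilities of its prerequisites (a probability-style rule); the patent leaves the concrete method open, so this is only one possible choice:

```python
# Sketch of one admissible calculation of the absolute plausibility pk:
# it depends only on the absolute plausibilities of the prerequisite
# assumptions and on the assumption's own relative plausibility, and it
# never exceeds the relative plausibility, because every factor is < 1.
# This multiplicative rule is an assumption; the patent leaves the
# concrete calculation method open.

def absolute_plausibility(relative, prerequisite_absolutes):
    p = relative
    for pk in prerequisite_absolutes:
        p *= pk
    return p  # always <= relative, since every pk is below 1

# Example: an assumption with relative plausibility 0.9 whose single
# prerequisite has absolute plausibility 0.8.
pk = absolute_plausibility(0.9, [0.8])
```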
- Each partial graph found represents a knowledge partition consisting of all the assumptions whose nodes are in the subgraph.
- the knowledge partition is formed by combining these assumptions with the set of all facts. Fig. 5 shows the structure of a graph and the finding of solutions by means of an example.
- a graph 1 is shown.
- the graph 1 has a set of nodes 2 to 8 which represent assumptions 1 to 7.
- Each node is marked with exactly one absolute plausibility.
- the graph 1 also has four directed edges 9 to 12. Each directed edge connects a pair of nodes; for example, the directed edge 9 connects nodes 2 and 6 and thus the corresponding assumptions 1 and 5.
- assumption 1 is thus a prerequisite for assumption 5. It is further apparent from Fig. 5 that, for example, assumption 2 is a precondition for assumption 6. Furthermore, the graph 1 here has an undirected edge 13. The undirected edge 13 indicates that assumption 1 and assumption 2 contradict each other. Assumption 6 and assumption 3 together are a precondition for assumption 7
- a maximum subgraph without undirected edges consists of Assumptions 1 and 5 as well as Assumption 4 or Nodes 2, 6 and 5
- the second maximum subgraph consists of Assumptions 2, 3, 6, 7 and 4 or Nodes 3, 4, 7, 8 and 5
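The determination of the two maximal subgraphs in the Fig. 5 example can be sketched as a search for maximal, consistent, prerequisite-closed assumption sets. The directed-edge set below is reconstructed from the text and partly assumed, and the result differs slightly from the subgraphs named above (assumption 3 here appears in both partitions), so treat it as a scheme rather than an exact reproduction of the figure:

```python
# Sketch of the knowledge-partition search for a small assumption graph:
# directed edges encode prerequisite relations, the undirected edge marks
# the contradiction between assumptions 1 and 2. A knowledge partition is
# a maximal set of assumptions that is consistent (no contradicting pair)
# and closed under prerequisites. Edge sets are partly assumed.
from itertools import combinations

assumptions = {1, 2, 3, 4, 5, 6, 7}
prereqs = {5: {1}, 6: {2}, 7: {3, 6}}     # directed edges (reconstructed)
contradictions = {frozenset({1, 2})}      # undirected edge 13

def consistent(s):
    return not any(frozenset(p) in contradictions for p in combinations(s, 2))

def closed(s):
    # an assumption may only be included if all its prerequisites are
    return all(prereqs.get(a, set()) <= s for a in s)

def knowledge_partitions():
    valid = []
    for r in range(len(assumptions) + 1):
        for combo in combinations(sorted(assumptions), r):
            s = set(combo)
            if consistent(s) and closed(s):
                valid.append(s)
    # maximal = no valid strict superset exists
    return [s for s in valid if not any(s < t for t in valid)]

partitions = knowledge_partitions()
```

The brute-force enumeration is acceptable here only because the example has seven assumptions; a real implementation would need a smarter search.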
- the absolute plausibility could also be calculated when generating the assumptions.
- the coordination module could then provide a corresponding function for the calculation, which expects as input the absolute plausibilities of all prerequisite assumptions as well as the relative plausibility of the assumption, and which is available to all analysis modules.
- the coordination module has the option of considering all variants and, through the formation of different knowledge partitions, generating different variants of an application description which maps the work process into software.
- the proposal for generation serves to provide a clean separation between analysis and generation, by allowing the analysis modules to formulate their proposals for generation in the formalism of the application modules while at the same time coupling them to assumptions, which makes them complete and contradiction-free
- the proposal for generation forms, on the one hand, a means of communication between the analysis modules and the generation modules and, on the other hand, a tool for the coordination module with which a variant of the application description can be created from a knowledge partition
- the coordination module implements all the proposals for generation that result from the knowledge partition. The assumptions implied by a proposal for generation are included in the knowledge partition, or the proposal for generation is independent of all assumptions, i.e. based on no assumption.
- the coordination module invokes the appropriate generation modules; each proposal for generation contains the information as to which generation module is to be used
- further generation modules are called, which convert the knowledge elements assigned to the current knowledge partition that have not hitherto been affected by any of the proposals for generation into application modules. This completes this variant of the application description
- proposals for generation need not be used in the method. If no proposals for generation are used, the generation modules automatically access the knowledge elements of the individual knowledge partitions. If proposals for generation are used, these proposals themselves contain the references to the corresponding knowledge elements. The use of proposals for generation allows a clean separation, leaving the analysis modules with the full analysis of all knowledge elements.
- the generation modules can then focus on the actual engineering of application building blocks
- generation modules can be considered in more detail Analogous to the analysis modules, the generation modules are also specialized in certain tasks. There are two types of generation modules, namely dependent generation modules and independent generation modules
- Dependent generation modules are each assigned to an analysis module and implement the proposals for generation of that analysis module. Independent generation modules, by contrast, are preferably specialized in certain knowledge elements and/or application modules and work independently of the proposals for generation that result from the knowledge partitions. Preferably, for each analysis module that makes proposals for generation, there exists a generation module that can implement these proposals. For each class of application modules, or for each application module, there exists at least one generation module that can generate such application modules. For each knowledge element, there exists at least one generation module that can process this knowledge element and convert it into application modules
- Action modules that can be generated by one of the generation modules play a special role.
- the generation module can generate any commands, ie any functionality.
- each generation module has a library of commands that the generation module can generate. This library must be known at least to those analysis modules that generate the proposals for generation for that generation module. The library is therefore preferably known to each pair of a generation module and an analysis module, so that the provided functionality can be taken into account when generating the proposals
- In addition to the libraries, additional commands are provided which are known to all analysis modules and generation modules. These commands can control the execution of an action module. For example, branch commands can be provided which execute different commands depending on the value of a condition and/or a data field. Furthermore, a jump command can be provided which continues execution with a command other than the next one in sequence
- the method is essentially finished after the creation of an application description
- the method or the application description generator can and should provide the planner with tools for postprocessing the finished application description
- the description of the user interface in particular the designation of the form fields, the positioning of the form fields and / or the relative size of the form fields, can be processed.
- the application description is in principle independent of a particular form of representation of states and form fields. However, there is a certain amount of presentation-relevant information that belongs to the application description and can be edited manually by the planner
- An implementation of the method and the application description generator can also give the planner the possibility of further parameterization, such as setting colors, fonts, etc., and adding them to the application description
- the processing can take place in tabular form, in a more comfortable variant through an interactive editor that allows editing by mouse or "drag and drop"
- the procedure may allow processing of access rights for applications
- the application description itself may already be present in a programming language or as machine-readable code
- the application description can be carried out by a runtime environment or an interpreter.
- An interpreter in the narrower sense reads the construction plan and carries it out step by step.
- the meaning of the runtime environment is that the runtime environment reads in the application description, creates an object for each application block of the application description, and then transfers the further processing of the application description to the created objects.
- the interpreter or the runtime environment requires at least the following additional modules
- a module for displaying the form elements of a state and for
- a module for the implementation of the command libraries of all generation modules, and at least one module for accessing the external data objects used, or for executing the data source blocks
- the interpreter can be embedded in an application manager, which can contain additional functions, e.g. for administration of the basic documents or for task management.
- the method described here can be extended or refined by various approaches
- comments in the basic documents can be exploited more generally by using a common set of rules for formulating comments
- the sample implementation is implemented as a computer program that was created with Microsoft Visual C # 2008 and runs on a computer running the Windows XP operating system
- steps: (1) An agent records the order of a customer and generates an order from it. An order confirmation for the customer is written and printed
- a base document "Stuck list xls" in the form of an Excel spreadsheet, in the base document "Stuck list xls" for each sales item the goods are entered, from which the sales article is compiled
- the database table contains the following fields: a vendor number (type Number, key), a vendor (type String), a street (type String), a postal code (type String) and a place of residence (type String)
- this database table is communicated to the user/planner by the application description generator or the method before the analysis is started
- the described method does not dictate which analysis modules must include an implementation (other than the minimum requirements), nor in what order the analysis modules are called by the coordination module.
- the procedure of the analysis, the available analysis modules and the division of labor of the analysis modules are referred to as strategy of the method
- the sample implementation exemplarily supports Microsoft Word documents as an example for text documents and Microsoft Excel documents as an example for tabular folders
- the data structure of the application to be generated is analyzed and set up.
- first, the data fields are analyzed more closely with regard to their data types and, second, suitable data source objects are searched for or newly created
- the example implementation includes the following analysis modules: 1. document analysis modules
- the analysis of the purpose of a component is performed exclusively by document analysis modules.
- the example implementation distinguishes certain component classes that are important for the further analysis (see below); these are identified by the document analysis modules and marked by properties
- Another aspect of the strategy is the question of how suitable data source objects are found for data fields.
- the example implementation searches for data source objects as data sources exclusively at the component level, i.e. suitable data source objects are determined for entire components and then bound to the data fields of the components. Alternatively, data source objects could be searched for at the level of individual data fields; well-suited data source objects would then have to be filtered out by a separate mechanism, such as by intersecting the sets of data source fields associated with a data source object
- the example implementation is described here as a closed system, ie it works exclusively with known modules described here.
- the method can be designed so that it is able to work with an unknown set of modules, data types, properties, functions, etc.
- the procedure itself is open in this respect and accordingly achieves its true strength through an implementation as an open system
- the method is flexible on how exactly assumptions are implemented.
- a formula consists of one operator and two operands.
- An operand can be a formula, a function, a data field, or a constant value
- a function maps a set of input values to an output value.
- a function is therefore defined by its name, synonyms that can be used in the documents, the data type of the output, the number of its parameters, and the data types of the parameters.
- the application description generator maintains a list of all available functions, to which all analysis modules add the functions they support when the application description generator starts
- An operator can be considered as a function with two parameters, the parameters having the same data type as the output value.
- Each operator has a binding priority (numeric value).
- the binding priority controls which operators are evaluated or calculated first. Operators with higher binding priority are calculated before those with lower priority. For example, multiplication has a higher binding priority than addition and is therefore calculated first in the absence of brackets.
- each operator has a list of data types for which it is defined. Similar to the functions, the application description generator provides a list of all available operators, to which all analysis modules add the operators they support at the start of the application description generator. The following table shows the operators supported by the example implementation
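The operator model described above can be sketched as follows. This Python fragment is illustrative only: the `Operator` type, the registry, and the concrete symbols, priorities, and data types are assumptions, not the patent's table (which is not reproduced here).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    symbol: str
    binding_priority: int   # higher priority binds more tightly and is calculated first
    data_types: tuple       # data types for which the operator is defined

# Hypothetical registry: analysis modules add the operators they support
# when the application description generator starts.
OPERATORS = {}

def register_operator(op: Operator):
    OPERATORS[op.symbol] = op

# Illustrative entries: multiplication/division bind more tightly than addition/subtraction.
register_operator(Operator("+", 1, ("Number", "String")))
register_operator(Operator("-", 1, ("Number",)))
register_operator(Operator("*", 2, ("Number",)))
register_operator(Operator("/", 2, ("Number",)))
```

With this registry, an evaluator can compare `binding_priority` values to decide that `*` is applied before `+` when no brackets are present.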
- a condition consists of a logical operator and one or two operands of the Boolean data type
- An operand can be a condition, a function, a data field, or a comparison
- a comparison consists of a comparison operator and two operands of the same data type
- the planner informs the application description generator about them simply by specifying the name, the type, and the data source fields with their data types. This process takes place independently of the generation of an application, in a management module.
- at the beginning of each new analysis, the application description generator generates the appropriate knowledge elements for each known data source object and its associated data source fields and adds them to the empty knowledge base of the coordination module
- the sample implementation supports two types of data source objects, Database Tables and
- the known data source objects that are in the knowledge base from the start are hereafter referred to as existing data source objects.
- Data source objects that are added in the course of a knowledge base analysis are called new data source objects
- the implementation supports a number of classes of components that have certain properties defined.
- the following table summarizes these component classes
- the coordination module of the example implementation has a simple user interface with which the planner is guided through the process in several steps
- the planner or user can select existing data sources that the procedure can use
- the coordination module recognizes which document analysis module it needs to call.
- a document analysis module for MS-Word documents (file extensions "docx" and "doc")
- a document analysis module for MS-Excel documents (file extensions "xlsx" and "xls")
- the analysis module is then started to analyze the relationship of components to existing data source objects
- the analysis module is started to analyze components that have been identified as data source objects (database tables)
- When the analysis is complete, the coordination module generates the knowledge partitions that result from the assumptions of the analysis. To this end, it generates an assumption graph as described in section 2.7 of the description of the procedure
- the example implementation uses minimum formation, i.e. the absolute plausibility of an assumption is equal to the minimum of the absolute plausibilities of the assumptions on which it is based and its own relative plausibility. This formula satisfies the requirements that the procedure places on such a formula
- the plausibility of a knowledge partition likewise results as the minimum of the plausibilities of all assumptions belonging to it.
- the coordination module of the example implementation implements only knowledge partitions whose plausibility is greater than 50.
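The minimum formation and the threshold of 50 can be expressed compactly. The function names in this Python sketch are assumed; the numeric rules (minimum over base assumptions and own relative plausibility; partition plausibility as the minimum over its assumptions; implementation only above 50) are taken from the text above.

```python
def absolute_plausibility(relative, base_absolutes):
    """Minimum formation: an assumption's absolute plausibility is the minimum
    of its own relative plausibility and the absolute plausibilities of the
    assumptions on which it is based."""
    return min([relative, *base_absolutes])

def partition_plausibility(absolutes):
    """The plausibility of a knowledge partition is the minimum of the
    absolute plausibilities of all assumptions belonging to it."""
    return min(absolutes)

def is_implemented(partition_absolutes, threshold=50):
    """The example implementation only realises partitions whose plausibility
    is greater than 50."""
    return partition_plausibility(partition_absolutes) > threshold
```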
- the proposals for generation are implemented, based on their assumptions, as follows: 1. First, the proposals for generation of the document analysis modules are implemented.
- the generation modules for generating Word documents and for implementing the Excel document design are needed
- the module for implementing the data structure is started with all proposals for generation that originate from step 2 of the analysis.
- the order in which the proposals are processed is decided by the generation module
- the planner receives a set of application descriptions, which can now be tested and saved. These steps are no longer part of the procedure and are therefore not described here
- Both Word and Excel documents can contain formulas that are recognized and processed by the analysis modules. However, the analysis modules do not directly generate knowledge elements for the formulas, but instead place orders with the formula analysis module.
- An order includes the formula as a character string, exactly as it appears in the document, as well as the data field to which the formula is to be bound
- the two modules identify formulas in two ways: 1. through appropriate Word or Excel elements. In Word, formulas are located in form fields; in Excel, formulas are located in cells
- examples are not analyzed directly. If one of the modules finds examples for a data field, then an order containing the data field and the examples is generated for the data field analysis module, which performs the analysis by means of the function for analyzing data fields provided by that module
- Word documents are basically treated according to both roles. For each Word document, a proposal for generation is generated containing the document and the information necessary for the application to generate a data-filled instance of the document
- the analysis module for Microsoft Word documents works with all three approaches that were described under document analysis modules in the general part of the description. Access to a document takes place via the COM class library for MS-Word documents, which provides access to the elements of a Word document
- the analysis module successively performs a series of analyses, each of which deals with an element type, a layout scheme, or a comment class
- the module will generate a data field with that name and assume that this data field is a fact
- the module will try to find a meaningful name by itself.
- the first alphanumeric word that precedes the form field is searched for. If there is such a word, then a data field with the corresponding name is generated and an assumption about this data field is made.
- the plausibility of this assumption is 70 and is increased by 20 if there is a colon after the word and/or decreased by 20 if the word starts with a lower-case letter.
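This naming heuristic is simple enough to state as code. The function name in the following Python sketch is assumed; the numbers (base 70, +20 for a trailing colon, -20 for a lower-case initial) come directly from the text above.

```python
def name_assumption_plausibility(word, followed_by_colon):
    """Heuristic for the plausibility of a data-field name guessed from the
    word preceding a form field: base 70, +20 if a colon follows the word,
    -20 if the word starts with a lower-case letter."""
    plausibility = 70
    if followed_by_colon:
        plausibility += 20
    if word[:1].islower():
        plausibility -= 20
    return plausibility
```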
- if a data field has been generated by step a or b, then information about the data field is determined and stored based on the properties of the form field. In the example implementation, this is the data type and any formula. Further information can be format defaults, a maximum length, or a VBA macro that runs on an event
- the keyword "list" indicates that the table represents a list that can be expanded by any number of lines. In this case are the columns of the table for data fields whose names result from the headings in the first row of the table. Possible occurring data in the table are evaluated as examples.
- a list component is generated
- the table has constant content, i.e. additional rows or columns cannot be added
- values in the table are then evaluated as data of the table instead of as examples. If a component could not be determined in this way, then the structure of the table is examined.
- the following table provides information on the mapping from table structure to component type in the example implementation
- if at least one possible component type could be determined in step a or step b, then corresponding components are created. Components from step a are created as facts, components from step b as assumptions
- a data field is generated that receives the name from the first row (or column). If it is a list component, then the property "leader" is created for the data field of the first column or row as an assumption with plausibility 80. If the component itself is a
- the module searches for text whose structure suggests a list; at the same time, the module searches for so-called embedded comments
- the first paragraph contains one or more (alphanumeric) words (and no further strings) that can be considered as titles of the list
- o contains a string that is recognized as an embedded comment
- a list component is generated as an assumption with the plausibility 90.
- a data field is generated as an assumption with plausibility 99 based on the component. If there is a comment for a word whose content is identified as a formula, then the module generates an order to the formula analysis module with the comment as the formula text and the generated data field. The data field in the first position from the left receives a corresponding property
- the example implementation recognizes three comments that control the selection of different text blocks.
- the keywords "Show if” or “Condition” with or without a colon followers introduce a comment that ties the text block that follows the comment to a condition between the key term and the end of the comment is considered a condition and processed by the condition analysis module. If the application creates an instance of the analyzed Word document, then the text block is included in the instance if and only if the condition is met ends with the embedded comment
- the text block is included in an instance of the document at the request of the user
- the remainder of the comment is saved as selection text
- the text block is also defined as described in the previous point
- the module creates a new data field of type Yes/No. The selection text is added to the data field as the text to be displayed in an input mask
- the key word "option” introduces a comment indicating that the subsequent text block belongs to a set of text blocks from which the user must select one.
- the key word is followed by a name that applies to all text blocks of the set
- the rest of the comment is saved as selection text for the text block for this comment
- the text block is defined as well as described in the first point
- the module creates a new data field with the mentioned name, which is of type Option. If this data field was already created by a previous comment, then it also applies to this comment
- the selection text is added to the list of option values of the data field
- the text block, together with a condition that is met if and only if the new data field has the value of the selection text, is added to the template information for the proposal for generation for that document
- a component is created for the document that obtains the name of the document (without file extension) and includes all the data fields found so far. Then, a "part-whole" relationship is created between the "document" component and each of the previously found components. If a component is an assumption, an assumption (plausibility 99) based on the component is also generated for the relationship. For list components, a "master-detail" relationship is additionally generated (possibly also as an assumption).
- the component for the document itself generated in (5) also receives the property "output" as an assumption
- the assumptions of the properties "input” and “output” of the component generated in (5) are marked as contradictions
- the data fields directly take over the names of the form fields
- the names of the data fields 5, 6 and 9 are determined according to (1) b
- an assumption [assumptions 1-3] is generated with plausibility 90 for the data field 7
- a job [job 1] is generated for the formula analysis module according to (1) c
- in step (5), a component [component 2] named "job" is created, to which data fields 1-13 are assigned.
- the component receives the properties "input" and "output" as contradictory assumptions 10 and 11
- a "master detail” relationship is created between the "Order” component and Component 1 (as a detail). For both relationships, one assumption is made. 13] with Plausibihtat 99 generated Both assumptions are based on assumption 4
- in step (6), a proposal for generation [proposal 1] with reference to the document and component 2 is generated
- the module can directly accept the names of the form fields for the names of all data fields according to (1) a
- paragraph (3) finds a list structure that is introduced by the line with the words "article" and "quantity". Accordingly, a component [component 3] with the property "list" is created. Additionally, the component receives the property "input". In addition, an assumption [assumption 14] with plausibility 90 is generated for the component. Then the following data fields are generated, which are assigned to component 3
- in step (5), a component [component 4] named "order" is created, to which the data fields 14-22 are assigned.
- the component receives the input and output properties as contradictory assumptions
- step (6) a proposal for generation [proposal 2] with reference to the document and component 4 is generated
- the analysis module for Microsoft Excel documents works with all three approaches described in the general section on document analysis modules. Access to a document takes place via the COM class library for access to MS-Excel documents. Using these approaches, the module sequentially performs a series of individual analyses, each of which deals with an element type, a layout scheme, or a comment class
- For each list object of a worksheet, a list component is created. For each column title of the list object, a data field is generated. If there is a comment for one of the columns, then it is assigned to the data field as a property for later processing by the data field analysis module. If the list contains values, then these are assigned both to the component and to the corresponding data fields as examples
- a list is defined by the following conditions
- each column is either empty or has an alphanumeric text on the first line starting with a letter
- if there is a comment on the cell or the cell below it whose content is identified as a formula, then the module generates an order to the formula analysis module with the comment as the formula text and the generated data field
- if the worksheet contains one or more formulas, then a proposal for creating a template is generated, which refers only to the worksheet. In addition to the document, the created component is also saved. The component also receives the property "output".
- the component receives the "datasource" property. (5) If the worksheet is not recognized as a list, then the worksheet is examined row by row (and column by column) for list or form structures.
- a list structure has already been defined in the previous point.
- a form structure must satisfy the following conditions
- the procedure is as described above.
- the component is generated as an assumption with plausibility 90.
- the data fields are also generated as assumptions (plausibility 99) based on the component
- the document consists of a single worksheet
- the names of the data fields are taken directly from the cells of the first line.
- the data field 23 receives the property "leader” as an assumption [assumption 22] with the plausibility 80.
- a reference to component 5 is stored as the value of the property.
- the document consists of a single worksheet.
- the names of the data fields are taken directly from the cells of the first row.
- the data field 27 receives the property "leader" as an assumption [assumption 23] with plausibility 80. A reference to component 6 is stored as the value of the property
- This analysis module provides a function for analyzing information about a data field and deriving possible data types for the data field. Other analysis modules can use this function.
- the module also provides a function that performs this analysis on all data fields existing in the knowledge base.
- the application of this function by the coordination module is standard in the example implementation
- the module provides a function that provides a suitable data field for a name. This function can only be called by other analysis modules and only as a query
- the data type is added to the data field as an assumption with the plausibility 70
- the data type Boolean undergoes a special treatment: if all examples belong to one of the following sets, the data type Boolean is added as an assumption with plausibility 99: {"yes", "no"} or {"true", "false"} or {"x", ""} or {"present", "not available"}
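The Boolean special treatment can be sketched directly from the sets listed above. The function name and the case/whitespace normalization in this Python fragment are assumptions; the example sets and the plausibility 99 come from the text.

```python
BOOLEAN_EXAMPLE_SETS = [
    {"yes", "no"},
    {"true", "false"},
    {"x", ""},
    {"present", "not available"},
]

def infer_boolean(examples):
    """Return ("Boolean", 99) if every example value belongs to one of the
    Boolean-indicating sets, mirroring the special treatment in the example
    implementation; otherwise return None."""
    values = {str(e).strip().lower() for e in examples}
    for indicator_set in BOOLEAN_EXAMPLE_SETS:
        if values and values <= indicator_set:
            return ("Boolean", 99)
    return None
```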
- the sample implementation has a database that stores information about possible data fields. This information includes
- the module searches the database for matching entries by comparing the name of the data field with the names in the database. If it finds a matching entry, then the data type(s) and properties are added to the data field as assumptions with the stored plausibility. If the data field already has data types, the procedure is as described above
- This function receives a name and a desired data type as arguments and searches for a suitable data field.
- the function assumes that the knowledge base has already been searched for data fields of the same name and therefore focuses on heuristics with the target matching data fields with different names.
- the example implementation implements two approaches
- the first approach uses the background information on synonyms (see previous function). First, data fields with synonymous names and matching data types are searched for. If there are no such data fields, but there are data fields with synonymous names without matching data types, then one of these data fields is delivered and the data type is added to the data field as an assumption
- the function therefore tries to divide the received name into meaningful name parts, which in the case of a name like "quantity order" happens simply by decomposing it into its constituent words
- the function searches for data fields whose names correspond to one of the name parts and which are at the same time located in a component whose name corresponds to another name part. Again, data fields with a matching data type are preferred; otherwise, the data type must be added as an assumption to the data field that is supplied. If the function finds several matching data fields, then the module issues the found data fields via a request to the coordination module, which lets the planner select one of the data fields; this one is finally provided by the function to the calling module
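The name-decomposition approach can be sketched as follows. In this Python fragment, the function names, the simple whitespace split, and the dictionary-based component model are all assumptions made for illustration; the strategy (match one name part against a field name and another against the containing component's name, preferring matching data types) follows the text above. The interactive fallback via the coordination module is omitted.

```python
def split_name(name):
    """Decompose a compound name such as "quantity order" into its words."""
    return name.split()

def find_matching_data_field(name, desired_type, components):
    """Search for a data field whose name matches one name part while its
    component's name matches another part; prefer matching data types.

    `components` maps a component name to a dict {field name: data type}.
    Returns a (component, field) pair or None.
    """
    parts = [p.lower() for p in split_name(name)]
    fallback = None
    for comp_name, fields in components.items():
        if comp_name.lower() not in parts:
            continue
        for field_name, data_type in fields.items():
            if field_name.lower() in parts and field_name.lower() != comp_name.lower():
                if data_type == desired_type:
                    return (comp_name, field_name)  # matching data type preferred
                fallback = (comp_name, field_name)  # type would become an assumption
    return fallback
```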
- Analysis module for analyzing the relationship of components to existing data sources
- the module looks for matches between the data fields of a component and the data source fields of existing data sources. If the match is high enough, the module generates a relationship between the component and the data source as an assumption
- C is the set of data fields of the component and D is the set of data source fields.
- a knowledge element of the Relationship class is created with the component and the data source.
- the type of relationship is defined as "source.”
- An assumption is created for the relationship, which receives the plausibility resulting from the match.
- for each pair of matching fields, a source relationship is created with the data field and the corresponding data source field. Type and plausibility are identical to the type and plausibility of the relationship between component and data source; the corresponding assumption is based on the assumption from a. and obtains plausibility 99.
- Analysis module for analyzing components that have been identified as data source objects (database tables): the module searches for components that could represent a database table.
- the document analysis modules take on the task of analyzing the probable purpose of a component and generating corresponding meaningful properties. The module therefore searches for components with the "datasource" property, which the document analysis modules assign to components held to be database tables or representatives of database tables. For each of these components, the module generates a new data source and a proposal for generation
- the new data source is an exact replica of the component, i.e. a data source field is generated for each data field of the component, and for each data type of a data field, that data type is generated for the corresponding data source field.
- an assumption is generated which is based on the assumption from (1) and has a plausibility of 99. Contradictions are defined for the assumptions about the data types
- the generation proposal is the sole source of information for the new data source. Further information is not necessary as the generation module specializes in generating new data sources
- step (5) no matching component is found for component 5
- in step (3) of the method in section 5, component 2 is found. Consequently, a new component [component 8] is generated, to which data fields 1-4 and 6 are assigned. For the new component, an assumption [assumption 50] with plausibility 60 is generated. Finally, the relationships and assumptions according to 5.4 (3) are generated
- This analysis module provides a function that processes a character string that is a formula in infix notation (the operator is between its operands)
- the function receives the character string and the data field to which the formula is to be assigned as parameters (hereinafter referred to as the target data field). Optionally, the function can be given a plausibility for the formula to be generated. This is useful if the calling document analysis module is not sure whether the text really is a formula.
- a formula consists of one operator and two operands.
- An operand may be another formula, function, constant or data field (see Fig. 10).
- the task of this module is, on the one hand, to convert the linear text form in which formulas appear in documents into the recursive structure described above
- the module ensures that all data fields and functions involved in the function have a suitable data type.
- the formula is assigned to the target data field as the value of a newly created property "formula". If the target data field already has a property "formula", then a fault occurs (see below)
- if the module does not find a match there, then the component that contains this component (relationship "part/whole") and is itself no longer contained in any other component is searched, if such a component exists. c. If the module is still unsuccessful, it searches the entire knowledge base and takes the first data field found
- the data types are not compatible, then if the affected operand is a data field, then the data type of the operator is added to it as an assumption (and contradictions to existing data types, which may be converted into assumptions) Plausibihtat 99 and are based on the assumption of the formula.
- a set of candidate data types are formed per sub-formula so that the module has one or more data types for the entire formula, as a result of bottom-up editing at the end
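- the bottom-up formation of candidate data types per sub-formula can be sketched as follows; the operator/type table, the set representation and the example names are assumptions made for illustration only:

```python
# Hedged sketch: per sub-formula, the candidate data types are the
# intersection of the operator's admissible types with the candidate
# types of both operands, computed bottom-up over the formula tree.

OPERATOR_TYPES = {"+": {"number", "text"}, "-": {"number"}, "*": {"number"}}

def candidate_types(node, field_types):
    if "operand" in node:  # leaf: constant or data field
        # unknown leaves are unconstrained in this sketch
        return field_types.get(node["operand"], {"number", "text"})
    left, right = node["operands"]
    types = OPERATOR_TYPES[node["operator"]]
    return types & candidate_types(left, field_types) & candidate_types(right, field_types)

formula = {"operator": "*",
           "operands": [{"operand": "quantity"}, {"operand": "price"}]}
types = candidate_types(formula, {"quantity": {"number"}, "price": {"number"}})
```

an empty result set would correspond to the incompatibility case described above, in which assumptions and contradictions are generated.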
- the document analysis modules have created three requests for the analysis of formulas, which are executed by this module
- in step (3) a, the data fields 11 and 12 are found for the two operands "quantity" and "price"
- the data type of the formula is set to number in step (3), since this is the only data type of the single operator. The data type number is added to data fields 11 and 12; for each data type, an assumption [assumptions 59 and 60] based on assumption 58 is generated
- in step (4), the data type number, including the assumption [assumption 61] based on assumption 58, is also added to the target data field (data field 13) (see FIG.
- in step (3) d, the data fields 11 and 26 are found for the two operands "quantity order" and "quantity parts list"
- the data type of the formula is set to number in step (3), since this is the only data type of the single operator
- in step (4), the data type number, including the assumption [assumption 63] based on assumption 62, is also added to the target data field (data field 22) (cf. FIG. 13)
- Fig. 13 shows the results of the tasks (b) and (c)
- the application description generator must decide whether the application offers options for maintaining the master data and, if so, how the maintenance is integrated into the application. This task is performed by this module. It distinguishes three variants:
- (1) the application does not provide a way to maintain a specific data source object
- (2) the application provides the ability to maintain a specific data source object in each step
- (3) the application provides the ability to maintain a specific data source object only in steps that use the data source object
- all three variants are available for existing data source objects. Only variants 2 and 3 are available for new data source objects, since these data source objects would otherwise always be empty. Which of the variants should be selected is determined by the module through a question to the planner (via the coordination module) with the above-mentioned selection possibilities. If one of the variants 2 or 3 is selected, then the module generates a corresponding proposal for generation, which comprises the respective data source and the selected variant
- Analysis module for analyzing components that have been identified as input components
- the module does nothing if the knowledge base does not contain components that could represent an input form
- the document analysis modules take on the task of analyzing the probable purpose of a component and generating meaningful properties. Therefore, the module searches for components with the property "input", i.e. components that are held by a document analysis module for an input form or the representative of an input form
- the module distinguishes between input components that are not part of another input component and those that are part of another input component. The latter are referred to herein as subcomponents
- the aim of the module is to generate a proposal for generation for an application module state that serves for data input.
- the module assumes that a new data source is generated for the entered data.
- existing data sources should also be used to avoid double entries. If, for example, the entry of customer orders is involved and a customer file already exists, then the address data of a customer should be retrieved from the customer file
- the module has the following tasks
- the module starts from a grid with 2 columns and an unlimited number of rows in which the form elements are positioned. This leaves scope during the execution of the application with respect to the size of the elements and the distances between the elements
- the module simply takes over all the data fields. It must decide which data fields of the component are retrieved from existing data sources
- the module searches for subcomponents that are related to an existing data source and whose data sources have a key field for accessing the records. If there are several such data sources for one subcomponent, then several proposals for generation are generated
- the module searches the property lists of all components. For each component that has the property "input", it starts the following analysis: (1) if the component has a relationship "part / whole" to another component
- the module creates a new data source with the name of the component and an associated assumption with plausibility 99 based on the input property of the component, as well as a relationship "source" between the component and the new data source
- the data field does not belong to any subcomponent that has a "master / detail" relationship to the component being examined
- the module generates a proposal for generation for the flow logic generation module with the content to generate a state containing the
- the module generates information about a form element for entering a value for the data field and adds this information to the proposal for generation (4)
- the module generates a data source field to the data source from (2) for the data field and a data type for each data type of the data field
- a data source field is generated for the new data source.
- if the subcomponent does not have the property "list", then it is checked whether it has a relationship "source" to an existing data source. If so, then the module generates information about a form element for inputting a value for the data field associated with the key of the existing data source and adds that information to the proposal for generation (4)
- the form element receives an action that is executed when a value has been entered. This action finds the record matching the value in the existing data source and, if a record exists, loads its values into the data fields of the subcomponent
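- the described lookup action can be sketched as follows; the record layout, the function name and the example data are assumptions made for illustration, not the actual implementation:

```python
# Hedged sketch of the lookup action: when a key value has been entered,
# find the matching record in the existing data source and copy its
# values into the data fields of the subcomponent.

def load_record(data_source, key_field, entered_value, target_fields):
    for record in data_source:
        if record.get(key_field) == entered_value:
            for name in target_fields:
                target_fields[name] = record.get(name)
            return True  # matching record found and loaded
    return False  # no match: the data fields remain unchanged

# illustrative customer file as existing data source
customers = [{"customer_no": 7, "name": "Smith", "city": "Berlin"}]
fields = {"name": None, "city": None}
found = load_record(customers, "customer_no", 7, fields)
```

after the call, the subcomponent's data fields carry the values of the matching record, as described for the form element's action.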
- the module generates data source fields, data types and assumptions (including contradictions) for all data fields of the subcomponent analogous to (5) b.
- the module generates generation suggestions for the new data sources generated in (2) and (6).
- the module asks the planner, via the coordination module, how the data entered should be stored. There are three possible answers between which the planner can choose:
- the procedure is the same as in the second answer.
- information is added to create a task by which the entered data is stored and then the data fields are emptied.
- the answer is added to the proposal for generation (FIG. 9)
- the module checks if the input component is a document for which a document template exists. If so, then the module asks via the coordination module if a document should be generated for the entered data If yes, then the module adds to the suggestion to generate information about a task that will trigger the generation of a document with the currently displayed record
- in step (2), a new data source [data source 3] named "order" is generated as an assumption [assumption 64], together with a relationship "source" between component 2 and data source 3 as an assumption [assumption 65] based on assumption 64
- the component contains a data field (10), which has the property "leader", but with the reference to component 1 (see example for 5.1)
- data fields 5, 6, 10, 11, 12 and 13 have the data type number
- Data field 6 belongs to component 8 (relationship "source", see 5.5)
- data fields 10 to 13 belong to component 1 (relationship "master / detail", see 5.1)
- the property "leader" is generated as an assumption [assumption 66] with plausibility 80, with a reference to component 2 as value
- in step (4), a proposal for generation [proposal 7] for generating a step with the name "order" is generated, which is based on assumption 10
- the form field for data field 6 receives an action for loading the data from data source 2
- proposals for generation [proposals 8 and 9] for the new data sources 3 and 4, which are also based on assumption 10, are generated. The question in step (8) is answered by way of example with the third answer.
- the question in (9) is answered as an example with "yes". Both answers are added to proposal 5
- Analysis module for analyzing components identified as output components
- the module looks for components with the property "Output", ie components that are held by a document analysis module for an output form or the representative of an output form
- the module has to decide which records are displayed and whether changes to the data are allowed. It asks the planner two questions via the coordination module
- the planner can only answer with "yes” or "no". The answer is added to the proposal for generation as information
- the planner can only answer “yes” or “no”. If the planner answers "yes", then the proposal for generation is accompanied by information about an action that triggers the saving as soon as a record or the state is left
- the module checks if the output component is a document for which a document template exists. If so, then the module asks via the coordination module whether a document should be generated for the displayed data
- if yes, then the module adds to the proposal for generation information about a task that triggers the generation of a document with the currently displayed record
- connection components: there are components (called connection components) that can form a connection between other components by mapping data fields of one component to data fields of another component; this mapping is described by the data of the connection component. If there is such a connection, data from one component can be used to generate data for the other component
- the connection component must have the property "list", i.e. it should be a list and not part of another component
- the data field in the source component owns the property "leader"
- in the connection component list there are no pairs of the two data fields with the same values, i.e. there are no two lines in the connection component list in which the values for the two data fields are identical
- the component with the input property must not have a data field with a formula that has data fields from the other connected component as arguments. There is no other data field for which data fields according to (a) and (b) exist in both components found in (2)
- the condition in 2 e is an example of a heuristic that can be used to find the best of several possible candidates for a connection
- the idea behind this heuristic is that it makes little sense if, when calculating the data, input data is used that should actually only be output later. Of course, other heuristics can also be used here
- the module searches the set of all components for suitable constellations. For this purpose, components with the property "list" are searched for. For each component found, the module searches for pairs of data fields which satisfy condition 2 and for which, in particular, two components exist. For each found combination of data fields and components, the module creates a "connected" relationship between the components, for which the following information is stored: the connection component
- the relationship is an assumption with plausibility 90 based on the properties "list", "input" and "output" of the components involved. Based on the newly created relationship, a proposal for generation is finally generated, which generates an action for the creation of new data of the target component from data of the source component. The integration of the action into the application flow is left to the generation module
- component 3 as source component and component 1 as target component also satisfy conditions 2 a - 2 d.
- data field 22 in component 3 is assigned a formula such that condition 2 e is violated
- the analysis module for analyzing comments analyzes texts that are recognized or held by other modules as comments. For this purpose, it works through a text, preferably word by word, and looks for patterns that stand for known comments. If the analysis module recognizes a pattern, then the text is processed as a comment and suitable knowledge elements are generated based on it; these are fed back as a result to the requesting module, which has recognized the comment and sent it to the analysis module for the analysis of comments
- if a document analysis module finds a comment in a document, it sends a request via the coordination module to the analysis module for the analysis of comments, supplying the comment text with the request
- the comments are either corresponding document elements (e.g. comment elements in Microsoft Office documents) or are indicated by a special text form in the document. Any document analysis module can decide for itself whether to process a found comment itself or to send a request to the analysis module for the analysis of comments. In principle, it is up to the document analysis modules to decide which components of a document are treated as comments. If a document analysis module wants to use the analysis module for the analysis of comments, then it has to pass the found comment to it
- the analysis module for analyzing comments knows a set of text patterns, one or more of which represent a particular comment class. Each comment class is associated with a sub-module that further processes the text pattern and generates corresponding knowledge elements
- the analysis of a comment or the corresponding string is done in two steps. First, the comment is compared with all text patterns. If a matching text pattern is found, then the module executes the corresponding sub-module in the second step (see FIG. 32).
- a text pattern consists of a sequence of constant or specific texts and variables.
- the variables stand as placeholders for text.
- Such a sequence represents a set of texts that can be mapped onto the text pattern by matching the constant texts and by assigning suitable text parts to the variables
- the assignment of the variables of a text pattern with matching parts of the analyzed comment forms the result of this comparison
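- the matching of a comment against a text pattern consisting of constant texts and variables can be sketched as follows; the "&lt;var&gt;" pattern syntax, the function name and the example pattern are assumptions made for illustration only:

```python
import re

# Hedged sketch: a text pattern is a sequence of constant texts and
# variables (placeholders). Matching a comment against the pattern binds
# each variable to the text part it covers, or fails with None.

def match_pattern(pattern, text):
    """Return {variable: matched text part} or None if the comment does not fit."""
    regex = ""
    variables = []
    # split keeps the "<var>" delimiters because the group is capturing
    for part in re.split(r"(<\w+>)", pattern):
        if part.startswith("<") and part.endswith(">"):
            variables.append(part[1:-1])
            regex += r"(.+?)"          # variable: placeholder for text
        else:
            regex += re.escape(part)   # constant text: must match literally
    m = re.fullmatch(regex, text)
    if m is None:
        return None
    return dict(zip(variables, (g.strip() for g in m.groups())))

binding = match_pattern("field <name> has type <type>",
                        "field price has type number")
```

the resulting binding of variables to text parts corresponds to the assignment described above, which is then handed to the sub-module of the matching comment class.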
- Each sub-module represents a comment class and provides certain information resulting from the analyzed comment, depending on the semantics of the comment class. These are generally newly generated knowledge elements that are further processed by the requesting module
- the analysis module for the analysis of comments can, however, also provide other information, but then it must be ensured that all requesting modules can process this information
- the analysis module has an extensible set of comment classes for comment analysis, each of which is implemented by a sub-module providing the respective kind of information
- the text patterns are structured as follows
- Tables such as those recognized by the Word and Excel analysis modules that contain fixed values can be interpreted as mapping tables if there is a column whose values are unique, that is, no value occurs more than once. They are treated like formulas that consist of exactly one function whose result follows from the assignment
- the analysis of assignment tables is the task of a special analysis module for the analysis of assignment tables, which specializes in this
- the analysis module for the analysis of assignment tables is called by the coordination module for all components that have the property "table" (see the sections on the analysis of tables in the analysis modules for Word and Excel). In the simple example described here, the module only processes components that have exactly two data fields
- the module checks if there are data fields with unique values, with the data fields preferably formed by the columns in the table. Two conditions must be met for uniqueness
- if these conditions are met for a data field (namely the input data field), the module generates a formula that is assigned to the other data field (namely the output data field)
- this formula consists of a single function that references the component and that (1) saves the values from the component for later use in the finished application, and
- this procedure can also be applied to components with more than two data fields.
- the module then either searches for suitable data field pairs or combines several data fields as input data fields if they have unique value tuples (e.g. a combination of name, first name and date of birth, which will usually be unique in practice)
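- the uniqueness check over one column or a combination of columns can be sketched as follows; the table layout as a list of dictionaries and the example values are assumptions made for illustration only:

```python
# Hedged sketch: a column (or combination of columns) qualifies as the
# input side of a mapping table only if no value (or value tuple) occurs
# more than once in the table.

def is_unique(rows, columns):
    seen = set()
    for row in rows:
        key = tuple(row[c] for c in columns)
        if key in seen:
            return False  # duplicate value (tuple): not usable as input field
        seen.add(key)
    return True

# illustrative mapping table with two data fields
table = [
    {"code": "A1", "discount": 5},
    {"code": "B2", "discount": 5},
    {"code": "C3", "discount": 10},
]
```

here "code" would qualify as the input data field, while "discount" would not, since the value 5 occurs twice.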
- the example implementation uses five generation modules, one for the implementation of the data structure, one for the implementation of
- FIG. 16 shows which generation modules generate which components, so that in the end a complete application results
- the module processes proposals for the generation of the analysis module for the analysis of documents in MS-Word format and generates a document template block
- a document template in the example implementation consists of a revised copy of the base document and an action that fills that copy with data
- the generation module makes a copy of the base document and revises it as follows: all examples are deleted
- Form fields and elements that have induced data fields are named after the corresponding data fields (if they do not already have the same name)
- the action is built from Word-specific commands.
- the example implementation supports the following commands (the execution of which is not described here in more detail, since it is not the responsibility of the application description generator)
- the generation module processes proposals for the generation of the analysis module for the analysis of documents in MS-Excel format and generates document templates
- a document template component consists of a revised copy of the base document and an action that fills this copy with data
- the generation module creates a new Excel document with exactly one blank page in which the first line of the base document is copied. If the data fields are arranged vertically, then the first column is copied. This is sufficient as the module for analyzing documents in MS Excel format just
- the action is built from Excel-specific commands.
- the example implementation supports the following commands (the execution of which is not described here in more detail, since it is not the responsibility of the application description generator)
- This module implements the complete data structure of the application, ie
- a new building block data source is generated.
- the building block is provided with information about the appropriate data source fields contained in the knowledge partition and with an indication that it is a new data source, which must be created in a real database before the first execution of the application
- This module implements the states and all tasks, ie the flow and the functionality of the application
- proposals for generation of the module for analyzing connections between two components via a third component are processed as they exist in the knowledge partition
- the module first looks for a matching state: the state created from the input component which is part of the connection. An action is added to this state which is executed upon exiting the state after saving and which generates new data records from the entered data according to the information in the proposal. The state is found via its name, which is identical to the name of the component
- the target component is the output component
- the part of the connection is the component that has a "master / detail" relationship as a "detail" to an output component
- All output data fields associated with a key of the data source of one of the (output) components are assigned actions that fill the data field with a unique value when the data field is reinitialized. This occurs in step 5 of the command mentioned below for creating new data records of the output component(s)
- if there are data fields in one of the output components that are associated with the key of a data source according to step (6) b or c of the input component analysis module (see 5.9), then an action for loading other data fields from the data sources, as described in step (6) b, is assigned to each of these data fields. These actions are carried out in step 5 of the command given below
- for the output component, tasks for navigation are not necessary, as they are already implicit in the table form. If corresponding information is contained in the proposal, a task for creating a document from the document template is created (see above)
- the action that creates the new records consists of a single command that passes the following parameters
- the data source associated with the input component as well as a set of pairs of data fields and associated data source fields
- the command executes the following operations or steps with these data for each new data record in the input data source (see FIG. 18): 1. Load all values from the fields of the data record into the connected data fields
- Fig. 17 shows an example of completing a procedure.
- the coordination module performs points 1 to 4 of the analysis (see Figure 3).
- the individual analysis steps for the example are described in more detail in the corresponding sections to the analysis modules.
- Figure 21 lists the proposals for generation including the assumptions on which they are based
- This knowledge partition 1 contains the assumptions 10 and 18 and thus the
- Knowledge partition 2 contains the assumptions 11 and 18 and thus the proposals for generation 1-6 and 10-15. The result is an application that has a step for entering the order which has a step for issuing the order (see FIG. 27)
- This knowledge partition is nonsensical and would quite likely be rejected by the planner. Knowledge partition 3 is shown in FIG. 24:
- Knowledge partition 3 contains the assumptions 10 and 19 and thus the proposals for generation 1-9, 16-18 and 19
- the result is an application that allows the entry of applications in the first state, from which orders are generated when changing to the next state and which can be viewed and created in the second state (see FIG. 28)
- the Word documents can be created using tasks in their respective states.
- the application offers the option of editing the stub list and customer list.
- This knowledge partition matches the requirements of the work process best and would probably be approved by the planner. This knowledge partition will be described in more detail later
- Knowledge partition 4 is shown in FIG. 25. This knowledge partition contains the assumptions 11 and 19 and thus the proposals for generation 1-6 and 13-18. The result is an application consisting of two successive steps for output (see FIG. 29). This knowledge partition is nonsensical and would quite likely be rejected by the planner
- the action is generated by module 6.1 and is structured as follows for the document "order doc"
- the two proposals are realized by the generation module 6.3; the results are building blocks for the data sources "parts list" and "order"
- the actual installation of the data sources as database tables is a matter of the execution of the application, which is not the subject of the method. Proposals for generation 8, 9, 17, 18
- Note: module 6.4 implements the proposals in the order 7, 16, 19, 5, 6
- Base documents can be recognized as checklists. A checklist consists of a structured sequence of individual points to be processed in a work process. Each point is described by an explanatory text and, if required, by additional comments. Hyperlinks may connect the points to other base documents and/or other documents
- if a document analysis module recognizes such a checklist, then a request is preferably directed by the coordination module to a special analysis module for checklists, which is described below for a simple variant of checklists
- alternatively, it is possible for the document analysis module itself to have the checklist structure analyzed
- the analysis module for checklists needs as input a list of points, each having a sequence number, a text, a possibly empty set of comments and/or a possibly empty set of hyperlinks. This list was generated by the requesting analysis module and attached to the request to the coordination module
- the checklist analysis module can be given a plausibility
- the coordination module generates an assumption with this plausibility and returns the assumption to the requesting document analysis module, which can use the assumption to construct contradictions
- FIG. 19 shows an example of a checklist as a base document.
- the checklist has individual so-called "points".
- the points are preferably identified by check boxes, but in other embodiments they may be marked by other characters, for example by indentation, special signs or a continuous numbering
- the following hyperlinks are contained in this base document: customer data xls, supplier data xls, parts list xls. These hyperlinks refer to other base documents, for example the base documents shown in the FIGS
- the analysis module for checklists processes the points in the order of their numbering
- the module combines the points into sets, each of which forms a component that stands for a state. If the successor of a point (according to the numbering) does not belong to the same set as the point itself, then no other subsequent point having a higher number than this point may belong to the set to which this point belongs
- a point that has a non-empty set of hyperlinks, at least one of which points to an input component, is considered a placeholder for components or states that are outside the
- the first point in the order that does not match the previously described rule marks the beginning of a new component (which includes the points in front of it)
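- a strongly simplified sketch of the grouping of points into components is given below; the reading that a point whose hyperlinks reference an input component closes the current component is only one interpretation of the rules above, and the data layout is an assumption:

```python
# Hedged sketch: points are processed in numbered order; a point whose
# hyperlinks reference an input component acts as a placeholder that
# closes the current component, and the next point starts a new one.

def group_points(points, input_targets):
    components, current = [], []
    for point in points:
        current.append(point)
        if any(link in input_targets for link in point["links"]):
            components.append(current)  # placeholder point ends the component
            current = []
    if current:
        components.append(current)
    return components

# illustrative checklist with three numbered points
points = [
    {"no": 1, "text": "Check stock", "links": []},
    {"no": 2, "text": "Enter order", "links": ["order.doc"]},
    {"no": 3, "text": "Confirm delivery", "links": []},
]
groups = group_points(points, input_targets={"order.doc"})
```

each resulting group would form one component standing for a state, as described above.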
- Each component is preferably marked as an input component.
- a proposal for generating a state is generated
- a proposal for a transition (the associated state) is generated.
- a proposal for a transition is generated from the state that is to be generated for the current component to a state possibly generated for the component addressed by the hyperlink, as well as a proposal for the inverse transition. Data fields are derived from the text of a point. Two cases are distinguished
- Hyperlinks that reference output or master data components are interpreted as tasks that are added to the proposal to create the component to which the hyperlinks belong
- checklists may be refined by supporting points that themselves consist of a list of points and/or by taking into account conditions that must be met prior to the activation of a point
- the base document consists of paragraphs, each beginning with … There can also be paragraphs that consist of text only or are filled with lines of the characters "-" or "_"
- the base document consists of a single enumeration, whose individual points consist of text, hyperlinks and comments
- the associated analysis module (cf. example 5 analysis module for analyzing documents in MS Word format) generates a request to an analysis module for the analysis of checklists
- the request is accompanied by a list of points (cf. 8 1, checklists), which are determined as follows
- hyperlinks are preferably recognized and analyzed
- Hyperlinks that refer to web pages or other documents, e.g. base documents, are interpreted as tasks.
- For each hyperlink another proposal for generation is created
- Preferably there is an analysis module for presentation files, in particular an analysis module for diagrams
- Presentation files, e.g. PowerPoint files, can serve as base documents
- a PowerPoint presentation is considered as a template in the example implementation, which is filled with data when the generated application description is executed.
- a presentation file, in particular a PowerPoint presentation file, can contain text fields and diagrams
- for the explanation of diagrams, reference may be made to FIGS. 30 and 31
- considered here by way of example are two types of diagrams, namely a bar chart (see FIG. 30) and a line chart (see FIG. 31); the following steps can also be transferred to other types of diagrams, such as pie charts
- the data of a diagram comes from a tuple of data fields, in particular a pair of data fields, each data field being associated with an axis of the diagram.
- the tuple may in particular be a pair
- the data of a diagram comes from a pair of data fields, where one of the data fields is assigned to the X-axis and one of the data fields to the Y-axis. Each value of the X-axis data field represented in the diagram must be assigned a value of the Y-axis data field. Therefore, the two data fields must come from a common data source
- the analysis module for analyzing charts or the analysis module for analyzing presentation files therefore preferably generates data fields and a component to which both data fields belong. This component receives the property "output"
- the names of the data fields are derived from the labels of the diagrams, as are the data types (cf. FIGS. 30, 31). In the examples illustrated in FIGS. 30 and 31, a data field "month" with the data type "date" and a data field "sales" with the data type "number" are recognized
- the analysis module for presentation files in particular for PowerPoint basic documents, first analyzes relevant objects of the presentation and uses them to generate knowledge elements.
- an implementation proposal for a document template and an action for creating instances of this template are generated.
- Text files are read in as a string. Since a text file contains no exploitable objects other than characters or words, the analysis must be limited to certain structures and embedded comments, as described in the analysis module for Word documents. The embedded comments and list structures listed there can be adopted exactly for text files. For example, placeholders that are composed of the character "_" can be interpreted as simple data fields; the name of such a data field is derived as described in (1) b for Word documents. Placeholders marked as in Word can also be interpreted as data fields, e.g. "{date}".
- HTML files are read in as a string.
- Knowledge elements can in principle be derived from all HTML commands. Examples are:
- Text box, checkbox, radio button and selection lists are treated as well as form fields in Word.
- Hyperlinks are treated the same as in Word.
- the task of the application manager is to integrate and execute applications created by the application designer in the form of application descriptions in an existing IT system environment.
- the application description provided by the application designer is preferably system-independent.
- the Application Manager assumes the role of the interface between the application and the system environment.
- the Application Manager handles the exchange of data between the application and the environment by accessing databases, services, and other data sources
- each implementation of the application manager preferably has at least some of the following features, in particular all of the following features
- the method implemented as a computer program is referred to as designer.
- the result is preferably written into tables of a database. These tables contain all the building blocks, the flow, data and functions of the application.
- the application description may be in other forms, such as a text file or an XML file. Implementing the application manager simply requires providing a method for reading the application description for the appropriate format
- Each Application Manager implementation has a user interface for managing available data sources and for managing and launching available applications.
- The execution of an application is handled by an interpretation module (see FIG. 33).
- The interpretation module must either itself implement all commands potentially usable by the designer or access separate modules that do so.
- There may be other modules that implement additional functionality (see 9.4 Possible additional functions of the application manager).
- The purpose of such add-on modules is to further integrate generated applications into the system environment with respect to communication and data management. Examples are user administration, document management, data management, or communication between users.
- Each additional module must have an interface to the interpretation module, which allows an application to use its functions at runtime.
- Fig. 33 shows the general architecture of an application manager, as well as the specific architecture of the example implementation referenced in this description.
- The example implementation is based on the ".NET" technology.
- The ".NET" technology is a software platform developed by Microsoft and comprises a runtime environment, a collection of class libraries for programmers (so-called APIs), and associated utilities.
- All modules are implemented as classes. Specifically, there is a class for the interpretation module; for each application description to be executed, an instance of this class is generated at startup, which reads the application description and realizes the application.
- The application description generated by the application designer consists of a set of application building blocks that define the data, functions, and operation of the application. When the application description is read, these application building blocks are loaded into a structure specific to the application manager implementation, which serves as the basis for executing the described application.
- a data field stands for a placeholder, which can take on different values of the same type in the course of the work process
- Data fields are related to formulas, conditions, actions, form fields, and data sources
- Data fields are used to store data during the execution of an application
- a formula is a calculation rule that calculates a result from a set of inputs (data fields and constant values) using operators. The result of the calculation is stored in a data field, so that a formula is always bound to a data field.
- Each formula is assigned to a data field whose value is filled with the result after the calculation of the formula. If further formulas depend on this data field, these are also recalculated. This results in a sequence of recalculations of formulas. In the example implementation, the formulas are processed according to the principle of a breadth-first search. Alternatively, an algorithm can be used that optimizes the order of the computations by analyzing the dependencies so that as few duplicate calculations as possible are made. In the example implementation, it is also assumed that the chaining of the formulas does not produce any circular references. Alternatively, circular references that do occur could be handled by providing a termination condition for the recalculation.
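The breadth-first recalculation described above can be sketched as follows. This is a hypothetical Python sketch (the patent's example implementation is .NET-based); the data layout and the visited-set guard, which stands in for a termination condition against circular references, are illustrative assumptions:

```python
from collections import deque

def recalculate(changed_field, formulas, dependents, values):
    """Breadth-first recalculation of formulas after a data field changes.

    formulas:   field name -> callable(values) computing the field's new value
    dependents: field name -> list of fields whose formulas use that field
    values:     field name -> current value (mutated in place)
    """
    queue = deque(dependents.get(changed_field, []))
    visited = set()
    while queue:
        field = queue.popleft()
        if field in visited:
            continue  # simple termination guard: each field is recalculated once
        visited.add(field)
        values[field] = formulas[field](values)
        # fields depending on the just-recalculated field are queued next
        queue.extend(dependents.get(field, []))
    return values
```

Because the queue is processed first-in, first-out, directly dependent formulas are recalculated before formulas that depend on them transitively, matching the breadth-first principle.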
- A condition is a formula that maps a set of inputs to one of the two values true or false using comparison and logical operators. Unlike formulas, conditions can be bound to data fields as well as to components.
- the result of a condition depends on the data fields that occur as operands of the condition. If the value of a data field is changed, the condition is reevaluated. In the example implementation, an action is performed depending on the result of the condition.
- a condition can be assigned one action for the result True and one for the result False
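A condition bound to one action for the result True and one for the result False might look like this minimal sketch (class and method names are assumptions, not the patent's API):

```python
class Condition:
    """A condition with an action for True and an action for False."""

    def __init__(self, predicate, on_true=None, on_false=None):
        self.predicate = predicate  # callable over the data-field values
        self.on_true = on_true      # action performed when the result is True
        self.on_false = on_false    # action performed when the result is False

    def evaluate(self, values):
        """Re-evaluate the condition and perform the matching action."""
        result = self.predicate(values)
        action = self.on_true if result else self.on_false
        if action is not None:
            action(values)
        return result
```

In the described scheme, `evaluate` would be called whenever a data field occurring as an operand of the condition changes.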
- A data source is an object persisted outside the application, from which the application can fetch data and/or to which the application can supply data.
- the application designer creates, as part of the application description, specifications for the data sources that the application description uses.
- These specifications include the name of the data source and the names and data types of the fields that are used from this data source. It is the task of the application manager to find suitable real data sources that comply with these specifications and to ensure that they are used when executing the application description. For this purpose, the manager knows a set of real data sources.
- These are stored in a database, where for each data source the type, the name, information for technical access to the data source, and a list of available fields with names and data types are stored.
- databases or database tables and services are supported as a type.
- a service is a program that has a known name
- the following table describes the information that is stored in the sample implementation for a data source
- When an application description is installed or first started, the application manager tries to associate each data source of the application description with a real data source matching the application designer's specifications. This is done by comparing the name and fields of the real data sources with the application designer's specifications. If several data sources match, the application manager asks which data source should be used. Preferably, the application manager also allows creating a new database table as a data source.
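The matching of a designer specification against real data sources by name and fields could be sketched as follows (a hypothetical helper; the dictionary layout for specifications and sources is an assumption):

```python
def match_data_source(spec, real_sources):
    """Return the real data sources compatible with a designer specification.

    spec and each real source are dicts with "name" and "fields"
    (field name -> data type). A real source matches if its name equals
    the spec name and it offers every requested field with the same type;
    it may offer additional fields.
    """
    matches = []
    for source in real_sources:
        if source["name"] != spec["name"]:
            continue  # name must match the designer's specification
        if all(source["fields"].get(f) == t for f, t in spec["fields"].items()):
            matches.append(source)
    return matches
```

If the returned list contains more than one entry, the user would be asked which data source to use; if it is empty, a new database table matching the specification could be created.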
- Action: An action consists of a sequence of commands, which are executed one after the other, whereby jumps are possible depending on the execution.
- Actions are performed by the Application Manager if associated conditions are met or the user triggers them (see Tasks).
- Actions can also be performed when special events are triggered, such as the activation of a step or the reading of the value of a data field.
- An action is performed by sequentially executing the commands that make up the action.
- For each command, the application manager should provide a class that implements this command.
- This class has a method "Execute", which is called to execute the command.
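The command classes with an "Execute" method and the sequential execution with jumps can be illustrated by this sketch. Python stands in here for the .NET classes of the example implementation; `SetCommand` and `JumpIfCommand` are invented example commands, not commands named in the patent:

```python
class Command:
    """Base class: each command implements an execute method
    (corresponding to the "Execute" method of the example implementation)."""
    def execute(self, context):
        raise NotImplementedError

class SetCommand(Command):
    """Hypothetical command that writes a constant value into a data field."""
    def __init__(self, field, value):
        self.field, self.value = field, value

    def execute(self, context):
        context[self.field] = self.value  # returns None: fall through

class JumpIfCommand(Command):
    """Hypothetical command that jumps to another command index
    if the given data field holds a truthy value."""
    def __init__(self, field, target):
        self.field, self.target = field, target

    def execute(self, context):
        if context.get(self.field):
            return self.target  # an int return value is a jump target

def run_action(commands, context):
    """Execute the commands one after another; jumps are possible
    when a command returns an index into the command sequence."""
    i = 0
    while i < len(commands):
        result = commands[i].execute(context)
        i = result if isinstance(result, int) else i + 1
    return context
```

Representing a jump as the returned index keeps the interpreter loop trivial; commands that fall through simply return nothing.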
- the state blocks describe the step-by-step sequence of the application description (see FIG. 35).
- For each step, a state is generated. From the perspective of the application manager, such a step is regarded as a state in which an application can be located, and which can be described by a screen mask with input and output possibilities, by a set of activities executable by the user, as well as by possible transitions to other states.
- the states can also be called steps
- The screen mask is described by form elements (see Fig. 36). Each form element is assigned to a data field. If the state is active, then each form element is assigned a screen element; in the example implementation, this is a Windows Forms control. With these screen elements, the application manager builds the screen of the active step. Via the form elements, the screen elements are each assigned to exactly one data field. A screen element always displays the value of the assigned data field. If the value of a data field is changed by a formula or an action, the new value is immediately displayed by the screen element. If the value of a screen element is changed by the user, then the new value is written to the data field, and the recalculation or re-evaluation of all formulas and conditions depending on this data field is performed.
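The two-way coupling of data fields and screen elements — field changes are displayed immediately, user input is written back and triggers recalculation — can be sketched as a simple observer pattern (a hypothetical sketch; a real implementation would use Windows Forms controls, and all names here are illustrative):

```python
class DataField:
    """A data field that notifies the screen elements bound to it."""
    def __init__(self, value=None):
        self.value = value
        self.observers = []  # screen elements displaying this field

    def set(self, value):
        """Change the field (e.g. by a formula or action) and update the screen."""
        self.value = value
        for screen_element in self.observers:
            screen_element.display(value)  # field -> screen direction

class ScreenElement:
    """Stands in for a Windows Forms control of the example implementation."""
    def __init__(self, field):
        self.field = field
        self.shown = field.value
        field.observers.append(self)  # register for value changes

    def display(self, value):
        self.shown = value

    def user_input(self, value):
        """User edits the element: write back to the data field, which would
        trigger recalculation of dependent formulas and conditions."""
        self.shown = value
        self.field.set(value)  # screen -> field direction
```

The write-back in `user_input` is the point where, in the described architecture, dependent formulas and conditions would be re-evaluated.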
- the executable activities of the user include not only entries in the screen.
- Tasks: An action that is specially marked so that its execution can be started directly by the user is called a task. These tasks are displayed on the screen and can be selected and started by the user. In the example implementation, the tasks are displayed as a list at the edge of the screen, with each task being started by a mouse click. Alternatively, tasks can also be made available in a menu, through buttons, or by any other means.
- Each state is assigned a set of other states which can be reached from this state, i.e., which can become the active state. This defines the possible state transitions. A state transition to another state can be linked to a condition, i.e., this state can only become active when the condition evaluates to the value "true". In addition, for each transition, it is determined whether the new state can only be activated by a selection of the user, or whether this should happen automatically as soon as the linked condition is true. In the example implementation, all states which can be activated by the selection of the user, or rather their names, are displayed in a list on the screen (see FIG. 36). The transition is triggered by clicking on the name.
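The state transitions — conditional, and either automatic or offered for user selection — can be sketched as follows (the tuple layout for transitions and the function name are assumptions for illustration):

```python
def reachable_states(current, transitions, values):
    """Determine which state transitions are currently possible.

    transitions: list of (source, target, condition, automatic) tuples, where
    condition is a callable over the data-field values (or None for "always")
    and automatic marks transitions taken without user selection.
    Returns (next automatic state or None, states offered for user selection).
    """
    offered = []
    for source, target, condition, automatic in transitions:
        if source != current:
            continue
        if condition is not None and not condition(values):
            continue  # condition not "true": target cannot become active
        if automatic:
            return target, []  # activated immediately, no user choice needed
        offered.append(target)  # shown in the on-screen list of states
    return None, offered
```

The offered list corresponds to the on-screen list of state names; clicking a name would trigger the transition.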
- each state has a method that executes when it becomes active and a method that executes when it loses the active state.
- The first method creates the appropriate screen element for each form field of the state, writes all tasks of the state into the list of tasks, and writes all reachable states into the list of states.
- The second method removes all the screen elements associated with its form fields and deletes all entries from the lists of tasks and reachable states.
- all types of application building blocks are implemented as classes
- For each application building block created by the application designer and stored in the application description, the application manager generates a corresponding object when the application description is loaded.
- The assignments, for example between data fields and formulas or between conditions and actions, are implemented by references to the corresponding objects.
- The execution of an application is realized on two levels. On the first level, the user interface, the steps and the form fields or screen elements realize the communication with the user. The interplay between screen elements, form fields and data fields creates a connection between the inputs of the user and the functionality of the application. This interaction is illustrated in FIG.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102009019319A DE102009019319A1 (de) | 2009-04-30 | 2009-04-30 | Verfahren zur Erzeugung mindestens einer Anwendungsbeschreibung |
| PCT/EP2010/002597 WO2010124853A2 (fr) | 2009-04-30 | 2010-04-28 | Procédé de création d'un guide d'utilisation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP2425331A1 true EP2425331A1 (fr) | 2012-03-07 |
Family
ID=42341357
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP10719262A Withdrawn EP2425331A1 (fr) | 2009-04-30 | 2010-04-28 | Procédé de création d'un guide d'utilisation |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US8775349B2 (fr) |
| EP (1) | EP2425331A1 (fr) |
| DE (1) | DE102009019319A1 (fr) |
| WO (1) | WO2010124853A2 (fr) |
Families Citing this family (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080255866A1 (en) * | 2007-04-10 | 2008-10-16 | Jan Matthes | Systems and methods to integrated business data |
| US9613267B2 (en) * | 2012-05-31 | 2017-04-04 | Xerox Corporation | Method and system of extracting label:value data from a document |
| US10635692B2 (en) | 2012-10-30 | 2020-04-28 | Ubiq Security, Inc. | Systems and methods for tracking, reporting, submitting and completing information forms and reports |
| WO2016049227A1 (fr) | 2014-09-23 | 2016-03-31 | FHOOSH, Inc. | Opérations sécurisées à haut débit de stockage, consultation, récupération et transmission de données |
| US10579823B2 (en) | 2014-09-23 | 2020-03-03 | Ubiq Security, Inc. | Systems and methods for secure high speed data generation and access |
| US10180932B2 (en) | 2015-06-30 | 2019-01-15 | Datawatch Corporation | Systems and methods for automatically creating tables using auto-generated templates |
| US10515093B2 (en) | 2015-11-30 | 2019-12-24 | Tableau Software, Inc. | Systems and methods for interactive visual analysis using a specialized virtual machine |
| US10380140B2 (en) * | 2015-11-30 | 2019-08-13 | Tableau Software, Inc. | Systems and methods for implementing a virtual machine for interactive visual analysis |
| US10296576B2 (en) * | 2015-12-08 | 2019-05-21 | International Business Machines Corporation | Filling information from mobile devices with security constraints |
| US10216521B2 (en) * | 2017-06-20 | 2019-02-26 | Nvidia Corporation | Error mitigation for resilient algorithms |
| US10970301B2 (en) | 2017-12-27 | 2021-04-06 | Sap Se | Keyfigure comments bound to database level persistence |
| US11349656B2 (en) | 2018-03-08 | 2022-05-31 | Ubiq Security, Inc. | Systems and methods for secure storage and transmission of a data stream |
| US11429783B2 (en) * | 2019-09-30 | 2022-08-30 | Stats Llc | Augmented natural language generation platform |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5442792A (en) * | 1992-08-07 | 1995-08-15 | Hughes Aircraft Company | Expert system compilation method |
| CA2147192A1 (fr) | 1994-06-29 | 1995-12-30 | Tadao Matsuzuki | Generateur de programmes automatique |
| US5815717A (en) | 1995-10-27 | 1998-09-29 | Authorgenics, Inc. | Application program and documentation generator system and method |
| US5904822A (en) * | 1996-11-13 | 1999-05-18 | The University Of Iowa Research Foundation | Methods and apparatus for analyzing electrophoresis gels |
| US6016394A (en) | 1997-09-17 | 2000-01-18 | Tenfold Corporation | Method and system for database application software creation requiring minimal programming |
2009
- 2009-04-30 DE DE102009019319A patent/DE102009019319A1/de not_active Withdrawn

2010
- 2010-04-28 WO PCT/EP2010/002597 patent/WO2010124853A2/fr not_active Ceased
- 2010-04-28 US US13/266,928 patent/US8775349B2/en not_active Expired - Fee Related
- 2010-04-28 EP EP10719262A patent/EP2425331A1/fr not_active Withdrawn
Also Published As
| Publication number | Publication date |
|---|---|
| DE102009019319A1 (de) | 2011-01-05 |
| US20120047100A1 (en) | 2012-02-23 |
| US8775349B2 (en) | 2014-07-08 |
| WO2010124853A2 (fr) | 2010-11-04 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| 20111118 | 17P | Request for examination filed | |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
| | DAX | Request for extension of the european patent (deleted) | |
| 20161206 | 17Q | First examination report despatched | |
| | STAA | Information on the status of an ep patent application or granted ep patent | STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 20170617 | 18D | Application deemed to be withdrawn | |