WO2023143746A1 - System and method for managing artifacts related to apps


Info

Publication number
WO2023143746A1
Authority
WO
WIPO (PCT)
Prior art keywords
app
artifacts
entities
user
data
Prior art date
Application number
PCT/EP2022/052222
Other languages
French (fr)
Inventor
Girish SURYANARAYANA
Pravin Kumar
Shubham SAHU
Chinnamma SREERAM
Original Assignee
Mendix Technology B.V.
Priority date
Filing date
Publication date
Application filed by Mendix Technology B.V. filed Critical Mendix Technology B.V.
Priority to PCT/EP2022/052222 priority Critical patent/WO2023143746A1/en
Publication of WO2023143746A1 publication Critical patent/WO2023143746A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/34Graphical or visual programming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/36Software reuse
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/38Creation or generation of source code for implementing user interfaces

Definitions

  • the present invention relates to software management systems, and in particular relates to a computer system and method for managing artifacts related to apps.
  • the artifacts include work items such as business requirements, software requirements, software models, software components, code files, test cases, and defects.
  • when traceability between the artifacts is weak or missing, the impact of a defect fix on other aspects of the project may not be understood well enough by developers or testers. This may lead to new issues or defects in subsequent developments within the project.
  • a computer-implemented method of managing artifacts related to an app may comprise extracting, by a processing unit, the artifacts related to the app from one or more data sources, via one or more data connectors; identifying one or more entities from the extracted artifacts based on an entity configuration parameter; mapping the one or more entities to an ontology structure based on an ontology configuration parameter; and generating a knowledge graph for the artifacts based on the mapped entities.
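The four claimed steps (extract, identify, map, generate) can be sketched as a small pipeline. This is an illustrative sketch only: all function names, record shapes, and configuration values below are assumptions, not the patented implementation.

```python
# Hypothetical sketch of the four-step method: extract artifacts, identify
# entities, map them to an ontology, and generate a knowledge graph.
# All names and data shapes here are illustrative assumptions.

def extract_artifacts(data_sources):
    """Pull raw artifact records from each configured data source."""
    return [record for source in data_sources for record in source]

def identify_entities(artifacts, entity_config):
    """Keep only artifacts whose type belongs to the configured entity class."""
    return [a for a in artifacts if a["type"] in entity_config]

def map_to_ontology(entities, ontology_config):
    """Rename entity types according to the selected ontology structure."""
    return [{**e, "type": ontology_config.get(e["type"], e["type"])}
            for e in entities]

def generate_knowledge_graph(mapped_entities):
    """Emit (subject, predicate, object) triples suitable for a graph store."""
    return [(e["id"], "rdf:type", e["type"]) for e in mapped_entities]

# Example run with agile-style artifacts:
sources = [[{"id": "US-1", "type": "user story"},
            {"id": "DF-7", "type": "defect"}]]
entities = identify_entities(extract_artifacts(sources), {"user story", "epic"})
graph = generate_knowledge_graph(
    map_to_ontology(entities, {"user story": "Requirement"}))
```

The defect "DF-7" is filtered out because it is not in the configured entity class; the remaining user story is renamed by the ontology mapping and emitted as a triple.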
  • a computer system may be arranged and configured to execute the steps of this computer-implemented method of managing artifacts related to an app.
  • the described computer system may be arranged and configured to execute the following steps: extracting, by a processing unit, the artifacts related to the app from one or more data sources, via one or more data connectors; identifying one or more entities from the extracted artifacts based on an entity configuration parameter; mapping the one or more entities to an ontology structure based on an ontology configuration parameter; and generating a knowledge graph for the artifacts based on the mapped entities.
  • a computer-readable medium may be encoded with executable instructions, and a computer program product may comprise computer program code which, when executed, causes a computer system to carry out the described computer-implemented method of managing artifacts related to an app.
  • a computer-readable medium may comprise computer program code which, when executed by a computer system, causes the computer system to carry out this computer-implemented method of managing artifacts related to an app.
  • the described computer-readable medium may be non-transitory and may further be a software component on a storage device.
  • the mentioned application development platform may be a visual model-based and/or low-code app development platform which is described in more detail below.
  • FIG 1 illustrates a functional block diagram of an example computer system or data processing system that facilitates management of artifacts related to apps, in accordance with an embodiment of the present invention
  • FIG 2 illustrates a functional block diagram indicating workflow 200 for managing artifacts related to an app, in accordance with an exemplary embodiment of the present invention
  • FIG 3 illustrates an example of the graph visualization of a knowledge graph on the application management UI, in accordance with an embodiment of the present invention
  • FIG 4 shows a flowchart of a method for managing artifacts related to an app, in accordance with an embodiment of the present invention
  • FIG 5 illustrates a block diagram of a data processing system, in accordance with an embodiment of the present invention.
  • An app generally refers to a software program which on execution performs specific desired tasks.
  • apps are executed in a runtime environment containing one or more operating systems (“OSs”), virtual machines (e.g., supporting JavaTM programming language), device drivers, etc.
  • Apps, including native applications can be created, edited, and represented using traditional source code. Examples of such traditional source code comprise C, C++, Java, Flash, Python, Perl, and other script-based methods of representing an app.
  • Developing, creating and managing such script-based apps, or parts of such script-based apps can be accomplished by manual coding of suitably trained users.
  • An application development framework (“ADF”) provides a set of pre-defined code/data modules that can be directly/indirectly used in the development of an app.
  • An ADF may also provide tools such as an Integrated Development Environment (“IDE”), code generators, debuggers, etc., which facilitate a developer in coding/implementing the desired logic of the app in a faster/simpler manner.
  • an ADF simplifies app development by providing reusable components which can be used by app developers to define user interfaces (“UIs”) and app logic by, for example, selecting components to perform desired tasks and defining the appearance, behavior, and interactions of the selected components.
  • apps can also be created, edited, and represented using visual model-based representations. Unlike traditional source code implementations, such apps can be created, edited, and/or represented by drawing, moving, connecting, and/or disconnecting visual depictions of logical elements within a visual modeling environment.
  • Visual model-based representations of apps can use symbols, shapes, lines, colors, shades, animations, and/or other visual elements to represent logic, data or memory structures or user interface elements.
  • programming a visual model-based app can, in some cases, be done by connecting various logical elements (e.g., action blocks and/or decision blocks) to create a visual flow chart that defines the app's operation.
  • similarly, data structures (e.g., variable types, database objects, or classes) and user interface elements (e.g., dropdown boxes, lists, text input boxes) can be defined visually rather than typed out as script.
  • Visual model-based apps can therefore be more intuitive to program and/or edit compared to traditional script-based apps.
  • an approach is suggested to manage artifacts related to apps, which may involve the explained visual model-based representations.
  • references to a “model,” a “visual model,” or an “application” or “app” should be understood to refer to visual model-based apps, including native apps, unless specifically indicated.
  • visual model-based apps can represent complete, stand-alone apps for execution on a computer system.
  • Visual model-based apps can also represent discrete modules that are configured to perform certain tasks or functions, but which do not represent complete apps — instead, such discrete modules can be inserted into a larger app or combined with other discrete modules to perform more complicated tasks. Examples of such discrete modules can comprise modules for validating a ZIP code, for receiving information regarding current weather from a weather feed, and/or for rendering graphics.
  • Visual models may be represented in two forms: an internal representation and one or more associated visual representations.
  • the internal representation may be a file encoded according to a file format used by a modeling environment to capture and define the operation of an app (or part of an app).
  • the internal representation may define what inputs an app can receive, what outputs an app can provide, the algorithms and operations by which the app can arrive at results, what data the app can display, what data the app can store, etc.
  • the internal representation may also be used to instruct an execution environment how to execute the logic of the app during run-time.
  • Internal representations may be stored in the form of non-human-readable code (e.g., binary code). Internal representations may also be stored according to a binary stored JSON (JavaScript Object Notation) format, and/or an XML format.
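As an illustration, a hypothetical JSON internal representation for part of an app might look as follows. The field names and structure are invented for this example and do not reflect any specific modeling environment's file format.

```json
{
  "modelVersion": "1.0",
  "module": "OrderEntry",
  "inputs":  [{ "name": "customerId", "type": "string" }],
  "logic": [
    { "block": "action",   "id": "loadCustomer", "next": "checkStatus" },
    { "block": "decision", "id": "checkStatus",
      "onTrue": "showForm", "onFalse": "showError" }
  ],
  "outputs": [{ "name": "orderConfirmation", "type": "object" }]
}
```

An execution engine could walk such a structure at run-time, or compile it to executable machine code, as described above.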
  • an execution engine may use an internal representation to compile and/or generate executable machine code that, when executed by a processor, causes the processor to implement the functionality of the model.
  • the internal representation may be associated with one or more visual representations.
  • Visual representations may comprise visual elements that depict how an app's logic flows, but which are not designed to be compiled or executed. These visual representations may include, for example, flow-charts or decision trees that show a user how the app will operate.
  • the visual models may also visually depict data that is to be received from the user, data that is to be stored, and data that is to be displayed to the user. These visual models may also be interactive, which allows a user to manipulate the model in an intuitive way.
  • visual representations may be configured to display a certain level of detail (e.g., number of branches, number of displayed parameters, granularity of displayed logic) by default.
  • users may interact with the visual representation in order to show a desired level of detail — for example, users may display or hide branches of logic, and/or display or hide sets of parameters. Details relating to an element of the visual model may be hidden from view by default but can appear in a sliding window or pop-up that appears on-screen when the user clicks on the appropriate element. Users may also zoom in or out of the model, and/or pan across different parts of the model, to examine different parts of the model. Users may also copy or paste branches of logic from one section of the model into another section, or copy/paste branches of logic from a first model into a second model.
  • parts of the model may contain links to other parts of the model, such that if a user clicks on a link, the user will automatically be led to another part of the model.
  • a viewing user may interact with a visual representation in at least some of the same ways that the viewing user might interact with the model if it were displayed within a modeling environment.
  • the visual representation may be configured to mimic how the model would appear if it were displayed within a visual modeling environment.
  • a single internal representation may correspond to multiple visual representations that use different styles or formatting rules to display app logic.
  • multiple visual representations corresponding to the same internal representation may differ from one another in their use of color, elements that are included or omitted, and use of symbols, shapes, lines, colors, and/or shades to depict logic flow.
  • Approaches involving the above-described functionalities of visual model-based representations, visual model-based apps, and/or visual models are sometimes understood to be included in a so-called low-code development platform.
  • a low-code development platform may further be described as software that provides a development environment used to create application software through graphical user interfaces and configuration instead of traditional hand-coded computer programming.
  • a low-code model may enable developers of varied experience levels to create applications using a visual user interface in combination with model-driven logic.
  • Such low-code development platforms may produce entirely operational apps or require additional coding for specific situations.
  • Low-code app development platforms may reduce the amount of traditional hand coding, enabling accelerated delivery of business apps. A common benefit is that a wider range of people can contribute to the app's development — not only those with formal programming skills.
  • Low-code app development platforms can also lower the initial cost of setup, training, deployment, and maintenance.
  • by contrast, programmers of traditional script-based apps are typically required to type out detailed scripts according to a complicated set of programming syntax rules.
  • In FIG 1, a functional block diagram of an example computer system or data processing system 100 that facilitates management of artifacts related to apps is illustrated, in accordance with an embodiment of the present invention.
  • the computer system 100 may include a visual model-based application development platform 118 including at least one processor 102 that is configured to execute at least one artifact management module 106 from a memory 104 accessed by the processor 102.
  • the visual model-based application development platform 118 may include the above-described functionalities of visual model-based representations, visual model-based apps, and/or visual models and, by way of example, be a low-code development platform.
  • the artifact management module 106 may be configured (i.e., programmed) to cause the processor 102 to carry out various acts and functions described herein.
  • the described artifact management module 106 may include and/or correspond to one or more components of an app development application that is configured to generate and store artifacts related to apps in a data store 108 such as a database.
  • the application development platform 118 may be cloud-based, internet-based and/or be operated by a provider providing app development and creation support, including e.g., supporting low-code and/or visual model-based app development.
  • the user may be located close to the application development platform 118 or remote to the application development platform 118, e.g., using a mobile device for connecting to the application development platform 118, e.g., via the internet, wherein the mobile device may include an input device 110 and a display device 112.
  • the application development platform 118 may be installed and run on a user’s device, such as a computer, laptop, pad, on-premise computing facility, or the like. Examples of product systems that may be adapted to include the artifact management features described herein may include the low-code software development platform of Mendix Inc., of Boston, Massachusetts, USA.
  • This platform provides tools to build, test, deploy, iterate, develop, create and manage apps and is based on visual, model-driven software development.
  • the systems and methods described herein may be used in other product systems (e.g., product lifecycle management (PLM), product data management (PDM), application lifecycle management (ALM) systems) and/or any other type of system that generates a plurality of artifacts during development of apps. It should be appreciated that it can be difficult and time-consuming to manage artifacts in complex app development and/or management environments.
  • the described product system or computer system 100 may include at least one input device 110 and at least one display device 112 (such as a display screen).
  • the described processor 102 may be configured to generate a graphical user interface (GUI) 114 through the display device 112.
  • Such a GUI 114 may include GUI elements such as buttons, links, search boxes, lists, text boxes, images, scroll bars usable by a user to provide inputs through the input device 110 to develop an app 120 and/or to access artifacts related to the app 120.
  • the GUI 114 may include an application management UI 116 provided to a user for modifying the app 120 and a search UI 123 within the application management UI 116.
  • the search UI 123 enables the user to search for artifacts associated with the app 120, stored in a graph database.
  • artifact as used herein may include at least one of requirements, software components, code files, test cases, defects, app data, app components, app architecture, app programming interfaces (APIs), app services, app usages, app links, app description, app dependencies, artifact dependencies, app environments, app tags, business events, event artifacts, notifications, app interfaces, trigger information, user interface designs, maps of reusable IT assets and links to pages containing information about one or more artifacts.
  • the artifact management module 106 and/or the processor 102 may be configured to extract the artifacts 144 related to the app 120 from one or more data sources 122, via one or more data connectors 124.
  • the artifacts 144 are stored in the one or more data sources 122.
  • the artifacts may include, but are not limited to, requirements, user stories, epics, features, components, packages, code files and test cases for development of the app 120.
  • Examples of the data sources 122 may include, but are not limited to, Application Lifecycle Management tools such as TFS, TMS, Jira, Confluence, IBM Jazz and ClearCase.
  • Each of the data sources 122 includes an Application Programming Interface (API) that may be accessed via the data connector 124.
  • the term ‘data connector‘ as used herein refers to a standalone software or a function that imports, exports or converts one data format to another.
  • the data connector 124 extracts or imports artifacts 144 from the respective data source 122. More specifically, the data connector 124 connects to the API of the data source 122 and exposes a granular API to a data layer.
  • the data layer is based on a predetermined schema.
  • the granular API exposes artifacts relevant to the predetermined schema of the data layer.
  • the data layer stores the relevant artifacts 144 extracted from the source in the form of semantic data.
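A data connector of the kind described above can be sketched as a thin wrapper that calls the source's API and exposes only the fields matching the data layer's predetermined schema. The schema, record shapes, and the stand-in Jira-like client below are assumptions for illustration, not any tool's real API.

```python
# Illustrative data connector: wraps a source-specific API and exposes a
# "granular API" that yields only schema-relevant fields to the data layer.
# The schema and the fake ALM client are assumptions for this sketch.

DATA_LAYER_SCHEMA = {"id", "type", "title", "links"}

class DataConnector:
    def __init__(self, source_api):
        self.source_api = source_api  # e.g., a client for an ALM tool's API

    def fetch_artifacts(self):
        """Granular API: import records, keep only schema-relevant fields."""
        for record in self.source_api.list_work_items():
            yield {k: v for k, v in record.items() if k in DATA_LAYER_SCHEMA}

class FakeAlmApi:
    """Stand-in for a real ALM tool client (e.g., Jira), for illustration."""
    def list_work_items(self):
        return [{"id": "PROJ-1", "type": "user story", "title": "Login page",
                 "links": ["PROJ-2"], "internal_audit_field": "not relevant"}]

connector = DataConnector(FakeAlmApi())
artifacts = list(connector.fetch_artifacts())
```

The extra `internal_audit_field` never reaches the data layer, because the connector filters each record down to the predetermined schema.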
  • the artifact management module 106 and/or the processor 102 may be further configured to identify one or more entities from the extracted artifacts based on an entity configuration parameter. More specifically, a specific class of entities present in the semantic data within the data layer is identified.
  • the specific class of entities is identified based on an entity configuration parameter.
  • the entities may indicate different items used during a software lifecycle.
  • a first class of entities may correspond to agile methodology and may include ‘user story‘, ‘epic‘, ‘features‘.
  • a second class of entities may correspond to Scrum methodology and may include, but is not limited to, ‘product backlog’, ‘sprint backlog’ and ‘increment’.
  • the entity configuration parameter may be set by a user in order to identify entities belonging to one of the classes of entities. For example, if the development is based on agile methodology, the user may set the entity configuration parameter to identify entities from the first class of entities.
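Entity identification driven by an entity configuration parameter can be sketched as a simple filter over the semantic data. The two entity classes follow the examples in the text; the selection logic and data shapes are assumptions.

```python
# Sketch of entity identification: the entity configuration parameter
# selects one class of entities, and only matching items are identified.
# Class contents follow the text; everything else is an assumption.

ENTITY_CLASSES = {
    "agile": {"user story", "epic", "feature"},
    "scrum": {"product backlog", "sprint backlog", "increment"},
}

def filter_entities(semantic_data, entity_config):
    """Identify entities belonging to the configured class of entities."""
    wanted = ENTITY_CLASSES[entity_config]
    return [item for item in semantic_data if item["type"] in wanted]

data = [{"id": 1, "type": "epic"},
        {"id": 2, "type": "sprint backlog"}]
agile_entities = filter_entities(data, "agile")  # keeps only the epic
```

Setting the parameter to "scrum" instead would keep only the sprint backlog item.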
  • the artifact management module 106 and/or the processor 102 may be further configured to map the one or more entities to an ontology structure based on an ontology configuration parameter. More specifically, the one or more entities identified are mapped to an ontology structure based on an ontology configuration parameter.
  • the ontology structure is selected from a predefined set of ontology structures based on the ontology configuration parameter. For example, the ontology structures may be defined separately for different domains. The domains may vary based on use of the app 120. Non-limiting examples may include human resources, sales, engineering, manufacturing, inventory, design, planning, maintenance etc.
  • the entity ‘Employee number’ that may be used in human resources domain may be mapped to ‘Part number’ in manufacturing domain using an ontology structure for maintenance.
  • the ontology structures may also vary based on software vocabulary of a targeted user of the app 120. For example, different users may prefer different terminologies for the same entity.
  • an entity called ‘Feature‘ may be referred to as ‘Minimum Marketable Feature‘.
  • the ontology structures also define relationships between the mapped entities. For example, if changes in a first entity impact a second entity, then the second entity shares a parent-child relationship with the first entity.
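Ontology mapping of this kind can be sketched as a lookup that renames entities for the target domain and records parent-child edges between entities that impact one another. The mapping tables and the edge encoding below are illustrative assumptions.

```python
# Illustrative ontology mapping: a selected ontology structure renames
# entities for the target domain, and impact relationships are recorded as
# parent-child edges. Tables and names are assumptions for this sketch.

ONTOLOGIES = {
    "manufacturing": {"Employee number": "Part number"},
    "marketing": {"Feature": "Minimum Marketable Feature"},
}

def map_entities(entity_names, ontology_config):
    """Map each entity name via the selected ontology structure."""
    mapping = ONTOLOGIES[ontology_config]
    return [mapping.get(name, name) for name in entity_names]

def impact_relationship(first, second):
    """Changes in `first` impact `second`, so `second` is its child."""
    return (first, "hasChild", second)

mapped = map_entities(["Employee number", "Salary"], "manufacturing")
edge = impact_relationship("Part number", "Work order")
```

Entities without a domain-specific mapping (here, "Salary") pass through unchanged, which mirrors the text's point that ontology structures only rename terminology where the target domain or user vocabulary differs.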
  • the artifact management module 106 and/or the processor 102 may be further configured to generate a knowledge graph for the artifacts based on the mapped entities. More specifically, triples are generated for the mapped entities using predefined libraries. In an embodiment, each of the mapped entities is processed using a Web Ontology Language (OWL) library to generate the triples. The triples are further stored as a knowledge graph in a graph database.
  • the term ‘triple’ as used herein refers to a set of three entities that codifies a statement about the mapped entities in the form of subject–predicate–object expressions.
  • the knowledge graph is a graph structure defined by nodes, edges and properties.
  • the knowledge graph is based on Resource Description Framework (RDF) format.
  • the knowledge graph may be stored in the graph database.
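The triple and knowledge-graph structures described above can be illustrated in plain Python. A production system would likely use an OWL/RDF library and a graph database; the artifact identifiers and property names below are invented for this sketch.

```python
# Minimal plain-Python illustration of subject-predicate-object triples and
# the nodes/edges/properties view of a knowledge graph built from them.
# Identifiers and property names are assumptions for illustration.

triples = {
    ("us:US-1", "rdf:type",     "ex:UserStory"),
    ("us:US-1", "ex:coveredBy", "tc:TC-9"),
    ("tc:TC-9", "rdf:type",     "ex:TestCase"),
}

# The knowledge graph view: nodes plus labeled edges derived from triples.
nodes = {s for s, _, _ in triples} | {o for _, _, o in triples}
edges = {(s, o): p for s, p, o in triples}

# Traversal example: which artifacts cover user story US-1?
covering = [o for (s, o), p in edges.items()
            if s == "us:US-1" and p == "ex:coveredBy"]
```

This shows how weak traceability becomes explicit: the `ex:coveredBy` edge links a user story directly to the test case that covers it.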
  • the artifact management module 106 may provide an application management UI 116 to a user.
  • the appli- cation management UI 116 may enable the user to modify the app 120 and to search for artifacts stored in the graph database.
  • the term ‘user’ here relates to the user developing or creating the app 120, not to the app user who runs the app 120 on his or her target device 140.
  • the artifact management module 106 and/or the processor 102 may further be configured to capture the user’s intent to search at least one of the artifacts in response to user interactions with the application management UI 116.
  • the application management UI 116 may provide the user a search UI 123.
  • the user may input or type in the semantic query in the search UI 123.
  • the knowledge graph is queried based on a semantic query received from a user.
  • the knowledge graph is queried using SPARQL.
  • SPARQL is a semantic query language that facilitates querying and manipulation of data stored in the knowledge graph.
  • an output is generated based on querying of the knowledge graph.
  • the generated output is displayed via the application management UI 116 to the user.
  • the generated output indicates the respective, searched artifact or the relationship of the respective, searched artifact to other artifacts. For example, if the semantic query corresponds to “User story”, the knowledge graph is queried to identify nodes that correspond to user stories of the app 120.
  • the generated output may include one or more user stories corresponding to the app 120. Further, the generated output may also include other nodes that may be related to the nodes corresponding to the user stories.
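A query of the kind described (finding user stories and their related nodes) might be expressed in SPARQL roughly as follows. The prefixes and property names are assumptions; an actual deployment would use the vocabulary of its own ontology structure.

```sparql
# Hypothetical query: all user-story nodes, plus any related nodes.
# The ex: vocabulary is an assumption for this sketch.
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ex:  <http://example.org/artifacts#>

SELECT ?story ?related
WHERE {
  ?story rdf:type ex:UserStory .
  OPTIONAL { ?story ex:relatedTo ?related . }
}
```

The OPTIONAL clause lets the query return user stories even when no related nodes exist, matching the description that related nodes may also be included in the output.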
  • the artifact management module 106 and/or the processor 102 is configured to provide a graphic visualization of the knowledge graph on the application management UI 116.
  • the user may interact with the application management UI 116 to traverse the knowledge graph. For example, the user may click on a specific node in the knowledge graph to view details including related information associated with the node.
  • the artifact management module 106 and/or the processor 102 is configured to capture the user’s intent to import the respective, searched artifact to the application management UI 116 in response to user interactions with the application management UI 116.
  • the user may interact with the application management UI 116 to express his or her intent to import the respective artifact 122 in order to modify the app 120.
  • the import may be done by “drag and drop” or by a “dropdown” window in the application management UI 116.
  • the respective artifact 122 may be copied to the application management UI 116.
  • the import of the respective artifact 122 may comprise importing metadata of the respective artifact 122, as explained in more detail below.
  • metadata may comprise information on the origin of the artifact, such as the respective data source 122, author of the artifact, date of creation, debugging history, machines or devices used for creating the artifact, etc.
  • the mentioned metadata may, by way of example, comprise possible statuses of the artifacts, such as started, stopped, pending, completed, or information on the possible changes or transitions between statuses.
  • the mentioned metadata may comprise information on the type and/or format of the respective artifact 122, in the form of integers, decimal numbers, text strings, Boolean data, etc. Further, more complex or composite information may be used, such as pictures, photos, sound data, etc.
  • the import of the respective artifact 122 may further comprise connectivity information which may be required to allow for obtaining or retrieving the respective artifact 122 from the respective data source 122.
  • Such connectivity information may, by way of example, allow for establishing a communication connection with the respective data source 122.
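As a hedged illustration, the metadata and connectivity information described above could be grouped in a simple record. Every field name below is an assumption chosen for illustration, not taken from the patent.

```python
from dataclasses import dataclass

# Hedged sketch: one possible grouping of the artifact metadata and
# connectivity information described above. All field names are
# illustrative assumptions.
@dataclass
class ArtifactMetadata:
    source: str               # origin, e.g. the respective data source
    author: str               # author of the artifact
    created: str              # date of creation
    status: str = "started"   # e.g. started, stopped, pending, completed
    data_type: str = "text"   # integer, decimal, text string, Boolean, ...
    connection_url: str = ""  # connectivity info for reaching the data source

meta = ArtifactMetadata(source="issue-tracker", author="dev1",
                        created="2022-01-31", status="completed")
```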
  • the artifact management module 106 and/or the processor 102 may further be configured to modify the app 120 through the application management UI 116 based on the respective, imported, searched artifact. Further, the modified app is deployed and run on a target device 140.
  • the app 120 may be understood as deployed once the activities required to make the app 120 available for use by the app user on the target device 140 have been carried out.
  • the app deployment process may comprise several interrelated activities with possible transitions between them.
  • the app deployment process may comprise at least the release of the app 120 and the installation and the activation of the app 120.
  • the release activity may follow from the completed development process and is sometimes classified as part of the development process rather than the deployment process. It may comprise operations required to prepare a system (here: e.g., the application development platform 118 or an on-line app store) for assembly and transfer to the computer system(s) (here: e.g., the application development platform 118) on which it will be run in production.
  • the installation of the app 120 may involve establishing some form of command, shortcut, script or service for executing the software (manually or automatically) of the app 120.
  • it may involve configuration of the system – possibly by asking the end user questions about the intended app use, or directly asking them how they would like it to be configured – and/or making all the required subsystems ready to use.
  • Activation may be the activity of starting up the executable component of the software or the app 120 for the first time (which is not to be confused with the common use of the term “activation” concerning a software license, which is a function of Digital Rights Management systems).
  • Once the app 120 has been deployed on the respective target device 140, the app 120 may be put into operation to fulfill the business needs of the app (end) user.
  • the respective target device 140 may be a smartphone, smartwatch, handheld, pad, laptop or the like, or a desktop device, e.g., including desktop computers, or other “smart” devices, e.g., smart television sets, fridges, home or industrial automation devices, wherein smart television sets may, e.g., be a television set with integrated Internet capabilities or a set-top box for television that offers more advanced computing ability and connectivity than a contemporary basic television set.
  • the respective target device 140 may be or comprise a manufacturing operation management (MOM) system, a manufacturing execution system (MES), an enterprise resource planning (ERP) system, a supervisory control and data acquisition (SCADA) system, or any combination thereof.
  • the respective target device 140 on which the app 120 may be deployed and run may use the respective artifact 122 from the data source 122.
  • the respective target device 140 may be part of a complex production line or production plant, e.g., a bottle filling machine, conveyor, welding machine, welding robot, etc.
  • the application development platform 118 may comprise the above-described functionalities of visual model-based representations, visual model-based apps, and/or visual models and, by way of example, be a visual model-based application development platform or a low-code application development platform.
  • the application management UI 116 may provide an interactive user interface of the application development platform 118 which supports and enables the user to manage the artifacts.
  • app 120 may be or comprise a software program which on execution performs specific desired tasks.
  • the artifact management module 106 and/or the processor 102 may further be configured to modify the app 120 through the application management UI 116 by using the imported artifacts.
  • the app 120 may be developed and eventually be completed by the visual model-based application development platform 118 taking into account the user’s input provided by his/her interactions with the application management UI 116 and using the imported artifacts 122.
  • the user’s input may also include a name, an identifier, and the current version of the app 120.
  • a computer-readable medium 160 which may comprise a computer program product 162 is shown in FIG 1, wherein the computer program product 162 may be encoded with executable instructions that, when executed, cause the computer system 100 and/or the application development platform 118 to carry out the described method.
  • FIG 2 illustrates a functional block diagram indicating a workflow 200 for managing artifacts related to an app, in accordance with an exemplary embodiment of the present invention.
  • the workflow comprises a plurality of data sources 205A, 205B and 205C.
  • the data sources 205A, 205B and 205C are associated with data connectors 210A, 210B and 210C respectively.
  • Each of the data connectors 210 exposes a granular API associated with a respective data source 205 to a data layer 215.
  • the granular API exposes artifacts relevant to a predetermined schema of the data layer 215.
  • the data layer 215 stores the relevant artifacts extracted from the source in the form of semantic data.
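As an illustrative sketch of the granular-API idea described above, a data connector can be pictured as projecting each source record onto the predetermined schema of the data layer, so that only schema-relevant artifact data reaches the data layer. The schema fields and records below are assumed examples, not taken from the patent.

```python
# Illustrative sketch: a data connector's "granular API" pictured as a
# projection of each source record onto the data layer's predetermined
# schema. Schema fields and records are assumed examples.

SCHEMA_FIELDS = ("id", "kind", "title")  # predetermined schema of the data layer

def connector(raw_records, schema_fields):
    """Expose only the schema-relevant fields of each source record."""
    return [{k: r[k] for k in schema_fields if k in r} for r in raw_records]

raw = [{"id": "US-1", "kind": "userstory", "title": "Login",
        "internal_db_key": 42}]           # extra field is not schema-relevant
data_layer = connector(raw, SCHEMA_FIELDS)
```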
  • an entity identifier component 220 identifies entities present in the semantic data within the data layer 215.
  • the term ‘component’ as used herein refers to a piece of executable software code that causes a processor (similar to the processor 102) to perform a predefined function.
  • the entity identifier component 220, when executed, causes the processor to identify entities present in the semantic data.
  • the entity identifier component 220 is configured to identify a specific class of entities from the semantic data based on an entity configuration parameter.
  • an ontology mapper component 225 maps the extracted entities to an ontology structure based on an ontology configuration parameter.
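One possible, purely illustrative shape of the entity identifier 220 and the ontology mapper 225 is a filter driven by the entity configuration parameter followed by a lookup driven by the ontology configuration parameter. All data structures and names below are assumptions for illustration only.

```python
# Purely illustrative sketch of the entity identifier (a filter driven by
# an entity configuration parameter) and the ontology mapper (a lookup
# driven by an ontology configuration parameter). All names are assumed.

semantic_data = [
    {"id": "US-1", "kind": "userstory", "title": "Login page"},
    {"id": "BUG-7", "kind": "defect", "title": "Crash on save"},
    {"id": "US-2", "kind": "userstory", "title": "Search UI"},
]

entity_config = {"classes": ["userstory"]}        # entity configuration parameter
ontology_config = {"userstory": "req:UserStory"}  # ontology configuration parameter

def identify_entities(data, config):
    """Identify the specific class of entities named in the configuration."""
    return [d for d in data if d["kind"] in config["classes"]]

def map_to_ontology(entities, config):
    """Map each identified entity to its ontology class as an RDF-style triple."""
    return [(e["id"], "rdf:type", config[e["kind"]]) for e in entities]

mapped = map_to_ontology(identify_entities(semantic_data, entity_config),
                         ontology_config)
```

The resulting triples are exactly the kind of input from which the knowledge graph component can build a graph.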
  • a knowledge graph component 230 computes triples of the mapped entities, i.e., the artifacts, to generate a knowledge graph.
  • the generated knowledge graph is further stored in a graph database 235.
  • the graph database 235 may be accessed by a user, via an application management UI 240 for accessing the artifacts in the knowledge graph.
  • the user may enter semantic queries on the application management UI 240 for searching or accessing the artifacts in the graph database 235. Further, the semantic query is processed by a query engine 245, for example based on SPARQL, to identify the searched artifact from the graph database 235. Upon querying, the relevant artifact is displayed to the user, for example via graphic visualization, on the application management UI 240. The query engine 245 may also facilitate importing of the searched artifact to the application management UI 240.
  • FIG 3 illustrates an example of the graph visualization of a knowledge graph on the application management UI 240, in accordance with an embodiment of the present invention.
  • the knowledge graph comprises a plurality of nodes interconnected using edges.
  • Each of the nodes indicates an artifact associated with an app.
  • Each of the edges indicates a parent-child relationship between respective nodes. For example, child nodes are impacted by respective parent nodes.
  • the user may click or zoom in on a specific node to view details associated with the respective artifact.
  • each of the nodes may include links to storage locations associated with the respective artifacts. When the user clicks on the node, the respective link may enable the user to access the artifact.
  • the node 310 represents a Minimum Marketable Entity (MME1) and has two child nodes 315A, 315B that represent the artifacts MMF1 and MMF2, the Minimum Marketable Features corresponding to MME1.
  • the node 315A has a child node Userstory1 320A that represents a user story corresponding to MMF1 315A.
  • the node 315B has two child nodes Userstory2 320B and Userstory3 320C that represent user stories corresponding to MMF2 315B.
  • Userstory1 320A has two child nodes Task1 325A and Task2 325B that represent tasks related to Userstory1.
  • Userstory3 320C has a child node Task4 325C that represents a task related to Userstory3 320C.
  • the node 325A has a child node changeset1 330A and the node 325C has a child node changeset4 330B.
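The parent-child structure of FIG 3 can be modeled as an adjacency list, and the rule that child nodes are impacted by their parent nodes then becomes a simple downward traversal. The sketch below uses the node names from the figure; the traversal function itself is an illustration, not part of the patent.

```python
# The FIG 3 example as an adjacency list; edges point from parent to child.
GRAPH = {
    "MME1": ["MMF1", "MMF2"],
    "MMF1": ["Userstory1"],
    "MMF2": ["Userstory2", "Userstory3"],
    "Userstory1": ["Task1", "Task2"],
    "Userstory3": ["Task4"],
    "Task1": ["changeset1"],
    "Task4": ["changeset4"],
}

def impacted(node, graph):
    """All descendants of a node, i.e. artifacts impacted by changing it."""
    result = []
    for child in graph.get(node, []):
        result.append(child)
        result.extend(impacted(child, graph))
    return result
```

For example, modifying Userstory3 impacts Task4 and, transitively, changeset4.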
  • FIG 4 shows a flowchart of a method 400 for managing artifacts related to an app, in accordance with an embodiment of the present invention.
  • the method may start at 405 and the methodology may comprise several method steps carried out through operation of at least one processor.
  • the artifacts related to the app are extracted from one or more data sources, via one or more data connectors.
  • one or more entities are identified from the extracted artifacts based on an entity configuration parameter.
  • the one or more entities identified are mapped to an ontology structure based on an ontology configuration parameter.
  • a knowledge graph for the artifacts is generated based on the mapped entities. More specifically, triples are generated for the mapped entities using predefined libraries.
  • the methodology may end. It should further be appreciated that the methodology 400 may comprise other acts and features discussed previously with respect to the computer-implemented method of managing artifacts related to apps.
  • the method may further comprise the acts of providing an application management user interface (UI) of an application development platform to the user; of capturing the user’s intent to search at least one of the artifacts in response to user interactions with the application management UI; of querying the knowledge graph based on the user’s intent; of generating an output based on querying of the knowledge graph; and of displaying the generated output via the application management UI to the user, the generated output indicating the respective, searched artifact or the relationship of the respective, searched artifact to other artifacts.
  • the methodology may further comprise the acts of capturing the user’s intent to import the respective, searched artifact to the application management UI in response to user interactions with the application management UI; of importing the respective, searched artifact corresponding to the captured user’s import intent to the application management UI; and of displaying the respective, searched artifact on the application management UI.
  • the methodology may further comprise the acts of modifying the app through the application management UI or developing a new app based on the respective, imported, searched artifact; and of deploying and running the modified app on a target device.
  • acts associated with these methodologies may be carried out by one or more processors.
  • processor(s) may be included in one or more data processing systems, for example, that execute software components operative to cause these acts to be carried out by the one or more processors.
  • software components may comprise computer-executable instructions corresponding to routines, sub-routines, programs, applications, modules, libraries, threads of execution, and/or the like.
  • software components may be written in and/or produced by software environments/languages/frameworks such as Java, JavaScript, Python, C, C#, C++ or any other software tool capable of producing components and graphical user interfaces configured to carry out the acts and features described herein.
  • the suggested approach offers several advantages over other approaches. E.g., the suggested approach helps with improving traceability of artifacts related to apps.
  • the knowledge graph-based approach provides an intuitive understanding of artifacts and relationships between artifacts, thereby easing searching for artifacts and comprehension of traceability information. Further, the knowledge graph-based approach provides backward traceability along with forward traceability which is currently not present in any Application Lifecycle Management tools.
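Backward traceability can be sketched by inverting the forward (parent-to-child) edge map and walking upward toward the originating requirement. The chain below uses invented node names and is an illustration of the idea only.

```python
# Sketch: forward traceability follows parent-to-child edges; backward
# traceability is obtained by inverting the edge map. Node names invented.

FORWARD = {
    "Requirement1": ["Userstory1"],
    "Userstory1": ["Task1"],
    "Task1": ["changeset1"],
}

def invert(edges):
    """Build the child-to-parent map that enables backward traceability."""
    backward = {}
    for parent, children in edges.items():
        for child in children:
            backward.setdefault(child, []).append(parent)
    return backward

def trace_back(node, backward):
    """Walk from an artifact back toward its originating requirement."""
    chain = []
    while node in backward:
        node = backward[node][0]  # follow the first parent for simplicity
        chain.append(node)
    return chain

BACKWARD = invert(FORWARD)
```

Tracing changeset1 backward thus recovers the task, user story, and requirement it implements.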
  • the suggested approach supports rapid look-ups and responses to complex queries that may not be easily performed using existing art.
  • the suggested approach also provides graphical visualization, i.e., of the knowledge graph via the application management UI, for better comprehension of the relationships between the artifacts by a user. Further, the user may easily identify design flaws, e.g., unrelated artifacts being linked to an app.
  • the impact of a modification in an artifact on other artifacts may be identified at an early stage, based on the knowledge graph, thereby helping with quality improvement and mitigation of potential risks resulting from such modification.
  • the suggested approach also helps in ensuring that requirements are fulfilled when re-architecting or modifying the app.
  • specific requirements may be associated with specific bugs in the source code. The suggested approach helps in identifying any such potential bugs with reduced effort.
  • FIG 5 illustrates a block diagram of a data processing system 1000 (also referred to as a computer system) in which an embodiment can be implemented, for example, as a portion of a product system, and/or other system operatively configured by software or otherwise to perform the processes as described herein.
  • the data processing system 1000 may include, for example, the application development platform 118 and/or the computer system or data processing system 100 mentioned above.
  • the data processing system depicted includes at least one processor 1002 (e.g., a CPU) that may be connected to one or more bridges/controllers/buses 1004 (e.g., a north bridge, a south bridge).
  • One of the buses 1004, for example, may include one or more I/O buses such as a PCI Express bus.
  • Also connected to various buses in the depicted example may be a main memory 1006 (RAM) and a graphics controller 1008.
  • the graphics controller 1008 may be connected to one or more display devices 1010. It should also be noted that in some embodiments one or more controllers (e.g., graphics, south bridge) may be integrated with the CPU (on the same chip or die).
  • examples of CPU architectures include the IA-32, x86-64, and ARM processor architectures.
  • Other peripherals connected to one or more buses may include communication controllers 1012 (Ethernet controllers, Wi-Fi controllers, cellular controllers) operative to connect to a local area network (LAN), wide area network (WAN), a cellular network, and/or other wired or wireless networks 1014 or communication equipment.
  • Further components connected to various busses may include one or more I/O controllers 1016 such as USB controllers, Bluetooth controllers, and/or dedicated audio controllers (connected to speakers and/or microphones).
  • peripherals may be connected to the I/O controller(s) (via various ports and connections) including input devices 1018 (e.g., keyboard, mouse, pointer, touch screen, touch pad, drawing tablet, trackball, buttons, keypad, game controller, gamepad, camera, microphone, scanners, motion sensing devices that capture motion gestures), output devices 1020 (e.g., printers, speakers) or any other type of device that is operative to provide inputs to or receive outputs from the data processing system.
  • many devices referred to as input devices or output devices may both provide inputs and receive outputs of communications with the data processing system.
  • the processor 1002 may be integrated into a housing (such as a tablet) that includes a touch screen that serves as both an input and display device.
  • some devices, such as a laptop, may include a plurality of different types of input devices (e.g., touch screen, touch pad, and keyboard).
  • other peripheral hardware 1022 connected to the I/O controllers 1016 may include any type of device, machine, or component that is configured to communicate with a data processing system. Additional components connected to various busses may include one or more storage controllers 1024 (e.g., SATA).
  • a storage controller may be connected to a storage device 1026 such as one or more storage drives and/or any associated removable media, which can be any suitable non-transitory machine-usable or machine-readable storage medium. Examples include nonvolatile devices, volatile devices, read-only devices, writable devices, ROMs, EPROMs, magnetic tape storage, floppy disk drives, hard disk drives, solid-state drives (SSDs), flash memory, optical disk drives (CDs, DVDs, Blu-ray), and other known optical, electrical, or magnetic storage devices, drives, and/or computer media. Also, in some examples, a storage device such as an SSD may be connected directly to an I/O bus 1004 such as a PCI Express bus.
  • a data processing system in accordance with an embodiment of the present disclosure may include an operating system 1028, software/firmware 1030, and data stores 1032 (that may be stored on a storage device 1026 and/or the memory 1006).
  • Such an operating system may employ a command line interface (CLI) shell and/or a graphical user interface (GUI) shell.
  • the GUI shell permits multiple display windows to be presented in the graphical user interface simultaneously, with each display window providing an interface to a different application or to a different instance of the same application.
  • a cursor or pointer in the graphical user interface may be manipulated by a user through a pointing device such as a mouse or touch screen.
  • the position of the cursor/pointer may be changed and/or an event, such as clicking a mouse button or touching a touch screen, may be generated to actuate a desired response.
  • operating systems that may be used in a data processing system may include Microsoft Windows, Linux, UNIX, iOS, and Android operating systems.
  • data stores include data files, data tables, relational databases (e.g., Oracle, Microsoft SQL Server), database servers, or any other structure and/or device that is capable of storing data which is retrievable by a processor.
  • the communication controllers 1012 may be connected to the network 1014 (not a part of data processing system 1000), which can be any public or private data processing system network or combination of networks, as known to those of skill in the art, including the Internet.
  • Data processing system 1000 can communicate over the network 1014 with one or more other data processing systems such as a server 1034 (also not part of the data processing system 1000).
  • an alternative data processing system may correspond to a plurality of data processing systems implemented as part of a distributed system in which processors associated with several data processing systems may be in communication by way of one or more network connections and may collectively perform tasks described as being performed by a single data processing system.
  • as used herein, a data processing system may also be implemented across several data processing systems organized in a distributed system in communication with each other via a network.
  • the term “controller” means any device, system or part thereof that controls at least one operation, whether such a device is implemented in hardware, firmware, software or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely.
  • data processing systems may be implemented as virtual machines in a virtual machine architecture or cloud environment.
  • the processor 1002 and associated components may correspond to a virtual machine executing in a virtual machine environment of one or more servers. Examples of virtual machine architectures include VMware ESXi, Microsoft Hyper-V, Xen, and KVM.
  • the data processing system 1000 in this example may correspond to a computer, workstation, server, PC, notebook computer, tablet, mobile phone, and/or any other type of apparatus/system that is operative to process data and carry out functionality and features described herein associated with the operation of a data processing system, computer, processor, and/or a controller discussed herein.
  • the depicted example is provided for the purpose of explanation only and is not meant to imply architectural limitations with respect to the present disclosure.
  • the processor described herein may be located in a server that is remote from the display and input devices described herein.
  • the described display device and input device may be included in a client device that communicates with the server (and/or a virtual machine executing on the server) through a wired or wireless network (which may include the Internet).
  • a client device may execute a remote desktop application or may correspond to a portal device that carries out a remote desktop protocol with the server to send inputs from an input device to the server and receive visual information from the server to display through a display device.
  • remote desktop protocols include Teradici's PCoIP, Microsoft's RDP, and the RFB protocol.
  • the processor described herein may correspond to a virtual processor of a virtual machine executing in a physical processor of the server.
  • the terms “component” and “system” are intended to encompass hardware, software, or a combination of hardware and software.
  • a system or component may be a process, a process executing on a processor, or a processor.
  • a component or system may be localized on a single device or distributed across several devices.
  • a processor corresponds to any electronic device that is configured via hardware circuits, software, and/or firmware to process data.
  • processors described herein may correspond to one or more (or a combination of) microprocessors, CPUs, FPGAs, ASICs, or any other integrated circuit (IC) or other type of circuit that is capable of processing data in a data processing system, which may have the form of a controller board, computer, server, mobile phone, and/or any other type of electronic device.
  • data processing system 1000 may conform to any of the various current implementations and practices known in the art.
  • words or phrases used herein should be construed broadly, unless expressly limited in some examples.
  • the terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation.
  • the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
  • the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
  • the term “or” is inclusive, meaning and/or, unless the context clearly indicates otherwise.
  • the phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like.
  • the terms “first”, “second”, “third” and so forth may be used herein to describe various elements, functions, or acts; however, these elements, functions, or acts should not be limited by these terms. Rather, these numeral adjectives are used to distinguish different elements, functions or acts from each other.
  • phrases such as “processor is configured to” carry out one or more functions or processes may mean the processor is operatively configured to or operably configured to carry out the functions or processes via software, firmware, and/or wired circuits.
  • a processor that is configured to carry out a function/process may correspond to a processor that is executing the software/firmware, which is programmed to cause the processor to carry out the function/process and/or may correspond to a processor that has the software/firmware in a memory or storage device that is available to be executed by the processor to carry out the function/process.
  • a processor that is “configured to” carry out one or more functions or processes may also correspond to a processor circuit particularly fabricated or “wired” to carry out the functions or processes (e.g., an ASIC or FPGA design).
  • the phrase “at least one” before an element (e.g., a processor) that is configured to carry out more than one function may correspond to one or more elements (e.g., processors) that each carry out the functions and may also correspond to two or more of the elements (e.g., processors) that respectively carry out different ones of the one or more different functions.
  • the term “adjacent to” may mean that an element is relatively near to but not in contact with a further element, or that the element is in contact with the further element, unless the context clearly indicates otherwise.


Abstract

A computer system and method for managing artifacts related to an app is disclosed. The method comprises extracting, by a processing unit, the artifacts related to the app from one or more data sources, via one or more data connectors. Further, one or more entities are identified from the extracted artifacts based on an entity configuration parameter. The one or more entities are further mapped to an ontology structure based on an ontology configuration parameter. Further, a knowledge graph is generated for the artifacts based on the mapped entities.

Description

SYSTEM AND METHOD FOR MANAGING ARTIFACTS RELATED TO APPS

Technical field

The present invention relates to software management systems, and in particular relates to a computer system and method for managing artifacts related to apps.

Background

In software engineering projects, traceability between artifacts is useful. The artifacts include work items such as business requirements, software requirements, software models, software components, code files, test cases, and defects. When traceability between the artifacts is weak or missing, the impact of a defect fix on other aspects of the project may not be understood well enough by developers or testers. This may lead to new issues or defects in subsequent developments within the project. Weak or missing traceability between the artifacts leads to either a decrease in efficiency of software development, low product quality, or both. Developers may find it difficult to perform impact analysis when they are not knowledgeable about the system/component. This can reduce the code quality. Developers or architects may not take enough measures before designing the solution because of a lack of knowledge of the impact on developed features. This may result in more defects, lack of scalability of the solution, etc. Therefore, lack of traceability leads to decreased efficiency during project development and poor quality of the solution. In light of the above, there exists a need for improving traceability of artifacts associated with an app.

Summary

Variously disclosed embodiments comprise methods and computer systems that may be used to facilitate managing of artifacts related to an app.
According to a first aspect of the invention, a computer-implemented method of managing artifacts related to an app may comprise extracting, by a processing unit, the artifacts related to the app from one or more data sources, via one or more data connectors; identifying one or more entities from the extracted artifacts based on an entity configuration parameter; mapping the one or more entities to an ontology structure based on an ontology configuration parameter; and generating a knowledge graph for the artifacts based on the mapped entities.

According to a second aspect of the invention, a computer system may be arranged and configured to execute the steps of this computer-implemented method of managing an app. In particular, the described computer system may be arranged and configured to execute the following steps: extracting, by a processing unit, the artifacts related to the app from one or more data sources, via one or more data connectors; identifying one or more entities from the extracted artifacts based on an entity configuration parameter; mapping the one or more entities to an ontology structure based on an ontology configuration parameter; and generating a knowledge graph for the artifacts based on the mapped entities.

According to a third aspect of the invention, a computer-readable medium may be encoded with executable instructions that, when executed, cause the computer system to carry out the described computer-implemented method of managing artifacts related to an app.

According to a fourth aspect of the invention, a computer program product may comprise computer program code which, when executed by a computer system, causes the computer system to carry out this computer-implemented method of managing artifacts related to an app.

By way of example, the described computer-readable medium may be non-transitory and may further be a software component on a storage device.
In some examples, the mentioned application development platform may be a visual model-based and/or low-code app development platform, which is described in more detail below.

The foregoing has outlined rather broadly the technical features of the present disclosure so that those skilled in the art may better understand the detailed description that follows. Additional features and advantages of the disclosure will be described hereinafter that form the subject of the claims. Those skilled in the art will appreciate that they may readily use the conception and the specific embodiments disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the disclosure in its broadest form.

Also, before undertaking the detailed description below, it should be understood that various definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art will understand that such definitions apply in many, if not most, instances to prior as well as future uses of such defined words and phrases. While some terms may comprise a wide variety of embodiments, the appended claims may expressly limit these terms to specific embodiments. 
Brief description of figures

FIG 1 illustrates a functional block diagram of an example computer system or data processing system that facilitates management of artifacts related to apps, in accordance with an embodiment of the present invention;

FIG 2 illustrates a functional block diagram indicating a workflow 200 for managing artifacts related to an app, in accordance with an exemplary embodiment of the present invention;

FIG 3 illustrates an example of the graph visualization of a knowledge graph on the application management UI, in accordance with an embodiment of the present invention;

FIG 4 shows a flowchart of a method for managing artifacts related to an app, in accordance with an embodiment of the present invention; and

FIG 5 illustrates a block diagram of a data processing system, in accordance with an embodiment of the present invention.

Detailed description

Various technologies that pertain to systems and methods for managing artifacts related to apps in a product system will now be described with reference to the drawings, where like reference numerals represent like elements throughout. The drawings discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document, are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged apparatus. It is to be understood that functionality that is described as being carried out by certain system elements may be performed by multiple elements. Similarly, an element may be configured to perform functionality that is described as being carried out by multiple elements. The numerous innovative teachings of the present patent document will be described with reference to exemplary non-limiting embodiments. 
An app generally refers to a software program which on execution performs specific desired tasks. In general, several apps are executed in a runtime environment containing one or more operating systems (“OSs”), virtual machines (e.g., supporting the Java™ programming language), device drivers, etc.

Apps, including native applications, can be created, edited, and represented using traditional source code. Examples of such traditional source code comprise C, C++, Java, Flash, Python, Perl, and other script-based methods of representing an app. Developing, creating, and managing such script-based apps, or parts of such script-based apps, can be accomplished by manual coding of suitably trained users.

Developers often use Application Development Frameworks (“ADFs”) (which are by themselves applications or apps) for implementing/developing desired apps. An ADF provides a set of pre-defined code/data modules that can be directly/indirectly used in the development of an app. An ADF may also provide tools such as an Integrated Development Environment (“IDE”), code generators, debuggers, etc., which facilitate a developer in coding/implementing the desired logic of the app in a faster/simpler manner.

In general, an ADF simplifies app development by providing reusable components which can be used by app developers to define user interfaces (“UIs”) and app logic by, for example, selecting components to perform desired tasks and defining the appearance, behavior, and interactions of the selected components. Some ADFs are based on a model-view-controller design pattern that promotes loose coupling and easier app development and maintenance.

According to another approach, apps can also be created, edited, and represented using visual model-based representations. 
Unlike traditional source code implementations, such apps can be created, edited, and/or represented by drawing, moving, connecting, and/or disconnecting visual depictions of logical elements within a visual modeling environment. Visual model-based representations of apps can use symbols, shapes, lines, colors, shades, animations, and/or other visual elements to represent logic, data or memory structures, or user interface elements. In order to program a traditional script-based app, programmers are typically required to type out detailed scripts according to a complicated set of programming syntax rules. In contrast, programming a visual model-based app can, in some cases, be done by connecting various logical elements (e.g., action blocks and/or decision blocks) to create a visual flow chart that defines the app's operation. Similarly, defining data structures (e.g., variable types, database objects, or classes) and/or user interface elements (e.g., dropdown boxes, lists, text input boxes) in a visual model-based app can be done by drawing, placing, or connecting visual depictions of logical elements within a virtual workspace, as opposed to typing out detailed commands in a script. Visual model-based apps, including native apps, can therefore be more intuitive to program and/or edit compared to traditional script-based apps. In the present document, an approach is suggested to manage artifacts related to apps, which may involve the explained visual model-based representations. For brevity, references to a “model,” a “visual model,” or an “application” or “app” should be understood to refer to visual model-based apps, including native apps, unless specifically indicated. In some cases, such visual model-based apps can represent complete, stand-alone apps for execution on a computer system. 
Visual model-based apps can also represent discrete modules that are configured to perform certain tasks or functions, but which do not represent complete apps; instead, such discrete modules can be inserted into a larger app or combined with other discrete modules to perform more complicated tasks. Examples of such discrete modules can comprise modules for validating a ZIP code, for receiving information regarding current weather from a weather feed, and/or for rendering graphics.

Visual models may be represented in two forms: an internal representation and one or more associated visual representations. The internal representation may be a file encoded according to a file format used by a modeling environment to capture and define the operation of an app (or part of an app). For example, the internal representation may define what inputs an app can receive, what outputs an app can provide, the algorithms and operations by which the app can arrive at results, what data the app can display, what data the app can store, etc. The internal representation may also be used to instruct an execution environment how to execute the logic of the app during run-time. Internal representations may be stored in the form of non-human-readable code (e.g., binary code). Internal representations may also be stored according to a binary stored JSON (JavaScript Object Notation) format and/or an XML format. At run-time, an execution engine may use an internal representation to compile and/or generate executable machine code that, when executed by a processor, causes the processor to implement the functionality of the model.

The internal representation may be associated with one or more visual representations. Visual representations may comprise visual elements that depict how an app's logic flows, but which are not designed to be compiled or executed. 
These visual representations may include, for example, flow charts or decision trees that show a user how the app will operate. The visual models may also visually depict data that is to be received from the user, data that is to be stored, and data that is to be displayed to the user. These visual models may also be interactive, which allows a user to manipulate the model in an intuitive way. For example, visual representations may be configured to display a certain level of detail (e.g., number of branches, number of displayed parameters, granularity of displayed logic) by default. However, users may interact with the visual representation in order to show a desired level of detail; for example, users may display or hide branches of logic, and/or display or hide sets of parameters. Details relating to an element of the visual model may be hidden from view by default but can appear in a sliding window or pop-up that appears on-screen when the user clicks on the appropriate element. Users may also zoom in or out of the model, and/or pan across different parts of the model, to examine different parts of the model. Users may also copy or paste branches of logic from one section of the model into another section, or copy/paste branches of logic from a first model into a second model. In some cases, parts of the model may contain links to other parts of the model, such that if a user clicks on a link, the user will automatically be led to another part of the model. A viewing user may interact with a visual representation in at least some of the same ways that the viewing user might interact with the model if it were displayed within a modeling environment. In other words, the visual representation may be configured to mimic how the model would appear if it were displayed within a visual modeling environment. A single internal representation may correspond to multiple visual representations that use different styles or formatting rules to display app logic. 
For instance, multiple visual representations corresponding to the same internal representation may differ from one another in their use of color, elements that are comprised or omitted, and use of symbols, shapes, lines, colors, and/or shades to depict logic flow.

Approaches involving the above-described functionalities of visual model-based representations, visual model-based apps, and/or visual models are sometimes understood to be comprised by a so-called low-code development platform. By way of example, such a low-code development platform may further be described as software that provides a development environment used to create application software through graphical user interfaces and configuration instead of traditional hand-coded computer programming. A low-code model may enable developers of varied experience levels to create applications using a visual user interface in combination with model-driven logic. Such low-code development platforms may produce entirely operational apps or require additional coding for specific situations. Low-code app development platforms may reduce the amount of traditional hand coding, enabling accelerated delivery of business apps. A common benefit is that a wider range of people can contribute to the app's development, not only those with formal programming skills. Low-code app development platforms can also lower the initial cost of setup, training, deployment, and maintenance.

With reference to FIG 1, a functional block diagram of an example computer system or data processing system 100 that facilitates management of artifacts related to apps is illustrated, in accordance with an embodiment of the present invention.

The computer system 100 may include a visual model-based application development platform 118 including at least one processor 102 that is configured to execute at least one artifact management module 106 from a memory 104 accessed by the processor 102. Herein, the visual model-based application development platform 118 may include the above-described functionalities of visual model-based representations, visual model-based apps, and/or visual models and, by way of example, be a low-code development platform. 
The artifact management module 106 may be configured (i.e., programmed) to cause the processor 102 to carry out various acts and functions described herein. For example, the described artifact management module 106 may include and/or correspond to one or more components of an app development application that is configured to generate and store artifacts related to apps in a data store 108 such as a database.

By way of example, the application development platform 118 may be cloud-based, internet-based, and/or be operated by a provider providing app development and creation support, including, e.g., supporting low-code and/or visual model-based app development. The user may be located close to the application development platform 118 or remote to the application development platform 118, e.g., using a mobile device for connecting to the application development platform 118, e.g., via the internet, wherein the mobile device may include an input device 110 and a display device 112. In some examples, the application development platform 118 may be installed and run on a user’s device, such as a computer, laptop, pad, on-premise computing facility, or the like.

Examples of product systems that may be adapted to include the artifact management features described herein include the low-code software development platform of Mendix Inc., of Boston, Massachusetts, USA. This platform provides tools to build, test, deploy, iterate, develop, create, and manage apps and is based on visual, model-driven software development. 
However, it should be appreciated that the systems and methods described herein may be used in other product systems (e.g., product lifecycle management (PLM), product data management (PDM), or application lifecycle management (ALM) systems) and/or any other type of system that generates a plurality of artifacts during development of apps.

It should be appreciated that it can be difficult and time-consuming to manage artifacts in complex app development and/or management environments. For example, advanced coding or software development or management knowledge may be required of users, or many options need to be selected consciously, both involving many manual steps, which makes for a long and inefficient process.

To enable the enhanced management of artifacts, the described product system or computer system 100 may include at least one input device 110 and at least one display device 112 (such as a display screen). The described processor 102 may be configured to generate a graphical user interface (GUI) 114 through the display device 112. Such a GUI 114 may include GUI elements such as buttons, links, search boxes, lists, text boxes, images, and scroll bars usable by a user to provide inputs through the input device 110 to develop an app 120 and/or to access artifacts related to the app 120. By way of example, the GUI 114 may be an application management UI 116 provided to a user for modifying the app 120, together with a search UI 123 within the application management UI 116. The search UI 123 enables the user to search for artifacts associated with the app 120, stored in a graph database. 
The term ‘artifact’ as used herein may include at least one of requirements, software components, code files, test cases, defects, app data, app components, app architecture, app programming interfaces (APIs), app services, app usages, app links, app description, app dependencies, artifact dependencies, app environments, app tags, business events, event artifacts, notifications, app interfaces, trigger information, user interface designs, maps of reusable IT assets, and links to pages containing information about one or more artifacts.

In an embodiment, the artifact management module 106 and/or the processor 102 may be configured to extract the artifacts 144 related to the app 120 from one or more data sources 122, via one or more data connectors 124. In the present embodiment, the artifacts 144 are stored in the one or more data sources 122. The artifacts may include, but are not limited to, requirements, user stories, epics, features, components, packages, code files, and test cases for development of the app 120. Examples of the data sources 122 may include, but are not limited to, Application Lifecycle Management tools such as TFS, TMS, Jira, Confluence, IBM Jazz, and ClearCase. Each of the data sources 122 includes an Application Programming Interface that may be accessed via the data connector 124. The term ‘data connector’ as used herein refers to standalone software or a function that imports, exports, or converts one data format to another. In the present embodiment, the data connector 124 extracts or imports artifacts 144 from the respective data source 122. More specifically, the data connector 124 connects to the API of the data source 122 and exposes a granular API to a data layer. The data layer is based on a predetermined schema. The granular API exposes artifacts relevant to the predetermined schema of the data layer. The data layer stores the relevant artifacts 144 extracted from the source in the form of semantic data. 
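The extraction step above can be sketched in a few lines of Python. This is a minimal illustration only, not the patented implementation: the source payload stands in for what an ALM tool's REST API (e.g., Jira's) might return, and all field names, the schema, and the `FIELD_MAP` are assumptions chosen for the example.

```python
# Sketch of a data connector that imports artifacts from a data source's
# API and normalizes them to the predetermined schema of the data layer.

PREDETERMINED_SCHEMA = ("id", "type", "title", "linked_to")

def fake_source_api():
    """Stand-in for the data source's API (e.g., a Jira REST endpoint)."""
    return [
        {"key": "PROJ-1", "issuetype": "Story", "summary": "Login page", "links": []},
        {"key": "PROJ-2", "issuetype": "Defect", "summary": "Crash on save", "links": ["PROJ-1"]},
    ]

class JiraConnector:
    """Connects to the source API and exposes a granular API to the data layer."""

    # Maps source-specific field names onto the predetermined schema.
    FIELD_MAP = {"key": "id", "issuetype": "type", "summary": "title", "links": "linked_to"}

    def extract_artifacts(self):
        normalized = []
        for record in fake_source_api():
            # Keep only the fields relevant to the predetermined schema.
            artifact = {self.FIELD_MAP[k]: v for k, v in record.items() if k in self.FIELD_MAP}
            normalized.append(artifact)
        return normalized

artifacts = JiraConnector().extract_artifacts()
```

In this sketch the connector's only job is field renaming and filtering; a real connector would also handle authentication, paging, and format conversion as described above.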
The artifact management module 106 and/or the processor 102 may be further configured to identify one or more entities from the extracted artifacts based on an entity configuration parameter. More specifically, a specific class of entities present in the semantic data within the data layer is identified. The specific class of entities is identified based on an entity configuration parameter. For example, the entities may indicate different items used during a software lifecycle. For example, a first class of entities may correspond to agile methodology and may include ‘user story’, ‘epic’, and ‘feature’. In another example, a second class of entities may correspond to Scrum methodology and may include, but is not limited to, ‘product backlog’, ‘sprint backlog’, and ‘increment’. The entity configuration parameter may be set by a user in order to identify entities belonging to one of the classes of entities. For example, if the development is based on agile methodology, the user may set the entity configuration parameter to identify entities from the first class of entities.

The artifact management module 106 and/or the processor 102 may be further configured to map the one or more entities to an ontology structure based on an ontology configuration parameter. More specifically, the one or more entities identified are mapped to an ontology structure based on an ontology configuration parameter. The ontology structure is selected from a predefined set of ontology structures based on the ontology configuration parameter. For example, the ontology structures may be defined separately for different domains. The domains may vary based on the use of the app 120. Non-limiting examples include human resources, sales, engineering, manufacturing, inventory, design, planning, maintenance, etc. 
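The entity identification step described above can be illustrated with a short sketch. The class names mirror the agile and Scrum examples in the text; the function name, the artifact shape, and the default value of the configuration parameter are assumptions for illustration only.

```python
# Sketch of identifying a class of entities from extracted artifacts
# based on an entity configuration parameter.

ENTITY_CLASSES = {
    "agile": {"user story", "epic", "feature"},
    "scrum": {"product backlog", "sprint backlog", "increment"},
}

def identify_entities(artifacts, entity_config="agile"):
    """Return the artifacts whose type belongs to the configured entity class."""
    wanted = ENTITY_CLASSES[entity_config]
    return [a for a in artifacts if a["type"].lower() in wanted]

extracted = [
    {"id": "A1", "type": "Epic", "title": "Checkout flow"},
    {"id": "A2", "type": "Increment", "title": "Release 1.2"},
    {"id": "A3", "type": "User story", "title": "Pay by card"},
]

# With the parameter set to 'agile', only the epic and the user story match.
agile_entities = identify_entities(extracted, entity_config="agile")
```

Setting `entity_config="scrum"` on the same data would instead select the increment, which is the behavior the text attributes to the user-settable entity configuration parameter.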
For example, the entity ‘Employee number’ that may be used in the human resources domain may be mapped to ‘Part number’ in the manufacturing domain using an ontology structure for maintenance. Similarly, the ontology structures may also vary based on the software vocabulary of a targeted user of the app 120. For example, different users may prefer different terminologies for the same entity; an entity called ‘Feature’ may be referred to as ‘Minimum Marketable Feature’. In addition to mapping the entities to a target ontology, the ontology structures also define relationships between the mapped entities. For example, if changes in a first entity impact a second entity, then the second entity shares a parent-child relationship with the first entity.

The artifact management module 106 and/or the processor 102 may be further configured to generate a knowledge graph for the artifacts based on the mapped entities. More specifically, triples are generated for the mapped entities using predefined libraries. In an embodiment, each of the mapped entities is processed using a Web Ontology Language (OWL) library to generate the triples. The triples are further stored as a knowledge graph in a graph database. The term ‘triple’ as used herein refers to a set of three entities that codifies a statement about the mapped entities in the form of subject–predicate–object expressions. An example of a triple is (defect1, detected by, test_case1), which links a defect ‘defect1’ to the test case ‘test_case1’ whose execution helped detect it. Non-limiting examples of graph databases include GraphDB. The knowledge graph is a graph structure defined by nodes, edges, and properties. In one implementation, the knowledge graph is based on the Resource Description Framework (RDF) format. The knowledge graph may be stored in the graph database.

By way of example, the artifact management module 106 may provide an application management UI 116 to a user. 
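The ontology mapping and triple generation described above can be sketched as follows. This is a deliberately minimal in-memory stand-in: a production system would, per the text, use an OWL library and an RDF graph database such as GraphDB, and the ontology contents and class names here are assumptions for illustration.

```python
# Sketch of mapping entities through an ontology structure and storing
# the result as subject-predicate-object triples, i.e., a minimal
# in-memory knowledge graph.

ONTOLOGY = {
    # Maps a source term to the target vocabulary of the selected ontology,
    # mirroring the 'Feature' -> 'Minimum Marketable Feature' example.
    "feature": "minimum marketable feature",
}

def map_entity(term):
    """Map a term to the target ontology; unknown terms pass through unchanged."""
    return ONTOLOGY.get(term, term)

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        # Entities are mapped to the target ontology before being stored.
        self.triples.add((map_entity(subject), predicate, map_entity(obj)))

graph = KnowledgeGraph()
# The example triple from the text: a defect linked to the test case
# whose execution helped detect it.
graph.add("defect1", "detected by", "test_case1")
# A parent-child relationship defined by the ontology structure.
graph.add("feature", "parent of", "user_story1")
```

Note how the second `add` call stores the subject under its mapped name, which is the kind of vocabulary normalization the ontology configuration parameter controls.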
The application management UI 116 may enable the user to modify the app 120 and to search for artifacts stored in the graph database. Herein, the user is the user developing or creating the app 120, not the app user who runs the app 120 on his or her target device 140.

In an embodiment, the artifact management module 106 and/or the processor 102 may further be configured to capture the user’s intent to search at least one of the artifacts in response to user interactions with the application management UI 116. In an example, the application management UI 116 may provide the user a search UI 123. The user may input or type the semantic query in the search UI 123. The knowledge graph is queried based on a semantic query received from a user. In an implementation, the knowledge graph is queried using SPARQL. SPARQL is a semantic query language that facilitates querying and manipulation of data stored in the knowledge graph. Further, an output is generated based on the querying of the knowledge graph. The generated output is displayed via the application management UI 116 to the user. The generated output indicates the respective searched artifact or the relationship of the respective searched artifact to other artifacts. For example, if the semantic query corresponds to “User story”, the knowledge graph is queried to identify nodes that correspond to user stories of the app 120. Here, the generated output may include one or more user stories corresponding to the app 120. Further, the generated output may also include other nodes that may be related to the nodes corresponding to the user stories.

In another embodiment, the artifact management module 106 and/or the processor 102 is configured to provide a graphic visualization of the knowledge graph on the application management UI 116. The user may interact with the application management UI 116 to traverse the knowledge graph. 
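The "User story" query described above can be illustrated with a minimal triple-pattern match, where `None` acts as a wildcard much as variables do in a real SPARQL query. This is a stand-in only: the actual system queries an RDF store with SPARQL, and the sample triples and predicate names here are assumptions.

```python
# Minimal stand-in for semantic querying of a knowledge graph:
# a triple-pattern match over an in-memory set of triples.

TRIPLES = {
    ("user_story1", "is a", "user story"),
    ("user_story2", "is a", "user story"),
    ("defect1", "detected by", "test_case1"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return sorted(
        t for t in TRIPLES
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    )

# "Which nodes correspond to user stories?" - the example from the text.
stories = query(predicate="is a", obj="user story")
```

The rough SPARQL equivalent of the last call would be `SELECT ?s WHERE { ?s :isA :UserStory }`; the generated output shown to the user would then be the matching nodes, optionally expanded with related nodes.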
For example, the user may click on a specific node in the knowledge graph to view details, including related information associated with the node.

In an embodiment, the artifact management module 106 and/or the processor 102 is configured to capture the user’s intent to import the respective searched artifact to the application management UI 116 in response to user interactions with the application management UI 116. The user may interact with the application management UI 116 to express his or her intent to import the respective artifact 144 in order to modify the app 120. This may facilitate and speed up the modification of the app 120 considerably, especially for users who are non-IT experts, since a large variety of different artifacts 144 may be available for import and use for modifying the app 120 or developing a new app.

Available artifacts 144 may then be displayed to the user via the search UI 123 for selection, and the user may select one of the displayed artifacts 144 for import to the application management UI 116. In some examples, all the available artifacts 144 may be displayed to the user via the search UI 123 for selection, and the user may select one of the displayed artifacts 144 for import for purposes of modifying the app 120 or developing a new app. By way of example, the import may be done by “drag and drop” or by a “dropdown” window in the application management UI 116. For the import of the respective artifact 144, the respective artifact 144 may be copied to the application management UI 116.

In some examples, the import of the respective artifact 144 may comprise importing metadata of the respective artifact 144, as explained in more detail below. Such metadata may comprise information on the origin of the artifact, such as the respective data source 122, the author of the artifact, date of creation, debugging history, machines or devices used for creating the artifact, etc. 
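The kind of artifact metadata discussed here (origin, author, creation date, status) can be sketched as a small record type. The field names, the status set, and the `transition` helper are assumptions chosen to mirror the examples in the text, not a prescribed metadata schema.

```python
# Sketch of metadata accompanying an imported artifact, including
# a constrained status field with explicit transitions.

from dataclasses import dataclass

ALLOWED_STATUSES = {"started", "stopped", "pending", "completed"}

@dataclass
class ArtifactMetadata:
    source: str          # origin, e.g., the data source the artifact came from
    author: str
    created: str         # creation date as an ISO date string
    status: str = "pending"

    def transition(self, new_status):
        """Change the status, enforcing that only known statuses are used."""
        if new_status not in ALLOWED_STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.status = new_status

meta = ArtifactMetadata(source="Jira", author="dev1", created="2022-01-31")
meta.transition("completed")
```

A richer version might also carry the type/format information and connectivity details mentioned in the text, so that the importing side can re-fetch the artifact from its data source.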
The mentioned metadata may, by way of example, comprise possible statuses of the artifacts, such as started, stopped, pending, completed, or information on the possible changes or transitions between statuses. In some examples, the mentioned metadata may comprise information on the type and/or format of the respective artifact 122, in the form of integers, decimal numbers, text strings, Boolean data, etc. Further, more complex or composite information may be used, such as pictures, photos, sound data, etc. In some examples, the import of the respective artifact 122 may further comprise connectivity information which may be required to allow for obtaining or retrieving the respective artifact 122 from the respective data source 122. Such connectivity information may, by way of example, allow for establishing a communication connection with the respective data source 122. In an embodiment, the artifact management module 106 and/or the processor 102 may further be configured to modify the app 120 through the application management UI 116 based on the respective, imported, searched artifact. Further, the modified app is deployed and run on a target device 140. Herein, the app 120 may be understood as deployed once the activities which are required to make this app 120 available for use by the app user on the target device 140 have been completed. The app deployment process may comprise several interrelated activities with possible transitions between them. These activities may occur at the producer side (e.g., by the app developer) or at the consumer side (by the app user or end user) or both. In some examples, the app deployment process may comprise at least the release of the app 120 and the installation and the activation of the app 120. The release activity may follow from the completed development process and is sometimes classified as part of the development process rather than the deployment process.
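The artifact metadata described above can be sketched as a simple record. Every field name and value below is an illustrative assumption chosen to match the categories mentioned (origin, author, status, type, connectivity), not a schema defined by the system.

```python
# Hedged sketch of artifact metadata that might accompany an import.
# All field names and values are illustrative assumptions.
artifact_metadata = {
    "name": "Userstory1",
    "source": "issue_tracker",     # origin: the respective data source
    "author": "jane.doe",
    "created": "2022-01-31",
    "status": "completed",         # started / stopped / pending / completed
    "type": "text",                # integer / decimal / text / Boolean / ...
    "connectivity": {              # info to reach the data source
        "endpoint": "https://example.org/api",
        "auth": "token",
    },
}

def validate_status(meta, allowed=("started", "stopped", "pending", "completed")):
    """Check the artifact status against the allowed status set."""
    return meta.get("status") in allowed

print(validate_status(artifact_metadata))        # True
print(validate_status({"status": "unknown"}))    # False
```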
It may comprise operations required to prepare a system (here: e.g., the application development platform 118 or an online app store) for assembly and transfer to the computer system(s) (here: e.g., the target device 140) on which it will be run in production. Therefore, it may sometimes involve determining the resources required for the system to operate with tolerable performance and planning and/or documenting subsequent activities of the deployment process. For simple systems, the installation of the app 120 may involve establishing some form of command, shortcut, script or service for executing the software (manually or automatically) of the app 120. For complex systems, it may involve configuration of the system, possibly by asking the end user questions about the intended app use or directly asking them how they would like it to be configured, and/or making all the required subsystems ready to use. Activation may be the activity of starting up the executable component of the software or the app 120 for the first time (which is not to be confused with the common use of the term activation concerning a software license, which is a function of Digital Rights Management systems). Once the app 120 has been deployed on the respective target device 140, the app 120 may be put into operation to fulfill the business needs of the app (end) user. In some examples, the respective target device 140 may be a smartphone, smartwatch, handheld, pad, laptop or the like, or a desktop device, e.g., including desktop computers, or other "smart" devices, e.g., smart television sets, fridges, home or industrial automation devices, wherein smart television sets may, e.g., be a television set with integrated Internet capabilities or a set-top box for television that offers more advanced computing ability and connectivity than a contemporary basic television set.
Further, by way of example, the respective target device 140 may be or comprise a manufacturing operation management (MOM) system, a manufacturing execution system (MES), an enterprise resource planning (ERP) system, a supervisory control and data acquisition (SCADA) system, or any combination thereof. In some examples, the respective target device 140 on which the app 120 may be deployed and run may use the respective artifact 122 from the data source 122. The respective target device 140 may be part of a complex production line or production plant, e.g., a bottle filling machine, conveyor, welding machine, welding robot, etc. As mentioned above, the application development platform 118 may comprise the above-described functionalities of visual model-based representations, visual model-based apps, and/or visual models and, by way of example, be a visual model-based application development platform or a low-code application development platform. The application management UI 116 may provide an interactive user interface of the application development platform 118 which supports and enables the user to manage the artifacts. For example, the app 120 may be or comprise a software program which on execution performs specific desired tasks. In some examples, the artifact management module 106 and/or the processor 102 may further be configured to modify the app 120 through the application management UI 116 by using the imported artifacts. The app 120 may be developed and eventually be completed by the visual model-based application development platform 118, taking into account the user's input provided by his/her interactions with the application management UI 116 and using the imported artifacts 122. By way of example, the user's input may also include a name, an identifier, and the current version of the app 120.
Further, a computer-readable medium 160 which may comprise a computer program product 162 is shown in FIG 1, wherein the computer program product 162 may be encoded with executable instructions that, when executed, cause the computer system 100 and/or the application development platform 118 to carry out the described method.

FIG 2 illustrates a functional block diagram indicating a workflow 200 for managing artifacts related to an app, in accordance with an exemplary embodiment of the present invention. The workflow comprises a plurality of data sources 205A, 205B and 205C. The data sources 205A, 205B and 205C are associated with data connectors 210A, 210B and 210C respectively. Each of the data connectors 210 exposes a granular API associated with the respective data source 205 to a data layer 215. In other words, the granular API exposes artifacts relevant to a predetermined schema of the data layer 215. The data layer 215 stores the relevant artifacts extracted from the source in the form of semantic data. Further, an entity identifier component 220 identifies entities present in the semantic data within the data layer 215. The term 'component' as used herein refers to a piece of executable software code that causes a processor (similar to processor 102) to perform a predefined function. For example, the entity identifier component 220, when executed, causes the processor to identify entities present in the semantic data. The entity identifier component 220 is configured to identify a specific class of entities from the semantic data based on an entity configuration parameter. Further, an ontology mapper component 225 maps the extracted entities to an ontology structure based on an ontology configuration parameter. Further, a knowledge graph component 230 computes triples of the mapped entities, i.e., the artifacts, to generate a knowledge graph. The generated knowledge graph is further stored in a graph database 235.
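The chain from the data layer to the graph database described above can be sketched as a three-stage pipeline. The record fields, the ontology mapping and the concept names below are assumptions made for illustration only.

```python
# Hedged sketch of the workflow 200 pipeline: identify entities of a
# configured class, map them to an ontology, and compute triples.
# Function names and data shapes are illustrative assumptions.

def identify_entities(semantic_data, entity_class):
    """Entity identifier component: keep records of the configured class."""
    return [rec for rec in semantic_data if rec["class"] == entity_class]

def map_to_ontology(entities, ontology):
    """Ontology mapper component: attach the ontology concept per class."""
    return [dict(rec, concept=ontology[rec["class"]]) for rec in entities]

def to_triples(mapped):
    """Knowledge graph component: compute (subject, predicate, object) triples."""
    return [(rec["id"], "rdf:type", rec["concept"]) for rec in mapped]

# Toy artifact records as they might arrive via a data connector.
semantic_data = [
    {"id": "Userstory1", "class": "user_story"},
    {"id": "Task1", "class": "task"},
]
ontology = {"user_story": "app:UserStory", "task": "app:Task"}

# Entity configuration parameter selects the class "user_story".
entities = identify_entities(semantic_data, "user_story")
kg_triples = to_triples(map_to_ontology(entities, ontology))
print(kg_triples)  # [('Userstory1', 'rdf:type', 'app:UserStory')]
```

A production implementation would persist the computed triples in a graph database such as the graph database 235.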
The graph database 235 may be accessed by a user, via an application management UI 240, for accessing the artifacts in the knowledge graph. The user may enter semantic queries on the application management UI 240 for searching or accessing the artifacts in the graph database 235. Further, the semantic query is processed by a query engine 245, for example based on SPARQL, to identify the searched artifact from the graph database 235. Upon querying, the relevant artifact is displayed to the user, for example via graphic visualization, on the application management UI 240. The query engine 245 may also facilitate importing of the searched artifact to the application management UI 240.

FIG 3 illustrates an example of the graph visualization of a knowledge graph on the application management UI 240, in accordance with an embodiment of the present invention. The knowledge graph comprises a plurality of nodes interconnected using edges. Each of the nodes indicates an artifact associated with an app. Each of the edges indicates a parent-child relationship between respective nodes. For example, child nodes are impacted by respective parent nodes. The user may click or zoom in on a specific node to view details associated with the respective artifact. For example, each of the nodes may include links to storage locations associated with the respective artifacts. When the user clicks on the node, the respective link may enable the user to access the artifact. In the present example, the node 310 represents a Minimum Marketable Entity (MME1) and has two child nodes 315A, 315B that represent the artifacts MMF1 and MMF2, Minimum Marketable Features corresponding to MME1. The node 315A has a child node Userstory1 320A that represents a user story corresponding to MMF1 315A. Similarly, the node 315B has two child nodes Userstory2 320B and Userstory3 320C that represent user stories corresponding to MMF2 315B.
Further, Userstory1 320A has two child nodes Task1 325A and Task2 325B that represent tasks related to Userstory1. Similarly, Userstory3 320C has a child node Task4 325C that represents a task related to Userstory3 320C. The node 325A has a child node changeset1 330A and the node 325C has a child node changeset4 330B. The nodes 330A and 330B share the child nodes File1 335A and File3 335C. The node 330A has a further child node File2 335B.

FIG 4 shows a flowchart of a method 400 for managing artifacts related to an app, in accordance with an embodiment of the present invention. The method may start at 405 and the methodology may comprise several method steps carried out through operation of at least one processor. At step 410, the artifacts related to the app are extracted from one or more data sources, via one or more data connectors. At step 415, one or more entities are identified from the extracted artifacts based on an entity configuration parameter. At step 420, the one or more entities identified are mapped to an ontology structure based on an ontology configuration parameter. At step 425, a knowledge graph for the artifacts is generated based on the mapped entities. More specifically, triples are generated for the mapped entities using predefined libraries. At 430, the methodology may end. It should further be appreciated that the methodology 400 may comprise other acts and features discussed previously with respect to the computer-implemented method of managing artifacts related to apps.
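The parent-child hierarchy described for FIG 3 can be sketched as an edge table with a traversal that answers "which artifacts are impacted by this node" (child nodes being impacted by their parent nodes). The edges mirror the example nodes above; the helper function is an illustrative assumption.

```python
# FIG 3 hierarchy as parent -> children edges. Node names follow the
# example above; the traversal helper is an illustrative sketch.
children = {
    "MME1": ["MMF1", "MMF2"],
    "MMF1": ["Userstory1"],
    "MMF2": ["Userstory2", "Userstory3"],
    "Userstory1": ["Task1", "Task2"],
    "Userstory3": ["Task4"],
    "Task1": ["changeset1"],
    "Task4": ["changeset4"],
    "changeset1": ["File1", "File2", "File3"],
    "changeset4": ["File1", "File3"],
}

def impacted(node):
    """All descendants of a node, i.e., artifacts impacted by changes to it."""
    seen, stack = set(), [node]
    while stack:
        for child in children.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return sorted(seen)

print(impacted("Userstory3"))  # ['File1', 'File3', 'Task4', 'changeset4']
```

A traversal like this is one way the early impact analysis mentioned later in this description could be realized over the stored graph.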
For example, the method may further comprise the acts of providing an application management user interface (UI) of an application development platform to the user; of capturing the user's intent to search at least one of the artifacts in response to user interactions with the application management UI; of querying the knowledge graph based on the user's intent; of generating an output based on the querying of the knowledge graph; and of displaying the generated output via the application management UI to the user, the generated output indicating the respective, searched artifact or the relationship of the respective, searched artifact to other artifacts. In some examples, the methodology may further comprise the acts of capturing the user's intent to import the respective, searched artifact to the application management UI in response to user interactions with the application management UI; of importing the respective, searched artifact corresponding to the captured user's import intent to the application management UI; and of displaying the respective, searched artifact on the application management UI. It should also be appreciated that in some examples, the methodology may further comprise the acts of modifying the app through the application management UI or developing a new app based on the respective, imported, searched artifact; and of deploying and running the modified app on a target device. As discussed previously, acts associated with these methodologies (other than any described manual acts such as an act of manually making a selection through the input device) may be carried out by one or more processors. Such processor(s) may be included in one or more data processing systems, for example, that execute software components operative to cause these acts to be carried out by the one or more processors.
In an example embodiment, such software components may comprise computer-executable instructions corresponding to a routine, a sub-routine, programs, applications, modules, libraries, a thread of execution, and/or the like. Further, it should be appreciated that software components may be written in and/or produced by software environments/languages/frameworks such as Java, JavaScript, Python, C, C#, C++ or any other software tool capable of producing components and graphical user interfaces configured to carry out the acts and features described herein.

The suggested approach offers several advantages over other approaches. E.g., the suggested approach helps with improving the traceability of artifacts related to apps. The knowledge graph-based approach provides an intuitive understanding of artifacts and relationships between artifacts, thereby easing searching for artifacts and comprehension of traceability information. Further, the knowledge graph-based approach provides backward traceability along with forward traceability, which is currently not present in any Application Lifecycle Management tools. For example, in forward traceability, it may be possible to view source code related to a specific user story by querying the knowledge graph corresponding to the app. In backward traceability, different versions of all user stories corresponding to a specific source code may be viewed, by querying the knowledge graph corresponding to the app. Therefore, the suggested approach supports rapid look-ups and responses to complex queries that may not be easily performed using existing art. The suggested approach also provides graphical visualization, i.e., of the knowledge graph via the application management UI, for better comprehension of the relationships between the artifacts by a user. Further, the user may easily identify design flaws, e.g., unrelated artifacts being linked to an app.
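The forward and backward traceability described above can be sketched as two directions of traversal over the same "impacts" edges. The edge data below is illustrative and self-contained; artifact names are assumptions.

```python
# Hedged sketch of forward and backward traceability over artifact
# edges; each (parent, child) pair means the parent impacts the child.
edges = [
    ("Userstory1", "Task1"),
    ("Task1", "changeset1"),
    ("changeset1", "File1"),
    ("Userstory2", "Task3"),
    ("Task3", "changeset3"),
    ("changeset3", "File1"),
]

def forward(node):
    """Forward traceability: artifacts reachable from, e.g., a user story."""
    reached, frontier = set(), [node]
    while frontier:
        n = frontier.pop()
        for s, o in edges:
            if s == n and o not in reached:
                reached.add(o)
                frontier.append(o)
    return sorted(reached)

def backward(node):
    """Backward traceability: artifacts behind, e.g., a source file."""
    reached, frontier = set(), [node]
    while frontier:
        n = frontier.pop()
        for s, o in edges:
            if o == n and s not in reached:
                reached.add(s)
                frontier.append(s)
    return sorted(reached)

print(forward("Userstory1"))  # ['File1', 'Task1', 'changeset1']
print(backward("File1"))      # both user stories and intermediate artifacts
```

Here `backward("File1")` reaches both `Userstory1` and `Userstory2`, illustrating how all user stories behind one source file can be recovered from the graph.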
Further, the impact of a modification in an artifact on other artifacts may be identified at an early stage, based on the knowledge graph, thereby helping with quality improvement and the mitigation of potential risks resulting from such a modification. Further, the suggested approach also helps in ensuring that requirements are fulfilled when re-architecting or modifying the app. In an example, specific requirements may be associated with specific bugs in the source code. The suggested approach helps in identifying any such potential bugs with reduced effort. Thanks to the suggested approach, the use of knowledge graphs for managing artifacts related to apps helps in collating information related to an app, thereby reducing the risk of knowledge loss upon changes in the human resources employed to develop, modify, test, debug or deploy the app.

FIG 5 illustrates a block diagram of a data processing system 1000 (also referred to as a computer system) in which an embodiment can be implemented, for example, as a portion of a product system, and/or other system operatively configured by software or otherwise to perform the processes as described herein. The data processing system 1000 may include, for example, the application development platform 118 and/or the computer system or data processing system 100 mentioned above. The data processing system depicted includes at least one processor 1002 (e.g., a CPU) that may be connected to one or more bridges/controllers/buses 1004 (e.g., a north bridge, a south bridge). One of the buses 1004, for example, may include one or more I/O buses such as a PCI Express bus. Also connected to various buses in the depicted example may be a main memory 1006 (RAM) and a graphics controller 1008. The graphics controller 1008 may be connected to one or more display devices 1010. It should also be noted that in some embodiments one or more controllers (e.g., graphics, south bridge) may be integrated with the CPU (on the same chip or die).
Examples of CPU architectures include IA-32, x86-64, and ARM processor architectures. Other peripherals connected to one or more buses may include communication controllers 1012 (Ethernet controllers, Wi-Fi controllers, cellular controllers) operative to connect to a local area network (LAN), a wide area network (WAN), a cellular network, and/or other wired or wireless networks 1014 or communication equipment. Further components connected to various buses may include one or more I/O controllers 1016 such as USB controllers, Bluetooth controllers, and/or dedicated audio controllers (connected to speakers and/or microphones). It should also be appreciated that various peripherals may be connected to the I/O controller(s) (via various ports and connections) including input devices 1018 (e.g., keyboard, mouse, pointer, touch screen, touch pad, drawing tablet, trackball, buttons, keypad, game controller, gamepad, camera, microphone, scanners, motion sensing devices that capture motion gestures), output devices 1020 (e.g., printers, speakers) or any other type of device that is operative to provide inputs to or receive outputs from the data processing system. Also, it should be appreciated that many devices referred to as input devices or output devices may both provide inputs and receive outputs of communications with the data processing system. For example, the processor 1002 may be integrated into a housing (such as a tablet) that includes a touch screen that serves as both an input and display device. Further, it should be appreciated that some input devices (such as a laptop) may include a plurality of different types of input devices (e.g., touch screen, touch pad, keyboard). Also, it should be appreciated that other peripheral hardware 1022 connected to the I/O controllers 1016 may include any type of device, machine, or component that is configured to communicate with a data processing system.
Additional components connected to various buses may include one or more storage controllers 1024 (e.g., SATA). A storage controller may be connected to a storage device 1026 such as one or more storage drives and/or any associated removable media, which can be any suitable non-transitory machine-usable or machine-readable storage medium. Examples include nonvolatile devices, volatile devices, read-only devices, writable devices, ROMs, EPROMs, magnetic tape storage, floppy disk drives, hard disk drives, solid-state drives (SSDs), flash memory, optical disk drives (CDs, DVDs, Blu-ray), and other known optical, electrical, or magnetic storage devices and/or computer media. Also, in some examples, a storage device such as an SSD may be connected directly to an I/O bus 1004 such as a PCI Express bus. A data processing system in accordance with an embodiment of the present disclosure may include an operating system 1028, software/firmware 1030, and data stores 1032 (that may be stored on a storage device 1026 and/or the memory 1006). Such an operating system may employ a command line interface (CLI) shell and/or a graphical user interface (GUI) shell. The GUI shell permits multiple display windows to be presented in the graphical user interface simultaneously, with each display window providing an interface to a different application or to a different instance of the same application. A cursor or pointer in the graphical user interface may be manipulated by a user through a pointing device such as a mouse or touch screen. The position of the cursor/pointer may be changed and/or an event, such as clicking a mouse button or touching a touch screen, may be generated to actuate a desired response. Examples of operating systems that may be used in a data processing system may include Microsoft Windows, Linux, UNIX, iOS, and Android operating systems.
Also, examples of data stores include data files, data tables, relational databases (e.g., Oracle, Microsoft SQL Server), database servers, or any other structure and/or device that is capable of storing data which is retrievable by a processor. The communication controllers 1012 may be connected to the network 1014 (not a part of data processing system 1000), which can be any public or private data processing system network or combination of networks, as known to those of skill in the art, including the Internet. Data processing system 1000 can communicate over the network 1014 with one or more other data processing systems such as a server 1034 (also not part of the data processing system 1000). However, an alternative data processing system may correspond to a plurality of data processing systems implemented as part of a distributed system in which processors associated with several data processing systems may be in communication by way of one or more network connections and may collectively perform tasks described as being performed by a single data processing system. Thus, it is to be understood that when referring to a data processing system, such a system may be implemented across several data processing systems organized in a distributed system in communication with each other via a network. Further, the term "controller" means any device, system or part thereof that controls at least one operation, whether such a device is implemented in hardware, firmware, software or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. In addition, it should be appreciated that data processing systems may be implemented as virtual machines in a virtual machine architecture or cloud environment.
For example, the processor 1002 and associated components may correspond to a virtual machine executing in a virtual machine environment of one or more servers. Examples of virtual machine architectures include VMware ESXi, Microsoft Hyper-V, Xen, and KVM. Those of ordinary skill in the art will appreciate that the hardware depicted for the data processing system may vary for particular implementations. For example, the data processing system 1000 in this example may correspond to a computer, workstation, server, PC, notebook computer, tablet, mobile phone, and/or any other type of apparatus/system that is operative to process data and carry out functionality and features described herein associated with the operation of a data processing system, computer, processor, and/or a controller discussed herein. The depicted example is provided for the purpose of explanation only and is not meant to imply architectural limitations with respect to the present disclosure. Also, it should be noted that the processor described herein may be located in a server that is remote from the display and input devices described herein. In such an example, the described display device and input device may be included in a client device that communicates with the server (and/or a virtual machine executing on the server) through a wired or wireless network (which may include the Internet). In some embodiments, such a client device, for example, may execute a remote desktop application or may correspond to a portal device that carries out a remote desktop protocol with the server to send inputs from an input device to the server and receive visual information from the server to display through a display device. Examples of such remote desktop protocols include Teradici's PCoIP, Microsoft's RDP, and the RFB protocol. In such examples, the processor described herein may correspond to a virtual processor of a virtual machine executing in a physical processor of the server.
As used herein, the terms "component" and "system" are intended to encompass hardware, software, or a combination of hardware and software. Thus, for example, a system or component may be a process, a process executing on a processor, or a processor. Additionally, a component or system may be localized on a single device or distributed across several devices. Also, as used herein, a processor corresponds to any electronic device that is configured via hardware circuits, software, and/or firmware to process data. For example, processors described herein may correspond to one or more (or a combination of) microprocessors, CPUs, FPGAs, ASICs, or any other integrated circuit (IC) or other type of circuit that is capable of processing data in a data processing system, which may have the form of a controller board, computer, server, mobile phone, and/or any other type of electronic device. Those skilled in the art will recognize that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure is not being depicted or described herein. Instead, only so much of a data processing system as is unique to the present disclosure or necessary for an understanding of the present disclosure is depicted and described. The remainder of the construction and operation of data processing system 1000 may conform to any of the various current implementations and practices known in the art. Also, it should be understood that the words or phrases used herein should be construed broadly, unless expressly limited in some examples. For example, the terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation. The singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Further, the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term "or" is inclusive, meaning and/or, unless the context clearly indicates otherwise. The phrases "associated with" and "associated therewith," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like. Also, although the terms "first", "second", "third" and so forth may be used herein to describe various elements, functions, or acts, these elements, functions, or acts should not be limited by these terms. Rather, these numeral adjectives are used to distinguish different elements, functions or acts from each other. For example, a first element, function, or act could be termed a second element, function, or act, and, similarly, a second element, function, or act could be termed a first element, function, or act, without departing from the scope of the present disclosure. In addition, phrases such as "processor is configured to" carry out one or more functions or processes may mean the processor is operatively configured to or operably configured to carry out the functions or processes via software, firmware, and/or wired circuits. For example, a processor that is configured to carry out a function/process may correspond to a processor that is executing the software/firmware, which is programmed to cause the processor to carry out the function/process, and/or may correspond to a processor that has the software/firmware in a memory or storage device that is available to be executed by the processor to carry out the function/process.
It should also be noted that a processor that is "configured to" carry out one or more functions or processes may also correspond to a processor circuit particularly fabricated or "wired" to carry out the functions or processes (e.g., an ASIC or FPGA design). Further, the phrase "at least one" before an element (e.g., a processor) that is configured to carry out more than one function may correspond to one or more elements (e.g., processors) that each carry out the functions and may also correspond to two or more of the elements (e.g., processors) that respectively carry out different ones of the one or more different functions. In addition, the term "adjacent to" may mean that an element is relatively near to but not in contact with a further element, or that the element is in contact with the further element, unless the context clearly indicates otherwise. Although an exemplary embodiment of the present disclosure has been described in detail, those skilled in the art will understand that various changes, substitutions, variations, and improvements disclosed herein may be made without departing from the spirit and scope of the disclosure in its broadest form. None of the description in the present patent document should be read as implying that any particular element, step, act, or function is an essential element which must be included in the claim scope: the scope of patented subject matter is defined only by the allowed claims.
List of reference numerals
100 computer system or data processing system
102 processor
104 memory
106 artifact management module
108 data store
110 input device
112 display device
114 Graphical User Interface
116 application management UI
118 application development platform
120 app
122 data sources
123 search UI
124 data connectors
140 target device
144 artifacts
160 computer-readable medium
162 computer program product
205A-C data sources
210A-C data connectors
215 data layer
220 entity identifier component
225 ontology mapper component
230 knowledge graph component
235 graph database
240 application management UI
245 query engine
1000 data processing system
1002 processor
1004 bridges/controllers/buses
1006 memory
1008 graphics controller
1010 display devices
1012 communication controllers
1014 networks
1016 I/O controllers
1018 input devices
1020 output devices
1022 peripheral hardware
1024 storage controllers
1026 storage device
1028 operating system
1030 software/firmware
1032 data stores
1034 server

Claims

1. A computer-implemented method for managing artifacts related to an app, the method comprising:
   extracting, by a processing unit (102), the artifacts related to the app from one or more data sources, via one or more data connectors;
   identifying one or more entities from the extracted artifacts based on an entity configuration parameter;
   mapping the one or more entities to an ontology structure based on an ontology configuration parameter; and
   generating a knowledge graph for the artifacts based on the mapped entities.

2. The method according to claim 1, wherein extracting the artifacts related to the app from one or more data sources via the one or more data connectors comprises:
   connecting to application programming interfaces associated with the one or more data sources using at least one of the data connectors; and
   extracting the artifacts associated with the app from the application programming interfaces of the one or more data sources.

3. The method according to claim 2, wherein the artifacts are extracted from the application programming interfaces via a data layer, and wherein the data layer is associated with a predetermined schema.

4. The method according to claim 1, wherein identifying the one or more entities from the extracted artifacts based on the entity configuration parameter comprises:
   determining a class of entities to be identified from the extracted artifacts based on the entity configuration parameter; and
   identifying, from the artifacts, the one or more entities belonging to the determined class of entities.

5. The method according to claims 1 and 4, wherein mapping the one or more entities to the ontology structure based on the ontology configuration parameter comprises:
   identifying the ontology structure from a plurality of ontology structures based on the ontology configuration parameter; and
   mapping the one or more entities to the identified ontology structure based on the class of entities.

6.
The method according to claim 1, wherein generating the knowledge graph for the artifacts based on the mapped entities comprises: computing triples corresponding to each of the mapped entities based on a relationship between the mapped entities in the ontology structure; and generating the knowledge graph based on the triples computed. 7. The method according to claim 1 further comprising: providing an application management user interface (UI) (116) of an application development platform to the user; capturing the user’s intent to search at least one of the artifacts in response to user interactions with the application management UI (116); querying the knowledge graph based on user’s intent; generating an output based on querying of the knowledge graph; and displaying the generated output via the application management UI (116) to the user, the generated output indicating the respective, searched artifact or the re- lationship of the respective, searched artifact to other artifacts. 8. The method according to claim 7 further comprising: capturing the user’s intent to import the respec- tive, searched artifact to the application management UI (116) in response to user interactions with the applica- tion management UI (116); importing the respective, searched artifact corre- sponding to the captured user’s import intent to the ap- plication management UI (116); and displaying the respective, searched artifact on the application management UI (116). 9. The method according to claim 8 further comprising: modifying the app through the application management UI (116) based on the respective, imported, searched ar- tifact; and deploying and running the modified app on a target device (140). 10. A computer system (100) arranged and configured to execute the steps of the computer-implemented method ac- cording to any one of the preceding claims. 11. 
A computer program product (162), comprising com- puter program code which, when executed by a computer system (100), cause the computer system (100) to carry out the method of one of the claims 1 to 9. 12. A computer-readable medium (160) comprising a com- puter program product (162) comprising computer program code which, when executed by a computer system (100), cause the computer system (100) to carry out the method of one of the claims 1 to 9.
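The pipeline recited in claims 1, 4, 5, 6, and 7 — identify entities of a configured class, map them onto an ontology structure, compute subject-predicate-object triples, and query the resulting knowledge graph — can be sketched in minimal form as follows. This is an illustrative sketch only: all names (`Artifact`, `ENTITY_CONFIG`, `ONTOLOGY`, the entity classes, and the predicates) are hypothetical and are not taken from the publication.

```python
# Hypothetical sketch of the claimed pipeline: identify entities from app
# artifacts by class (entity configuration parameter), map them onto an
# ontology structure (ontology configuration parameter), compute
# (subject, predicate, object) triples, and query the knowledge graph.
from dataclasses import dataclass, field


@dataclass
class Artifact:
    app: str
    kind: str                       # e.g. "page", "microflow"
    name: str
    payload: dict = field(default_factory=dict)


# Entity configuration parameter: which class of entities to identify.
ENTITY_CONFIG = {"classes": {"page", "microflow"}}

# Ontology configuration parameter: maps an entity class to a node type
# and to the predicate that links the entity to its app.
ONTOLOGY = {
    "page":      {"type": "ui:Page",    "predicate": "app:hasPage"},
    "microflow": {"type": "logic:Flow", "predicate": "app:hasFlow"},
}


def identify_entities(artifacts, config):
    """Keep only artifacts belonging to the configured class of entities."""
    return [a for a in artifacts if a.kind in config["classes"]]


def map_to_ontology(entities, ontology):
    """Pair each entity with its node in the ontology structure."""
    return [(e, ontology[e.kind]) for e in entities if e.kind in ontology]


def generate_triples(mapped):
    """Compute (subject, predicate, object) triples for the knowledge graph."""
    triples = []
    for entity, onto in mapped:
        triples.append((entity.name, "rdf:type", onto["type"]))
        triples.append((entity.app, onto["predicate"], entity.name))
    return triples


def query(triples, predicate):
    """Minimal query: all (subject, object) pairs matching one predicate."""
    return [(s, o) for s, p, o in triples if p == predicate]


# Demo data (illustrative): one app with three artifacts, one of which
# is filtered out because its class is not in the entity configuration.
artifacts = [
    Artifact("OrderApp", "page", "CheckoutPage"),
    Artifact("OrderApp", "microflow", "ValidateOrder"),
    Artifact("OrderApp", "stylesheet", "theme.css"),
]
entities = identify_entities(artifacts, ENTITY_CONFIG)
kg = generate_triples(map_to_ontology(entities, ONTOLOGY))
print(query(kg, "app:hasPage"))     # (app, page) pairs from the graph
```

In a production setting the triples would more plausibly land in a graph database (item 235 of the reference numerals) and be queried through the query engine (245), but the flow of the claimed steps is the same.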
PCT/EP2022/052222 2022-01-31 2022-01-31 System and method for managing artifacts related to apps WO2023143746A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/052222 WO2023143746A1 (en) 2022-01-31 2022-01-31 System and method for managing artifacts related to apps


Publications (1)

Publication Number Publication Date
WO2023143746A1 true WO2023143746A1 (en) 2023-08-03

Family

ID=80461021

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/052222 WO2023143746A1 (en) 2022-01-31 2022-01-31 System and method for managing artifacts related to apps

Country Status (1)

Country Link
WO (1) WO2023143746A1 (en)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BILAL ABU-SALIH ET AL: "Toward a Knowledge-based Personalised Recommender System for Mobile App Development", arXiv.org, Cornell University Library, 9 September 2019 (2019-09-09), XP081846356 *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22706252

Country of ref document: EP

Kind code of ref document: A1