WO2018042022A1 - System and apparatus for providing different versions of a type of data journey - Google Patents

System and apparatus for providing different versions of a type of data journey

Info

Publication number
WO2018042022A1
Authority
WO
WIPO (PCT)
Prior art keywords
journey
data
version
logic
dedicated
Prior art date
Application number
PCT/EP2017/072023
Other languages
French (fr)
Inventor
Daniel GOOVAERTS
Paul GRIMBERS
Original Assignee
The Glue
Priority date
Filing date
Publication date
Application filed by The Glue
Publication of WO2018042022A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/65Updates
    • G06F8/656Updates while running

Definitions

  • aspects of the present invention relate to data storage and retrieval, and more particularly to a method, system and computer program product for dynamic creation and deployment of different versions of a data journey in an in memory data grid or similar data storage arrangement, particularly for an institution possessing large amounts of data.
  • the invention particularly relates to a front-end interface with a handler connected to a node, storing data concerning events captured by an event interface, to provide a more rapid response to a customer's request of the system and apparatus, and dynamic creation and deployment of different versions of a data journey.
  • EDA Event-Driven Architecture
  • IMDG in-memory data grid
  • IMDGs have the advantage of providing caches which are distributed over a number of nodes, thereby acting as a channel and allowing a faster exchange of events.
  • IMDG usage is innately limited by the amount of data that can be stored in the IMDG.
  • the loading of data into the IMDG is typically restricted to a static model where the end user determines the data that needs to be loaded into the IMDG.
  • IMDG architecture normally requires bringing nodes down for upgrading or changing of node logic, even if applying a multiversion software updating technique.
  • the present invention relates to a system for providing services to a plurality of users, the services comprising at least one type of journey and at least a first and a second version of said one type of journey, wherein a journey has a beginning and an end and comprises a plurality of actions triggered by external requests and external events or events translated from user requests received in the system, the system comprising: a first dedicated data cache to store data for said first version; a first dedicated logic, coupled to said first dedicated data cache, to process said first version; a second dedicated data cache to store data for said second version; a second dedicated logic, coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users; a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second dedicated processor accordingly.
  • the present invention also relates to a process for providing services to a plurality of users, the services comprising at least one type of journey and at least a first and a second version of said one type of journey, wherein a journey has a beginning and an end and comprises a plurality of actions triggered by external requests and external events or events translated from user requests received in a system, the process comprising: providing a first dedicated data cache to store data for said first version; providing a first dedicated logic, coupled to said first dedicated data cache, to process said first version; providing a second dedicated data cache to store data for said second version; providing a second dedicated logic, coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users; providing a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second dedicated processor accordingly.
  • the present invention relates to machine executable instructions that when executed by at least one processor cause the at least one processor to perform the subject process.
  • the present invention relates to a non-transitory machine readable medium comprising machine executable instructions according to the subject process.
  • the present invention relates to machine readable storage storing machine executable instructions according to the present process.
  • the present invention also relates to the use of a system according to the invention to update and/or vary the service offerings to the plurality of users by adding a new version of a node in parallel instead of updating an existing running node.
  • FIGURE 1 is a flow diagram illustrating a process that makes use of the architecture, from an applications standpoint, of a preferred embodiment of a server in the system and apparatus of this invention.
  • the system and apparatus of this invention comprise at least two parallel servers, one being a back-up for the other, each with its own front-end interface, event interface, back-end interface, plurality of nodes and request handler, as also illustrated in Figures 4, 5 and 6.
  • FIGURE 2 is a flow diagram of the operation of the front-end interface or request handler of the system and apparatus in absence of different versions of a journey instance.
  • the request handler communicates with front end applications for receiving external requests from users to allow the users to access services provided via the system and apparatus.
  • FIGURE 3 is a flow diagram of the operation of the event interface or optional external event handler of the system and apparatus of this invention, whereby different versions of a journey instance may be employed, allowing for a unique variance and granularity of the process.
  • FIGURE 4 is a flow diagram of the operation of the back-end interface or journey event processor of the system and apparatus of this invention.
  • FIGURE 5 is a flow diagram of the operation of the back-end interface or journey event processor of the system and apparatus of this invention.
  • FIGURE 6 is a block diagram of the operation of several systems coupled through the grid- provided configuration and data caches.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fibre cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • LAN local area network
  • WAN wide area network
  • ISP Internet Service Provider
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • EPC event-driven process chain
  • EPC involves event-driven programming that has its application flow control determined by events or changes in state.
  • EPC is an ordered structure or chain of events and functions. It provides various connectors that allow alternative and parallel execution of processes. Furthermore, it is specified by the use of logical operators, such as OR, AND, and XOR. EPCs require non-local semantics, i.e., the execution behaviour of a particular node within an EPC may depend on the state of other parts of the EPC.
  • activity or “function” as used herein preferably means an active component of an EPC that has decision-making authority and that typically consumes time and resources.
  • event preferably means, in accordance with DIN 69900, a condition of an EPC that has occurred and causes a sequence of activities.
  • An event is a passive component of an EPC and has no decision-making authority. Events can trigger activities.
  • node as used herein preferably means a self-contained storage node of an EPC, especially a node interconnected with other nodes in a storage grid so that any node can communicate with any other node without the data having to pass through a centralized switch.
  • Each node contains its own storage medium, microprocessor, indexing capability, and management layer.
  • because EPCs require non-local semantics, the execution behaviour of a particular node within an EPC may depend on the state of other nodes of the EPC, possibly far away.
  • a cluster of several nodes may share a common switch, but each node is also connected to at least one other node cluster.
  • Nodes are individual parts of a larger data structure, such as linked lists and tree data structures.
  • nodes are stored in grids.
  • Grid storage introduces a new level of fault-tolerance and redundancy. If one storage node fails or a pathway between two nodes is interrupted, the network can reroute access another way or to a redundant node. This reduces the need for online maintenance, which practically eliminates downtime. Also, the multiple paths between pairs of nodes ensure that a storage grid can maintain optimum performance under conditions of fluctuating load. Also, grid storage is scalable. If a new storage node is added, it can be automatically recognized by the rest of the grid. This reduces the need for expensive hardware upgrades and downtime. Suitable software packages for grid storage for handling the stored data are available from Gemfire, Hazelcast, XAP Gigaspaces and GridGain. In this regard, the software package used preferably acts as a data access and processing layer between the application and the data storage and uses memory (RAM) as the primary data repository for data being processed, rather than a disk.
  • RAM random-access memory
  • grid storage systems for handling stored data involve the use of only one data cache, accessible by all nodes.
  • each separate event handled by the system and apparatus of this invention must only end up in the nodes that can handle it.
  • the system and apparatus of this invention have different dedicated data caches for each journey type, and each of these data caches is only accessible by the nodes that have the code for this journey type.
  • GridGain offers technology to have data stored in a dedicated data cache and distributed only to a predetermined set of nodes. When data is put in a dedicated data cache, it is sufficient just to inform GridGain of the name of the dedicated data cache.
  • the names of each set of dedicated data caches can be stored and accessible to GridGain in a configuration cache.
  • each dedicated data cache is provided with a specific name for each of the journey types of the invention, for which it is storing data.
  • the nodes which can handle a specific journey type put the dedicated data cache name in a configuration cache.
  • when a node starts up, it registers itself with the grid and requests that the grid provide access to the dedicated data cache with the specific name of the journey type handled by the node.
  • the configuration cache is also informed when nodes start up and which nodes are available. So, the configuration cache contains the journey type and the dedicated cache name.
  • the grid knows from the configuration cache which nodes should be able to access the dedicated data cache with the specific journey name.
  • the request handler knows the name of the dedicated data cache by looking in the configuration cache.
  • when the request handler wants to send the request to the nodes, it puts it in the correctly named dedicated data cache, and the grid then knows to which nodes it can possibly send the request. Using the dedicated data caches and the configuration cache in this way, the request handler only needs to know the name(s) of the dedicated data cache(s) for each request.
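The registration and lookup mechanism described above can be pictured with a small, self-contained sketch. This is illustrative only: plain Java collections stand in for the grid-provided configuration cache, and the class and method names are assumptions rather than any vendor's (e.g. GridGain's) actual API.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for the configuration cache described in the text.
public class ConfigurationCache {

    // journey type -> names of the dedicated data caches that can serve it
    private final Map<String, Set<String>> cachesByJourneyType = new ConcurrentHashMap<>();

    // Called by a node at start-up: it registers the dedicated cache name
    // for the journey type it is able to handle.
    public void registerNode(String journeyType, String dedicatedCacheName) {
        cachesByJourneyType
            .computeIfAbsent(journeyType, t -> ConcurrentHashMap.newKeySet())
            .add(dedicatedCacheName);
    }

    // Called by the request handler: it only needs the cache name(s);
    // the grid decides which registered node actually receives the request.
    public Set<String> cacheNamesFor(String journeyType) {
        return cachesByJourneyType.getOrDefault(journeyType, Set.of());
    }
}
```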
  • journey preferably means the actions (e.g., authorisations, verifications, back-end queries, etc.) occurring in an EPC, triggered by external requests by customers.
  • a journey of a customer can occur between a front-end security layer and a back-end data processing layer of a bank, as follows: a customer's journey can be broken down into multiple small journeys, such as: 1. checking his/her bank account to know its balance; 2. subsequently starting to make a payment from his/her account; 3. subsequently confirming and finalising the payment, whereby an event occurs and the journey ends.
  • a journey in accordance with this invention could be, for example, a journey for a bank account: each entry and the current balance of a customer, as well as each authorisation, verification, back-end query etc. occurring internally, triggered by events and external requests, is recorded and updated and all changes can be viewed; a journey of a consent status: each change of a client's consent to do something on his behalf, e.g., by a trader, is recorded and updated and all changes can be viewed; a journey of energy management, e.g., an electricity/gas meter: each gas and/or electricity meter reading is recorded and all changes can be viewed; a journey of
  • "Version" of journey, or journey "versioning", herein refers to developing and deploying a new version, e.g. changing journey type 1.x to journey type 1.y.
  • a journey comprises atomic blocks or steps, i.e. comprising single decision points.
  • one or more atomic steps may be different; however, the controller logic remains the same, as does the business logic.
  • the method and system of the present invention advantageously make it possible to not interrupt the default pipeline, or only briefly, by adding a new node of the same type but running a slightly different formatting logic.
  • a new node may be started up that uses the same endpoint and the same controller, but with an additional or different handler addressing the difference in atomic blocks, so the route would be the same as for 1.x but with an additional external handler; the back-end application does not need to be modified, as the new version functionality is created at the node level.
  • Versioning preferably also comprises logic and cache to keep those routes separate for 1.x and 1.y, preferably keeping all the logic in the route mapping and supporting versioning by both request header and query string.
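As an illustration of versioning by request header and query string, the following sketch resolves a journey version from either source. The header name "X-Journey-Version", the parameter name "version" and the default value are assumptions for the example, not names from the patent.

```java
import java.util.Map;

// Illustrative version resolution: header takes precedence over query string,
// falling back to the default pipeline when neither is present.
public class VersionResolver {

    private static final String DEFAULT_VERSION = "1.x";   // assumed default

    public String resolve(Map<String, String> headers, Map<String, String> queryParams) {
        String headerVersion = headers.get("X-Journey-Version");
        if (headerVersion != null) {
            return headerVersion;          // e.g. "1.y" routes to the new node
        }
        String queryVersion = queryParams.get("version");
        if (queryVersion != null) {
            return queryVersion;
        }
        return DEFAULT_VERSION;            // unversioned requests keep the old route
    }
}
```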
  • the methodology to define, implement and deploy new journeys also comprises a visual representation of journeys; a repository of atomic blocks; a (visual) journey definition tool; and/or a mapping tool to map the journey definition on the correct Java classes to be written.
  • journey event handlers and journey event processors are preferably provided that can define, implement, test, deploy, monitor and/or change journeys.
  • a datagrid advantageously allows this by keeping data and processing together, limiting pan-system data transfer.
  • Every version of a journey is preferably deployed in a separate container.
  • the solution is therefore a concrete practical and useful realisation of the high level microservices concept.
  • This also allows for multiple versions to be available at the same time, so that if something is wrong with a new version, an older, functional version can be used instead. It also means different versions can be used by different users. This is useful for versioning, in case a new version of a journey does not work, and eliminates any downtime of the system for making changes. If there is a bug in the new version, it is easy to revert to the old version. When a new version is created, the system can then start up a new node to deal with the new journey version in parallel to the existing nodes. This advantageously allows variance and novel offerings and journeys to be introduced without having to stop the existing system, and without affecting the performance.
  • the new node can simply be stopped and the issues addressed without having to stop any of the processes. This reduces downtime and the time required to create a new version of a journey, and makes the system extremely adaptable.
  • the back end user can, without any additional investment into the back end architecture, create a multitude of new offerings, or respond to external events, e.g. law changes requiring different authorisations, or the like.
  • the system also comprises a tool for automated walk-through tests to verify the correctness of the journey definition, i.e. the (hierarchical) composition of atomic blocks.
  • the system also preferably provides for a check to detect whether there are actions occurring that oppose each other, or events or paths that cannot be reached. More preferably, the check may be implemented at the level of the visual modelling tool. This is particularly interesting for walking through new journeys; a dedicated tool would thus make it possible to minimize the need for regression testing.
  • the system preferably also comprises a tool to automatically generate test scenarios and test data, i.e. a test runner and/or test validator, and one or more monitoring tools for the test.
  • Examples for journey versions are as follows: In a banking context, the journeys are easily changeable once the solution is implemented. An example is where a required geolocation input shows that a customer is located in Belgium at one point in time and in the Netherlands at a different moment, which may require a different acknowledgement. Hence a new or different version of a journey will need to be performed. Journeys may also need to be changed when functionality changes.
  • the present solution can be used to create a flexible system which allows banks that want to update journeys to new versions to run multiple versions of a journey. This is useful when banks are deploying a new journey and there is a transition period where both the new and old journeys should be accessible.
  • Another example is car sharing, where there is typically an underlying contract required for exploitation, but also for insurance purposes.
  • if the car sharing scheme wants to add drivers to that contract, a new version of that contract is created as a new version of the journey, rather than just adapting the current contract journey.
  • the contracts that were previously made according to the old contract journey, i.e. before the new version was created, remain untouched, and so do not contain the functionality to add new drivers. Accordingly, the present system and process permit people to be moved between separate contracts rather than the contract having to be amended each and every time a driver is added or removed, as the contracts remain completely separate.
  • Another example is a telecommunications package such as an internet and telephone contract. New versions of a contract are created frequently in line with market conditions. Instead of having to cancel or amend an existing contract, it is much more efficient to conclude a new contract managed as a new journey version, which runs through a completely separate execution.
  • the term "container" as used herein preferably means a software package, also referred to as a microservice of an EPC, that contains everything needed to run the software package: code, runtime, system tools, system libraries - anything that can be installed on a server.
  • a container guarantees that its software will always run the same, regardless of its environment.
  • a container generally includes an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing an application platform and its dependencies, the effects of differences in OS distributions and underlying infrastructure can be avoided. Suitable container software packages are for instance available from Docker, Linux Containers, FreeBSD jails, AIX Workload Partitions, Solaris Containers and CoreOS rkt.
  • the containerisation of the nodes advantageously ensures operability and development, security through redundancy, isolation of nodes in case of issues, and simple access restrictions; it requires inter-node communication and encryption, but permits changeability and versioning without disturbing on-going activities.
  • a system for providing services to a plurality of users, the services comprising at least one type of journey and at least a first and a second version of said one type of journey, wherein a journey has a beginning and an end and comprises a plurality of actions triggered by external events or events translated from user requests received in the system, the system comprising:
  • a first dedicated data cache to store data for said first version;
  • a first dedicated logic coupled to said first dedicated data cache, to process said first version;
  • a second dedicated data cache to store data for said second version;
  • a second dedicated logic coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users;
  • a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second dedicated processor accordingly.
  • the system and apparatus preferably also comprise a copy of the container comprising the one or more nodes.
  • the system and apparatus more preferably also have the container and its copy run on different servers.
  • the system and apparatus preferably also have each node in communication with a configuration cache accessible by the front-end interface.
  • each node is configured to store information about the one or more journeys it is configured to process according to data in the configuration cache, and the front-end interface is configured to route the request based on information in the configuration cache to the node.
  • the system and apparatus preferably also have each node being configured to generate a journey ID for each commenced journey process and communicate the journey ID to the front-end interface, wherein the front-end interface is configured to route a request for a commenced journey process to the node based on the journey ID.
  • the system and apparatus preferably also have one or more actions of at least one of said journeys comprising a request for information stored in a back-end system and the back-end interface is configured to pre-load information from the back-end system into a data cache accessible by a plurality of nodes upon occurrence of an event.
  • the system and apparatus more preferably also have the event corresponding to a user initiating a journey process.
  • the system and apparatus preferably also comprise a back-end system that is a banking or insurance system.
  • the system and apparatus preferably also have the plurality of journeys comprising at least one out of: a plurality of actions for checking a user's account balance, a plurality of actions for transferring money into or out of a bank account, a plurality of actions for making a payment, and a plurality of actions for a financial instrument transaction.
  • Figure 5 illustrates the basic architecture, from an applications standpoint, of a server in the system and apparatus of this invention.
  • the system and apparatus of this invention comprise at least two parallel servers, one being a back-up for the other, each with its own front-end interface, event interface, back-end interface, plurality of nodes and request handler.
  • the functionality of the system and apparatus and each of its servers is implemented via a central processor that manages the launching of script files and controls the operation of each server.
  • the central processor utilizes a central service utility that runs in the background and automates tasks within the system and apparatus.
  • the central service utility includes two types of utilities, one that runs on the individual servers and one that runs across all of the servers.
  • the central service utility utilizes an event-driven design to perform tasks by monitoring a set of directories on the various servers and identifying the presence of an event before initiating, or triggering, an associated script or application. Multiple scripts and flags can be used together to complete tasks, and each task may consist of multiple scripts and/or third party programs.
  • An event may include an empty file, a file comprising a single line of data, or a complete data file; and a flag file contains data that indicates what task is to be performed based on the event.
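The directory-monitoring behaviour described above can be sketched with the standard java.nio WatchService. The watched directory path and the ".flag" naming convention are assumptions for illustration; the patent does not specify them.

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

// Minimal sketch of event-driven directory monitoring: a flag file appearing
// in a watched directory triggers the associated task.
public class DirectoryMonitor {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path watched = Paths.get("/var/service/inbox");   // hypothetical directory
        WatchService watcher = FileSystems.getDefault().newWatchService();
        watched.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take();                // block until an event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                Path created = (Path) event.context();
                if (created.toString().endsWith(".flag")) {
                    // the flag file indicates which task/script to trigger
                    System.out.println("Triggering task for " + created);
                }
            }
            key.reset();
        }
    }
}
```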
  • the central service utility supports tasks performed by standard internet-based services (e.g., Internet Information Services (IIS) and Active Server Page Network (ASP.NET) services) and standard software-framework-based services (e.g., Component Object Model Plus (COM+) and .NET services).
  • IIS Internet Information Services
  • ASP.NET Active Server Page Network
  • software-framework-based services e.g., Component Object Model Plus (COM+) and .NET services.
  • the internet-based services provide functionality for the robust, interactive data exchange processes of the present invention, and provide functionality for presenting data to users of the various systems of the IPI 100 in a web-browser-type format.
  • the software-framework-based services provide functionality for centrally managing all of the business logic and routines utilized by the present invention.
  • Each of the servers also includes functionality for managing a relational database.
  • Each database utilizes relational technology (e.g., a Relational Database Management System (RDBMS)) to manage all discrete data centrally, which facilitates the seamless sharing of information across all applications. And, by using standardized medical vocabularies to normalize data, information can also be shared seamlessly. In addition, by storing data in relational databases, that data can be more efficiently queried to produce de-identified data sets.
  • relational technology e.g., a Relational Database Management System (RDBMS)
  • RDBMS Relational Database Management System
  • each database also utilizes standardized database languages designed for the retrieval and management of data in relational databases, such as the Structured Query Language (SQL) and XML-Related Specifications (e.g., SQL/XML).
  • SQL Structured Query Language
  • XML-Related Specifications e.g., SQL/XML
  • Those standardized database languages are used to assign normalized extensions to particular types of data so that data can be more easily located within a database.
  • those languages can also be used to define proprietary extensions unique to the system in which they are employed. Accordingly, the present invention provides functionality for storing data in a meaningful way that provides fast, easy access, which further enhances the data querying capabilities of the present invention.
  • the system preferably comprises a grid of nodes providing processing and in-memory data, the grid of nodes comprising a first node comprising said first dedicated data cache and said first dedicated logic.
  • the system preferably comprises a request handler to receive requests for services from users, the request handler comprising logic and data to determine the journey version to which a request relates, and logic to route the request to the first node upon a determination that the request relates to a version that can be handled by the first node.
  • the first node comprises a configuration cache accessible by the request handler, the configuration cache being configured to store information about the one or more journey versions the first node can handle, wherein the request handler is configured to route the request to the first node based on information in the configuration cache of the first node.
  • the grid of nodes comprises a second node comprising said second dedicated cache and said second dedicated logic.
  • a single node may be able to perform the logic of the first and the second node, and hence combine the functionality into a combined node.
  • the first node further comprises said second dedicated cache and said second dedicated logic.
  • the first data cache and logic are provided by a first container and the second data cache and logic are provided by a second container.
  • FIGURE 2 is a flow diagram of the operation of the front-end interface or request handler of the system and apparatus of this invention.
  • the request handler communicates with front end applications for receiving external requests from users to allow the users to access services provided via the system and apparatus.
  • FIGURE 3 is a flow diagram of the operation of the event interface or external event handler of the system and apparatus of this invention.
  • the present system and process provide a configuration cache, and a separate data cache.
  • the system uses dedicated caches per journey type and these caches are only accessible by the nodes that have the code for this journey type.
  • the Grid will be informed about the cache's unique name or denominator.
  • dedicated caches are employed with a specific name/denominator for each of the journey types.
  • those nodes which can handle a specific journey type may place the cache name in the configuration cache.
  • the cache, referred to herein as the dynamic configuration cache, contains the journey type and the cache name.
  • the request handler in turn knows the name of the cache by looking in the configuration cache.
  • when the request handler wants to send the request to the nodes, it puts it in the correctly named cache, and the grid then knows to which nodes it can possibly send it.
  • the request handler only needs to know the name of the cache in which to put the requests; hence for the request handler, it is completely transparent how many nodes are available that can handle the request.
  • a defined configuration cache, which is populated when the nodes start up and which is available to all the nodes, advantageously permits reduced data transfer and less time lost in finding the cache.
  • distributed cache technology itself is typically provided by IMDG
  • the present system makes use of a dynamic configuration cache.
  • a second cache comprises backend data in the in memory data grid, for performance reasons.
  • the original copy of the data is always kept in the backend, and to retrieve the most recent value, a "data retrieval instruction journey" is employed.
  • data of a certain age may be employed without upsetting the process, which is also called usage-based freshness.
  • when a journey needs to retrieve data from the backend, it first looks for the latest data in the in-memory cache. For this, it specifies the maximum age that the piece of data should have, the age being the elapsed time since the data was stored in the cache.
  • if the data is too old or absent, the cache will start a "data retrieval instruction journey" to get the "latest" data from the backend.
  • when this data retrieval journey, which is coupled to a "communication journey", brings back the value from the backend, it updates the value in the cache and also stores the timestamp. It then returns the data to the original journey, for which it is by now fresh enough, as it was just retrieved.
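The freshness mechanism of the preceding paragraphs can be summarised in a short sketch. This is a minimal, illustrative implementation assuming a simple key/value cache; the stubbed retrieveFromBackend method stands in for the coupled data retrieval and communication journeys, which the patent describes but does not specify as an API.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Usage-based freshness: serve cached data if it is younger than the
// requested maximum age, otherwise refresh it from the backend.
public class FreshnessCache {

    private record Entry(Object value, Instant storedAt) {}

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();

    public Object get(String key, Duration maxAge) {
        Entry entry = entries.get(key);
        Instant now = Instant.now();
        if (entry != null && Duration.between(entry.storedAt(), now).compareTo(maxAge) <= 0) {
            return entry.value();                 // fresh enough: serve from cache
        }
        // too old or absent: start a "data retrieval instruction journey",
        // store the fresh value with its timestamp, then return it
        Object latest = retrieveFromBackend(key);
        entries.put(key, new Entry(latest, now));
        return latest;
    }

    private Object retrieveFromBackend(String key) {
        // placeholder for the coupled "communication journey" to the backend
        return "value-for-" + key;
    }
}
```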
  • pre-fetching may be employed as an optimization, by coupling journeys that are typically used for a certain action. For example, when a money transfer journey is started, it is already clear that during a later stage in the processing, the debtor account details need to be checked. Hence a data retrieval journey directed to this data can already retrieve these values from the backend even though these are not immediately required, to have the data present in the in memory cache.
  • Backend pre-load: In the case of very large databases, loading substantially all the data into the IMDG may not be economical or practical. Under such circumstances, a user may pre-load selective data that may be expected to be frequently requested or used into the IMDG. For data not pre-loaded in the IMDG, an IMDG client pre-loader or loader plug-in may act as a data access layer to fetch or collect any frequently requested data from the database and cache this data in the IMDG.
  • Frequently requested data may be defined as any data or data object which is used, requested or accessed a set number of times or at a frequency that exceeds a predetermined threshold, or is accessed a preset number of times over a selected time duration.
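A minimal sketch of this "frequently requested" test might look as follows; the class name and the promotion threshold are assumptions, since the patent leaves the threshold configurable.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// A loader plug-in counts accesses and promotes a data object into the IMDG
// cache once a configurable threshold is exceeded.
public class AccessCountingLoader {

    private static final int PROMOTION_THRESHOLD = 10;   // assumed threshold

    private final Map<String, AtomicInteger> accessCounts = new ConcurrentHashMap<>();

    // Returns true when the object has been requested often enough
    // to be worth caching in the IMDG.
    public boolean shouldPromote(String dataKey) {
        int count = accessCounts
            .computeIfAbsent(dataKey, k -> new AtomicInteger())
            .incrementAndGet();
        return count >= PROMOTION_THRESHOLD;
    }
}
```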
  • Figure 1 illustrates a preferred embodiment of the subject invention, namely how a journey instance is handled: The request handler receives a request (101), and checks if the request includes a journey instance ID (102). If it does, i.e. if the request relates to an already commenced journey instance:
  • the request handler checks the journey ID/cache name and identifies in which data cache a journey instance with the ID is stored (112)
  • the request handler instructs the grid to route the request to the node that is responsible for the identified data cache and which holds the primary copy of the journey with that ID at that time (113)
  • the request processor of the node receives the request (114)
  • the request processor translates the request into an event, generates an event object and includes the journey instance ID and data associated with the request in the object (115)
  • the request processor stores the object in the identified data cache (116)
  • the journey event processor is triggered and starts processing the event (117).
  • if the request does not include a journey instance ID, the request handler checks the configuration data cache for a journey type which matches the request and identifies from the configuration data the list of nodes which can process this journey type (103).
  • the request handler asks the grid to send the request to the request processor of one of the nodes in the list, and preferably the grid selects which of the nodes itself (104).
  • the request processor on the selected node receives the request (105), and the request processor of the node generates a unique journey instance ID (106).
  • the request processor then translates the received request into an initial event, generates an initial event object and includes the unique journey instance ID and data for the request in the initial event object (107).
  • the request processor, from its internal configuration, identifies the name of the cache to use for this journey type. It stores the initial event object in the identified data cache, whereby the grid preferably decides which node will hold the primary copy of the event and which nodes will contain a backup copy (108).
  • the journey event processor on the node which has the primary copy of the event is triggered and starts processing the event (109). Then the journey event processor replies to the request handler with the journey instance ID and the name of the cache (110), and the Request handler stores the journey instance ID and the name of the cache (111).
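Steps 103-111 of this flow can be condensed into a hypothetical sketch. The Grid and Node interfaces and the Request record are illustrative stand-ins (not an actual grid API), and the class reuses the ConfigurationCache sketch given earlier.

```java
import java.util.List;
import java.util.Set;
import java.util.UUID;

// Condensed sketch of the Figure 1 flow for a request that does not yet
// carry a journey instance ID.
public class JourneyRequestHandler {

    interface Grid {
        List<String> nodesFor(Set<String> cacheNames);   // (103) nodes per cache name
        Node selectNode(List<String> nodes);             // (104) the grid picks a node
    }

    interface Node {
        // (107-109) translate the request into an initial event object and store
        // it in the dedicated cache, triggering the journey event processor
        void storeInitialEvent(String journeyInstanceId, Request request);
    }

    record Request(String journeyType) {}

    private final ConfigurationCache configCache;        // see the earlier sketch
    private final Grid grid;

    public JourneyRequestHandler(ConfigurationCache configCache, Grid grid) {
        this.configCache = configCache;
        this.grid = grid;
    }

    public String handleNewJourney(Request request) {
        List<String> nodes = grid.nodesFor(configCache.cacheNamesFor(request.journeyType()));
        Node node = grid.selectNode(nodes);
        String journeyInstanceId = UUID.randomUUID().toString();   // (106)
        node.storeInitialEvent(journeyInstanceId, request);
        return journeyInstanceId;            // (110-111) remembered by the handler
    }
}
```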
  • Figure 2 illustrates a preferred embodiment of the process depicted in Figure 1, namely whereby different versions of a journey may be employed.
  • the request handler receives a request (201), and checks whether the request includes a journey instance ID (202). If not, the request handler checks the configuration data cache for a journey type which matches the request and identifies from the configuration data the list of nodes which can process this journey type (203).
  • the request handler uses versioning logic to determine to which version of a journey the request relates (205). The request handler identifies the nodes in the list for the determined version, as indicated by the list (206), and then asks the grid to send the request to the request processor of one of those nodes (207). The request processor on the selected node receives the request (208), and the request processor of the primary node generates a unique journey instance ID (209). The request processor translates the received request into an initial event, generates an initial event object and includes the unique journey instance ID and data for the request in the initial event object (210).
  • the request processor, from its internal configuration, identifies the name of the cache to use for this journey type. It stores the initial event object in the identified data cache (211).
  • the journey event processor on the node which has the primary copy of the event is triggered and starts processing the event (212).
  • the journey event processor replies to the request handler with the journey instance ID and the name of the cache (213)
  • the Request handler stores the journey instance ID and the name of the cache (214)
  • the request handler checks the journey ID/cache name and identifies in which data cache a journey instance with the ID is stored (215)
  • the request handler instructs the grid to route the request to the node that is responsible for the identified data cache and which holds the primary copy of the journey with that ID at that time (216)
  • the request processor of the node receives the request (217)
  • the request processor translates the request into an event, generates an event object and includes the journey instance ID and data associated with the request in the object (218)
  • the request processor stores the object in the identified data cache (219)
  • the journey event processor is triggered and starts processing the event (220).
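Steps 203-207, where versioning logic narrows the candidate nodes, might be sketched as follows; the NodeInfo record and its fields are assumptions for illustration.

```java
import java.util.List;
import java.util.stream.Collectors;

// After matching the journey type (203), keep only the nodes registered for
// the version the versioning logic selected (205-206); the grid then picks
// one of them to receive the request (207).
public class VersionedRouting {

    record NodeInfo(String nodeId, String journeyType, String version) {}

    public List<NodeInfo> nodesForVersion(List<NodeInfo> candidates, String resolvedVersion) {
        return candidates.stream()
                .filter(n -> n.version().equals(resolvedVersion))
                .collect(Collectors.toList());
    }
}
```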
  • FIG. 3 is a flow chart illustrating a preferred embodiment of a process of an external event handler:
  • the external event handler receives an external event (301).
  • the external event handler sends a query to the grid to find the journey IDs of all journey instances that are affected by the external event (302).
  • the external event handler receives from the grid a list of all journey IDs of the journey instances that are affected by the external event (303).
  • the external event handler checks the configuration data cache and determines in which data cache each journey instance with the associated ID is stored (304).
  • the external event handler instructs the grid to route the external event to the node that is responsible for the identified data cache and which holds the primary copy of the journey with that ID at that time (305).
  • the external event processor of the node receives the external event (306).
  • the external event processor translates the external event into an internal event, generates an event object and includes the journey instance ID and data associated with the external event in the object (307).
  • the external event processor stores the object in the identified data cache (308).
  • the journey event processor is triggered and starts processing the event (309).
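The Figure 3 fan-out can be sketched as below. The Grid interface and InternalEvent record are hypothetical stand-ins for the grid query (302-303), the cache lookup (304) and the routing (305-308) described above.

```java
import java.util.List;

// An external event is fanned out to every affected journey instance.
public class ExternalEventFanOut {

    record InternalEvent(String journeyInstanceId, Object payload) {}

    interface Grid {
        List<String> affectedJourneyIds(Object externalEvent);          // (302-303)
        String dataCacheFor(String journeyId);                          // (304)
        void routeToPrimaryNode(String cacheName, InternalEvent event); // (305-308)
    }

    private final Grid grid;

    public ExternalEventFanOut(Grid grid) {
        this.grid = grid;
    }

    public void onExternalEvent(Object externalEvent) {
        for (String journeyId : grid.affectedJourneyIds(externalEvent)) {
            String cacheName = grid.dataCacheFor(journeyId);
            // (307) translate into an internal event carrying the journey ID;
            // storing it in the dedicated cache triggers the journey event
            // processor on the node holding the primary copy (309)
            grid.routeToPrimaryNode(cacheName, new InternalEvent(journeyId, externalEvent));
        }
    }
}
```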
  • FIG. 4 illustrates a preferred embodiment of the apparatus and system (401) including an external event (402) handler (407).
  • a user terminal (403) for sending requests and receiving responses is linked through the security layer (404) to the front end interface (406).
  • the front end interface comprises one or more request handlers (405) which check the request.
  • the request handler also has access to the Journey Id and data cache, and to the grid, to route the request to a node (413) with appropriate journey processing logic.
  • a similar line-up exists for external events, whereby the external event handler may effectively be considered a different kind of node.
  • the node resides in a container, and comprises a request processor (415), a journey event processor (418), and logic to process a particular journey. Nodes of the same type share the journey event and data cache, which is only accessible for nodes of this type (416); whereas all nodes share the dynamic configuration cache.
  • the journey event processor negotiates and communicates with the communication event data cache and the back end system (425) through a communication node (422).
  • the system thus links a back end, and provides a front end for users, and is highly scalable as additional nodes can be added, and operated using the dynamic configuration cache in a grid.
  • a request processor (501) creates an A-E-1 event, i.e. an event 1 of journey type A, and puts it in the journey data cache (518) for journey type A. Then the journey event processor (515) is triggered on the node (524) where the primary copy of the journey instance is stored.
  • the journey processor executes the logic for A-E-1. As part of this logic, it creates a new event A-E-2 (504) and puts it in the cache (518). The processing for A-E-1 is now finished.
  • the journey event processor is then triggered on the node where the primary copy of the journey instance is stored.
  • the journey processor then executes the logic for A-E-2. As part of this logic, it creates a new event B-E-1, wherein B represents another journey type, and puts it in the cache. This is similar to the request processor creating an event for a journey, only now it is the journey event processor of another journey.
  • the processing for A-E-2 is now finished.
  • the journey event processor is triggered on the node where the primary copy of the journey instance is stored.
  • the journey processor executes the logic for B-E-1. As part of this logic, it needs to have a communication with the back-end system. So it creates a new event C-E-1 (C is technically just another journey type, but as it communicates with the messaging environment of the bank, it is referred to as a communication journey).
  • the communication event processor 508 is then triggered on the node where the primary copy of the communication instance is stored.
  • the communication processor 509 executes the logic for C-E-1. This implies sending a message to the backend and waiting for the reply (509, step 1).
  • the communication processor creates a message and sends it through the communication network to the specific backend.
  • the technical routing is done by the communication network (ESB/EAI, step 509.2).
  • the back end receives the message and processes it; after this is done, it sends a reply message (509.3).
  • the reply message is received by the communication event processor.
  • the communication event processor translates this reply message into an event C-E-2 and puts it in the cache.
  • the communication event processor is now done with the processing of event C-E-1.
  • the communication event processor 510 is triggered on the node where the primary copy of the communication instance is stored.
  • the communication processor 511 executes the logic for C-E-2. This consists of creating an event B-E-2 for the journey instance which created C-E-1, and putting it in the cache of journey type B.
  • the journey event processor 512 is triggered on the node where the primary copy of the journey instance is stored.
  • the journey event processor 513 then executes the logic for B-E-2. This consists of creating an event A-E-3 for the journey instance which created B-E-1, and putting it in the cache of journey type A.
  • the journey event processor 514 is triggered on the node where the primary copy of the journey instance is stored.
  • the journey event processor 515 executes the logic for A-E-3. This consists of creating feedback and returning it to the request handler.
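The whole Figure 5 chain can be condensed into one illustrative dispatcher. The JourneyCache interface is a hypothetical stand-in for the dedicated data caches, and the backend exchange that turns C-E-1 into C-E-2 (steps 509.1-509.3) is elided.

```java
// Each processing step finishes by putting the next event in the dedicated
// cache of the journey type that must handle it, which triggers that type's
// processor on the node holding the primary copy.
public class ChainedEventProcessor {

    interface JourneyCache {
        void put(String journeyType, String eventName);
    }

    private final JourneyCache cache;

    public ChainedEventProcessor(JourneyCache cache) {
        this.cache = cache;
    }

    public void process(String journeyType, String eventName) {
        switch (journeyType + "-" + eventName) {
            case "A-E-1" -> cache.put("A", "E-2");  // A-E-1 ends by creating A-E-2
            case "A-E-2" -> cache.put("B", "E-1");  // hand over to journey type B
            case "B-E-1" -> cache.put("C", "E-1");  // start the communication journey
            case "C-E-2" -> cache.put("B", "E-2");  // backend reply propagates back
            case "B-E-2" -> cache.put("A", "E-3");  // ... up to the originating journey
            case "A-E-3" -> { /* create feedback for the request handler */ }
            default -> throw new IllegalArgumentException("unexpected event: " + eventName);
        }
    }
}
```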
  • Figure 6 finally illustrates the preferred use of a system comprising a multitude of apparatuses (here two, 603 and 604) and servers, which can advantageously be on one server, or on several servers at varying distances, whereby the operation may be distributed, to allow for instance security of supply, or scalability.
  • the configuration and data caches are preferably shared, although the containers and nodes may be located at a distance in different servers; and the grid may advantageously distribute the user front end requests (601) to the most useful request handler (603 or 604).

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a system for providing services to a plurality of users, the services comprising at least one type of journey and at least a first and a second version of said one type of journey, wherein a journey has a beginning and an end and comprises a plurality of actions triggered by external events or events translated from user requests received in the system, the system comprising: a first node responsible for a dedicated data cache to store data for said first version; and comprising first dedicated logic, coupled to said first dedicated data cache, to process said first version; a second node responsible for a dedicated data cache to store data for said second version; and comprising second dedicated logic, coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users; each of the first and second node being provided in a separate container; and a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second node accordingly.

Description

System and Apparatus for Providing Different Versions of a Type of Data Journey
Field of the Invention
Aspects of the present invention relate to data storage and retrieval, and more particularly to a method, system and computer program product for dynamic creation and deployment of different versions of a data journey in an in memory data grid or similar data storage arrangement, particularly for an institution possessing large amounts of data. The invention particularly relates to a front-end interface with a handler connected to a node, storing data concerning events captured by an event interface, to provide a more rapid response to a customer's request of the system and apparatus, and dynamic creation and deployment of different versions of a data journey.
Background of the Invention
Traditionally, programming involved a sequential process with a call stack, following predictive central control. Changes to the software typically require a full reprogramming of the entire process, and hence are slow and costly to implement. However, issues can be corrected quite easily, as the systems are structured linearly. Among users with large amounts of data and increasing numbers of transactions, traditional banking institutions stand out as having serviced in the above way for a long time, as they mainly operate large databases. If a new functionality needs to be included, there is no easy way to implement this without serious downtime, reduced service speed, or access problems. Also, even if only a single service needs updating, all services need to be changed, which could mean shutting down the services temporarily. For example, even if only 1% needs changing to provide new functionality, 99% of the services will require checking, updating or non-regression testing. A similar example outside the banking world relates to telecommunications service providers, whose business model evolves quickly, but who can only make changes with great difficulty and at risk of loss of services, as their legacy systems tend to be heavy, risky and slow. The risk is that items might be changed which should not be changed, negatively affecting the functioning (regression); typically, non-regression testing is therefore required.
These issues have become more aggravated in recent years, as legacy database systems have reached their capacity, as illustrated by the almost daily unavailability of online and mobile banking or telecom applications.
In recent years, Event-Driven Architecture (EDA) has been developed, permitting the implementation of multistage business processes that deliver goods, services and information with minimum delay. EDA is based on distributed processing, whereby "nodes", i.e. microservices, react to incoming events and publish events in response. Event channels then transport events from one node to the next, usually asynchronously. This means the system is quicker and responds faster, as a reaction follows as soon as an event is triggered, and usually no central response is required to further the process. EDA is often used in conjunction with in-memory data grids (IMDG) to store data that is more frequently used or requested, as this allows for fast scaling. The IMDG in turn may be used in conjunction with traditional databases. Holding the more frequently used or accessed data in the cache of the IMDG enables faster data access, as the data is accessed from memory rather than from the database, which also reduces stress on the database. Also, IMDGs have the advantage of providing caches which are distributed over a number of nodes, thereby acting as a channel and allowing a faster exchange of events.
However, IMDG usage is innately limited by the amount of data that can be stored in the IMDG. The loading of data into the IMDG is typically restricted to a static model where the end user determines the data that needs to be loaded into the IMDG. Also, IMDG architecture normally requires bringing nodes down for upgrading or changing of node logic, even if applying a
multiversion software updating technique and subsequently creating new versions of a node with additional code inserted. So while the entire system does not need to go into downtime, it still requires nodes of a certain type to go down, similar to the call stack architecture although at a different scale, and to be replaced by the upgraded node. As this happens during the normal operation of the grid, it will touch other aspects of the code of the grid, and so creates a risk of instability.
As all of the processing logic is collated in a single large user application, namely the grid, there is always the risk of some code or action being corrupted or incomplete when introducing a change, leading to a potential total or partial failure. Due to the decentralised nature of the distributed computing in a grid, this will lead to issues with the recognition of errors or faulty code, as the interrelated nature of the IMDG makes errors more difficult to spot and to correct, and the distributed state means that diagnostics and error handling are more complex than in sequential, synchronous, predictive, transactional, centralized systems.
Accordingly, there is a need for improved data processing systems, allowing distributed computing and event-driven architecture, as well as simple upgrading of nodes or data journey versioning, to enable users to add variance and evolve in real time without the risk of unnecessary downtime and loss of data, and without impact on those parts of the system that should remain unaffected by the changes.
Summary of the Invention
Accordingly, the present invention relates to a system for providing services to a plurality of users, the services comprising at least one type of journey and at least a first and a second version of said one type of journey, wherein a journey has a beginning and an end and comprises a plurality of actions triggered by external requests and external events or events translated from user requests received in the system, the system comprising: a first dedicated data cache to store data for said first version; a first dedicated logic, coupled to said first dedicated data cache, to process said first version; a second dedicated data cache to store data for said second version; a second dedicated logic, coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users; a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second dedicated processor accordingly. The improved system preferably is compatible with an existing data processing apparatus and system of the institution.
In a further aspect, the present invention also relates to a process for providing services to a plurality of users, the services comprising at least one type of journey and at least a first and a second version of said one type of journey, wherein a journey has a beginning and an end and comprises a plurality of actions triggered by external requests and external events or events translated from user requests received in a system, the process comprising: providing a first dedicated data cache to store data for said first version; providing a first dedicated logic, coupled to said first dedicated data cache, to process said first version; providing a second dedicated data cache to store data for said second version; providing a second dedicated logic, coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users; providing a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second dedicated processor accordingly.
In a further aspect the present invention relates to machine executable instructions that when executed by at least one processor cause the at least one processor to perform the subject process.
In a further aspect the present invention relates to a non-transitory machine readable medium comprising machine executable instructions according to the subject process.
In a further aspect the present invention relates to machine readable storage storing machine executable instructions according to the present process. In yet a further aspect, the present invention also relates to the use of a system according to the invention to update and/or vary the service offerings to the plurality of users by adding a new version of a node in parallel instead of updating an existing running node.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
The present invention is further described in the detailed description which follows in reference to the noted plurality of drawings by way of non-limiting examples of embodiments of the present invention in which like reference numerals represent similar parts throughout the several views of the drawings and wherein:
FIGURE 1 is a flow diagram illustrating a process that makes use of the architecture, from an applications standpoint, of a preferred embodiment of a server in the system and apparatus of this invention. Preferably the system and apparatus of this invention comprise at least two parallel servers, one being a back-up for the other, each with its own front-end interface, event interface, back-end interface, plurality of nodes and request handler, as also illustrated in Figures 4, 5 and 6.
FIGURE 2 is a flow diagram of the operation of the front-end interface or request handler of the system and apparatus in the absence of different versions of a journey instance. The request handler communicates with front-end applications for receiving external requests from users to allow the users to access services provided via the system and apparatus.
FIGURE 3 is a flow diagram of the operation of the event interface or optional external event handler of the system and apparatus of this invention, whereby different versions of a journey instance may be employed, allowing for a unique variance and granularity of the process.
FIGURE 4 is a block diagram of a preferred embodiment of the apparatus and system of this invention, including the external event handler, the request handler, the nodes and the communication with the back-end system.
FIGURE 5 is a flow diagram illustrating how several different journeys in the subject system are triggered by chained events processed on different nodes, including communication with the back-end system.
FIGURE 6 is a block diagram of the operation of several systems coupled through the grid- provided configuration and data caches.
The flowcharts and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Detailed Description of the Invention
Unless otherwise stated, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fibre, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fibre cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The term "event based processing" or "EPC" as used herein preferably means event-driven data programming that has an application flow control determined by events or changes in state. EPC is an ordered structure or chain of events and functions. It provides various connectors that allow alternative and parallel execution of processes. Furthermore it is specified by the usages of logical operators, such as OR, AND, and XOR. EPCs require non-local semantics, i.e., the execution behaviour of a particular node within an EPC may depend on the state of other parts of the EPC.
The term "activity" or "function" as used herein preferably means an active component of an EPC that has decision-making authority and that typically consumes time and resources.
The term "event" preferably means, in accordance with DIN 69900, a condition of an EPC that has occurred and causes a sequence of activities. An event is a passive component of an EPC and has no decision-making authority. Events can trigger activities. The term "node" as used herein preferably means a self-contained storage node of an EPC, especially a node interconnected with other nodes in a storage grid so that any node can communicate with any other node without the data having to pass through a centralized switch. Each node contains its own storage medium, microprocessor, indexing capability, and management layer. Because EPCs require non-local semantics, the execution behaviour of a particular node within an EPC may depend on the state of other nodes of the EPC, possibly far away. A cluster of several nodes may share a common switch, but each node is also connected to at least one other node cluster. Nodes are individual parts of a larger data structure, such as linked lists and tree data structures.
Preferably nodes are stored in grids. Grid storage introduces a new level of fault-tolerance and redundancy. If one storage node fails or a pathway between two nodes is interrupted, the network can reroute access another way or to a redundant node. This reduces the need for online maintenance and practically eliminates downtime. Also, the multiple paths between pairs of nodes ensure that a storage grid can maintain optimum performance under conditions of fluctuating load. Grid storage is also scalable: if a new storage node is added, it can be automatically recognized by the rest of the grid, which reduces the need for expensive hardware upgrades and downtime. Suitable software packages for grid storage for handling the stored data are available from Gemfire, Hazelcast, XAP Gigaspaces and GridGain. In this regard, the software package used preferably acts as a data access and processing layer between the application and the data storage and uses memory (RAM) as the primary data repository for data being processed, rather than a disk.
Normally, grid storage systems for handling stored data involve the use of only one data cache, accessible by all nodes. However, as not all the nodes in the system and apparatus of this invention have the same functionality, each separate event handled by the system and apparatus of this invention must only end up in the nodes that can handle it. To this end, the system and apparatus of this invention have different dedicated data caches for each journey type, and each of these data caches is only accessible by the nodes that have the code for this journey type. GridGain, for example, offers technology to have data stored in a dedicated data cache and distributed only to a predetermined set of nodes. When data is put in a dedicated data cache, it is sufficient just to inform GridGain of the name of the dedicated data cache. In this regard, the names of each set of dedicated data caches can be stored and made accessible to GridGain in a configuration cache.
In the system and apparatus of this invention, each dedicated data cache is provided with a specific name for each of the journey types of the invention for which it is storing data. The nodes which can handle a specific journey type put the dedicated data cache name in a configuration cache. When a node starts up, it registers itself with the grid and requests that the grid provide access to the dedicated data cache with the specific name of the journey type handled by the node. The configuration cache is also informed when nodes start up and which nodes are available. So, the dedicated cache contains the journey type and the journey name. Through each node registering itself in the grid, the grid knows from the configuration cache which nodes should be able to access the dedicated data cache with the specific journey name. The request handler knows the name of the dedicated data cache by looking in the configuration cache. When the request handler wants to send the request to the nodes, it puts it in the correctly named dedicated data cache, and the grid then knows to which nodes it can possibly send the request. Using the dedicated data caches and the configuration cache in this way, the request handler only needs to know the name(s) of the dedicated data cache(s) for each request.
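By way of illustration only, the following minimal Java sketch mimics the registration mechanism just described; all class, method and cache names (ConfigurationCache, registerNode, the "journey-cache-" naming convention) are hypothetical simplifications and do not correspond to any vendor API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal sketch of the registration described above; all names are
// hypothetical and stand in for the IMDG vendor's real API.
public class ConfigurationCache {

    // journey type -> name of the dedicated data cache for that type
    private final Map<String, String> cacheNameByJourneyType = new ConcurrentHashMap<>();
    // dedicated cache name -> nodes that registered for it
    private final Map<String, List<String>> nodesByCacheName = new ConcurrentHashMap<>();

    // Called by a node at start-up: it publishes which journey type it can
    // handle and asks for access to the matching dedicated data cache.
    public void registerNode(String nodeId, String journeyType) {
        String cacheName = "journey-cache-" + journeyType;   // naming convention assumed
        cacheNameByJourneyType.put(journeyType, cacheName);
        nodesByCacheName.computeIfAbsent(cacheName, k -> new CopyOnWriteArrayList<>())
                        .add(nodeId);
    }

    // Used by the request handler: it only ever needs the cache name.
    public String cacheNameFor(String journeyType) {
        return cacheNameByJourneyType.get(journeyType);
    }

    public List<String> nodesFor(String cacheName) {
        return nodesByCacheName.getOrDefault(cacheName, List.of());
    }

    public static void main(String[] args) {
        ConfigurationCache config = new ConfigurationCache();
        config.registerNode("node-1", "account-balance");
        config.registerNode("node-2", "account-balance");
        // The request handler resolves the dedicated cache by journey type only.
        String cache = config.cacheNameFor("account-balance");
        System.out.println(cache + " -> " + config.nodesFor(cache));
    }
}
```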
The term "journey" as used herein preferably means the actions (e.g., authorisations, verifications, back-end, queries, etc.) occurring in an EPC, triggered by external requests by customers. For instance, in the case of a banking environment, Herein, a journey of a customer can occur between a front-end security layer and a back-end data processing layer of a bank, as follows: A customer's journey can be broken down in multiple small journeys, such as: 1. his/her checking his bank account to know its balance; 2. subsequently starting to make a payment from his/her account; 3. subsequently confirming and finalising the payment whereby an event occurs and the journey ends. A journey in accordance with this invention could be, for example a journey for a bank account: Each entry and the current balance of a customer, as well as each authorisation, verification, querying back-end etc. occurring internally, triggered by events and external requests, is recorded and updated and all changes can be viewed; a journey of a consent status: Each change of a client's consent to do something on his behalf, e.g., by a trader, is recorded and updated and all changes can be viewed; a journey of energy management, e.g., electricity/gas metre: Each gas and/or electricity metre reading is recorded and all changes can be viewed; a journey of
computer/phone backups: All system backups are recorded and all changes can be viewed so that the reason for any problem with computer/phone can be identified.
"Version" of journey, or journey "versioning" herein refers to developing and deploying a new version, e.g. changing journey type l.y to journey type 1.x. A journey comprises atomic blocks or steps, i.e. comprising single decision points. In different versions of the same journey, one or more atomic steps may be different, however the controller logic remains the same, as does the business logic. The method and system of the present invention advantageously permit to not, or only briefly interrupt the default pipeline, by adding a new node of the same type but running a slightly different formatting logic. A new node may be started up that uses the same endpoint and the same controller, but with an additional handler or different handler addressing the difference in atomic blocks, so the route would be the same as for 1.x but with an additional external handler; while the back end application does need to be modified, creating the new version functionality in the node level. Versioning preferably also comprises logic, and cache to keep those routes separate for 1.x and l.y, preferably keeping all the logic in the route mapping and support versioning by both request header and query string. Preferably, the methodology to define, implement and deploy new journeys also comprises a visual representation of journeys; a repository of atomic blocks; a (visual) journey definition tool; and/or a mapping tool to map the journey definition on the correct Java classes to be written.
In the present system, journey event handlers and journey event processors are preferably provided that can define, implement, test, deploy, monitor and/or change journeys. A datagrid advantageously allows this by keeping data and processing together, limiting pan-system data transfer.
Every version of a journey is preferably deployed in a separate container. The solution is therefore a concrete practical and useful realisation of the high level microservices concept. By creating each version of a journey as an individual, runnable microservice, it can advantageously be modified without touching the already existing nodes, therefore reducing the risk of system collapse.
This also allows multiple versions to be available at the same time, so that if something is wrong with a new version, an older, functional version can be used instead. It also means different versions can be used by different users. This is useful for versioning, in case a new version of a journey does not work, and eliminates any downtime of the system for making changes. If there is a bug in the new version, it is easy to revert to the old version. When a new version is created, the system can start up a new node to deal with the new journey version in parallel to the existing nodes. This advantageously allows variance, novel offerings and journeys to be introduced without having to stop the existing system, and without affecting the services being performed. In the case that a new node should not work appropriately, the new node can simply be stopped and the issues addressed without having to stop any of the processes. This reduces downtime and the time required to create a new version of a journey, and makes the system extremely adaptable. By use of this process and system, the back-end user can, without any additional investment into the back-end architecture, create a multitude of new offerings, or respond to external events, e.g. law changes requiring different authorisations, or the like.
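The parallel deployment and roll-back behaviour described above could be sketched, under assumed names, as a simple in-memory registry; this is illustrative only and not the patented implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: a registry that lets a new journey version run in
// parallel with the old one and be withdrawn without stopping anything else.
public class JourneyVersionRegistry {

    // journey type -> currently routable versions (version -> node id)
    private final Map<String, Map<String, String>> liveVersions = new ConcurrentHashMap<>();

    // Deploying a new version only *adds* a route; existing nodes are untouched.
    public void deploy(String journeyType, String version, String nodeId) {
        liveVersions.computeIfAbsent(journeyType, k -> new ConcurrentHashMap<>())
                    .put(version, nodeId);
    }

    // If the new version misbehaves, withdrawing it reverts traffic to the
    // versions still registered; no node of the old version ever went down.
    public void withdraw(String journeyType, String version) {
        Map<String, String> versions = liveVersions.get(journeyType);
        if (versions != null) versions.remove(version);
    }

    public static void main(String[] args) {
        JourneyVersionRegistry registry = new JourneyVersionRegistry();
        registry.deploy("payment", "1.x", "node-1");
        registry.deploy("payment", "1.y", "node-7"); // new version runs in parallel
        registry.withdraw("payment", "1.y");         // bug found: revert, zero downtime
    }
}
```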
Preferably the system also comprises a tool for automated walk-through tests to verify the correctness of the journey definition, i.e. the (hierarchical) composition of atomic blocks. Similarly to classic walk-through tests for process-based scenarios, the several paths may be walked through automatically and the actions occurring in the several atomic blocks of a journey investigated. The system also preferably provides a check to detect whether there are actions occurring that oppose each other, or events or paths that cannot be reached. More preferably, the check may be implemented at the level of the visual modelling tool. This is particularly interesting for walking through new journeys; a dedicated tool would thus allow the need for regression testing to be minimised.
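A reachability check of the kind mentioned (paths or events that cannot be reached) can be sketched as a depth-first walk over a journey definition modelled as a directed graph of atomic blocks; the representation and names below are assumptions for illustration.

```java
import java.util.*;

// A minimal sketch of the consistency check mentioned above: the journey
// definition is modelled as a directed graph of atomic blocks, and blocks
// that cannot be reached from the start block are reported.
public class JourneyWalkThrough {

    public static Set<String> unreachableBlocks(String start, Map<String, List<String>> edges) {
        Set<String> visited = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>(List.of(start));
        while (!stack.isEmpty()) {                      // walk every reachable path
            String block = stack.pop();
            if (visited.add(block)) {
                stack.addAll(edges.getOrDefault(block, List.of()));
            }
        }
        Set<String> unreachable = new HashSet<>(edges.keySet());
        unreachable.removeAll(visited);
        return unreachable;
    }

    public static void main(String[] args) {
        Map<String, List<String>> journey = Map.of(
                "checkBalance", List.of("startPayment"),
                "startPayment", List.of("confirmPayment"),
                "confirmPayment", List.of(),
                "legacyStep", List.of("confirmPayment")); // never reached from the start
        System.out.println(unreachableBlocks("checkBalance", journey)); // [legacyStep]
    }
}
```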
To allow automatic testing for the implementation, the system preferably also comprises a tool to automatically generate test scenarios and test data, i.e. a test runner and/or test validator, and one or more monitoring tools for the test.
Examples of journey versions are as follows. In a banking context, the journeys are easily changeable once the solution is implemented. An example is a geolocation input showing that a customer is located in Belgium at one point in time and in the Netherlands at a different moment, which may require a different acknowledgement; hence a new or different version of a journey will need to be performed. Journeys may also need to be changed when functionality changes. The present solution can be used to create a flexible system which allows banks that want to update journeys to new versions to run multiple versions of a journey. This is useful when banks are deploying a new journey and there is a transition period during which both the new and old journeys should be accessible.
Another example is car sharing, where an underlying contract is typically required, for operation but also for insurance purposes. When the car sharing scheme wants to add drivers to that contract, a new version of that contract is created as a new version of the journey, rather than just adapting the current contract journey.
When the new contract journey is made, the contracts that were previously made according to the old contract journey, i.e. before the new version was created, remain untouched, and so do not contain the functionality to add new drivers. Accordingly, the present system and process permit people to be moved between separate contracts rather than the contract having to be amended each and every time a driver is added or removed, as the contracts remain completely separate.
Another example is a telecommunications package such as an internet and telephone contract. New versions of a contract are created frequently in line with market conditions. Instead of having to cancel or amend an existing contract, it is much more efficient to conclude a new contract managed as a new journey version, which is handled through a completely separate execution.
The term "container" as used herein preferably means a software package, also referred to as microservice of an EPC that contains everything needed to run the software package: code, runtime, system tools, system libraries - anything that can be installed on a server. A container guarantees that its software will always run the same, regardless of its environment. A container generally includes an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing an application platform and its dependencies, the effects of differences in OS distributions and underlying infrastructure can be avoided. Suitable container software packages are for instance available from Docker, Linux Containers, FreeBSD jails, AIX Workload Partitions, Solaris Containers and CoreOS rkt.
The containerisation of the nodes advantageously ensures operability and development; security through redundancy; isolation of nodes in case of issues; simple access restrictions; and encryption. It requires inter-node communication, but permits changeability and versioning without disturbing on-going activities.
In accordance with this invention, a system is provided for providing services to a plurality of users, the services comprising at least one type of journey and at least a first and a second version of said one type of journey, wherein a journey has a beginning and an end and comprises a plurality of actions triggered by external events or events translated from user requests received in the system, the system comprising:
a first dedicated data cache to store data for said first version;
a first dedicated logic, coupled to said first dedicated data cache, to process said first version;
a second dedicated data cache to store data for said second version;
a second dedicated logic, coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users;
a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second dedicated processor accordingly.
The system and apparatus preferably also comprise a copy of the container comprising the one or more nodes. The system and apparatus more preferably also have the container and its copy run on different servers.
The system and apparatus preferably also have each node in communication with a configuration cache accessible by the front-end interface. In this regard, each node is configured to store information about the one or more journeys it is configured to process according to data in the configuration cache, and the front-end interface is configured to route the request based on information in the configuration cache to the node. The system and apparatus preferably also have each node being configured to generate a journey ID for each commenced journey process and communicate the journey ID to the front-end interface, wherein the front-end interface is configured to route a request for a commenced journey process to the node based on the journey ID.
The system and apparatus preferably also have one or more actions of at least one of said journeys comprising a request for information stored in a back-end system and the back-end interface is configured to pre-load information from the back-end system into a data cache accessible by a plurality of nodes upon occurrence of an event. The system and apparatus more preferably also have the event corresponding to a user initiating a journey process.
The system and apparatus preferably also comprise a back-end system that is a banking or insurance system.
The system and apparatus preferably also have the plurality of journeys comprising at least one out of a plurality of actions for checking a user's account balance, a plurality of actions for transferring money into or out of a bank account, a plurality of actions for making a payment, and a plurality of actions for a financial instrument transaction.
As set out above, Figure 5 illustrates the basic architecture, from an applications standpoint, of a server in the system and apparatus of this invention. Preferably the system and apparatus of this invention comprise at least two parallel servers, one being a back-up for the other, each with its own front-end interface, event interface, back-end interface, plurality of nodes and request handler.
The functionality of the system and apparatus and each of its servers is implemented via a central processor that manages the launching of script files and controls the operation of each server. The central processor utilizes a central service utility that runs in the background and automates tasks within the system and apparatus. Thus, the central service utility includes two types of utilities, one that runs on the individual servers and one that runs across all of the servers.
The central service utility utilizes an event-driven design to perform tasks by monitoring a set of directories on the various servers and identifying the presence of an event before initiating, or triggering, an associated script or application. Multiple scripts and flags can be used together to complete tasks, and each task may consist of multiple scripts and/or third party programs. An event may include an empty file, a file comprising a single line of data, or a complete data file; and a flag file contains data that indicates what task is to be performed based on the event.
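A minimal sketch of such a directory-monitoring utility, using the standard Java WatchService, might look as follows; the directory name and the task dispatch are illustrative assumptions.

```java
import java.io.IOException;
import java.nio.file.*;

// A minimal sketch of the event-driven central service utility described
// above: it monitors a directory and triggers a task when an event file
// appears. Directory name and task dispatch are assumptions for illustration.
public class CentralServiceUtility {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path watched = Paths.get("events");              // hypothetical event directory
        Files.createDirectories(watched);

        WatchService watcher = FileSystems.getDefault().newWatchService();
        watched.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take();               // blocks until an event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                Path eventFile = (Path) event.context();
                // A flag file of the same name would indicate which task to run;
                // here we just log the trigger instead of launching a script.
                System.out.println("Event detected, triggering task for: " + eventFile);
            }
            if (!key.reset()) break;                     // directory no longer accessible
        }
    }
}
```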
The central service utility supports tasks performed by standard internet-based services (e.g., Internet Information Services (IIS) and Active Server Page Network (ASP.NET) services) and standard software-framework-based services (e.g., Component Object Model Plus (COM+) and .NET services). The internet-based services provide functionality for the robust, interactive data exchange processes of the present invention, and provide functionality for presenting data to users of the various systems of the I PI 100 in a web-browser-type format. The software-framework-based services provide functionality for centrally managing all of the business logic and routines utilized by the present invention.
Each of the servers also includes functionality for managing a relational database. Each database utilizes relational technology (e.g., a Relational Database Management System (RDBMS)) to manage all discrete data centrally, which facilitates the seamless sharing of information across all applications. And, by using standardized medical vocabularies to normalize data, information can also be shared seamlessly. In addition, by storing data in relational databases, that data can be more efficiently queried to produce de-identified data sets.
To further facilitate the efficient querying of data, each database also utilizes standardized database languages designed for the retrieval and management of data in relational databases, such as the Structured Query Language (SQL) and XML-Related Specifications (e.g., SQL/XML). Those standardized database languages are used to assign normalized extensions to particular types of data so that data can be more easily located within a database. And, in addition to standard extensions provided as part of those languages, those languages can also be used to define proprietary extensions unique to the system in which they are employed. Accordingly, the present invention provides functionality for storing data in a meaningful way that provides fast, easy access, which further enhances the data querying capabilities of the present invention.
The system preferably comprises a grid of nodes providing processing and in-memory data, the grid of nodes comprising a first node comprising said first dedicated data cache and said first dedicated logic.
The system preferably comprises a request handler to receive requests for services from users, the request handler comprising logic and data to determine a journey version a request relates to and logic to route the request to the first node upon a determination that the request relates to a version that can be handled by the first node.
In the subject system, preferably the first node comprises a configuration cache accessible by the request handler, the configuration cache being configured to store information about the one or more journey versions the first node can handle, wherein the request handler is configured to route the request to the first node based on information in the configuration cache of the first node.
In the subject system, preferably the grid of nodes comprises a second node comprising said second dedicated cache and said second dedicated logic. Alternatively, preferably a single node may be able to perform the logic of the first and the second node, and hence combine the functionality into a combined node. In the subject system, preferably the first node further comprises said second dedicated cache and said second dedicated logic.
In the subject system, preferably the first data cache and logic are provided by a first container and the second data cache and logic are provided by a second container.
Contrary to the traditional use of an IMDG, where only one data cache is provided, accessible by all nodes, the present system and process provide a configuration cache and a separate data cache. As not all of the nodes have the same functionality, it must be assured that events only end up in the nodes that can handle them. Therefore the system uses dedicated caches per journey type, and these caches are only accessible by the nodes that have the code for this journey type.
As the IMDG offers the technology to have data stored in a cache which is distributed over a set of nodes, when placing data in a cache, the Grid will be informed about the cache's unique name or denominator. Preferably, dedicated caches are employed with a specific name/denominator for each of the journey types. Advantageously, then, those nodes which can handle a specific journey type may place the cache name in the configuration cache. When a node starts up, it also registers itself with the grid and requests the grid to get access to the cache with the specific name. As a result, the cache, referred to herein as dynamic configuration cache, contains the journey type and the cache name. By the node registering itself in the grid, the grid knows which nodes access the cache with a specific name. The request handler in turn knows the name of the cache by looking in the configuration cache. When the request handler wants to send the request to the nodes, it puts it in the correct named cache, and the grid then knows to which nodes it can possibly send it. Using the caches in this way, the request handler only needs to know the name of the cache in which to put the requests; hence for the request handler, it is completely transparent how many nodes are available that can handle the request.
Accordingly, the use of a defined configuration cache, which is populated when the nodes start up and which is available to all the nodes, advantageously permits data transfer, and time lost in finding the cache, to be reduced. While distributed cache technology itself is typically provided by the IMDG, the present system makes use of a dynamic configuration cache.
A second cache comprises backend data in the in-memory data grid, for performance reasons. The original copy of this data is always kept in the backend, and to retrieve the most recent value, a "data retrieval instruction journey" is employed. In order to reduce the need for continuously requesting the most up-to-date data from the back-end databases, data of a certain age may be employed without upsetting the process, also called "usage-based freshness". When a journey needs to retrieve data from the backend, it first looks for the latest data in the in-memory cache. For this, it specifies the maximum age that the piece of data should have, the age being the elapsed time since the data was stored in the cache. If the data in the cache is older than this maximum age, the cache will start a "data retrieval instruction journey" to get the "latest" data from the backend. When this data retrieval journey, which is coupled to a "communication journey", brings back the value from the backend, it updates the value in the cache and also stores the timestamp. It then returns the data, which by now is fresh enough as it was just retrieved, to the original journey.
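A minimal sketch of the usage-based freshness rule, assuming the data retrieval instruction journey can be modelled as a simple callback, might look as follows; all names are hypothetical.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of the "usage based freshness" rule described above. The backend
// call is modelled as a Supplier standing in for the data retrieval
// instruction journey.
public class FreshnessCache {

    private record TimestampedValue(Object value, Instant storedAt) {}

    private final Map<String, TimestampedValue> cache = new ConcurrentHashMap<>();

    // Returns the cached value if it is younger than maxAge; otherwise starts
    // the (here synchronous) retrieval journey, refreshes the cache and the
    // timestamp, and returns the freshly fetched value.
    public Object get(String key, Duration maxAge, Supplier<Object> retrievalJourney) {
        TimestampedValue cached = cache.get(key);
        Instant now = Instant.now();
        if (cached != null && Duration.between(cached.storedAt(), now).compareTo(maxAge) <= 0) {
            return cached.value();                       // fresh enough for this journey
        }
        Object latest = retrievalJourney.get();          // fetch from the backend
        cache.put(key, new TimestampedValue(latest, now));
        return latest;
    }

    public static void main(String[] args) {
        FreshnessCache cache = new FreshnessCache();
        // First call misses and goes to the backend; a second call within 60s
        // is served from memory without touching the backend again.
        System.out.println(cache.get("balance:42", Duration.ofSeconds(60), () -> "1000.00 EUR"));
        System.out.println(cache.get("balance:42", Duration.ofSeconds(60), () -> "999.00 EUR"));
    }
}
```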
Similarly, pre-fetching may be employed as an optimization, by coupling journeys that are typically used for a certain action. For example, when a money transfer journey is started, it is already clear that during a later stage in the processing the debtor account details will need to be checked. Hence a data retrieval journey directed to this data can already retrieve these values from the backend, even though they are not immediately required, to have the data present in the in-memory cache.
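Pre-fetching can be sketched as an asynchronous retrieval started as soon as the journey begins, so the value is already cached when the later step needs it; the names and the account number below are illustrative only.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch of the pre-fetch optimisation: starting a money
// transfer journey immediately warms the cache with the debtor account
// details that a later step is known to need.
public class PrefetchingJourney {

    private static final ExecutorService POOL = Executors.newCachedThreadPool();

    static CompletableFuture<String> prefetchDebtorAccount(String accountId) {
        // Runs the coupled data retrieval journey in the background so the
        // data is already in the in-memory cache when the later step needs it.
        return CompletableFuture.supplyAsync(() -> "details-of-" + accountId, POOL);
    }

    public static void main(String[] args) {
        CompletableFuture<String> debtor = prefetchDebtorAccount("BE71-0961-2345-6769");
        // ... earlier journey steps run here ...
        System.out.println("Later step uses: " + debtor.join());
        POOL.shutdown();
    }
}
```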
Backend pre-load: In the case of very large databases, loading substantially all the data into the IMDG may not be economical or practical. Under such circumstances, a user may pre-load selective data that may be expected to be frequently requested or used into the IMDG. For data not pre-loaded in the IMDG, an IMDG client-pre-loader or loader plug-in may act as a data access layer to fetch or collect any frequently requested data from the database and cache this data in the IMDG.
Frequently requested data may be defined as any data or data object which is used, requested or accessed with a frequency that exceeds a predetermined threshold, or which is accessed a preset number of times over a selected time duration. Figure 1 illustrates a preferred embodiment of the subject invention, namely how a journey instance is handled: The request handler receives a request (101), and checks if the request includes a journey instance ID (102). If it does, i.e. if the response is yes at step 102, the request handler checks the journey ID/cache name and identifies in which data cache a journey instance with the ID is stored (112). The request processor instructs the grid to route the request to the node that is responsible for the identified data cache and which holds the primary copy of the journey with that ID at that time (113). The request processor of the node receives the request (114). The request processor translates the request into an event, generates an event object and includes the journey instance ID and data associated with the request in the object (115). The request processor stores the object in the identified data cache (116). The journey event processor is triggered and starts processing the event (117).
If the response is "no" at step (102), the request handler checks the configuration data cache for a journey type which matches the request and identifies from the configuration data the list of nodes which can process this journey type (103). The request handler asks the grid to send the request to the request processor of one of the nodes in the list, and preferably the grid selects which of the nodes itself (104). The request processor on the selected node receives the request (105), and the request processor of the node generates a unique journey instance ID (106). The request processor then translates the received request into an initial event, generates an initial event object and includes the unique journey instance ID and data for the request in the initial event object (107). The request processor, from its internal configuration, identifies the name of the cache to use for this journey type. It stores the initial event object in the identified data cache, i.e. the grid preferably decides which node will hold the primary copy of the event and which nodes will contain a backup copy (108). The journey event processor on the node which has the primary copy of the event is triggered and starts processing the event (109). Then the journey event processor replies to the request handler with the journey instance ID and the name of the cache (110), and the Request handler stores the journey instance ID and the name of the cache (111).
Figure 2 illustrates a preferred embodiment of the process depicted in Figure 1, whereby different versions of a journey may be employed. The request handler receives a request (201), and confirms whether the request includes a journey instance ID (202). If not, the request handler checks the configuration data cache for a journey type which matches the request and identifies from the configuration data the list of nodes which can process this journey type (203).
It then confirms whether more than one version of the journey is available (204). If yes, the request handler uses versioning logic to determine to which version of a journey the request relates (205). The request handler then asks the grid to send the request to the request processor of one of the nodes in the list for the determined version, as indicated by the list (206). The request handler then asks the grid to send the request to the request processor of one of the nodes in the list (207). The request processor on the selected node receives the request (208), and the request processor of the primary node generates a unique journey instance ID (209). The request processor translates the received request into an initial event, generates an initial event object and includes the unique journey instance ID and data for the request in the initial event object (210). The request processor, from its internal configuration, identifies the name of the cache to use for this journey type, and stores the initial event object in the identified data cache (211). The journey event processor on the node which has the primary copy of the event is triggered and starts processing the event (212). The journey event processor replies to the request handler with the journey instance ID and the name of the cache (213), and the request handler stores the journey instance ID and the name of the cache (214).
If the response was "yes" at step 202, the request handler checks the journey ID/cache name and identifies in which data cache a journey instance with the ID is stored (215) The request processor instructs the grid to route the request to the node that is responsible for the identified data cache and which holds the primary copy of the journey with that ID at that time (216) The request processor of the node receives the request (217) The request processor translates the request into an event, generates an event object and includes the journey instance ID and data associated with the request in the object (218) The request processor stores the object in the identified data cache (219) The journey event processor is triggered and starts processing the event (220).
Figure 3 is a flow chart illustrating a preferred embodiment of a process of an external event handler: The external event handler receives an external event (301). The external event handler sends a query to the grid to find the journey IDs of all journey instances that are affected by the external event (302). The external event handler receives from the grid a list of all journey IDs of the journey instances that are affected by the external event (303). The external event handler checks the configuration data cache and determines in which data cache each journey instance with the associated ID is stored (304).
For each affected journey instance, the external event processor instructs the grid to route the external event to the node that is responsible for the identified data cache and which holds the primary copy of the journey with that ID at that time (305).
The external event processor of the node receives the external event (306). The external event processor translates the external event into an internal event, generates an event object and includes the journey instance ID and data associated with the external event in the object (307). The external event processor stores the object in the identified data cache (308). The journey event processor is triggered and starts processing the event (309).
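A condensed sketch of this external event fan-out, with the grid query and the caches reduced to hard-coded illustrative data, might look as follows; all names are hypothetical.

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of the external event handler flow (steps 301-309): find the
// journey instances an external event affects and route an internal event to
// the cache holding each instance.
public class ExternalEventHandler {

    // In the real system this query runs against the grid (steps 302-303);
    // here the affected instances are hard-coded for illustration.
    private List<String> affectedJourneyIds(String externalEvent) {
        return List.of("journey-17", "journey-23");
    }

    // journey instance ID -> dedicated data cache, as in the configuration cache (step 304)
    private final Map<String, String> cacheByJourneyId =
            Map.of("journey-17", "journey-cache-payment",
                   "journey-23", "journey-cache-consent");

    public void onExternalEvent(String externalEvent) {
        for (String journeyId : affectedJourneyIds(externalEvent)) {   // step 305
            String cacheName = cacheByJourneyId.get(journeyId);
            // Steps 306-309: translate to an internal event object carrying the
            // journey instance ID and store it in the cache, which triggers the
            // journey event processor on the node holding the primary copy.
            System.out.println("Internal event for " + journeyId + " stored in " + cacheName);
        }
    }

    public static void main(String[] args) {
        new ExternalEventHandler().onExternalEvent("interest-rate-change");
    }
}
```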
Figure 4 illustrates a preferred embodiment of the apparatus and system (401), including an external event (402) handler (407). A user terminal (403) for sending requests and receiving responses is linked through the security layer (404) to the front-end interface (406). The front-end interface comprises one or more request handlers (405) which check the request. The request handler also has access to the journey ID and data cache, and to the grid, to route the request to a node (413) with appropriate journey processing logic. A similar line-up exists for external events, whereby the external event handler may effectively be considered a different kind of node. The node resides in a container, and comprises a request processor (415), a journey event processor (418), and logic to process a particular journey. Nodes of the same type share the journey event and data cache, which is only accessible to nodes of this type (416); whereas all nodes share the dynamic configuration cache (411). Once an event is processed, the journey event processor negotiates and communicates with the communication event data cache and the back-end system (425) through a communication node (422). The system thus links a back end, and provides a front end for users, and is highly scalable as additional nodes can be added and operated using the dynamic configuration cache in a grid.
In Figure 5, a process is shown in the subject system wherein several different journeys are triggered by events, in turn triggered by different nodes. A request processor (501) creates an A-E-1 event, i.e. an event 1 of journey type A, and puts it in the journey data cache (518) for journey type A. Then the journey event processor (515) is triggered on the node (524) where the primary copy of the journey instance is stored.
Then the journey processor executes the logic for A-E-1. As part of this logic, it creates a new event A-E-2 (504) and puts it in the cache (518). The processing for A-E-1 is now finished. The journey event processor is then triggered on the node where the primary copy of the journey instance is stored, and executes the logic for A-E-2. As part of this logic, it creates a new event B-E-1, wherein B represents another journey type, and puts it in the cache. This is similar to the request processor creating an event for a journey, only now it is the journey event processor of another journey. The processing for A-E-2 is now finished. The journey event processor is triggered on the node where the primary copy of the journey instance is stored, and executes the logic for B-E-1. As part of this logic, it needs to communicate with the back-end system. It therefore creates a new event C-E-1 (C is technically just another journey type, but as it communicates with the messaging environment of the bank, it is referred to as a communication journey) and puts it in the cache. The processing for B-E-1 is now finished. The communication event processor (508) is then triggered on the node where the primary copy of the communication instance is stored. The communication processor (509) executes the logic for C-E-1. This implies sending a message to the backend and waiting for the reply (step 509.1). The communication processor creates a message and sends it through the communication network to the specific backend; the technical routing is done by the communication network (ESB/EAI, step 509.2). The back end receives the message and processes it; after this is done, it sends a reply message (step 509.3). The reply message is received by the communication event processor, which translates it into an event C-E-2 and puts it in the cache. The communication event processor is now done with the processing of event C-E-1. The communication event processor (510) is triggered on the node where the primary copy of the communication instance is stored. The communication processor (511) executes the logic for C-E-2. This consists of creating an event B-E-2 for the journey instance which created C-E-1, and putting it in the cache of journey type B. The journey event processor (512) is triggered on the node where the primary copy of the journey instance is stored. The journey event processor (513) then executes the logic for B-E-2. This consists of creating an event A-E-3 for the journey instance which created B-E-1, and putting it in the cache of journey type A. The journey event processor (514) is triggered on the node where the primary copy of the journey instance is stored. The journey event processor (515) executes the logic for A-E-3. This consists of creating feedback and returning it to the request handler.
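The event naming used in this walk-through (journey type plus sequence number, e.g. A-E-1) can be made concrete with a small illustrative record; the chaining shown simply mirrors the sequence above and is not part of the original disclosure.

```java
// Sketch of the event naming used in Figure 5: an event is identified by its
// journey type and a sequence number within that journey (A-E-1, A-E-2, ...),
// and processing one event may emit the next event, possibly for another
// journey type. The record is illustrative only.
public class EventChain {

    record JourneyEvent(String journeyType, int sequence) {
        @Override public String toString() { return journeyType + "-E-" + sequence; }
    }

    public static void main(String[] args) {
        JourneyEvent a1 = new JourneyEvent("A", 1); // created by the request processor
        JourneyEvent a2 = new JourneyEvent("A", 2); // emitted while processing A-E-1
        JourneyEvent b1 = new JourneyEvent("B", 1); // A's processor starts journey type B
        JourneyEvent c1 = new JourneyEvent("C", 1); // B needs the backend: communication journey
        JourneyEvent c2 = new JourneyEvent("C", 2); // backend reply translated back to an event
        System.out.println(a1 + " -> " + a2 + " -> " + b1 + " -> " + c1 + " -> " + c2);
    }
}
```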
Figure 6 finally illustrates the preferred use of a system comprising a multitude of apparatuses (here two, 603 and 604) and servers, which can advantageously be on one server, or on several servers at varying distances, whereby the operation may be distributed to allow, for instance, security of supply or scalability. The configuration and data caches are preferably shared, although the containers and nodes may be located at a distance on different servers; and the grid may advantageously distribute the user front-end requests (601) to the most useful request handler (603 or 604).
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to embodiments of the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of embodiments of the invention. The embodiment was chosen and described in order to best explain the principles of embodiments of the invention and the practical application, and to enable others of ordinary skill in the art to understand embodiments of the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art appreciate that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiments shown and that embodiments of the invention have other applications in other environments.

Claims

1. A system for providing services to a plurality of users, the services comprising at least one type of journey and at least a first and a second version of said one type of journey, wherein a journey has a beginning and an end and comprises a plurality of actions triggered by external events or events translated from user requests received in the system, the system comprising: a first node responsible for a dedicated data cache to store data for said first version; and comprising first dedicated logic, coupled to said first dedicated data cache, to process said first version;
a second node responsible for a dedicated data cache to store data for said second version; and comprising second dedicated logic, coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users;
each of the first and second nodes being provided in a separate container; and
a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second node accordingly.
2. A system according to claim 1, comprising a grid of nodes providing processing and in-memory data, the grid of nodes comprising said first node, which comprises said first dedicated data cache and said first dedicated logic.
3. A system according to claim 1, wherein the request handler comprises logic and data to determine which journey version a request relates to, and logic to route the request to the first node upon a determination that the request relates to a version that can be handled by the first node.
4. A system according to claim 3, wherein the system further comprises a configuration cache accessible by the request handler, the configuration cache being configured to store information about the one or more journey versions the first node can handle, wherein the request handler is configured to determine that a journey has more than one version and further route the request to the first node based on information in the configuration cache.

5. A system according to any one of the previous claims, wherein the grid of nodes comprises said second node and further nodes comprising said second dedicated cache and said second dedicated logic, and any further dedicated cache and dedicated logic.
6. A system according to any one of the preceding claims, wherein the first node is provided in a first container and the second node is provided in a second container.
7. A process for providing services to a plurality of users, the services comprising at least one type of journey and at least a first and a second version of said one type of journey, wherein a journey has a beginning and an end and comprises a plurality of actions triggered by external events or events translated from user requests received in the system, the process comprising:
providing a first dedicated data cache to store data for said first version;
providing first dedicated logic, coupled to said first dedicated data cache, to process said first version;
providing a second dedicated data cache to store data for said second version;
providing second dedicated logic, coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users; and
providing a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second dedicated logic accordingly.
8. Machine executable instructions that when executed by at least one processor cause the at least one processor to perform the process of claim 7.
9. A non-transitory machine readable medium comprising machine executable instructions according to claim 8.
10. Machine readable storage storing machine executable instructions according to claim 8.
11. Use of a system according to any one of claims 1 to 6 to update and vary the service offerings to the plurality of users by adding a new version of a node in parallel instead of updating an existing running node.
12. Machine executable instructions that when executed by at least one machine cause the at least one machine to provide:
a front-end interface to communicate with front end applications for receiving external requests from users to allow the users to access services, wherein the services are provided as one or more event-driven journeys, each of which corresponds to one or more events, at least some of which are translated from one or more of the requests received via the front-end interface, the events triggering one or more actions;
a first node responsible for a dedicated data cache to store data for a first version of one of the journeys, and comprising first dedicated logic, coupled to said first dedicated data cache, to process said first version;
a second node responsible for a dedicated data cache to store data for a second version of said journey, and comprising second dedicated logic, coupled to said second dedicated data cache, to process the second version, the logic and data for the second version comprising at least some logic and data which is a copy of the logic and data for the first version and the first and second dedicated logic being configured to run the first and second versions in parallel for different users;
each of the first and second nodes being provided in a separate container;
each node to store logic and to be responsible for storing data associated with at least one journey of the one or more journeys;
a request handler to determine, upon receiving a request related to a journey type, the version of said journey type based on the origin and/or context of the request, and to route the request to the first or second node accordingly, and
a back-end interface to communicate with a back-end system comprising logic and data; wherein the front-end interface comprises the request handler, configured and operative to route a request, associated with an event of a journey, to a node handling logic and data for processing that event.
13. A non-transitory machine readable medium comprising machine executable instructions according to claim 12.
14. Machine readable storage storing machine executable instructions according to claim 12.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
BE2016/5677 2016-09-02
BE2016/5677A BE1024534B1 (en) 2016-09-02 2016-09-02 SYSTEM AND DEVICE TO PROVIDE DIFFERENT VERSIONS OF A TYPE OF DATA JOURNEY

Publications (1)

Publication Number Publication Date
WO2018042022A1 (en) 2018-03-08

Family

ID=57544146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/072023 WO2018042022A1 (en) 2016-09-02 2017-09-01 System and apparatus for providing different versions of a type of data journey

Country Status (2)

Country Link
BE (1) BE1024534B1 (en)
WO (1) WO2018042022A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10241778B2 (en) * 2016-09-27 2019-03-26 Ca, Inc. Microservices version state visualization

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130239004A1 (en) * 2012-03-08 2013-09-12 Oracle International Corporation System and method for providing an in-memory data grid application container
WO2016111673A1 (en) * 2015-01-05 2016-07-14 Hewlett Packard Enterprise Development Lp Multi-tenant upgrading

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CRISTIAN CADAR ET AL: "Multi-version software updates", HOT TOPICS IN SOFTWARE UPGRADES, IEEE PRESS, 445 HOES LANE, PO BOX 1331, PISCATAWAY, NJ 08855-1331 USA, 3 June 2012 (2012-06-03), pages 36 - 40, XP058057348, ISBN: 978-1-4673-1764-1, DOI: 10.1109/HOTSWUP.2012.6226615 *
FENG CHEN ET AL: "Multi-version Execution for the Dynamic Updating of Cloud Applications", 2015 IEEE 39TH ANNUAL COMPUTER SOFTWARE AND APPLICATIONS CONFERENCE (COMPSAC). PROCEEDINGS IEEE COMPUTER SOCIETY LOS ALAMITOS, CA, USA, vol. 2, 2015, pages 185 - 190, XP002767995 *
SAMOVSKY M ET AL: "Cloud-based classification of text documents using the Gridgain platform", APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS (SACI), 2012 7TH IEEE INTERNATIONAL SYMPOSIUM ON, IEEE, 24 May 2012 (2012-05-24), pages 241 - 245, XP032210016, ISBN: 978-1-4673-1013-0, DOI: 10.1109/SACI.2012.6250009 *

Also Published As

Publication number Publication date
BE1024534B1 (en) 2018-04-04
BE1024534A1 (en) 2018-03-27

Legal Events

Code Description
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17764788; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17764788; Country of ref document: EP; Kind code of ref document: A1)