WO2023249558A1 - Method and system for adaptively executing a plurality of tasks

Authority
WO
WIPO (PCT)
Application number
PCT/SG2023/050433
Other languages
English (en)
Inventor
Ishan HANDA
Priyanka HARLALKA
Original Assignee
Gp Network Asia Pte. Ltd.
Application filed by Gp Network Asia Pte. Ltd.
Publication of WO2023249558A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/4016 Transaction verification involving fraud or risk level assessment in transaction processing
    • G06Q10/02 Reservations, e.g. for tickets, services or events
    • G06Q10/0633 Workflow analysis
    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G06Q20/12 Payment architectures specially adapted for electronic shopping systems
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q50/14 Travel agencies
    • G06Q50/40 Business processes related to the transportation industry

Definitions

  • the present disclosure relates broadly, but not exclusively, to methods and systems for adaptively executing a plurality of tasks.
  • One of the ways of implementing risk management for a platform offering various services and/or products for sale is to maintain data points related to, for example, users who are using the platform. Data points can be relied upon for preventing fraud by attackers who use different payment instruments like lost or stolen cards for illicit earnings.
  • Various types of data points can be used in fraud detection and prevention.
  • Aggregates can be computed based on raw data relating to transactions or other events occurring over a given time period, and these aggregates can be stored for later use. One example is the number of transactions performed by a user in the given time period, e.g. the last 30 days.
  • These data points may be used in machine learning (ML) models to detect anomalies and decline potentially fraudulent transactions. They may also be used in rules that set hard limits on the usage of the various payment instruments available on a platform, to reduce financial loss. A rule may be of the format ‘decline a transaction if the user has done transactions with 50 different merchants in the last 1 week’. The data point in this rule is the number of unique merchants that a user has transacted with in the last week; this is an example of an aggregate used in a rule.
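The hard-limit rule quoted above can be sketched as follows. This is an illustrative sketch only: the function names, transaction fields, and the decision strings are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the rule "decline a transaction if the user has
# done transactions with 50 different merchants in the last 1 week".
# All names here are illustrative assumptions.

def unique_merchants(transactions, user_id):
    """Data point: number of distinct merchants the user transacted with
    (the caller is assumed to pass only last-week transactions)."""
    return len({t["merchant_id"] for t in transactions
                if t["user_id"] == user_id})

def evaluate_rule(transactions, user_id, limit=50):
    """Hard-limit rule: decline once the data point reaches the limit."""
    if unique_merchants(transactions, user_id) >= limit:
        return "DECLINE"
    return "APPROVE"
```

In a real deployment the aggregate would come from a stored data point rather than being recomputed per transaction, as the surrounding text describes.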
  • a method for adaptively executing a plurality of tasks comprising: defining, by one or more processors, a schema representing a plurality of tasks, each task comprising a data retrieval and/or data evaluation operation, the schema comprising task information indicating how to execute each of the plurality of tasks; generating, by the one or more processors, a graph representation of the plurality of tasks based on the schema, the graph representation comprising a plurality of nodes, each node of the plurality of nodes corresponding to one of the plurality of tasks; and executing, by the one or more processors, the plurality of tasks based on the graph representation and the task information.
  • a system for adaptively executing a plurality of tasks comprising: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the system at least to: define, by one or more processors, a schema representing a plurality of tasks, each task comprising a data retrieval and/or data evaluation operation, the schema comprising task information indicating how to execute each of the plurality of tasks; generate, by the one or more processors, a graph representation of the plurality of tasks based on the schema, the graph representation comprising a plurality of nodes, each node of the plurality of nodes corresponding to one of the plurality of tasks; and execute, by the one or more processors, the plurality of tasks based on the graph representation and the task information.
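The three claimed steps (define a schema, generate a graph of task nodes, execute the tasks) can be illustrated with a minimal sketch. The schema shape, node names, and callables below are illustrative assumptions, not the patent's actual data structures.

```python
# (1) Define a schema of tasks: each entry carries task information, i.e.
#     its dependencies and how to execute it (here, a plain callable).
schema = {
    "sender.id":  {"deps": [],            # retrieval from input data
                   "op": lambda inputs, done: inputs["UserID"]},
    "sender.kyc": {"deps": ["sender.id"], # uses sender.id's result
                   "op": lambda inputs, done: "kyc-of-" + done["sender.id"]},
}

# (2) Generate a graph representation: one node per task, edges from deps.
def build_graph(schema):
    return {name: {"deps": list(task["deps"]), "op": task["op"]}
            for name, task in schema.items()}

# (3) Execute the tasks so that every node runs only after its dependencies.
def execute(graph, inputs):
    results = {}
    while len(results) < len(graph):
        for name, node in graph.items():
            if name not in results and all(d in results for d in node["deps"]):
                results[name] = node["op"](inputs, results)
    return results
```

The repeated pass over pending nodes is a deliberately simple stand-in for the topological scheduling the disclosure describes later.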
  • FIG. 1 illustrates a system for adaptively executing a plurality of tasks according to various embodiments of the present disclosure.
  • FIG. 2 is a schematic diagram of a data processing server, according to various embodiments of the present disclosure.
  • FIG. 3 is an overview of a process for executing a plurality of tasks according to an example.
  • FIG. 4A depicts an overview of a process for adaptively executing a plurality of tasks according to various embodiments.
  • FIGs. 4B and 4C depict an example illustration of a schema for generating a graph according to various embodiments.
  • Fig. 5 depicts a graph illustrating a dependency relationship according to the schema of Figs. 4B and 4C.
  • FIGs. 6A - 6G depict example illustrations of various graphs for adaptively executing a plurality of tasks according to various embodiments.
  • FIG. 7 illustrates an example flow diagram of a method for adaptively executing a plurality of tasks according to various embodiments.
  • Fig. 8A is a schematic block diagram of a general purpose computer system upon which the data processing server of Fig. 2 can be practiced.
  • Fig. 8B is a schematic block diagram of a general purpose computer system upon which a combined transaction processing and data processing server of Fig. 1 can be practiced.
  • FIG. 9 shows an example of a computing device to realize the transaction processing server shown in Fig. 1.
  • FIG. 10 shows an example of a computing device to realize the data processing server shown in Fig. 1.
  • FIG. 11 shows an example of a computing device to realize a combined transaction processing and data processing server shown in Fig. 1.
  • a platform refers to a system of networked computer devices that facilitates exchanges between two or more interdependent groups, for example between a user (of a product or service) and a provider (of the product or service), each of which has a respective account registered with the platform.
  • a platform may offer a service offered by a provider such as a ride, delivery, online shopping, insurance, and other similar services to a requestor.
  • the user can typically access the platform via a website, an application, or other similar methods.
  • a schema refers to a framework or plan for structuring data, and defines how data may be organized within a database.
  • a schema may be used for generating a graph for adaptively executing a plurality of tasks.
  • the graph may comprise a plurality of nodes.
  • Each node may be connected to one or more other nodes by an edge or link.
  • a node may be connected from an upstream position to another node in downstream position. In this case, the node at the downstream position may be termed a child node, while the node at the upstream position may be termed a parent node.
  • Each node may be representative of a task, such as for example retrieving data from a data source (e.g. a data point (DP) node), processing the retrieved data utilizing machine learning (ML) models (e.g. a model node), evaluation of rules that may be set based on the processed data (e.g. a rule evaluation node), and other similar tasks.
  • the data that is retrieved by executing a data point node may relate to users of a platform.
  • a ML model may process the retrieved data (e.g. by execution of a model node).
  • Computing resources such as processing threads that are used by a ML model for the processing may be termed as a worker pool.
  • each of the plurality of ML model nodes may utilize its own worker pool or a shared worker pool for processing the retrieved data.
  • a rule evaluation node may utilize the processed data to, for example, evaluate pre-defined static rules.
  • a rule may be a hard limit on the usage of various payment instruments by a user of a platform to reduce financial loss.
  • a rule may be of the format ‘decline a transaction if the user has done transactions with 50 different merchants in the last 1 week’. The data point for evaluating this rule may be the number of unique merchants that a user has transacted with in the last week.
  • An example of a graph is shown in FIG. 3.
  • each of DP nodes DP2 302, DP3 304, DP4 306, DP5 308, DP6 310, DP7 312, DP8 314, DP9 316, DP10 318 and DP11 320 represents a data point
  • each of model nodes Model-1 322, Model-2 324 and Model-3 326 represents a machine learning model that takes one or more of the data points DP1-DP11 as input and generates one or more outputs based on that input
  • rule evaluation node 328 represents a set of rules that can be applied to one or more of the data points DP1-DP11 and/or one or more of the outputs of the machine learning models Model-1, Model-2 and Model-3.
  • each model evaluation and rule evaluation may use its own worker pool to evaluate data points individually. For example, if there are four worker pools in the above scenario, there is no guarantee that the worker pool of Model-1 322 will finish fetching DP2 302 and DP3 304 before the worker pool of Model-2 324, or vice versa. Similarly, as DP11 320, DP10 318, Model-1 322 and Model-2 324 are needed for rule evaluation node 328 and Model-3 326, similar problems can arise there. If this problem is extended to thousands of data points, it becomes evident that this method of fetching data points is not scalable.
  • An objective discussed in the present disclosure is to provide an approach for deciding the execution order of nodes in a graph.
  • the graph 300 of Figure 3 may be restructured to graph 400 of Fig. 4A.
  • all DP2 nodes 302 and DP3 nodes 304 are consolidated into a single DP2 node 402 and a single DP3 node 404, respectively.
  • the number of vertices (which is directly equivalent to the number of data point evaluations) has gone down considerably in the new design. More importantly, there is now a dependency between different data points which did not exist in graph 300. This advantageously ensures that no data point which depends on the current data point is evaluated before the current data point’s evaluation completes.
  • the proposed architecture advantageously aims to eliminate all duplicate data point retrieval calls.
  • This design also eliminates the need to hard-code chains of n dependent application programming interface (API) calls, where each call uses some data from the output of previous calls. Such calls can instead be modelled in, for example, the graph 400 itself, thus ensuring maximum parallelism when executing the nodes.
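The effect of consolidating duplicate DP nodes can be sketched as a memoised fetcher: each data point is retrieved exactly once, however many models or rules consume it. The class and value format below are illustrative assumptions.

```python
class DataPointFetcher:
    """Evaluate each data point node at most once; later consumers reuse
    the cached value instead of issuing a duplicate retrieval call."""

    def __init__(self):
        self.cache = {}
        self.calls = {}   # retrieval calls actually issued, per data point

    def fetch(self, name):
        if name not in self.cache:
            self.calls[name] = self.calls.get(name, 0) + 1
            self.cache[name] = f"value-of-{name}"  # stand-in for real I/O
        return self.cache[name]
```

With the graph 300 layout, Model-1 and Model-2 would each fetch DP2 independently; with the consolidated graph, both read the single cached evaluation.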
  • This design also advantageously makes a platform application more scalable, such that a large number of machine learning models and rule evaluations can run in parallel.
  • the proposed architecture is also flexible: if a latency threshold is exceeded (i.e., the overall process of fetching data points takes more than x ms), it is possible to ignore the remaining data points that were supposed to be evaluated according to the graph 400 and continue with rule evaluation and model evaluation.
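The latency-threshold behaviour described above might be sketched with a thread pool and a bounded wait. The budget value and the fetch callable are illustrative assumptions.

```python
import concurrent.futures as cf
import time

def fetch_within_budget(names, budget_s, fetch):
    """Fetch as many data points as complete inside the latency budget.
    Data points still pending when the budget expires are skipped, so rule
    and model evaluation can continue with whatever was gathered."""
    gathered = {}
    with cf.ThreadPoolExecutor(max_workers=len(names)) as pool:
        futures = {pool.submit(fetch, n): n for n in names}
        done, pending = cf.wait(futures, timeout=budget_s)
        for fut in done:
            gathered[futures[fut]] = fut.result()
        for fut in pending:
            fut.cancel()  # best effort; running fetches cannot be stopped
    return gathered
```

A production version would propagate the skipped-node set so downstream rules know which inputs are missing; that bookkeeping is omitted here.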
  • the data points may be represented by a schema (for example, one that is parsed by a parser of a platform) to arrive at the graph structure of graph 400.
  • An example schema 430 is shown in Figs. 4B and 4C, wherein the data points are modelled as a tree structure. While the schema 430 is in JavaScript Object Notation (JSON) format, it will be appreciated that other similar formats may also be utilized.
  • each label is parsed and appended to the previously parsed labels. These appended labels are stored as-is, so that they may be utilized at a later step to uniquely identify nodes.
  • Some examples of labels are sender 432, success 434, curr day 440, amount 446, and other similar labels.
  • a check is performed to see if the values present in filters have a Node: prefix. In this example, this signifies that a node needs to be created and evaluated to find the value as indicated by the filters.
  • the actual data source from which a node’s value has to be retrieved may be found when arriving at a corresponding node under the output label (e.g. an output node). In this example, when arriving at the step of processing the sender.id node, which is created based on the Node:sender.id label 438, the value required by the node needs to be derived from an input data source, because data source 466 for the node is indicated as “InputUserID” 468. If a filter does not have the Node: prefix, then its value is taken as-is.
  • Once an output node is encountered, a node can be created from it. The name of the node is derived by appending the labels parsed up to that node.
  • An identifier in the mapping before a colon denotes that it is an API call.
  • An identifier in the mapping before an underscore denotes a configuration that needs to be utilized to make an API call.
  • An identifier in the mapping after an underscore denotes a path in a response from which an actual value is fetched.
  • API:PaxKYCDetails_sender.kycLevel 464 under label kyc 462 signifies that an API call needs to be made to retrieve the user’s Know Your Customer (KYC) details.
  • the second type of output node gathers all the filters from the JSON structure until an output node is encountered and forms a database request which is supposed to retrieve the relevant data point by filtering over a data set with given filters. While parsing the JSON structure for the second type of node, the parsing process keeps storing all the filters that are encountered before the output node. All filters that are encountered on a level before an output node or on a same level as the output node are passed to the output node for filtering. For example in the schema 430, the filter "from user id": "Node:sender.id” 438 on the first level becomes a filter for all the nested outputs in the schema.
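The filter-gathering pass described above can be sketched as a recursive walk. The nested structure below (levels carrying `filters`, `children`, and `output` leaves) is a simplified assumption standing in for the schema 430 JSON, not its actual layout.

```python
def gather_filters(level, inherited=None, prefix=""):
    """For every output node, collect all filters encountered at or above
    it in the tree, so outer filters apply to all nested outputs."""
    scope = dict(inherited or {})
    scope.update(level.get("filters", {}))
    collected = {}
    for name in level.get("output", {}):
        collected[prefix + name] = dict(scope)
    for name, child in level.get("children", {}).items():
        collected.update(gather_filters(child, scope, prefix + name + "."))
    return collected
```

This mirrors the example in the text: a filter on the first level, such as the Node:sender.id filter, becomes a filter for all the nested outputs beneath it.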
  • aggregation column 448 indicates amount 450 which is a number (e.g. meta tag 452 indicating type 454 as number 456), and operator 458 is indicated as a summation (e.g. sum 460).
  • a separate parser may be utilized to translate now/d 442 to 2022-03-09 assuming today’s date is 2022-03-09.
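Such a date-macro parser might look like the following. Only the `now/d` example comes from the text; the `now-<n>d` form and the function name are assumptions added for illustration.

```python
import datetime as dt

def resolve_date_macro(macro, now=None):
    """Translate relative-date tokens such as 'now/d' into ISO dates."""
    now = now or dt.datetime.now(dt.timezone.utc)
    if macro == "now/d":                      # truncate 'now' to the day
        return now.date().isoformat()
    if macro.startswith("now-") and macro.endswith("d"):
        days = int(macro[4:-1])               # assumed form, e.g. 'now-7d'
        return (now - dt.timedelta(days=days)).date().isoformat()
    raise ValueError(f"unsupported macro: {macro!r}")
```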
  • the third type of output node has the prefix ‘Input:’ and signifies the input data that we already have.
  • “id” data source 466 indicates InputUserID 468 to signify that data for “id” can be retrieved from the input data map by parsing the path UserID.
  • Meta tags, e.g. meta tag 448, define the data type of an output node and validate it.
  • the third type of output node is the same as the first type described above, in terms of having an identifier before a colon which defines what kind of node it is. It does not have an identifier to define configuration which is not needed for this kind of node. In this case, a response path is indicated after the colon which can be used to fetch an actual value.
  • sender.id is a dependency for fetching values of sender.success.curr day.amount and sender.kyc, as shown in graph 500 of Fig. 5 (sender.id 502 is a dependency for fetching values of sender.success.curr day.amount 504 and sender.kyc 506).
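The dependency extraction implied by the Node: prefix can be sketched as follows; the filter names follow the schema 430 example, but the functions themselves are illustrative assumptions.

```python
def node_dependencies(filters):
    """A filter value carrying the 'Node:' prefix names another node that
    must be evaluated first, i.e. a parent in the dependency graph."""
    return [v[len("Node:"):] for v in filters.values()
            if isinstance(v, str) and v.startswith("Node:")]

def build_edges(node_filters):
    """Map each node name to the list of node names it depends on."""
    return {node: node_dependencies(f) for node, f in node_filters.items()}
```

Applying this over all parsed nodes yields exactly the kind of parent/child structure drawn in graph 500.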
  • a user may be any suitable type of entity, which may include a person, a consumer looking to purchase a product or service via a transaction processing server, a seller or merchant looking to sell a product or service via the transaction processing server, a motorcycle driver or pillion rider in a case of the user looking to book or provide a motorcycle ride via the transaction processing server, a car driver or passenger in a case of the user looking to book or provide a car ride via the transaction processing server, and other similar entity.
  • a user who is registered to the transaction processing or data processing server will be called a registered user.
  • a user who is not registered to the transaction processing server or data processing server will be called a non-registered user.
  • the term user will be used to collectively refer to both registered and non-registered users.
  • a user may interchangeably be referred to as a requestor (e.g. a person who requests for a product or service) or a provider (e.g. a person who provides the requested product or service to the requestor).
  • a data processing server is a server that hosts software application programs for performing data processing in relation to adaptively executing a plurality of tasks.
  • the data processing server may be implemented as shown in schematic diagram 200 of Fig. 2 for adaptively executing a plurality of tasks.
  • a transaction processing server is a server that hosts software application programs for processing payment transactions for, for example, purchasing of a good or service by a user.
  • the transaction processing server communicates with any other servers (e.g., a data processing server) concerning processing payment transactions relating to the purchasing of the good or service.
  • For example, data relating to an approved or rejected transaction (e.g. date, time, amount, currency, user name, and other similar data relating to the concerned transaction) may be provided to the data processing server as raw data that may be utilized for processing by data points.
  • the processed data may then be stored or transferred to a database.
  • the transaction processing server may also be in communication with a database directly which will store the data relating to an approved or rejected transaction as raw data, or may also be configured to process the data before doing so.
  • the transaction processing server may use a variety of different protocols and procedures in order to process the payment and/or travel coordination requests.
  • Transactions that may be performed via a transaction processing server include product or service purchases, credit purchases, debit transactions, fund transfers, account withdrawals, etc.
  • Transaction processing servers may be configured to process transactions via cash-substitutes, which may include payment cards, letters of credit, checks, payment accounts, etc.
  • the transaction processing server is usually managed by a service provider that may be an entity (e.g. a company or organization) which operates to process transaction requests and/or travel co-ordination requests e.g. pair a provider of a travel co-ordination request to a requestor of the travel co-ordination request.
  • the transaction processing server may include one or more computing devices that are used for processing transaction requests and/or travel co-ordination requests.
  • a transaction account is an account of a user who is registered at a transaction processing server.
  • the user can be a customer, a merchant providing a product for sale on a platform and/or for onboarding the platform, a hail provider (e.g., a driver), or any third parties (e.g., a courier) who want to use the transaction processing server.
  • the transaction account is not required to use the transaction processing server.
  • a transaction account includes details (e.g., name, address, vehicle, face image, etc.) of a user.
  • the transaction processing server manages the transaction.
  • the present specification also implicitly discloses a computer program, in that it would be apparent to the person skilled in the art that the individual steps of the method described herein may be put into effect by computer code.
  • the computer program is not intended to be limited to any particular programming language and implementation thereof. It will be appreciated that a variety of programming languages and coding thereof may be used to implement the teachings of the disclosure contained herein.
  • the computer program is not intended to be limited to any particular control flow. There are many other variants of the computer program, which can use different control flows without departing from the scope of the specification.
  • the computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a computer.
  • the computer readable medium may also include a hardwired medium such as exemplified in the Internet system, or wireless medium such as exemplified in the GSM mobile telephone system.
  • the computer program when loaded and executed on such a computer effectively results in an apparatus that implements the steps of the preferred method.
  • Fig. 1 illustrates a block diagram of a system 100 for adaptively executing a plurality of tasks. Further, the system 100 enables a payment transaction for a good or service, and/or a request for a ride between a requestor and a provider.
  • the system 100 comprises a requestor device 102, a provider device 104, an acquirer server 106, a transaction processing server 108, an issuer server 110, a data processing server 140 and a database 150.
  • the requestor device 102 is in communication with a provider device 104 via a connection 112.
  • the connection 112 may be wireless (e.g., via NFC communication, Bluetooth, etc.) or over a network (e.g., the Internet).
  • the requestor device 102 is also in communication with the data processing server 140 via a connection 121.
  • the connection 121 may be via a network (e.g., the Internet).
  • the requestor device 102 may also be connected to a cloud that facilitates the system 100 for adaptively executing a plurality of tasks.
  • the requestor device 102 can send a signal or data to the cloud directly via a wireless connection (e.g., via NFC communication, Bluetooth, etc.) or over a network (e.g., the Internet).
  • the provider device 104 is in communication with the requestor device 102 as described above, usually via the transaction processing server 108.
  • the provider device 104 is, in turn, in communication with an acquirer server 106 via a connection 114.
  • the provider device 104 is also in communication with the data processing server 140 via a connection 123.
  • the connections 114 and 123 may be via a network (e.g., the Internet).
  • the provider device 104 may also be connected to a cloud that facilitates the system 100 for adaptively executing a plurality of tasks.
  • the provider device 104 can send a signal or data to the cloud directly via a wireless connection (e.g., via NFC communication, Bluetooth, etc.) or over a network (e.g., the Internet).
  • the acquirer server 106 is in communication with the transaction processing server 108 via a connection 116.
  • the transaction processing server 108 is in communication with an issuer server 110 via a connection 118.
  • the connections 116 and 118 may be via a network (e.g., the Internet).
  • the transaction processing server 108 is further in communication with the data processing server 140 via a connection 120.
  • the connection 120 may be over a network (e.g., a local area network, a wide area network, the Internet, etc.).
  • the transaction processing server 108 and the data processing server 140 are combined and the connection 120 may be an interconnected bus.
  • the data processing server 140 is in communication with the reference databases 150A and 150B via respective connection 122.
  • the connection 122 may be a network (e.g., the Internet).
  • the data processing server 140 may also be connected to a cloud that facilitates the system 100 for adaptively executing a plurality of tasks.
  • the data processing server 140 can send a signal or data to the cloud directly via a wireless connection (e.g., via NFC communication, Bluetooth, etc.) or over a network (e.g., the Internet).
  • the database 150 may comprise data relating to users, transactions, products, services, and other similar data, for example relating to a platform.
  • the data may be raw or aggregated data.
  • the database 150 may be combined with the data processing server 140.
  • the database 150 may be managed by an external entity and the data processing server 140 is a server that, based on a schema comprising information indicating how to execute each of a plurality of tasks, executes the plurality of tasks based on the information.
  • the information may further indicate data to be retrieved and a data source for a task of the plurality of tasks, and executing the task by the data processing server 140 may further comprise retrieving the indicated data from the data source.
  • the indicated data may be raw data or aggregated data.
  • the data source may be the database 150, the data processing server 140, the transaction processing server 108, or other similar data source.
  • the database 150 may store the schema which the data processing server 140 utilizes for executing the plurality of tasks.
  • one or more modules may store the raw data or aggregated data instead of the database 150, wherein the module may be integrated as part of the data processing server 140 or external from the data processing server 140.
  • each of the devices 102, 104, and the servers 106, 108, 110, 140, and/or database 150 provides an interface to enable communication with other connected devices 102, 104 and/or servers 106, 108, 110, 140, and/or database 150.
  • Such communication is facilitated by an application programming interface (“API”).
  • APIs may be part of a user interface that may include graphical user interfaces (GUIs), Web-based interfaces, programmatic interfaces such as application programming interfaces (APIs) and/or sets of remote procedure calls (RPCs) corresponding to interface elements, messaging interfaces in which the interface elements correspond to messages of a communication protocol, and/or suitable combinations thereof.
  • the data processing server 140 is associated with an entity (e.g. a company or organization or moderator of the service). In one arrangement, the data processing server 140 is owned and operated by the entity operating the transaction processing server 108. In such an arrangement, the data processing server 140 may be implemented as a part (e.g., a computer program module, a computing device, etc.) of the transaction processing server 108.
  • the transaction processing server 108 may also be configured to manage the registration of users.
  • a registered user has a transaction account (see the discussion above) which includes details of the user.
  • the registration step is called on-boarding.
  • a user may use either the requestor device 102 or the provider device 104 to perform onboarding to the transaction processing server 108.
  • the on-boarding process for a user is performed by the user through one of the requestor device 102 or the provider device 104.
  • the user downloads an app (which includes the API to interact with the transaction processing server 108) to the requestor device 102 or the provider device 104.
  • the user accesses a website (which includes the API to interact with the transaction processing server 108) on the requestor device 102 or the provider device 104.
  • the user is then able to interact with the data processing server 140.
  • the user may be a requestor or a provider associated with the requestor device 102 or the provider device 104, respectively.
  • Details of the registration may include, for example, name of the user, address of the user, emergency contact, blood type or other healthcare information, next-of-kin contact, permissions to retrieve data and information from the requestor device 102 and/or the provider device 104 for product identification purposes, such as permission to use a camera of the requestor device 102 and/or the provider device 104 to take a picture of the user for identification purposes.
  • another mobile device may be selected instead of the requestor device 102 and/or the provider device 104 for retrieving the data. Once on-boarded, the user would have a transaction account that stores all the details.
  • the requestor device 102 is associated with a customer (or requestor) who is a party to a transaction that occurs between the requestor device 102 and the provider device 104, or between the requestor device 102 and the data processing server 140.
  • the requestor device 102 may be a computing device such as a desktop computer, an interactive voice response (IVR) system, a smartphone, a laptop computer, a personal digital assistant computer (PDA), a mobile computer, a tablet computer, and the like.
  • the requestor device 102 includes transaction credentials (e.g., a payment account) of a requestor to enable the requestor device 102 to be a party to a payment transaction. If the requestor has a transaction account, the transaction account may also be included (i.e., stored) in the requestor device 102. For example, a mobile device (which is a requestor device 102) may have the transaction account of the customer stored in the mobile device.
  • the requestor device 102 is a computing device in a watch or similar wearable and is fitted with a wireless communications interface (e.g., an NFC interface). The requestor device 102 can then electronically communicate with the provider device 104 regarding a transaction request. The customer uses the watch or similar wearable to initiate the transaction request by pressing a button on the watch or wearable.
  • the provider device 104 is associated with a provider who is also a party to the transaction request that occurs between the requestor device 102 and the provider device 104.
  • the provider device 104 may be a computing device such as a desktop computer, an interactive voice response (IVR) system, a smartphone, a laptop computer, a personal digital assistant computer (PDA), a mobile computer, a tablet computer, and the like.
  • the term “provider” refers to a service provider and any third party associated with providing a product or service for purchase, or a travel or ride or delivery service via the provider device 104. Therefore, the transaction account of a provider refers to both the transaction account of a provider and the transaction account of a third party (e.g., a travel co-ordinator or merchant) associated with the provider.
  • the transaction account may also be included (i.e., stored) in the provider device 104.
  • a mobile device (which is a provider device 104) may have the transaction account of the provider stored in the mobile device.
  • the provider device 104 is a computing device in a watch or similar wearable and is fitted with a wireless communications interface (e.g., an NFC interface). The provider device 104 can then electronically communicate with the requestor to initiate the transaction request by pressing a button on the watch or wearable.
  • the acquirer server 106 is associated with an acquirer who may be an entity (e.g. a company or organization) which issues (e.g. establishes, manages, administers) a payment account (e.g. a financial bank account) of a merchant. Examples of the acquirer include a bank and/or other financial institution. As discussed above, the acquirer server 106 may include one or more computing devices that are used to establish communication with another server (e.g., the transaction processing server 108) by exchanging messages with and/or passing information to the other server. The acquirer server 106 forwards the payment transaction relating to a transaction request to the transaction processing server 108.
  • the transaction processing server 108 is configured to perform processes relating to a transaction account by, for example, forwarding data and information associated with the transaction to the other servers in the system 100 such as the data processing server 140.
  • the transaction processing server 108 may transmit data relating to an approved or rejected transaction (e.g. date, time, amount, currency, user name, and other similar data relating to the concerned transaction) to the data processing server 140.
  • the transaction processing server 108 may communicate with the data processing server 140 to facilitate payment for the data processing service after data relating to a request for data is retrieved and provided to the requestor.
  • the transaction processing server 108 may use a variety of different protocols and procedures in order to process the payment and/or travel co-ordination requests.
  • the issuer server 110 is associated with an issuer and may include one or more computing devices that are used to perform a payment transaction.
  • the issuer may be an entity (e.g. a company or organization) which issues (e.g. establishes, manages, administers) a transaction credential or a payment account (e.g. a financial bank account) associated with the owner of the requestor device 102.
  • the issuer server 110 may include one or more computing devices that are used to establish communication with another server (e.g., the transaction processing server 108) by exchanging messages with and/or passing information to the other server.
  • the database 150 is a database or server associated with an entity (e.g. a company or organization) which manages (e.g. establishes, administers) data relating to users, transactions, products, services, and other similar data, for example relating to the entity.
  • the database 150 may store raw or aggregated data relating to users of a platform, such as relating to user details, historical transactions, statistics relating to a user’s transaction and activities, and other similar data that may be retrieved by a DP node, and processed by the ML model node, which may then be used to set up or evaluate rules by a rule evaluation node.
  • the database 150 may store a schema which the data processing server 140 may utilize for adaptively executing a plurality of tasks.
  • the system 100 aims to eliminate all duplicate data point retrieval calls and enable maximum parallelism when executing a plurality of tasks, making a platform application more scalable such that a large number of machine learning models and rule evaluations can run in parallel.
  • an implementation of the system 100 executed top-up transactions with top-up latency reduced by 30 ms, approximately one-third fewer queries for Aerospike-based aggregates, and about 8 fewer queries for Timescale-based aggregates per top-up request. It will be appreciated that requests which involve a greater number of duplicated data points may have even greater improvements in latency and efficiency.
  • Fig. 2 illustrates a schematic diagram of the data processing server 140 according to various embodiments.
  • the data processing server 140 may comprise a data module 260 configured to receive data and information from the requestor device 102, provider device 104, transaction processing server 108, database 150, a cloud and other sources of information to facilitate adaptively executing a plurality of tasks.
  • the data module 260 may be further configured to send information relating to a completed task to the requestor device 102, the provider device 104, the transaction processing server 108, or other destinations where the information is required.
  • the data processing server 140 may comprise a sequence module 262 that is configured to define, by one or more processors, a schema representing a plurality of tasks, each task comprising a data retrieval and/or data evaluation operation, the schema comprising task information indicating how to execute each of the plurality of tasks; and to generate, by the one or more processors, a graph representation of the plurality of tasks based on the schema, the graph representation comprising a plurality of nodes, each node of the plurality of nodes corresponding to one of the plurality of tasks.
  • the sequence module 262 may be further configured to determine an execution order for the plurality of tasks by: generating a matrix based on the graph representation, the matrix indicating, for each node of the plurality of nodes, whether there is a dependency on another node of the plurality of nodes; generating a frequency map based on the matrix, the frequency map indicating a total number of dependencies for each node of the plurality of nodes; and determining an execution order for the plurality of tasks based on the frequency map. Determining the execution order may further comprise identifying a node with zero dependencies from the frequency map, and adding the identified node to an execution queue. The sequence module 262 may be further configured to reduce the total number of dependencies for each node that is dependent on the identified node by one in the frequency map after the identified node is executed, and remove the identified node from the execution queue.
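The matrix, frequency map, and execution queue described above can be sketched in a few lines. The node identifiers and dependencies below are hypothetical placeholders for illustration; they are not the nodes of graph 400.

```python
from collections import deque

# Hypothetical dependency graph: task id -> ids of tasks it depends on.
deps = {
    1: [],        # start node, no dependencies
    2: [1], 3: [1], 4: [1],
    5: [2, 3],    # e.g. an ML model node needing two data points
    6: [4, 5],    # e.g. a rule evaluation node
}
ids = sorted(deps)
idx = {n: i for i, n in enumerate(ids)}

# adj_mat[u][v] == 1 means node v depends on node u for its evaluation.
adj_mat = [[0] * len(ids) for _ in ids]
for v, us in deps.items():
    for u in us:
        adj_mat[idx[u]][idx[v]] = 1

# freq_map: node id -> count of nodes it depends on (a column sum).
freq_map = {n: sum(row[idx[n]] for row in adj_mat) for n in ids}

# Execution order: repeatedly take a node with zero dependencies, then
# reduce the count of every node that depended on it by one.
queue = deque(n for n in ids if freq_map[n] == 0)
order = []
while queue:
    u = queue.popleft()
    order.append(u)
    for v in ids:
        if adj_mat[idx[u]][idx[v]]:
            freq_map[v] -= 1
            if freq_map[v] == 0:
                queue.append(v)

print(order)  # -> [1, 2, 3, 4, 5, 6]
```

A node enters the queue only when its count reaches zero, so no node can start before everything it depends on has finished.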
  • the sequence module 262 may be further configured to define a plurality of tasks based on a schema, the schema comprising information indicating how to execute each of the plurality of tasks.
  • the information further may indicate a node to be generated for each of the plurality of tasks to form a plurality of nodes in a graph, such as shown in the graph 400 of Fig. 4A.
  • Defining the plurality of tasks may further comprise generating a node for each of the plurality of tasks based on the information, each of the plurality of nodes representing a corresponding task to be executed.
  • Executing the plurality of tasks may be further based on a sequence of the plurality of nodes in the graph. The execution of the plurality of tasks based on the sequence is further explained in Figs. 6A - 6G.
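A minimal sketch of defining tasks from a schema and generating a node per task follows. The field names and task identifiers here are assumptions for illustration only; they are not the actual schema 430.

```python
from dataclasses import dataclass, field

# Assumed schema shape (not the actual schema 430): each entry names a
# task, its kind, and the tasks whose output it needs.
schema = [
    {"id": "DP1", "kind": "data_point", "depends_on": []},
    {"id": "DP2", "kind": "data_point", "depends_on": []},
    {"id": "model_1", "kind": "ml_model", "depends_on": ["DP1", "DP2"]},
    {"id": "rule_1", "kind": "rule", "depends_on": ["model_1"]},
]

@dataclass
class Node:
    task_id: str
    kind: str
    children: list = field(default_factory=list)  # nodes depending on this one

# Generate one node per task, then wire graph edges from the depends_on lists.
nodes = {t["id"]: Node(t["id"], t["kind"]) for t in schema}
for t in schema:
    for dep in t["depends_on"]:
        nodes[dep].children.append(nodes[t["id"]])

print([c.task_id for c in nodes["DP1"].children])  # -> ['model_1']
```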
  • defining the plurality of tasks may further comprise determining one or more of the plurality of tasks that can only be executed after a first task has been executed, determining a counter for each of the one or more tasks, and identifying a second task from the one or more tasks to be executed based on a number indicated in each counter.
  • identifying the second task may further comprise reducing the number indicated in each counter of the one or more tasks by one after the first task is executed; and identifying the second task when the counter for the identified second task is zero.
  • determining the one or more tasks further comprises identifying a match with the first task from a database (e.g. the database 150), the database comprising a plurality of tasks each indicating an identifier, and further indicating, for each task of the plurality of tasks, one or more tasks that can only be executed after each respective task of the plurality of tasks has been executed, the identified match indicating a same identifier as the first task; and determining the one or more tasks that corresponds to the identified match.
  • determining a counter further comprises identifying a match with each of the one or more tasks from a database, the database comprising a plurality of tasks each indicating an identifier, and further indicating a counter for each of the plurality of tasks, each identified match indicating a same identifier as a corresponding task of the one or more tasks; and determining the counter that corresponds to each identified match of the one or more tasks.
  • sequence module 262 may be further configured to identify a match with the first task from the database, the identified match indicating a same identifier as the first task; and to remove the identified match from the database after the first task is executed.
  • sequence module 262 may be further configured to reduce a number indicated in each counter of the one or more identified matches in the database by one after the first task is executed; and identify the second task from the one or more identified matches when the counter for the identified second task is zero.
  • sequence module 262 may be further configured to identify a plurality of tasks whose counter is zero, and execute the plurality of tasks in parallel with one another.
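The counter scheme described in the preceding paragraphs can be sketched as follows; the task identifiers, counter values, and table layout are illustrative assumptions rather than the actual database 150.

```python
# Hypothetical task table: identifier -> its counter (number of tasks it
# waits on) and the tasks that can only run after it has executed.
tasks = {
    "A": {"counter": 0, "dependents": ["B", "C"]},
    "B": {"counter": 1, "dependents": ["D"]},
    "C": {"counter": 1, "dependents": ["D"]},
    "D": {"counter": 2, "dependents": []},
}

def complete(task_id):
    """After task_id executes: reduce each dependent's counter by one,
    collect tasks whose counter reached zero, remove the finished task."""
    ready = []
    for dep in tasks[task_id]["dependents"]:
        tasks[dep]["counter"] -= 1
        if tasks[dep]["counter"] == 0:
            ready.append(dep)
    del tasks[task_id]  # remove the identified match after execution
    return ready

ready_now = complete("A")
print(ready_now)  # -> ['B', 'C']; both may now execute in parallel
```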
  • the data processing server 140 may also comprise a data point module 264 that is configured for executing, by the one or more processors, the plurality of tasks based on the graph representation and the task information. Two or more tasks of the plurality of tasks may be executed in parallel by one or more processors.
  • the task information may further indicate data to be retrieved and a data source for a data retrieval operation of the plurality of tasks, wherein executing the task further comprises retrieving the indicated data from the data source.
  • the data point module 264 may be further configured to execute a task corresponding to the identified node.
  • the data point module 264 may be further configured to execute a task for a DP node, wherein the information further indicates data to be retrieved and a data source for a task of the plurality of tasks, and executing the task further comprises retrieving the indicated data from the data source.
  • the data processing server 140 may also comprise a machine learning module 266 that is configured for processing data relating to a task for a ML model node.
  • the data processing server 140 may also comprise a rule evaluation module 268 that is configured for evaluating rules based on the data from, for example, one or more DP nodes and/or one or more ML model nodes.
  • the plurality of tasks are executed by the data point module 264, machine learning module 266 and rule evaluation module 268 based on the information indicated in the schema.
  • Each of the data module 260, sequence module 262, data point module 264, machine learning module 266 and rule evaluation module 268 may further be in communication with a processing module (not shown) of the data processing server 140, for example for coordination of respective tasks and functions during the process.
  • the data module 260 may be further configured to communicate with and store data and information for each of the processing module, sequence module 262, data point module 264, machine learning module 266 and rule evaluation module 268.
  • all the tasks and functions required for adaptively executing a plurality of tasks may be performed by a single processor of the data processing server 140.
  • Figs. 6A - 6G illustrate how a plurality of tasks may be executed according to the present disclosure.
  • These identifiers may be used in an algorithm to evaluate the data points, for example based on the schema 430.
  • the resulting order of execution may be 1 8 10 7 11 6 5 4 9 3 2 13 12 14 15. If the nodes are executed in parallel, there might be a scenario in which DP10 418 (e.g. identifier 10) starts executing in parallel with DP8 414 (e.g. identifier 8). There is no way to ensure dependency between nodes when using such a topological sort. Further, when utilizing existing algorithms based on breadth-first search (e.g. level order traversal) for the matrix 600, the resulting order of execution may be one in which identifier 1 is executed in a first level, identifiers 2 3 4 5 6 7 8 are executed in a second level, identifiers 10 11 9 12 13 are executed in a third level, and identifiers 14 15 are executed in a fourth level.
  • the proposed algorithm may comprise three data structures.
  • a boolean adjacency matrix may be utilized, in which each node u of the graph 400 is represented as a row and a column.
  • An entry u-v is marked as 1 if node v has a dependency on node u for its evaluation.
  • This adjacency matrix may be called adj_mat (e.g. matrix 602 as shown in Fig. 6B).
  • entry 604 is marked with a value of ‘1’ to indicate this dependency on start node 401 (e.g. with identifier 1), while the remaining entries in the column corresponding to DP2 402 (e.g. with identifier 2) are marked with a value of ‘0’.
  • a map of nodes with a key representing each respective node id and a value representing a count of nodes that each node is dependent on may be utilized.
  • This map may be called freq_map (e.g. map 606 of Fig. 6C).
  • the map 606 can be easily constructed by counting the number of times the value ‘1’ occurs in a corresponding column in matrix 602, and then indicating the count in a corresponding entry in map 606. For example, as rule evaluation node 428 with identifier 15 has 5 dependencies, a value ‘5’ is indicated in entry 610 of map 606 for identifier 15 (see reference 608).
  • a worker pool implementation may be utilized that ensures the nodes are executed in an execution queue, and ensures parallel execution of the nodes. This queue may be called the execution queue.
  • the proposed algorithm may be implemented, for example by the sequence module 262 of the data processing server 140, based on the following steps:
  • the respective count for all columns for which a value is set in the row denoted by the dummy node (which has just finished executing) in the adj_mat is reduced by 1, and the freq_map is updated accordingly (e.g. by recounting based on the updated adj_mat).
  • the new node(s) that are added to the execution queue are then directly picked up for execution by a worker pool for the respective new node(s).
  • Steps 2-4 are then repeated until the end of the graph is reached (e.g. until rule evaluation node 428 in graph 400 is executed).
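Steps 2-4 above might be realized with a worker pool roughly as follows. This is a sketch using Python's ThreadPoolExecutor as a stand-in for the worker pool; the dependency data is illustrative and is not graph 400.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Illustrative dependency data (not graph 400): node -> dependent nodes.
dependents = {1: [2, 3], 2: [4], 3: [4], 4: []}
freq_map = {1: 0, 2: 1, 3: 1, 4: 2}   # unmet-dependency counts

lock = threading.Lock()
executed = []
all_done = threading.Event()
pool = ThreadPoolExecutor(max_workers=4)

def run(node):
    ready = []
    with lock:
        executed.append(node)          # stand-in for the real task body
        for v in dependents[node]:     # step 3: reduce dependents' counts
            freq_map[v] -= 1
            if freq_map[v] == 0:       # step 4: node is now free to run
                ready.append(v)
        if len(executed) == len(freq_map):
            all_done.set()
    for v in ready:                    # picked up directly by the pool
        pool.submit(run, v)

pool.submit(run, 1)                    # the only node with zero count
all_done.wait()
pool.shutdown()
print(sorted(executed))  # -> [1, 2, 3, 4]
```

Nodes 2 and 3 run in parallel once node 1 finishes, while node 4 is only submitted after both of its dependencies have reduced its count to zero.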
  • a first data structure (e.g. adj_mat) may be constructed as shown in matrix 602
  • a second data structure (e.g. freq_map) may be constructed as shown in map 606
  • a third data structure (e.g. execution queue) may be constructed as follows: <start> | 1 | <end> to indicate that the node with identifier 1 (e.g. start node 401 of graph 400) is to be executed.
  • the sequence module 262 checks row 1 of matrix 602 (e.g. the row corresponding to identifier 1) and determines that the nodes with identifiers 2, 3, 4, 5, 6, 7 and 8 are marked. Therefore, the sequence module 262 reduces the value indicated in the row 1 for these identifiers by 1 in matrix 602. Map 606 is also updated accordingly (e.g. by the sequence module 262) as shown in updated map 612, which now indicates that the value for each of identifiers 2, 3, 4, 5, 6, 7 and 8 is now ‘0’. As soon as the values are reduced to ‘0’ for the aforementioned nodes as shown in map 612, these nodes are then added to the execution queue, such that it becomes: <start> | 2 | 3 | 4 | 5 | 6 | 7 | 8 | <end>
  • Start node 401 will be removed from the execution queue because it has finished executing.
  • the sequence module 262 checks row 2 of matrix 602 (e.g. the row corresponding to identifier 2) and determines that the nodes with identifiers 12 and 13 are marked. Therefore, the sequence module 262 reduces the value indicated in the row 2 for the identifiers 12 and 13 by 1 in matrix 602. Map 612 is also updated accordingly (e.g. by the sequence module 262) as shown in updated map 614, which now indicates that the value for each of identifiers 12 and 13 is now reduced by 1.
  • For example, the value indicated for identifier 12 is reduced from ‘3’ to ‘2’, and the value indicated for identifier 13 is reduced from ‘4’ to ‘3’. Since the values indicated for identifiers 12 and 13 do not become 0 in the freq_map, the nodes corresponding to these identifiers will not be added to the execution queue. Node 2 will be removed from the execution queue because it has finished executing, and the execution queue is now updated (e.g. by the sequence module 262) to: <start> | 3 | 4 | 5 | 6 | 7 | 8 | <end>
  • the sequence module 262 checks row 3 of matrix 602 (e.g. the row corresponding to identifier 3) and determines that nodes with identifiers 12 and 13 are marked. Therefore, the sequence module 262 reduces the value indicated in the row 3 for the identifiers 12 and 13 by 1 in matrix 602. Map 614 is also updated accordingly (e.g. by the sequence module 262) as shown in updated map 616 which now indicates that the value for each of identifiers 12 and 13 is now reduced by 1.
  • For example, the value indicated for identifier 12 is reduced from ‘2’ to ‘1’, and the value indicated for identifier 13 is reduced from ‘3’ to ‘2’. Since the values indicated for identifiers 12 and 13 do not become 0 in the freq_map, the nodes corresponding to these identifiers will not be added to the execution queue. Node 3 will be removed from the execution queue because it has finished executing, and the execution queue is now updated (e.g. by the sequence module 262) to: <start> | 4 | 5 | 6 | 7 | 8 | <end>
  • the sequence module 262 checks row 4 of matrix 602 (e.g. the row corresponding to identifier 4) and determines that the node with identifier 9 is marked. Therefore, the sequence module 262 reduces the value indicated in the row 4 for the identifier 9 by 1 in matrix 602. Map 616 is also updated accordingly (e.g. by the sequence module 262) as shown in updated map 618, which now indicates that the value for identifier 9 is now reduced by 1. For example, the value indicated for identifier 9 is reduced from ‘1’ to ‘0’.
  • the node corresponding to this identifier (e.g. DP9 416) will be added to the execution queue.
  • Node 4 will be removed from the execution queue because it has finished executing, and the execution queue is now updated (e.g. by the sequence module 262) to: <start> | 5 | 6 | 7 | 8 | 9 | <end>
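The queue and freq_map updates traced above can be reproduced with only the dependencies stated in the text; the rest of graph 400 is omitted, so this is a partial reconstruction for illustration.

```python
# Dependencies stated in the worked example: node -> its dependent nodes.
dependents = {1: [2, 3, 4, 5, 6, 7, 8], 2: [12, 13], 3: [12, 13], 4: [9]}
# Initial counts from map 606 (only the nodes touched by the trace).
freq_map = {2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1,
            9: 1, 12: 3, 13: 4}

queue = [1]

def finish(node):
    """One trace step: dequeue the finished node, decrement its
    dependents' counts, and enqueue any dependent that reaches zero."""
    queue.remove(node)
    for v in dependents.get(node, []):
        freq_map[v] -= 1
        if freq_map[v] == 0:
            queue.append(v)

finish(1)  # queue becomes [2, 3, 4, 5, 6, 7, 8]
finish(2)  # identifier 12: 3 -> 2, identifier 13: 4 -> 3
finish(3)  # identifier 12: 2 -> 1, identifier 13: 3 -> 2
finish(4)  # identifier 9: 1 -> 0, so node 9 joins the queue
print(queue)  # -> [5, 6, 7, 8, 9]
```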
  • Fig. 7 illustrates an example flow diagram of a method for adaptively executing a plurality of tasks according to various embodiments.
  • a schema representing a plurality of tasks is defined, each task comprising a data retrieval and/or data evaluation operation, the schema comprising task information indicating how to execute each of the plurality of tasks.
  • a graph representation of the plurality of tasks is generated based on the schema, the graph representation comprising a plurality of nodes, each node of the plurality of nodes corresponding to one of the plurality of tasks.
  • the plurality of tasks is executed based on the graph representation and the task information.
  • Fig. 8A depicts a general-purpose computer system 1400, upon which the data processing server 140 described herein can be practiced.
  • the computer system 1400 includes a computer module 1401.
  • An external Modulator-Demodulator (Modem) transceiver device 1416 may be used by the computer module 1401 for communicating to and from a communications network 1420 via a connection 1421.
  • the communications network 1420 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN.
  • the modem 1416 may be a traditional “dial-up” modem.
  • the modem 1416 may be a broadband modem.
  • a wireless modem may also be used for wireless connection to the communications network 1420.
  • the computer module 1401 typically includes at least one processor unit 1405, and a memory unit 1406.
  • the memory unit 1406 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM).
  • the computer module 1401 also includes an interface 1408 for the external modem 1416.
  • the modem 1416 may be incorporated within the computer module 1401 , for example within the interface 1408.
  • the computer module 1401 also has a local network interface 1411, which permits coupling of the computer system 1400 via a connection 1423 to a local-area communications network 1422, known as a Local Area Network (LAN).
  • the local communications network 1422 may also couple to the wide network 1420 via a connection 1424, which would typically include a so-called “firewall” device or device of similar functionality.
  • the local network interface 1411 may comprise an Ethernet circuit card, a Bluetooth® wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1411.
  • the I/O interfaces 1408 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated).
  • Storage devices 1409 are provided and typically include a hard disk drive (HDD) 1410. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used.
  • An optical disk drive 1412 is typically provided to act as a non-volatile source of data.
  • Portable memory devices such optical disks, USB-RAM, portable, external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1400.
  • the components 1405 to 1412 of the computer module 1401 typically communicate via an interconnected bus 1404 and in a manner that results in a conventional mode of operation of the computer system 1400 known to those in the relevant art.
  • the processor 1405 is coupled to the system bus 1404 using a connection 1418.
  • the memory 1406 and optical disk drive 1412 are coupled to the system bus 1404 by connections 1419. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, and Apple or like computer systems.
  • the method 700, where performed by the data processing server 140, may be implemented using the computer system 1400.
  • the processes may be implemented as one or more software application programs 1433 executable within the computer system 1400.
  • the sub-processes 400, 500, and 600 are effected by instructions in the software 1433 that are carried out within the computer system 1400.
  • the software instructions may be formed as one or more code modules, each for performing one or more particular tasks.
  • the software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
  • the software may be stored in a computer readable medium, including the storage devices described below, for example.
  • the software is loaded into the computer system 1400 from the computer readable medium, and then executed by the computer system 1400.
  • a computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product.
  • the use of the computer program product in the computer system 1400 preferably effects an advantageous apparatus for a data processing server 140.
  • the software 1433 is typically stored in the HDD 1410 or the memory 1406.
  • the software is loaded into the computer system 1400 from a computer readable medium, and executed by the computer system 1400.
  • the software 1433 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1425 that is read by the optical disk drive 1412.
  • a computer readable medium having such software or computer program recorded on it is a computer program product.
  • the use of the computer program product in the computer system 1400 preferably effects an apparatus for a data processing server 140.
  • the application programs 1433 may be supplied to the user encoded on one or more CD-ROMs 1425 and read via the corresponding drive 1412, or alternatively may be read by the user from the networks 1420 or 1422. Still further, the software can also be loaded into the computer system 1400 from other computer readable media.
  • Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1400 for execution and/or processing.
  • Examples of such storage media include floppy disks, magnetic tape, optical disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1401.
  • Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1401 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
  • a user of the computer system 1400 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
  • Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via loudspeakers and user voice commands input via a microphone.
  • the structural context of the computer system 1400 (i.e., the data processing server 140) is presented merely by way of example. Therefore, in some arrangements, one or more features of the computer system 1400 may be omitted. Also, in some arrangements, one or more features of the computer system 1400 may be combined together. Additionally, in some arrangements, one or more features of the computer system 1400 may be split into one or more component parts.
  • Fig. 9 shows an alternative implementation of the transaction processing server 108 (i.e., the computer system 1300).
  • the transaction processing server 108 may be generally described as a physical device comprising at least one processor 802 and at least one memory 804 including computer program codes.
  • the at least one memory 804 and the computer program codes are configured to, with the at least one processor 802, cause the transaction processing server 108 to facilitate the operations described in method 700.
  • the transaction processing server 108 may also include a transaction processing module 806.
  • the memory 804 stores computer program code that the processor 802 compiles to have the transaction processing module 806 perform the respective functions.
  • the transaction processing module 806 performs the function of communicating with the requestor device 102 and the provider device 104; and the acquirer server 106 and the issuer server 110 to respectively receive and transmit a transaction, travel request message, or other similar messages. Further, the transaction processing module 806 may provide data and information relating to an approved or rejected transaction (e.g. date, time, amount, currency, user name, and other similar data relating to the concerned transaction) to the data processing server 140 as raw data that may be utilized for processing by data points. The processed data may then be stored or transferred to a database, for example database 150.
  • the transaction processing server may also be in direct communication with a database that stores the data relating to an approved or rejected transaction as raw data, or may be configured to process the data before storing it.
  • Fig. 10 shows an alternative implementation of the data processing server 140 (i.e., the computer system 1400).
  • data processing server 140 may be generally described as a physical device comprising at least one processor 902 and at least one memory 904 including computer program codes. The at least one memory 904 and the computer program codes are configured to, with the at least one processor 902, cause the data processing server 140 to perform the operations described in the method 700.
  • the data processing server 140 may also include a data module 906, a sequence module 908, a data point module 910, a machine learning module 912 and a rule evaluation module 914.
  • the memory 904 stores computer program code that the processor 902 executes to have each of the modules 906 to 914 perform their respective functions.
  • the sequence module 908 performs the function of defining, by one or more processors, a schema representing a plurality of tasks, each task comprising a data retrieval and/or data evaluation operation, the schema comprising task information indicating how to execute each of the plurality of tasks; and generating, by the one or more processors, a graph representation of the plurality of tasks based on the schema, the graph representation comprising a plurality of nodes, each node of the plurality of nodes corresponding to one of the plurality of tasks.
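The schema-to-graph step described above can be sketched as follows. The schema format, field names and task names here are hypothetical illustrations for clarity, not the claimed format:

```python
# Hypothetical schema: each entry names a task, its operation type, and the
# tasks it depends on. Each key becomes one node of the graph representation.
schema = {
    "fetch_profile": {"op": "retrieve", "depends_on": []},
    "fetch_history": {"op": "retrieve", "depends_on": []},
    "score_risk":    {"op": "evaluate", "depends_on": ["fetch_profile", "fetch_history"]},
    "apply_rules":   {"op": "evaluate", "depends_on": ["score_risk"]},
}

def build_graph(schema):
    """Return an adjacency list mapping each node to the nodes that depend on it."""
    graph = {name: [] for name in schema}
    for name, info in schema.items():
        for dep in info["depends_on"]:
            graph[dep].append(name)  # edge: dependency -> dependant
    return graph
```

With this sketch, `build_graph(schema)["fetch_profile"]` yields `["score_risk"]`, i.e. an edge from the retrieval node to the evaluation node that consumes it.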
  • the sequence module 908 may be further configured to determine an execution order for the plurality of tasks by: generating a matrix based on the graph representation, the matrix indicating, for each node of the plurality of nodes, whether there is a dependency on another node of the plurality of nodes; generating a frequency map based on the matrix, the frequency map indicating a total number of dependencies for each node of the plurality of nodes; and determining an execution order for the plurality of tasks based on the frequency map. Determining the execution order may further comprise identifying a node with zero dependencies from the frequency map, and adding the identified node to an execution queue. The sequence module 908 may be further configured to reduce the total number of dependencies for each node that is dependent on the identified node by one in the frequency map after the identified node is executed, and remove the identified node from the execution queue.
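The matrix and frequency-map procedure described above is, in effect, a Kahn-style topological sort. A minimal sketch follows, assuming one plausible matrix encoding (`matrix[i][j]` is 1 when node `j` depends on node `i`); this is an illustration, not the claimed implementation:

```python
from collections import deque

def execution_order(nodes, matrix):
    """Order tasks by repeatedly executing zero-dependency nodes.

    matrix[i][j] == 1 means node j depends on node i.
    The frequency map holds each node's remaining dependency count.
    """
    n = len(nodes)
    # Frequency map: total number of dependencies (in-degree) per node.
    freq = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    queue = deque(j for j in range(n) if freq[j] == 0)  # zero-dependency nodes
    order = []
    while queue:
        i = queue.popleft()      # remove the identified node from the queue
        order.append(nodes[i])   # execute the task for this node
        for j in range(n):
            if matrix[i][j]:     # for each node dependent on the executed node
                freq[j] -= 1     # reduce its dependency count by one
                if freq[j] == 0:
                    queue.append(j)  # now ready for execution
    return order
```

For four nodes A, B, C, D where C depends on A and B, and D depends on C, the sketch returns the order A, B, C, D.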
  • the sequence module 908 may be further configured to define a plurality of tasks based on a schema, the schema comprising information indicating how to execute each of the plurality of tasks.
  • the sequence module 908 may be further configured to determine one or more of the plurality of tasks that can only be executed after a first task has been executed, and a counter for each of the one or more tasks, and identify a second task from the one or more tasks to be executed based on a number indicated in each counter.
  • the data point module 910 performs the function of executing, by the one or more processors, the plurality of tasks based on the graph representation and the task information. Two or more tasks of the plurality of tasks may be executed in parallel by one or more processors.
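Parallel execution of independent tasks can be sketched with a thread pool that runs, in each round, every task whose dependencies have already completed. The schema format and names are hypothetical, as in the earlier bullets:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical schema: two independent retrieval tasks feeding one evaluation.
schema = {
    "fetch_profile": {"depends_on": []},
    "fetch_history": {"depends_on": []},
    "score_risk":    {"depends_on": ["fetch_profile", "fetch_history"]},
}

def run_parallel(schema, execute):
    """Run each round of dependency-free tasks in parallel, then unlock dependants.

    Assumes the dependency graph is acyclic (a cycle would leave tasks pending).
    """
    remaining = {name: set(info["depends_on"]) for name, info in schema.items()}
    done = set()
    with ThreadPoolExecutor() as pool:
        while remaining:
            ready = [n for n, deps in remaining.items() if deps <= done]
            list(pool.map(execute, ready))  # independent tasks execute in parallel
            done.update(ready)
            for name in ready:
                del remaining[name]
```

Here `fetch_profile` and `fetch_history` run concurrently in the first round, and `score_risk` runs once both have finished.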
  • the task information may further indicate data to be retrieved and a data source for a data retrieval operation of the plurality of tasks, wherein executing the task further comprises retrieving the indicated data from the data source.
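Executing such a data retrieval operation might look like the following sketch, where the task-information field names (`data`, `source`) and the source name are hypothetical illustrations:

```python
# Hypothetical task information for one data retrieval operation.
task_info = {
    "task": "fetch_transaction",
    "data": ["amount", "currency", "timestamp"],  # data to be retrieved
    "source": "transactions_db",                  # data source to query
}

def execute_retrieval(task_info, sources):
    """Retrieve only the indicated fields from the indicated data source."""
    record = sources[task_info["source"]]
    return {field: record[field] for field in task_info["data"]}
```

Fields not named in the task information (e.g. a `user` field in the record) are left out of the retrieved result.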
  • the data point module 910 may be further configured to execute a task corresponding to the identified node.
  • the data point module 910 may be further configured to execute a task for a DP node.
  • the information further indicates data to be retrieved and a data source for a task of the plurality of tasks
  • executing the task further comprises retrieving the indicated data from the data source.
  • the machine learning module 912 performs the function of processing data relating to a task for a ML model node.
  • the rule evaluation module 914 performs the function of evaluating rules based on the data from, for example, one or more DP nodes and/or one or more ML model nodes.
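A minimal sketch of such rule evaluation over a context assembled from DP-node data and ML-model outputs; the rule names, field names and thresholds are illustrative assumptions:

```python
def evaluate_rules(context, rules):
    """Return the names of the rules whose predicates hold for the context."""
    return [name for name, predicate in rules.items() if predicate(context)]

# Context fields: "amount" could come from a DP node, "risk_score" from an
# ML model node; each rule is a predicate over that combined context.
rules = {
    "high_amount": lambda ctx: ctx["amount"] > 1000,
    "risky_score": lambda ctx: ctx["risk_score"] > 0.8,
}
```

A transaction context of `{"amount": 2000, "risk_score": 0.5}` would fire only the `high_amount` rule in this sketch.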
  • the data module 906 performs the functions of receiving data and information from the requestor device 102, provider device 104, transaction processing server 108, database 150, a cloud and other sources of information to facilitate the method 700.
  • the data module 906 may be configured to receive data and information from the requestor device 102, provider device 104, transaction processing server 108, database 150, a cloud and other sources of information to facilitate adaptively executing a plurality of tasks.
  • the data module 906 may be configured to receive data and information required for adaptively executing a plurality of tasks from the requestor device 102, the provider device 104, transaction processing server 108, database 150, and/or other sources of information.
  • the data module 906 may be further configured to send information relating to a completed task to the requestor device 102, the provider device 104, the transaction processing server 108, or other destinations where the information is required.
  • the data module 906 may be further configured to communicate with and store data and information for each of the sequence module 908, data point module 910, machine learning module 912 and rule evaluation module 914.
  • all the tasks and functions required for facilitating the method 700 may be performed by a single processor 902 of the data processing server 140, or by one or more processors.
  • Fig. 8B depicts a general-purpose computer system 1500, upon which the combined transaction processing server 108 and data processing server 140 described herein can be practiced.
  • the computer system 1500 includes a computer module 1501.
  • An external Modulator-Demodulator (Modem) transceiver device 1516 may be used by the computer module 1501 for communicating to and from a communications network 1520 via a connection 1521.
  • the communications network 1520 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN.
  • the modem 1516 may be a traditional “dial-up” modem.
  • the modem 1516 may be a broadband modem.
  • a wireless modem may also be used for wireless connection to the communications network 1520.
  • the computer module 1501 typically includes at least one processor unit 1505, and a memory unit 1506.
  • the memory unit 1506 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM).
  • the computer module 1501 also includes an interface 1508 for the external modem 1516.
  • the modem 1516 may be incorporated within the computer module 1501, for example within the interface 1508.
  • the computer module 1501 also has a local network interface 1511, which permits coupling of the computer system 1500 via a connection 1523 to a local-area communications network 1522, known as a Local Area Network (LAN).
  • the local communications network 1522 may also couple to the wide network 1520 via a connection 1524, which would typically include a so-called “firewall” device or device of similar functionality.
  • the local network interface 1511 may comprise an Ethernet circuit card, a Bluetooth® wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1511.
  • the I/O interfaces 1508 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated).
  • Storage devices 1509 are provided and typically include a hard disk drive (HDD) 1510. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used.
  • An optical disk drive 1512 is typically provided to act as a non-volatile source of data.
  • Portable memory devices, such as optical disks, USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1500.
  • the components 1505 to 1512 of the computer module 1501 typically communicate via an interconnected bus 1504 and in a manner that results in a conventional mode of operation of the computer system 1500 known to those in the relevant art.
  • the processor 1505 is coupled to the system bus 1504 using a connection 1518.
  • the memory 1506 and optical disk drive 1512 are coupled to the system bus 1504 by connections 1519. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun SPARCstations, Apple or like computer systems.
  • the steps of the method 700 performed by the data processing server 140 and facilitated by the transaction processing server 108 may be implemented using the computer system 1500.
  • the steps of the method 700 as performed by the data processing server 140 may be implemented as one or more software application programs 1533 executable within the computer system 1500.
  • the steps of the method 700 are effected by instructions in the software 1533 that are carried out within the computer system 1500.
  • the software instructions may be formed as one or more code modules, each for performing one or more particular tasks.
  • the software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the steps of the method 700 and a second part and the corresponding code modules manage a user interface between the first part and the user.
  • the software may be stored in a computer readable medium, including the storage devices described below, for example.
  • the software is loaded into the computer system 1500 from the computer readable medium, and then executed by the computer system 1500.
  • a computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product.
  • the use of the computer program product in the computer system 1500 preferably effects an advantageous apparatus for a combined transaction processing and data processing server.
  • the software 1533 is typically stored in the HDD 1510 or the memory 1506.
  • the software is loaded into the computer system 1500 from a computer readable medium, and executed by the computer system 1500.
  • the software 1533 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1525 that is read by the optical disk drive 1512.
  • a computer readable medium having such software or computer program recorded on it is a computer program product.
  • the use of the computer program product in the computer system 1500 preferably effects an apparatus for a combined transaction processing and data processing server.
  • the application programs 1533 may be supplied to the user encoded on one or more CD-ROMs 1525 and read via the corresponding drive 1512, or alternatively may be read by the user from the networks 1520 or 1522. Still further, the software can also be loaded into the computer system 1500 from other computer readable media.
  • Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1500 for execution and/or processing.
  • Examples of such storage media include floppy disks, magnetic tape, optical disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1501.
  • Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1501 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
  • graphical user interfaces (GUIs)
  • a user of the computer system 1500 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
  • Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via loudspeakers and user voice commands input via a microphone.
  • Fig. 11 shows an alternative implementation of the combined transaction processing and data processing server (i.e., the computer system 1500).
  • the combined transaction processing and data processing server may be generally described as a physical device comprising at least one processor 1002 and at least one memory 1004 including computer program codes.
  • the at least one memory 1004 and the computer program codes are configured to, with the at least one processor 1002, cause the combined transaction processing and data processing server to perform the operations described in the steps of the method 700.
  • the combined transaction processing and data processing server may also include a transaction processing module 806, a data module 906, a sequence module 908, a data point module 910, a machine learning module 912 and a rule evaluation module 914.
  • the memory 1004 stores computer program code that the processor 1002 executes to have each of the modules 806 to 914 perform their respective functions.
  • the transaction processing module 806 performs the same functions as described for the same transaction processing module in Fig. 9.
  • the data module 906, sequence module 908, data point module 910, machine learning module 912 and rule evaluation module 914 perform the same functions as described for the corresponding modules in Fig. 10.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Accounting & Taxation (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to methods and systems for adaptively executing a plurality of tasks. In some examples, the present invention relates to a method for adaptively executing a plurality of tasks, the method comprising: defining, by one or more processors, a schema representing a plurality of tasks, each task comprising a data retrieval and/or data evaluation operation, the schema comprising task information indicating how to execute each of the plurality of tasks; generating, by the one or more processors, a graph representation of the plurality of tasks based on the schema, the graph representation comprising a plurality of nodes, each node of the plurality of nodes corresponding to one of the plurality of tasks; and executing, by the one or more processors, the plurality of tasks based on the graph representation and the task information.
PCT/SG2023/050433 2022-06-22 2023-06-19 Method and system for adaptively executing a plurality of tasks WO2023249558A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10202250258M 2022-06-22
SG10202250258M 2022-06-22

Publications (1)

Publication Number Publication Date
WO2023249558A1 true WO2023249558A1 (fr) 2023-12-28

Family

ID=89380714

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2023/050433 WO2023249558A1 (fr) 2022-06-22 2023-06-19 Method and system for adaptively executing a plurality of tasks

Country Status (1)

Country Link
WO (1) WO2023249558A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080065448A1 (en) * 2006-09-08 2008-03-13 Clairvoyance Corporation Methods and apparatus for identifying workflow graphs using an iterative analysis of empirical data
KR20170101609A (ko) * 2016-02-29 2017-09-06 경기대학교 산학협력단 Knowledge base-based concept graph expansion system
US20170308411A1 (en) * 2016-04-20 2017-10-26 Samsung Electronics Co., Ltd Optimal task scheduler
US20180143861A1 (en) * 2009-02-13 2018-05-24 Ab Initio Technology Llc Task managing application for performing tasks based on messages received from a data processing application initiated by the task managing application
US20220129766A1 (en) * 2018-12-24 2022-04-28 Parexel International, Llc Data storage and retrieval system including a knowledge graph employing multiple subgraphs and a linking layer including multiple linking nodes, and methods, apparatus and systems for constructing and using same

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080065448A1 (en) * 2006-09-08 2008-03-13 Clairvoyance Corporation Methods and apparatus for identifying workflow graphs using an iterative analysis of empirical data
US20180143861A1 (en) * 2009-02-13 2018-05-24 Ab Initio Technology Llc Task managing application for performing tasks based on messages received from a data processing application initiated by the task managing application
KR20170101609A (ko) * 2016-02-29 2017-09-06 경기대학교 산학협력단 Knowledge base-based concept graph expansion system
US20170308411A1 (en) * 2016-04-20 2017-10-26 Samsung Electronics Co., Ltd Optimal task scheduler
US20220129766A1 (en) * 2018-12-24 2022-04-28 Parexel International, Llc Data storage and retrieval system including a knowledge graph employing multiple subgraphs and a linking layer including multiple linking nodes, and methods, apparatus and systems for constructing and using same

Similar Documents

Publication Publication Date Title
CN113169980B (zh) System and method for maintaining transaction account data using a blockchain
US20240045989A1 (en) Systems and methods for secure data aggregation and computation
US20190325473A1 (en) Reward point redemption for cryptocurrency
US11257134B2 (en) Supplier invoice reconciliation and payment using event driven platform
US10572685B1 (en) Protecting sensitive data
US12001800B2 (en) Semantic-aware feature engineering
US11570214B2 (en) Crowdsourced innovation laboratory and process implementation system
US20230098747A1 (en) Systems and methods for payment transactions, alerts, dispute settlement, and settlement payments, using multiple blockchains
US11803823B2 (en) Systems and methods for blockchain-based payment transactions, alerts, and dispute settlement, using a blockchain interface server
US10467636B2 (en) Implementing retail customer analytics data model in a distributed computing environment
US20190188579A1 (en) Self learning data loading optimization for a rule engine
US10318546B2 (en) System and method for test data management
US20170212731A1 (en) Systems and methods for visual data management
US20230325592A1 (en) Data management using topic modeling
US11734350B2 (en) Statistics-aware sub-graph query engine
CN112837149A (zh) 一种企业信贷风险的识别方法和装置
US11379191B2 (en) Presentation oriented rules-based technical architecture display framework
US20220164868A1 (en) Real-time online transactional processing systems and methods
KR20210068039A (ko) Context-based filtering within a subset of network nodes implementing a transaction system
WO2023249558A1 (fr) Method and system for adaptively executing a plurality of tasks
US11314710B2 (en) System and method for database sharding using dynamic IDs
US9342541B1 (en) Presentation oriented rules-based technical architecture display framework (PORTRAY)
WO2020070721A1 (fr) System and method for simple and secure transactions on social networks for mobile devices
JP7499424B2 (ja) Computer network system for cryptographically protected token-based operations and method of using same
WO2023219572A1 (fr) Method and system for adaptive processing of a data request

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23827616

Country of ref document: EP

Kind code of ref document: A1