US20150163179A1 - Execution of a workflow that involves applications or services of data centers

Info

Publication number
US20150163179A1
US20150163179A1 (application number US14/563,331)
Authority
US
United States
Prior art keywords
workflow
message
applications
orchestrator
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/563,331
Inventor
Stephane Herman Maes
Woong Joseph Kim
Ankit Ashok Desai
Christopher William Johnson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micro Focus LLC
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to US14/563,331
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. reassignment HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DESAI, ANKIT ASHOK, MAES, STEPHANE HERMAN, JOHNSON, CHRISTOPHER WILLIAM, KIM, WOONG JOSEPH
Publication of US20150163179A1
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP reassignment HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Assigned to ENTIT SOFTWARE LLC reassignment ENTIT SOFTWARE LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Assigned to JPMORGAN CHASE BANK, N.A. reassignment JPMORGAN CHASE BANK, N.A. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARCSIGHT, LLC, ATTACHMATE CORPORATION, BORLAND SOFTWARE CORPORATION, ENTIT SOFTWARE LLC, MICRO FOCUS (US), INC., MICRO FOCUS SOFTWARE, INC., NETIQ CORPORATION, SERENA SOFTWARE, INC.
Assigned to JPMORGAN CHASE BANK, N.A. reassignment JPMORGAN CHASE BANK, N.A. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARCSIGHT, LLC, ENTIT SOFTWARE LLC
Assigned to MICRO FOCUS LLC reassignment MICRO FOCUS LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: ENTIT SOFTWARE LLC
Assigned to MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC) reassignment MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC) RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0577 Assignors: JPMORGAN CHASE BANK, N.A.
Assigned to MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.), ATTACHMATE CORPORATION, MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC), NETIQ CORPORATION, SERENA SOFTWARE, INC, BORLAND SOFTWARE CORPORATION, MICRO FOCUS (US), INC. reassignment MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.) RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718 Assignors: JPMORGAN CHASE BANK, N.A.
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/54 - Interprogram communication
    • G06F9/546 - Message passing systems or structures, e.g. queues
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046 - Interoperability with other network applications or services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/54 - Interprogram communication
    • G06F9/542 - Event management; Broadcasting; Multicasting; Notifications
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/54 - Interprogram communication
    • G06F9/541 - Interprogram communication via adapters, e.g. between incompatible applications
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 - Advertisements
    • G06Q30/0276 - Advertisement creation

Definitions

  • An enterprise may employ multiple applications to perform various tasks.
  • the tasks can be performed by various applications, and in some cases multiple applications can perform overlapping tasks.
  • the tasks can include tasks associated with information technology (IT) management, such as management of development and production of program code, management of a portfolio of products or services, support management, IT service management, cloud and Software as a Service (SaaS) service management and so forth.
  • IT management performs management with respect to components of an IT environment, where the components can include computers, storage devices, communication nodes, machine-readable instructions, and so forth.
  • IT management can be modeled by the Information Technology Infrastructure Library (ITIL) (which provides a set of best practices for IT management), the Business Process Framework (eTOM) from the TM Forum, and so forth.
  • FIG. 1A is a schematic diagram of an example arrangement including data centers according to some implementations.
  • FIG. 1B is a block diagram of a gateway according to some implementations.
  • FIG. 2 is a block diagram of a service exchange according to some implementations.
  • FIG. 3 is a block diagram of a service exchange that interacts with a legacy integration framework, according to further implementations.
  • FIG. 4 is a flow diagram of a process of a service exchange according to some implementations.
  • FIG. 5 is a block diagram of an example computer system according to some implementations.
  • An “enterprise” can refer to a business concern, an educational organization, a government agency, an individual, or any other entity.
  • a “workflow” can refer to any process that the enterprise can perform, such as a use case. Such a process of the workflow can also be referred to as an “end-to-end process” or an “enterprise process” since the process involves a number of activities of the enterprise from start to finish.
  • a “use case” can refer to any specific business process or other service that an enterprise desires to implement.
  • An “application” can refer to machine-readable instructions (such as software and/or firmware) that are executable.
  • the application can include logic associated with an enterprise process, which can implement or support all or parts of the enterprise process (or processes).
  • An application can be an application developed by the enterprise, or an application provided by an external vendor of the enterprise.
  • An application can be provided on the premises of the enterprise, or in the cloud (public cloud or virtual private cloud), and the application can be a hosted application (e.g. an application provided by a provider over a network), a managed service (a service managed and/or operated by a third party that can be hosted or on premise), or a software as a service (SaaS) (a service available on a subscription basis to users), and so forth.
  • multiple applications used by the enterprise may be provided by different vendors.
  • an application implements a particular set of business logic and is not aware of other applications that are responsible for performing other processes.
  • the design of the application may or may not have taken into account the presence of other applications upstream or downstream (with respect to an end-to-end process). This is especially true for older (legacy) applications.
  • applications can at least expose well-defined application programming interfaces (APIs) that assume that the applications will be interacting with other systems. Such applications can be called through their APIs or can call other APIs. Even with such APIs, applications may not readily interact with each other.
  • a point-to-point integration mechanism can include a component (or multiple components) provided between applications to perform data transformations, messaging services, and other tasks to allow the applications to determine how and when to communicate and interact with each other.
  • Point-to-point integration mechanisms can be provided for different subsets of applications. If there are a large number of applications in a portfolio of applications used by an enterprise, then there can be a correspondingly large number of point-to-point integration mechanisms.
  • When applications are changed, point-to-point integration mechanisms may have to be modified and/or re-tested. Modifying or re-testing an integration mechanism between applications can be a time-consuming and costly exercise, particularly if there are a large number of integration mechanisms deployed by the enterprise. This exercise can rapidly become a complex combinatorial exercise. If point-to-point integration is used, an enterprise may be hesitant to upgrade applications, to add new applications, to change application vendors, or to modify processes, since doing so can be complex and costly. However, maintaining a static portfolio of applications can prevent an enterprise from being agile in meeting evolving demands by users or customers of the enterprise. If an enterprise has applications provided by multiple vendors, additional challenges may arise. Each application may have to be built to support particular releases of other applications, which adds complexity to application development if the enterprise wishes to deploy another release of another vendor's application.
  • a service exchange and integration framework (referred to as a “service exchange” in the ensuing discussion) is provided that is able to integrate applications in a flexible manner, and orchestrate execution of workflows (which can refer to enterprise processes or use cases as noted above). Applications are used to implement their respective logic parts of each workflow. These applications are orchestrated to automate the end-to-end enterprise process or use case.
  • orchestrating execution of a workflow can refer to modeling and executing the logic of sequencing of the tasks of the workflow. Some of the tasks of the workflow are delegated using the orchestration to be performed by the logic of the applications.
  • a workflow can include an order fulfillment workflow.
  • An order fulfillment workflow can include the following tasks: receive an order from a customer, determine applications that are to be involved in fulfilling the order, invoke the identified applications to fulfill the order, and return a status (e.g. confirmation number or information to further manage the order, such as to view, update, cancel, or repeat the order) to the customer.
  • the foregoing example order fulfillment workflow is a simplified workflow that includes a simple collection of tasks. An actual order fulfillment workflow may involve many more tasks.
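  • To make the sequencing concrete, the following is a minimal, hypothetical sketch (not taken from the patent) of flow logic for the simplified order fulfillment workflow above; the function and parameter names (order_fulfillment_flow, registry, invoke) are illustrative assumptions.

```python
def order_fulfillment_flow(order, registry, invoke):
    """Hypothetical flow logic for the simplified order fulfillment workflow."""
    # Task 1: the order received from the customer is passed in as `order`.
    # Task 2: determine the applications that are to be involved in fulfilling the order.
    applications = registry(order)
    # Task 3: invoke the identified applications to fulfill the order.
    results = [invoke(app, "fulfill", order) for app in applications]
    # Task 4: return a status (e.g. a confirmation number) to the customer.
    return {"confirmation": order.get("id"), "results": results}
```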
  • a workflow can involve processes of applications in multiple data centers or in the cloud or over the internet.
  • a “data center” can refer to an arrangement of resources (including computing resources such as computers or processors, storage resources to store information, communication resources to communicate information, and machine-executable instructions such as applications, operating systems, and so forth).
  • a data center can be provided by an enterprise.
  • a data center can also be a public cloud, a private cloud, or a hybrid cloud that is made up of a public cloud and a private cloud.
  • a public cloud can be provided by a provider that is different from the enterprise.
  • a private cloud can be provided by the enterprise.
  • a data center is “provided” by a provider (enterprise or other provider) if the provider manages the resources of the data center and/or makes available the resources of the data center to users, machines, and/or program code.
  • Multiple data centers can be coupled over a private network and/or over a public network such as the Internet.
  • a “cloud” can refer to an infrastructure including resources that are available for use by users.
  • the resources of a public cloud are available over a public network, to multiple tenants (or customers), who are able to subscribe or rent some share of the public cloud resources.
  • a public cloud provided by a third party provider can also be deployed on the premises of the enterprise.
  • a cloud can also be a hybrid cloud, which includes both a public cloud and a private cloud.
  • Another type of cloud is a managed cloud, which includes resources of the enterprise that are managed by a third party provider.
  • an enterprise can deploy multiple data centers, such as in different geographic regions (e.g. across a city, a state, a country, or the world) to achieve redundancy, high availability (to ensure availability of resources or to provide disaster recovery in case of failure of a data center), or scalability (increasing resources to meet increased demand). Deployment of multiple data centers can also be for satisfying government regulations (e.g. regulation specifying that certain data has to be kept in a specific country). These data centers can be managed by the enterprise or by a third party provider.
  • a data center can also provide services such as Software as a Service (SaaS) services. SaaS can refer to an arrangement in which software (or more generally, machine-executable instructions) is made available to users on a subscription basis.
  • Orchestrating workflows across different data centers can be associated with various challenges. For example, use of different data centers may involve communications through many firewalls. As another example, the data centers can be coupled over a network (such as the Internet) that can be associated with unexpected delays, packet losses, etc., particularly during times of high usage. In such cases, guaranteeing the satisfaction of target goals associated with a service level agreement (SLA) or quality of service (QoS) level can be difficult. Also, managing security can be more complex. In addition, if cloud resources and/or SaaS services are employed, instances of applications that are to be orchestrated can be dynamically created, moved, replaced, and so forth, which can involve the use of dynamic addressing; as a result, it can be more difficult to address such application instances.
  • Legacy integration frameworks, such as Enterprise Service Bus (ESB) integration frameworks, may also be present to integrate applications.
  • a legacy integration framework can refer to an integration framework different from that provided by the service exchange according to the present disclosure.
  • ESB refers to an architecture model for designing and implementing communication between mutually interacting applications in a service-oriented architecture (SOA), where the applications are usually distributed within a data center. While it is theoretically possible to distribute applications across the Internet or among clouds, doing so can be associated with issues relating to changing use cases and routing, dynamic addressing, and delivering messages in a manageable manner across the Internet or data centers.
  • the ESB framework provides for monitoring and control of the routing of messages between applications, resolving contention between applications, and other tasks.
  • the service exchange is able to interact with an ESB integration framework or another framework (e.g. a message queue for exchanging messages among applications) that may already be present to integrate the applications.
  • techniques or mechanisms enable the orchestrated execution of applications across multiple data centers.
  • the service exchange according to some implementations enables cloud-scale message brokering (to allow an exchange of messages across clouds) or a cloud event-driven architecture (EDA).
  • An EDA refers to a framework that orchestrates behavior around the production, detection and consumption of events as well as the responses the events evoke.
  • a cloud EDA refers to such a framework implemented across clouds.
  • FIG. 1A illustrates an example arrangement that includes a data center 100 , a data center 102 , and a data center 104 .
  • the data centers 100 , 102 , and 104 can be enterprise data centers and/or clouds as discussed above. Although just three data centers are shown in FIG. 1A , it is noted that in other examples, different numbers of data centers can be provided.
  • the data center 100 includes a service exchange 110 , which includes an orchestrator 112 , a message broker 114 , and adapters 116 .
  • the adapters 116 are provided between the message broker 114 and respective applications 118 .
  • Although the applications 118 are depicted as being part of the service exchange 110 in FIG. 1A, it is noted that in other examples the applications 118 can be separate from the service exchange 110, and some applications 118 can even be external to the data center 100. For example, some applications 118 can be provided by an entity that is separate from the provider of the data center 100.
  • Each of the orchestrator 112 , message broker 114 , and adapters 116 can be implemented as a combination of machine-executable instructions and processing hardware, such as a processor, a processor core, an application-specific integrated circuit (ASIC) device, a programmable gate array, and so forth. In other examples, any of the orchestrator 112 , message broker 114 , and adapters 116 can be implemented with just processing hardware.
  • the message broker 114 is operatively or communicatively coupled to the orchestrator 112 and the adapters 116 . Generally, the message broker 114 is used to exchange messages among components, including the orchestrator 112 and the adapters 116 .
  • a message can include any or some combination of the following: a call (e.g. API call) or an event (e.g. response, result, or other type of event).
  • the message broker 114 is responsible for ensuring that API calls and events (e.g. responses, results, etc.) are sent to the correct adapter or to the correct workflow instance (multiple workflow instances may execute concurrently).
  • Alternatively, the endpoints may all receive a call or event, and each endpoint makes a decision regarding whether it should process the call or event.
  • the message broker 114 further includes a message confirmation engine (MCE) 119 to perform the following tasks.
  • the message confirmation engine 119 ensures that a message put on the message broker 114 is delivered to a target by checking for a confirmation of receipt of the message by the target (e.g. an adapter 116 ), such as with a positive acknowledgement, for example. Message confirmation is thus implemented with the message broker 114 and the adapters 116 . If the target does not confirm receipt of the message, the message confirmation engine 119 can cause the message broker 114 to resend the message to the target.
  • the message confirmation engine 119 can also ensure that the target processes the message (e.g. by checking that the target returns a confirmation of commit).
  • the confirmation of commit is an indication of successful completion of processing of the message.
  • An application can send the confirmation of commit, or alternatively, an adapter 116 can query the application for the confirmation of commit. If the confirmation of commit is not received, then the message confirmation engine 119 can cause the message broker to resend the message, or to indicate an error, depending on the type of message and application/flow design. Idempotent calls on the applications can be repeated as often as appropriate until commit is confirmed. When the calls cannot be repeated, error messages are sent and the workflow handles (in its logic) what to do to perform rollback or notification. Rollback can refer to rolling back the workflow to a prior known good state. Notification can include notifying a management system (or management systems). The action to take in response to a lack of confirmation of commit can be determined from a canonical data model 117 (discussed further below).
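  • As an illustration only, the following sketch shows the kind of confirmation logic described above for the message confirmation engine 119; the send, wait_for_ack, and wait_for_commit callables are assumptions, not interfaces defined by the patent.

```python
def deliver_with_confirmation(send, wait_for_ack, wait_for_commit,
                              message, idempotent, max_retries=3, timeout=5.0):
    """Hypothetical sketch: deliver a message, checking receipt and commit."""
    for _ in range(max_retries):
        send(message)
        if not wait_for_ack(message["id"], timeout):
            continue                      # no confirmation of receipt: resend
        if wait_for_commit(message["id"], timeout):
            return True                   # target confirmed successful processing
        if not idempotent:
            break                         # the call cannot safely be repeated
    # Leave rollback or notification to the workflow logic, as described above.
    raise RuntimeError("message {} was not confirmed".format(message["id"]))
```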
  • the message confirmation engine 119 can also ensure that messages are delivered in a managed manner (managed delivery of messages); in other words, messages are delivered without loss and with acceptable delays (delays within specified target levels).
  • the message confirmation engine 119 can also perform remediation if message loss or delays occur. Delivery times for messages can be monitored, and messages that are lost or excessively delayed (delayed longer than a specified target goal) are re-sent. Remediation can include resending the message that is missing or delayed to allow the endpoint to not have to wait anymore. Remediation can also include notifying other systems such as network or traffic management systems to try to get more or better or alternate bandwidth, for example.
  • the message confirmation engine 119 can also perform secure communication of messages with endpoints, by applying security to the messages. Applying security can include encryption of messages, mutual authentication of messages, or use of certificates. Encryption of a message is accomplished by using a key (e.g. public key or private key) to encrypt the message.
  • Mutual authentication refers to the two communicating endpoints authenticating each other, such as with use of credentials or other security information.
  • a certificate can be used to establish a secure communication session between two endpoints.
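  • As one possible (hypothetical) realization of such security, mutual authentication with certificates can be set up with standard TLS facilities; the file names below are placeholders.

```python
import ssl

# Hypothetical sketch: a TLS context for mutually authenticated, encrypted
# communication between two endpoints (e.g. a gateway and a remote endpoint).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("ca-bundle.pem")                     # trust the peer's issuing CA
context.load_cert_chain("endpoint-cert.pem", "endpoint-key.pem")   # present our own certificate
context.verify_mode = ssl.CERT_REQUIRED                            # require the peer's certificate
# The context can then wrap a socket for the session:
#   secure_sock = context.wrap_socket(raw_sock, server_hostname="broker.example.com")
```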
  • a gateway 142 can also be provided in the data center 100 .
  • the gateway 142 is discussed further below, following the discussion of operations of the orchestrator 112 , the message broker 114 , and the adapters 116 .
  • the gateway on the enterprise service exchange side can only do as much as it can with available protocols.
  • the remote cloud can support the same protocols or mechanisms of the gateway, or a gateway and service exchange of another data center that is geographically close to or collocated with the remote data center can be used to interact with the cloud.
  • the message broker 114 is able to send a confirmation of successful completion of an application in a workflow to the orchestrator 112 or to a requester that initiated the workflow.
  • the orchestrator 112 is used to orchestrate the execution of a specific workflow 113 that involves tasks performed by multiple applications (e.g. a subset or all of applications 118 ).
  • flow logic can be loaded into the orchestrator 112 , and the flow logic is executed by the orchestrator 112 .
  • Flow logic can include a representation of a collection of tasks that are to be performed.
  • the flow logic can be in the form of program code (e.g. a script or other form of machine-executable instructions), a document according to a specified language or structure (e.g. Business Process Execution Language (BPEL), Business Process Model and Notation (BPMN), etc.), or any other type of representation (e.g. an ad-hoc model).
  • the flow logic can be generated by a human, a machine, or program code, and can be stored in a machine-readable or computer-readable storage medium accessible by the orchestrator 112 .
  • the orchestrator 112 is able to execute multiple flow logic to perform respective workflows. Multiple workflows and workflow instances (instances of a particular workflow refer to multiple instantiations of the particular workflow) can be concurrently executed in parallel by the orchestrator 112 .
  • the orchestrator 112 is able to evaluate (interpret or execute) a flow logic, and perform tasks specified by the flow logic in response to a current state of the workflow and calls and events received by the orchestrator 112 .
  • a workflow can be a stateful workflow. As a stateful workflow is performed by the orchestrator 112 , the orchestrator 112 is able to store a current state of the workflow, to indicate the portion of the workflow already executed. Based on the workflow's current state and a received event, the orchestrator 112 is able to transition from a current state to a next state of the workflow and can determine a next action to perform, where the next action may involve the invocation of another application.
  • Whenever the orchestrator 112 receives a new call or event (e.g. response, results, or other event), the orchestrator 112 evaluates which workflow instance is to receive the call or event and loads the workflow instance with the correct state. In some cases, it is possible that multiple workflow instances may check if they are supposed to be a recipient of a call or event.
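  • The following hypothetical sketch illustrates this dispatch-and-transition behavior for stateful workflows; the transition table and event fields are assumptions for illustration.

```python
class WorkflowInstance:
    """Hypothetical stateful workflow instance: a state plus a transition table."""

    def __init__(self, instance_id, transitions, initial_state="start"):
        self.instance_id = instance_id
        # transitions maps (current_state, event_type) -> (next_state, next_action)
        self.transitions = transitions
        self.state = initial_state

    def handle(self, event):
        next_state, action = self.transitions[(self.state, event["type"])]
        self.state = next_state
        return action                      # e.g. the next abstract application call


class Orchestrator:
    """Hypothetical orchestrator: routes each call or event to its instance."""

    def __init__(self):
        self.instances = {}

    def dispatch(self, event):
        # Load the workflow instance that is to receive the call or event,
        # with its current state, and let it transition to the next state.
        instance = self.instances[event["workflow_instance_id"]]
        return instance.handle(event)
```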
  • a workflow can be a stateless workflow, which does not keep track of a current state of the workflow. Rather, the stateless workflow performs corresponding next steps or actions as events are received by the orchestrator 112 .
  • Use of a stateless workflow is generally suitable for asynchronous operation (discussed further below).
  • a stateful workflow can be used with both a synchronous operation and asynchronous operation.
  • the events (e.g. results, responses, etc.) received by the orchestrator 112 can be provided by applications that are invoked in the workflow or from another source, such as through an interface 115 (e.g. an application programming interface (API)) of the message broker 114 .
  • the message broker 114 can also direct an event to a particular workflow instance (note that there can be multiple workflow instances executing concurrently). If the workflow instance is a stateful workflow, then an event can be provided to a state of the workflow.
  • An external entity can communicate with the message broker 114 using the API 115 , such as to trigger a workflow (enterprise process or use case) or make progress (or step through) the workflow.
  • the API 115 of the message broker can also be used to communicate a status update of a workflow.
  • the message broker 114 can include queues for temporarily storing information to be forwarded to target components, and can include information forwarding logic that is able to determine a destination of a unit of information based on identifiers and/or addresses contained in the unit of information.
  • the message broker 114 can employ an Advanced Message Queuing Protocol (AMQP), which is an open standard application layer protocol for message-oriented middleware.
  • AMQP is described in a specification provided by the Organization for the Advancement of Structured Information Standards (OASIS).
  • As an example, the message broker 114 can be implemented using RabbitMQ, which is an open source message broker application.
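  • A minimal sketch of exchanging broker messages over AMQP with RabbitMQ and the pika Python client is shown below; it assumes a broker running on localhost, and the queue name and message body are placeholders rather than anything specified by the patent.

```python
import pika

# Connect to a local RabbitMQ broker (an assumption for this sketch).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="service.exchange.calls", durable=True)

# Orchestrator side: publish an abstract application call as a message.
channel.basic_publish(exchange="",
                      routing_key="service.exchange.calls",
                      body='{"call": "order.fulfill", "order_id": "12345"}')

# Adapter side: consume calls and acknowledge only after processing, which
# supports the confirmation-of-receipt behavior discussed above.
def on_call(ch, method, properties, body):
    # Translate and forward the call to the application here.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="service.exchange.calls", on_message_callback=on_call)
channel.start_consuming()
```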
  • the information exchanged using the message broker 114 can include information sent by the orchestrator 112, where the information sent by the orchestrator 112 can include application calls and/or data.
  • An “application call” can refer to a command (or commands) or any other type of message that is issued to cause an instance of a respective application to execute to perform a requested task (or tasks).
  • the information exchanged using the message broker 114 can also include information sent by the applications.
  • the information sent by an application can include response information that is responsive to a respective application call.
  • the information sent by the applications can also include information sent autonomously by an application without a corresponding request from the orchestrator 112 .
  • Information from an application can be included in an event sent by the application, where an “event” can refer to a representation of a unit of information.
  • the event can include a response, a result, or any other information. Note that an event from an application can be in response to a synchronous call or asynchronous call.
  • a synchronous call to an application by the orchestrator 112 is performed for a synchronous operation.
  • a workflow waits for a response to be received before proceeding further (in other words, the workflow blocks on the response).
  • An asynchronous operation of a workflow refers to an operation in which the workflow does not wait for a response from an application in response to a call to the application.
  • an event from an application can be due to something else occurring at the application level or in the environment (e.g. a support agent closes a ticket when using the application).
  • Such an event can be sent to the workflow, such as the workflow for an incident case exchange use case (explained further below).
  • An event or call can also be received through the API 115 of the message broker 114 from another source.
  • the message broker 114 is able to respond to a call (such as an API call from the orchestrator 112) by making a corresponding call to the API of the respective instance of an application that is executing in a particular workflow instance.
  • Adapters 116 may register with the message broker 114 , and the message broker 114 can use the registration to determine how to direct a call, and how events (e.g. results, responses, etc.) are tagged or associated to a workflow instance.
  • a message (a call or event) may be addressed to several workflow instances, in which case the message broker 114 can direct the message to the several workflow instances.
  • the orchestrator 112 can issue application (synchronous or asynchronous) calls to the message broker 114 for invoking the applications at corresponding points in the workflow.
  • a call can also be made by the orchestrator as part of throwing an event (which refers to the workflow deciding to communicate the event as a result of some specified thing occurring).
  • the flow logic for a respective workflow can be written abstractly using a canonical data model (CDM) 117 .
  • Although the canonical data model 117 is depicted as being inside the message broker 114, it is noted that the canonical data model 117 can be separate from the message broker 114 in other examples.
  • the canonical data model 117 can be used to express application calls to be issued by the orchestrator 112 to the message broker 114 .
  • the canonical data model 117 can also be used to express arguments (e.g. messages) for use in the calls, as well as the logic to be performed.
  • the application calls can be abstract calls.
  • the canonical data model 117 can be expressed in a specific language, such as a markup language or in another form.
  • a flow logic written according to the canonical data model 117 can represent the following: arguments that are being exchanged in interactions of the applications, the functions that are called to support the interactions, the events (e.g. responses, results, or other events) that can result, any errors that can arise, and states of the use case executed across the applications.
  • the canonical data model 117 can be defined across a large number of use cases representative of the relevant interactions that can take place in a particular domain (such as IT management or another domain) and across a wide set of applications that can be used to support subsets of the use cases.
  • a canonical data model can be shared across use cases of a particular domain.
  • a different canonical data model can be used for use cases of another domain. If a use case involves applications in different domains, then a canonical data model can be expanded to support the other domain, or multiple canonical data models may be used.
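  • Purely as an illustration (the patent does not define the model's contents), an abstract call expressed against a canonical data model might look like the following, with canonical operation and argument names that an adapter later maps to a concrete application API.

```python
# Hypothetical abstract call expressed against a canonical data model.
abstract_call = {
    "operation": "incident.create",            # canonical operation for the domain
    "arguments": {                             # canonical argument names
        "summary": "Email service degraded",
        "severity": "high",
    },
    "workflow_instance": "wf-42",              # lets the broker route resulting events
    "expected_events": ["incident.created", "error"],
}
```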
  • the information representing interactions between applications and the information representing the states of the applications can be used to track a current state of a workflow (assuming a stateful workflow).
  • the information regarding the errors in the canonical data model 117 can be used for handling errors that arise during execution of the applications.
  • the information regarding the errors can be used to map an error of an application to an error of the workflow that is being performed by the orchestrator 112.
  • In other examples, the service exchange 110 does not employ the canonical data model 117; rather, development of the flow logic can be ad-hoc (such as by use of the ad-hoc models noted above) for each use case and/or set of applications.
  • the application calls issued by the orchestrator 112 can be sent through an interface between the orchestrator 112 and the message broker 114 .
  • the expression of the flow logic does not have to be concerned with specific data models or interfaces employed by the applications, which simplifies the design of the orchestrator 112 .
  • the orchestrator 112 does not have to know specific locations of the applications—the applications can be distributed across multiple different systems in disparate geographic locations.
  • the message broker 114 is responsible for routing the application calls to the respective adapters 116 .
  • Information communicated between the message broker 114 and the adapters 116 is also in an abstract form according to the canonical data model.
  • the message broker 114 can forward an abstract application call from the orchestrator 112 to a respective adapter.
  • an adapter can send an event from an application to the message broker in an abstract form according to the canonical data model.
  • the adapters 116 perform protocol translations between the protocol of the abstract API of the message broker 114 , and the protocols to which the interfaces exposed by the corresponding applications are bound.
  • the protocol of the abstract API of the message broker 114 can be according to a Representational State Transfer (REST) protocol or some other protocol.
  • the protocol of an interface exposed by an application can include Simple Object Access Protocol (SOAP), Remote Procedure Call (RPC), Session Initiation Protocol (SIP), and so forth.
  • Each adapter 116 can also transform the data model of a message (e.g. a message carrying an event) and an abstract API call to the data model and specific API call exposed by a particular application (e.g. an instance or release of the particular application). Stated differently, the adapter 116 performs interface adaptation or interface translation by converting the abstract message or abstract API call to a message or API call that conforms to the API of the target application. The reverse conversion is performed in the reverse direction, where the result, response, event, message or API call from an application is converted to an abstract message or abstract API call that can be passed through the message broker 114 to the orchestrator 112.
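  • The sketch below illustrates this interface adaptation in a hypothetical adapter: the operation and field mappings, and the application client, are assumptions for illustration only.

```python
class IncidentAppAdapter:
    """Hypothetical adapter: canonical (abstract) calls <-> one application's API."""

    OP_MAP = {"incident.create": "create_ticket"}           # abstract -> concrete operation
    FIELD_MAP = {"summary": "short_description",            # abstract -> concrete fields
                 "severity": "urgency"}

    def __init__(self, client):
        self.client = client                                # application-specific client

    def handle_abstract_call(self, call):
        method = getattr(self.client, self.OP_MAP[call["operation"]])
        payload = {self.FIELD_MAP[k]: v for k, v in call["arguments"].items()}
        raw = method(payload)                                # application-specific API call
        # Reverse conversion: application response -> abstract event for the broker.
        return {"event": "incident.created",
                "workflow_instance": call["workflow_instance"],
                "arguments": {"incident_id": raw.get("id")}}
```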
  • Each adapter 116 can also perform address translation between an address in the address space used by the orchestrator 112 and the message broker 114 , and an address in the address space of an application.
  • the service exchange 110 provides for a multi-point orchestrated integration across multiple applications.
  • the multi-point orchestrated integration can include the applications 118 associated with the data center 100 as well as applications or services of the other data centers 102 and 104 .
  • the workflow 113 executed by the orchestrator 112 of the service exchange 110 can thus involve both processes of applications 118 associated with the data center 100 , as well as processes of applications of the data center 102 and services 140 (e.g. SaaS services) in the data center 104 .
  • the data center 102 also includes a service exchange 130 that can have a similar arrangement as the service exchange 110 of the data center 100 .
  • the service exchange 130 can include or be associated with applications 138 . Execution of the applications 138 can provide the services 120 of the data center 102 shown in FIG. 1A .
  • the service exchange 130 includes an orchestrator 132 that can execute a workflow 133, a message broker 134 that includes a message confirmation engine (MCE) 139 and a canonical data model 137 (similar to the message confirmation engine 119 and the canonical data model 117 in the message broker 114), and adapters 136.
  • the message broker 134 also has an interface 135 similar to the interface 115 of the message broker 114 .
  • the workflow 133 executed by the orchestrator 132 in the service exchange 130 of the data center 102 can also involve applications and services across multiple data centers.
  • the data center 104 does not include a service exchange similar to service exchange 110 or 130 , but instead includes a different infrastructure for deploying the services 140 .
  • In some cases, a service exchange does not exist in a data center (including services and/or applications to be orchestrated) that is provided by a provider different from the enterprise that provides the data centers 100 and 102, for example. In such situations, the techniques discussed above for a remote data center that is without a gateway can be applied.
  • an orchestrator 112 or 132 can orchestrate execution of a workflow that includes selected applications or services, including the applications 118 , the applications 138 , and the SaaS services 140 .
  • the data center 100 and data center 102 each includes a respective gateway 142 and 144 .
  • Although the gateways 142 and 144 are shown outside the respective service exchanges 110 and 130, it is noted that the gateways 142 and 144 can also be considered to be part of the respective service exchanges 110 and 130.
  • Each gateway 142 or 144 includes a bridge between communications of the respective service exchange 110 or 130 (more specifically the communications of the respective message broker 114 or 134 ) and communications over a network 146 , which can be a public network such as the Internet.
  • Communications over the network 146 can be according to a specified protocol, such as the Hypertext Transfer Protocol (HTTP), WebSocket protocol, Representational State Transfer (REST) protocol, or any other protocol.
  • each gateway 142 or 144 includes a protocol translator 150 that can convert between the protocol used by the respective message broker 114 or 134 , and the protocol used over the network 146 .
  • For communications within the same data center, information exchange can be accomplished using the respective message broker 114 or 134, without involving the gateway 142 or 144.
  • In that case, packet loss, delay, and/or security of messages may not be a concern, since the communications occur within the same data center.
  • For communications between data centers, the gateways 142 and 144 are provided to address the foregoing issues, and possibly issues associated with confirmation of message delivery and message processing commit, as discussed above.
  • the message confirmation engine 119 or 139 of the message broker 114 or 134 can be used to ensure that a message is delivered to a target by checking for confirmation of receipt of the message by the target, and to ensure that the target has returned a confirmation of commit.
  • the gateway 142 or 144 can include a message confirmation engine 154 to perform the foregoing tasks, and can perform resending of a message in response to not receiving a confirmation of receipt of the message, and resending the message or indicating an error in response to not receiving a confirmation of commit.
  • the message confirmation engine 119 or 139 in the message broker 114 or 134 can also ensure that messages are delivered in a managed manner; in other words, messages are delivered without loss and with acceptable delays (delays within specified target levels).
  • the message broker 114 or 134 can also perform remediation if message loss or delays occur. Delivery times for messages can be monitored, and messages that are lost or excessively delayed (delayed longer than a specified target goal) are re-sent. Also, management systems can be notified as appropriate.
  • the message confirmation engine 154 in the gateway 142 or 144 can perform managed delivery of messages in lieu of or in addition to that performed by the message confirmation engine 119 or 139 of the message broker 114 or 134 .
  • the message confirmation engine 119 or 139 can also perform secure communication of messages with endpoints, such as by employing encryption of messages, mutual authentication of messages, or use of certificates.
  • a security engine 152 of the gateway 142 or 144 can perform the respective tasks, which can include message encryption, mutual authentication, or security using a certificate.
  • the gateway 142 or 144 can implement protocol changes and capabilities to perform the foregoing.
  • the gateway 142 or 144 can implement a mechanism to number and timestamp messages or packets (carrying the messages) that are sent to target endpoints over the network 146 .
  • sequence numbers that monotonically increase can be assigned to messages (or packets) as they are sent to the target endpoint.
  • the sequence numbers can be used to identify which data units (message or packets) were not received (i.e. lost).
  • the gateway 142 or 144 can also determine the time for delivery and receipt of messages sent to the target endpoints. If the time taken to deliver a data unit (message or packet) exceeds a target goal, then the data unit can be re-sent.
  • bi-directional Hypertext Transfer Protocol (HTTP) communications can be established between gateways 142 and 144 with packet numbering and timestamps (that indicate when a packet was sent).
  • the bi-directional HTTP communications include a return channel through which a receiver is able to provide feedback regarding delays or data loss.
  • the WebSocket protocol supports adding extensions such as packet numbers and timestamps in a standardized manner.
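  • A hypothetical sketch of this numbering-and-timestamping mechanism follows; the message format and the delay threshold are assumptions.

```python
import time

class ManagedSender:
    """Hypothetical gateway sender: tag outgoing messages for managed delivery."""

    def __init__(self):
        self.next_seq = 0

    def wrap(self, payload):
        self.next_seq += 1                                   # monotonically increasing
        return {"seq": self.next_seq, "sent_at": time.time(), "payload": payload}


class ManagedReceiver:
    """Hypothetical gateway receiver: detect lost or excessively delayed messages."""

    def __init__(self, max_delay_s=2.0):
        self.max_delay_s = max_delay_s
        self.last_seq = 0

    def check(self, message):
        lost = list(range(self.last_seq + 1, message["seq"]))    # sequence gaps = losses
        delayed = (time.time() - message["sent_at"]) > self.max_delay_s
        self.last_seq = max(self.last_seq, message["seq"])
        return lost, delayed          # feedback can be sent back over a return channel
```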
  • managed communications of messages can be accomplished in the following manner.
  • the SaaS services 140 can expose APIs and protocols that are compatible with the use of packet numbering and timestamping by the gateways 142 and 144 .
  • the SaaS services 140 can employ a Websocket protocol that employs packet numbering and timestamps, or some other mechanism.
  • the gateways 142 , 144 can interact with the SaaS services 140 to perform managed delivery of messages.
  • In some examples, managed delivery of messages with the SaaS services 140 is accomplished using just the gateway 142 or 144 on one side. If the remote data center 104 does not support deployment of a gateway and service exchange (e.g. because the data center belongs to another entity), then the gateway 142 on the enterprise service exchange side can only do as much as it can with available protocols. In alternative examples, the remote data center 104 can support the same protocols or mechanisms as the gateway 142; for example, the same behavior could be implemented at the SaaS level if the SaaS APIs were bound to the WebSocket protocol with the same extensions. In other examples, a gateway and service exchange of another data center that is geographically close to or collocated with the data center 104 can be used to interact with the data center 104.
  • the gateway 142 or 144 can perform security-related tasks for messages in addition to or in lieu of the security-related tasks performed by the message broker 114 or 134 discussed above. As discussed above, these security-related tasks include encryption of messages, mutual authentication of messages, or use of certificates. For communications with the SaaS services 140 where just one gateway 142 or 144 is present, then encryption and authentication as supported by SaaS services 140 can be employed.
  • a respective gateway can also be included in the data center 104 .
  • a workflow provided by the service exchange 110 or 130 may be associated with a target performance goal, or more simply, a target goal.
  • a target goal can include any of the following: target maximum response time from a request, target maximum usage of resources, target maximum error rate, and so forth.
  • the target goal can be specified in a service level agreement (SLA) and can specify a maximum allowable delay between a call from the orchestrator and a response from a target application or service.
  • the target goal can be associated with a QoS level that is specified for the workload, either by agreement or some other mechanism.
  • FIG. 2 shows an example of the service exchange 110 with a management engine 202 according to some implementations.
  • the management engine 202 can be implemented with a combination of machine-executable instructions and processing hardware, or with just processing hardware.
  • the management engine 202 can be separate from the message broker 114 , or can be part of the message broker 114 .
  • the service exchange 110 includes the orchestrator 112 and message broker 114 as discussed above. Also, the service exchange 110 includes adapters 116 - 1 to 116 -N (N>1) that are provided between the message broker 114 and respective applications 118 - 1 to 118 -N.
  • the management engine 202 includes a performance monitor 204 that can monitor the performance of a workflow that is executed by the orchestrator 112.
  • the management engine 202 also is able to access a database 206 that stores information relating to target goals 208 associated with respective workloads that can be executed by the orchestrator 112 , where the target goals can be specified by an SLA or a QoS level.
  • the database 206 can also store metrics and thresholds.
  • the performance monitor 204 can detect a time when a request is received to initiate a workflow. In response to the request, the performance monitor 204 can measure the amount of time that has passed since the time the request was received. The performance monitor 204 can retrieve information relating to a target goal associated with the workflow. The performance monitor 204 can compare the elapsed time with the target goal to determine whether execution of the workflow will satisfy a target maximum time duration specified by the respective target goal. If not, the performance monitor 204 can issue an indication to a handler 210 of the management engine 202 to handle the potential violation of the target goal and perform remediation.
  • the target goal can specify the maximum time duration from when task i (of an application of any of multiple data centers) begins to when task i is expected to complete.
  • the performance monitor 204 is able to compare the elapsed execution time for task i with the maximum time duration of the target goal. If violation of the maximum time duration has or is predicted to occur, then the performance monitor 204 issues an indication to the handler 210 , which can take action to resolve the issue. For example, the handler 210 can cause additional computing resources to be allocated to the workflow, so that the workflow can execute at a faster rate to meet the target goal.
  • the performance monitor 204 can also monitor for other events associated with the workflow. For example, the performance monitor 204 can determine the error rate associated with execution of the workflow, or can determine the amount of resources used by the execution of the workflow. If the error rate or resource usage exceeds a specified threshold (e.g. a threshold error rate or a threshold resource consumption), then the performance monitor 204 can issue a respective indication to cause the handler 210 to take a corresponding action, such as to allocate a different set of resources to execute the workflow (if a currently allocated set of resources is causing an excessive error rate), or to reduce an allocation of resources (if the workflow is consuming an excessive amount of resources).
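  • The following hypothetical sketch captures the comparison logic described above; the structure of the target goals and the handler callable are assumptions.

```python
import time

def monitor_workflow(workflow_id, started_at, target_goals, handler, error_rate=0.0):
    """Hypothetical check of a workflow against its target goal (e.g. from an SLA)."""
    goal = target_goals[workflow_id]        # e.g. {"max_seconds": 60, "max_error_rate": 0.01}
    elapsed = time.time() - started_at
    if elapsed > goal["max_seconds"]:
        # Potential violation of the target maximum duration: ask the handler to
        # remediate, e.g. by allocating additional resources to the workflow.
        handler(workflow_id, "target duration exceeded")
    if error_rate > goal.get("max_error_rate", 1.0):
        # Excessive error rate: ask the handler to allocate a different set of resources.
        handler(workflow_id, "error rate threshold exceeded")
```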
  • FIG. 3 shows an example in which the service exchange 110 according to some implementations of the present disclosure is able to interact with a legacy integration framework 302 , such as an ESB framework (as discussed above), a Schools Interoperability Framework (SIF), or any other integration framework that is a different type of integration framework from the service exchange 110 .
  • SIF includes a specification for modeling data, and a Service-Oriented Architecture (SOA) specification for sharing data between institutions.
  • the legacy integration framework 302 integrates applications 304 , such as according to enterprise architecture integration (EAI).
  • a gateway 306 is provided between the legacy integration framework 302 and the service exchange 110 to perform protocol translation 308 and interface translation 310 between the interface 115 of the service exchange 110 and the interface 312 to the legacy integration framework 302 .
  • the protocol translation 308 and interface translation 310 can be similar to the protocol and interface translations applied by the adapters 116 of the service exchange 110 , except that the protocol translation 308 and interface translation 310 are to provide adaptation to the legacy integration framework 302 and to the applications 304 integrated by the legacy integration framework 302 .
  • a bridge can also be provided between the service exchange 110 and the legacy integration framework 302 if the service exchange 110 and the legacy integration framework 302 are in different data centers.
  • the bridge can include gateways similar to the gateways 142 and 144 at each end discussed above.
  • FIG. 3 also shows a portal 314 .
  • a portal can also be provided in implementations according to FIGS. 1 and 2 .
  • the portal 314 is an example of an entity that interacts with the API 115 for triggering workflows or interacting with orchestrated applications.
  • Although FIG. 3 shows the portal 314 as using the message broker 114, it is noted that the portal 314 can also be one of the applications orchestrated through a respective adapter 116.
  • the portal 314 can present a user interface (UI).
  • the portal 314 can include machine-executable instructions or a combination of machine-executable instructions and processing hardware.
  • the portal 314 can be at a computer (e.g. client computer) that can be remote from the service exchange 110 .
  • the UI allows a user to interact with the service exchange 110 .
  • a user can perform an action in the UI that triggers the execution of a flow logic (of multiple different flow logic) by the orchestrator 112 to perform a workflow.
  • An indication of user action in the UI can be communicated to the orchestrator 112 and the corresponding workflow by the portal 314 and the message broker 114 .
  • the indication can be communicated using the API 115 (e.g. a REST API) of the message broker 114.
  • This indication of user action received by the message broker 114 can be communicated to the orchestrator 112 , which invokes execution of the corresponding flow logic to perform the requested workflow.
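  • As a purely illustrative sketch, an external entity such as the portal might trigger a workflow over a REST API like the following; the URL, path, and JSON fields are hypothetical and are not defined by the patent.

```python
import requests

# Hypothetical call to the message broker's REST API to trigger a workflow.
response = requests.post(
    "https://service-exchange.example.com/api/workflows",
    json={"workflow": "order_fulfillment", "arguments": {"order_id": "12345"}},
    timeout=10,
)
response.raise_for_status()
print(response.json())    # e.g. a workflow instance identifier or a status update
```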
  • FIG. 4 is a flow diagram of a process performed by a service exchange (e.g. 110 or 130 ).
  • An orchestrator (e.g. 112 or 132) of the service exchange in a first data center executes (at 402) a workflow that is associated with a target performance goal, where the workflow includes tasks of applications of the first data center and of a second data center.
  • information is communicated (at 404 ) between the orchestrator and the processes of the applications through a message broker (e.g. 114 or 134 ) of the service exchange.
  • Adapters (e.g. 116 or 136) of the service exchange perform (at 406) protocol and interface translations for information communicated between the message broker and the applications in the first data center.
  • a management engine (e.g. 202) monitors (at 408) performance of the workflow to determine whether the execution of the workflow is able to meet the target goal.
  • Content of the service exchange platform including the orchestrator, the message broker, and the adapters can be changed, such as from an administration system coupled to the service exchange.
  • Applications can be changed, flow logic can be changed, and use cases can be created.
  • Any given application can be updated or replaced, simply by replacing or modifying the corresponding adapter. For example, if an enterprise wishes to upgrade or replace a given application (with a new application or an updated version of the given application), then the corresponding adapter to which the given application is coupled can be replaced or updated to support the updated or replaced application.
  • replacing the given application can involve replacing a first application supplied by a first vendor with a second application supplied by a different vendor.
  • replacing the given application can involve replacing a first application supplied by a vendor with another application supplied by the same vendor.
  • replacing the given application can include upgrading the application to a new release.
  • Changing a given adapter can involve removing a representation of the adapter (which can be in the form of program code, a markup language file, or some other representation), and replacing the removed representation of the adapter with a new representation of a different adapter.
  • Changing the given adapter can alternatively involve modifying the given adapter or modifying a configuration of the given adapter to support the different application.
  • the changing of the given adapter can be performed by a machine or by program code, either autonomously (such as in response to detection of a replacement of an application) or in response to user input.
  • Changing an application may also involve moving from one instance of the application to another instance, or from one location to another location.
  • In such cases, the respective adapter can be updated, or the configuration of the adapter can be changed (the adapter itself remaining unchanged), to refer to another application instance or to an instance of the application at another location.
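  • As a hypothetical illustration of this kind of change, only the adapter's configuration is edited when an application is upgraded or moved; the keys and endpoints below are placeholders.

```python
# Hypothetical adapter configuration pointing at the current application instance.
adapter_config = {
    "adapter": "incident_app_adapter",
    "application_release": "10.2",
    "endpoint": "https://incident-app.old.example.com/api",
}

# The application is upgraded and moved: only the adapter configuration changes;
# the orchestrator and the flow logic remain untouched.
adapter_config.update({
    "application_release": "11.0",
    "endpoint": "https://incident-app.new.example.com/api",
})
```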
  • When changing an application to a new or updated application, it may be possible that certain functionality of the previous application is no longer available from the new or updated application. In this case, the respective adapter can delegate or orchestrate with another application (or web service) that provides the missing functionality. Alternatively, the workflow can be modified to take into account the loss of functionality in the use case. Also, if new functionality is provided by a new or upgraded application, the workflow can be modified to use the new functionality.
  • A workflow can be modified relatively easily by replacing the respective flow logic with a different flow logic (a modified version of the flow logic or a new flow logic). The different flow logic can then be loaded onto the orchestrator to implement the modified workflow. Workflows can thus be easily customized by providing new or modified flow logic to the orchestrator.
  • In some cases, a new use case specifies use of new calls and data not covered in current adapters (e.g. an adapter is able to call just a subset of APIs of the application) or the canonical model. In such cases, the canonical data model can be updated and adapters can be updated to be able to make the calls, or new adapters can be provided.
  • New use cases can also be created, and corresponding flow logic and adapters can be provided. The canonical data model may be updated accordingly.
  • The content changes noted above can be performed using any of various tools, such as a Software Development Kit (SDK) tool or other type of tool used to create applications and other program code. A content pack can be updated using the tool, and the modified content pack can be loaded using an administration system. The administration system can configure the adapters to point to the correct instance of an application. A new use case and respective content can also be created with an SDK tool.
  • When the canonical data model 107 is updated, the canonical data model 107 remains backwards compatible with content packs of existing use cases.
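  • As a purely illustrative sketch (not part of this disclosure), the following Python fragment shows one possible shape of a content pack descriptor and a simple check that a pack remains compatible with the deployed canonical data model; the manifest fields and version scheme are hypothetical assumptions.

      # Illustrative sketch of a content pack descriptor and a simple backwards-
      # compatibility check against the canonical data model version.
      # The manifest fields and version scheme are hypothetical.

      CANONICAL_MODEL_VERSION = (2, 1)     # current (major, minor) of the data model

      content_pack = {
          "name": "incident_exchange",
          "flow_logic": ["incident_case_exchange.flow"],
          "adapters": ["ticketing_adapter", "monitoring_adapter"],
          "requires_canonical_model": (2, 0),
      }

      def is_compatible(pack, model_version):
          required = pack["requires_canonical_model"]
          # Same major version and no newer minor than the deployed model is
          # assumed to remain backwards compatible with existing use cases.
          return required[0] == model_version[0] and required[1] <= model_version[1]

      print(is_compatible(content_pack, CANONICAL_MODEL_VERSION))   # -> True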
  • FIG. 5 is a block diagram of an example computer system 500, which can be used to implement the service exchange 110 or 130 according to some implementations. The computer system 500 can include one computer or multiple computers coupled over a network. The computer system 500 includes a processor (or multiple processors) 502. A processor can include a microprocessor, a microcontroller, a physical processor module or subsystem, a programmable integrated circuit, a programmable gate array, or another physical control or computing device.
  • The processor(s) 502 can be coupled to a non-transitory machine-readable or computer-readable storage medium 504, which can store various machine-executable instructions. The machine-executable instructions can include orchestration instructions 506 to implement the orchestrator 112 or 132, message broker instructions 508 to implement the message broker 114 or 134 (including the message broker application 410 and event handlers 414 shown in FIG. 3), adapter instructions 510 to implement the adapters 116, management engine instructions 512 to implement the management engine 202 (including the performance monitor 204 and the handler 210), and message confirmation instructions 514 to implement the message confirmation engine 119 or 139.
  • The storage medium (or storage media) 504 can include one or multiple forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; optical media such as compact disks (CDs) or digital video disks (DVDs); or other types of storage devices.
  • Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture).
  • An article or article of manufacture can refer to any manufactured single component or multiple components.
  • The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.

Abstract

A service exchange includes an orchestrator to execute a workflow that involves a plurality of applications and services of a plurality of data centers. A message broker is to exchange messages between the orchestrator and the applications. Adapters are to perform protocol and interface translations for information communicated between at least some of the applications and the message broker.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Application No. 61/913,799, filed Dec. 9, 2013, which is hereby incorporated by reference.
  • BACKGROUND
  • An enterprise may employ multiple applications to perform various tasks. The tasks can be performed by various applications, and in some cases multiple applications can perform overlapping tasks. As an example, the tasks can include tasks associated with information technology (IT) management, such as management of development and production of program code, management of a portfolio of products or services, support management, IT service management, cloud and Software as a Service (SaaS) service management and so forth. IT management performs management with respect to components of an IT environment, where the components can include computers, storage devices, communication nodes, machine-readable instructions, and so forth. Various aspects of IT management can be modeled by the Information Technology Infrastructure Library (ITIL) (which provides a set of best practices for IT management), the Business Process Framework (eTOM) from the TM Forum, and so forth. With advancements in IT management technology, new IT management processes have been introduced, such as self-service IT, IT as a service provider, DevOps and autonomous IT, and so forth.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Some implementations are described with respect to the following figures.
  • FIG. 1A is a schematic diagram of an example arrangement including data centers according to some implementations.
  • FIG. 1B is a block diagram of a gateway according to some implementations.
  • FIG. 2 is a block diagram of a service exchange according to some implementations.
  • FIG. 3 is a block diagram of a service exchange that interacts with a legacy integration framework, according to further implementations.
  • FIG. 4 is a flow diagram of a process of a service exchange according to some implementations.
  • FIG. 5 is a block diagram of an example computer system according to some implementations.
  • DETAILED DESCRIPTION
  • Workflows performed by an enterprise can involve the use of a number of applications. An “enterprise” can refer to a business concern, an educational organization, a government agency, an individual, or any other entity. A “workflow” can refer to any process that the enterprise can perform, such as a use case. Such a process of the workflow can also be referred to as an “end-to-end process” or an “enterprise process” since the process involves a number of activities of the enterprise from start to finish. A “use case” can refer to any specific business process or other service that an enterprise desires to implement. An “application” can refer to machine-readable instructions (such as software and/or firmware) that are executable. The application can include logic associated with an enterprise process, which can implement or support all or parts of the enterprise process (or processes). An application can be an application developed by the enterprise, or an application provided by an external vendor of the enterprise. An application can be provided on the premises of the enterprise, or in the cloud (public cloud or virtual private cloud), and the application can be a hosted application (e.g. an application provided by a provider over a network), a managed service (a service managed and/or operated by a third party that can be hosted or on premise), or a software as a service (SaaS) (a service available on a subscription basis to users), and so forth. In some cases, multiple applications used by the enterprise may be provided by different vendors.
  • Within a portfolio of applications used by an enterprise, many applications may not be able to directly interact with each other. In general, an application implements a particular set of business logic and is not aware of other applications that are responsible for performing other processes. The design of the application may or may not have taken into account the presence of other applications upstream or downstream (with respect to an end-to-end process). This is especially true for older (legacy) applications. More recently, applications can at least expose well defined application programming interfaces (APIs) that assume that the applications will be interacting with other systems. Such applications are called by their APIs or can call other APIs. Even with such APIs, applications may not readily interact with each other. Different applications may employ different data formats, different languages, different interfaces, different protocols, and so forth.
  • Application developers have developed a portfolio of applications that rely on using point-to-point integration to provide some level of integration across the portfolio. With point-to-point integration, a given application is aware of another application in the portfolio that is upstream or downstream of the given application. Such applications are mutually aware of each other.
  • A point-to-point integration mechanism can include a component (or multiple components) provided between applications to perform data transformations, messaging services, and other tasks to allow the applications to determine how and when to communicate and interact with each other.
  • Different point-to-point integration mechanisms can be provided for different subsets of applications. If there are a large number of applications in a portfolio of applications used by an enterprise, then there can be a correspondingly large number of point-to-point integration mechanisms.
  • As applications evolve (e.g. new release of an application, new functionality added to an application, variation of the expected use cases, variation of the interactions that take place between applications), corresponding point-to-point integration mechanisms may have to be modified and/or re-tested. Modifying or re-testing an integration mechanism between applications can be a time-consuming and costly exercise, particularly if there are a large number of integration mechanisms deployed by the enterprise. This exercise can rapidly become a complex combinatorial exercise. If point-to-point integration is used, an enterprise may be hesitant to upgrade applications, to add new applications, to change application vendors, or to modify processes, since doing so can be complex and costly. However, maintaining a static portfolio of applications can prevent an enterprise from being agile in meeting evolving demands by users or customers of the enterprise. If an enterprise has applications provided by multiple vendors, additional challenges may arise. An application may have to be built to support updated releases of other applications, which adds complexity to application development if an enterprise wishes to deploy another release of an application of another vendor.
  • In accordance with some implementations of the present disclosure, a service exchange and integration framework (referred to as a “service exchange” in the ensuing discussion) is provided that is able to integrate applications in a flexible manner, and orchestrate execution of workflows (which can refer to enterprise processes or use cases as noted above). Applications are used to implement their respective logic parts of each workflow. These applications are orchestrated to automate the end-to-end enterprise process or use case.
  • According to the present disclosure, orchestrating execution of a workflow can refer to modeling and executing the logic of sequencing of the tasks of the workflow. Some of the tasks of the workflow are delegated using the orchestration to be performed by the logic of the applications. As an example, a workflow can include an order fulfillment workflow. An order fulfillment workflow can include the following tasks: receive an order from a customer, determine applications that are to be involved in fulfilling the order, invoke the identified applications to fulfill the order, and return a status (e.g. confirmation number or information to further manage the order, such as to view, update, cancel, or repeat the order) to the customer. Note that the foregoing example order fulfillment workflow is a simplified workflow that includes a simple collection of tasks. An actual order fulfillment workflow may involve many more tasks.
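  • For illustration only, the following Python sketch shows flow logic for the simplified order fulfillment workflow described above; the application names and the send_call helper are hypothetical placeholders for calls that, in a service exchange, would be routed through a message broker and adapters.

      # Hypothetical sketch of flow logic for a simplified order fulfillment workflow.
      # The application names and send_call() are illustrative placeholders.

      def send_call(application, operation, payload):
          # In a real service exchange this abstract call would be routed through
          # the message broker to the adapter of the target application.
          print(f"call {application}.{operation} with {payload}")
          return {"status": "ok", "confirmation": "CONF-123"}

      def order_fulfillment_workflow(order):
          # Task 1: receive an order from a customer (the input to the workflow).
          # Task 2: determine the applications involved in fulfilling the order.
          applications = ["inventory_app", "billing_app", "shipping_app"]
          # Task 3: invoke the identified applications to fulfill the order.
          results = [send_call(app, "fulfill", order) for app in applications]
          # Task 4: return a status (e.g. confirmation number) to the customer.
          return results[-1]["confirmation"]

      if __name__ == "__main__":
          print(order_fulfillment_workflow({"item": "widget", "quantity": 2}))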
  • In some cases, a workflow can involve processes of applications in multiple data centers or in the cloud or over the internet. A “data center” can refer to an arrangement of resources (including computing resources such as computers or processors, storage resources to store information, communication resources to communicate information, and machine-executable instructions such as applications, operating systems, and so forth). A data center can be provided by an enterprise. A data center can also be a public cloud, a private cloud, or a hybrid cloud that is made up of a public cloud and a private cloud. A public cloud can be provided by a provider that is different from the enterprise. A private cloud can be provided by the enterprise. A data center is “provided” by a provider (enterprise or other provider) if the provider manages the resources of the data center and/or makes available the resources of the data center to users, machines, and/or program code. Multiple data centers can be coupled over a private network and/or over a public network such as the Internet.
  • A “cloud” can refer to an infrastructure including resources that are available for use by users. The resources of a public cloud are available over a public network, to multiple tenants (or customers), who are able to subscribe or rent some share of the public cloud resources. In some cases, a public cloud provided by a third party provider can also be deployed on the premises of the enterprise.
  • The resources of a private cloud are dedicated for use by users within organizations of the enterprise. A cloud can also be a hybrid cloud, which includes both a public cloud and a private cloud. Another type of cloud is a managed cloud, which includes resources of the enterprise that are managed by a third party provider.
  • More traditionally, an enterprise can deploy multiple data centers, such as in different geographic regions (e.g. across a city, a state, a country, or the world) to achieve redundancy, high availability (to ensure availability of resources or to provide disaster recovery in case of failure of a data center), or scalability (increasing resources to meet increased demand). Deployment of multiple data centers can also be for satisfying government regulations (e.g. regulation specifying that certain data has to be kept in a specific country). These data centers can be managed by the enterprise or by a third party provider. A data center can also provide services such as Software as a Service (SaaS) services. SaaS can refer to an arrangement in which software (or more generally, machine-executable instructions) is made available to users on a subscription basis.
  • Orchestrating workflows across different data centers can be associated with various challenges. For example, use of different data centers may involve communications through many firewalls. As another example, the data centers can be coupled over a network (such as the Internet) that can be associated with unexpected delays, packet losses, etc., particularly during times of high usage. In such cases, guaranteeing the satisfaction of target goals associated with a service level agreement (SLA) or quality of service (QoS) level can be difficult. Also, managing security can be more complex. In addition, if cloud resources and/or SaaS services are employed, instances of applications that are to be orchestrated can be dynamically created, moved, replaced, and so forth, which can involve the use of dynamic addressing; as a result, it can be more difficult to address such application instances.
  • The foregoing issues also exist when attempting to broker messages reliably and with desired performance in a manageable manner across the Internet or among clouds. Legacy integration frameworks, such as Enterprise Service Bus (ESB) integration frameworks, also experience the foregoing challenges. A legacy integration framework can refer to an integration framework different from that provided by the service exchange according to the present disclosure. ESB refers to an architecture model for designing and implementing communication between mutually interacting applications in a service-oriented architecture (SOA), where the applications are usually distributed within a data center. While theoretically it is also possible to distribute applications across the Internet or among clouds, that can be associated with issues relating to changing use cases and routing, dynamic addressing, and message delivery in a manageable manner across the Internet or data centers. The ESB framework provides for monitoring and control of routing of messages between applications, resolving contention between applications, and other tasks.
  • The service exchange according to some implementations of the present disclosure is able to interact with an ESB integration framework or another framework, e.g. with a message queue for exchanging messages among applications that would already be present to integrate the applications.
  • In accordance with some implementations, techniques or mechanisms enable the orchestrated execution of applications across multiple data centers. Additionally, the service exchange according to some implementations enables cloud scale message brokering (to allow an exchange of messages across clouds) or a cloud event driven architecture (EDA). An EDA refers to a framework that orchestrates behavior around the production, detection, and consumption of events, as well as the responses the events evoke. A cloud EDA refers to such a framework implemented across clouds.
  • FIG. 1A illustrates an example arrangement that includes a data center 100, a data center 102, and a data center 104. The data centers 100, 102, and 104 can be enterprise data centers and/or clouds as discussed above. Although just three data centers are shown in FIG. 1A, it is noted that in other examples, different numbers of data centers can be provided.
  • The data center 100 includes a service exchange 110, which includes an orchestrator 112, a message broker 114, and adapters 116. The adapters 116 are provided between the message broker 114 and respective applications 118. Although the applications 118 are depicted as being part of the service exchange 110 in FIG. 1A, it is noted that in other examples, the applications 118 can be separate from the service exchange 110, and some applications 118 can even be external of the data center 100. For example, some applications 118 can be provided by an entity that is separate from the provider of the data center 100.
  • Each of the orchestrator 112, message broker 114, and adapters 116 can be implemented as a combination of machine-executable instructions and processing hardware, such as a processor, a processor core, an application-specific integrated circuit (ASIC) device, a programmable gate array, and so forth. In other examples, any of the orchestrator 112, message broker 114, and adapters 116 can be implemented with just processing hardware.
  • The message broker 114 is operatively or communicatively coupled to the orchestrator 112 and the adapters 116. Generally, the message broker 114 is used to exchange messages among components, including the orchestrator 112 and the adapters 116. A message can include any or some combination of the following: a call (e.g. API call) or an event (e.g. response, result, or other type of event). The message broker 114 is responsible for ensuring that API calls and events (e.g. responses, results, etc.) are sent to the correct adapter or to the correct workflow instance (multiple workflow instances may execute concurrently). Alternatively, the endpoints (adapters and workflow instances) may all receive a call or event and make a decision regarding whether each endpoint should process the call or event.
  • The message broker 114 further includes a message confirmation engine (MCE) 119 to perform the following tasks. The message confirmation engine 119 ensures that a message put on the message broker 114 is delivered to a target by checking for a confirmation of receipt of the message by the target (e.g. an adapter 116), such as with a positive acknowledgement, for example. Message confirmation is thus implemented with the message broker 114 and the adapters 116. If the target does not confirm receipt of the message, the message confirmation engine 119 can cause the message broker 114 to resend the message to the target.
  • The message confirmation engine 119 can also ensure that the target processes the message (e.g. by checking that the target returns a confirmation of commit). The confirmation of commit is an indication of successful completion of processing of the message. An application can send the confirmation of commit, or alternatively, an adapter 116 can query the application for the confirmation of commit. If the confirmation of commit is not received, then the message confirmation engine 119 can cause the message broker to resend the message, or to indicate an error, depending on the type of message and application/flow design. Idempotent calls on the applications can be repeated as often as appropriate until commit is confirmed. When the calls cannot be repeated, error messages are sent and the workflow handles (in its logic) what to do to perform rollback or notification. Rollback can refer to rolling back the workflow to a prior known good state. Notification can include notifying a management system (or management systems). The action to take in response to a lack of confirmation of commit can be determined from a canonical data model 117 (discussed further below).
  • The message confirmation engine 119 can also ensure that messages are delivered in a managed manner (managed delivery of messages); in other words, messages are delivered without loss and with acceptable delays (delays within specified target levels). The message confirmation engine 119 can also perform remediation if message loss or delays occur. Delivery times for messages can be monitored, and messages that are lost or excessively delayed (delayed longer than a specified target goal) are re-sent. Remediation can include resending the message that is missing or delayed to allow the endpoint to not have to wait anymore. Remediation can also include notifying other systems such as network or traffic management systems to try to get more or better or alternate bandwidth, for example.
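  • The following Python fragment is a minimal, illustrative sketch (not the patented implementation) of how a message confirmation engine could check for confirmation of receipt and confirmation of commit, resend unconfirmed messages, and report an error when a non-idempotent call cannot be repeated; the transport function, timeouts, and retry count are assumptions.

      import time

      # Illustrative sketch of receipt/commit confirmation with resend on timeout.
      # transport_send() and the timeout values are hypothetical assumptions.

      def transport_send(message):
          # Stand-in for putting the message on the broker toward the target adapter.
          print("sending", message["id"])

      def deliver_with_confirmation(message, wait_for_ack, wait_for_commit,
                                    ack_timeout=2.0, commit_timeout=10.0, retries=3):
          for attempt in range(retries):
              sent_at = time.time()
              transport_send(message)
              # Confirmation of receipt: positive acknowledgement from the target.
              if not wait_for_ack(message["id"], ack_timeout):
                  continue  # lost or delayed beyond the target level: resend
              # Confirmation of commit: the target finished processing the message.
              if wait_for_commit(message["id"], commit_timeout):
                  return time.time() - sent_at  # delivery time, for delay monitoring
              if not message.get("idempotent"):
                  break  # cannot safely repeat: report an error instead of resending
          raise RuntimeError("message %s not confirmed; notify workflow for rollback"
                             % message["id"])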
  • The message confirmation engine 119 can also perform secure communication of messages with endpoints, by applying security to the messages. Applying security can include encryption of messages, mutual authentication of messages, or use of certificates. Encryption of a message is accomplished by using a key (e.g. public key or private key) to encrypt the message. Mutual authentication refers to the two communicating endpoints authenticating each other, such as with use of credentials or other security information. A certificate can be used to establish a secure communication session between two endpoints.
  • To manage similar issues for communication of messages with the other data centers 102 and 104, a gateway 142 can also be provided in the data center 100. The gateway 142 is discussed further below, following the discussion of operations of the orchestrator 112, the message broker 114, and the adapters 116. If the other data center does not support deployment of a gateway and service exchange, then the gateway on the enterprise service exchange side can only do as much as it can with available protocols. In alternative examples, the remote cloud can support the same protocols or mechanisms of the gateway, or a gateway and service exchange of another data center that is geographically close to or collocated with the remote data center can be used to interact with the cloud.
  • In addition, the message broker 114 is able to send a confirmation of successful completion of an application in a workflow to the orchestrator 112 or to a requester that initiated the workflow.
  • The orchestrator 112 is used to orchestrate the execution of a specific workflow 113 that involves tasks performed by multiple applications (e.g. a subset or all of applications 118). To perform a workflow, flow logic can be loaded into the orchestrator 112, and the flow logic is executed by the orchestrator 112. "Flow logic" can include a representation of a collection of tasks that are to be performed. The flow logic can be in the form of program code (e.g. a script or other form of machine-executable instructions), a document according to a specified language or structure (e.g. Business Process Execution Language (BPEL), Business Process Model and Notation (BPMN), etc.), or any other type of representation (e.g. Operations Orchestration from Hewlett-Packard, YAML Ain't Markup Language (YAML), Mistral from OpenStack, etc.). The flow logic can be generated by a human, a machine, or program code, and can be stored in a machine-readable or computer-readable storage medium accessible by the orchestrator 112.
  • The orchestrator 112 is able to execute multiple flow logic to perform respective workflows. Multiple workflows and workflow instances (instances of a particular workflow refer to multiple instantiations of the particular workflow) can be concurrently executed in parallel by the orchestrator 112.
  • The orchestrator 112 is able to evaluate (interpret or execute) a flow logic, and perform tasks specified by the flow logic in response to a current state of the workflow and calls and events received by the orchestrator 112. A workflow can be a stateful workflow. As a stateful workflow is performed by the orchestrator 112, the orchestrator 112 is able to store a current state of the workflow, to indicate the portion of the workflow already executed. Based on the workflow's current state and a received event, the orchestrator 112 is able to transition from a current state to a next state of the workflow and can determine a next action to perform, where the next action may involve the invocation of another application. Whenever the orchestrator 112 receives a new call or event (e.g. response, results, or other event), the orchestrator 112 evaluates which workflow instance is to receive the call or event and loads the workflow instance with a correct state. In some cases, it is possible that multiple workflow instances may check if they are supposed to be a recipient of a call or event.
  • In other examples, a workflow can be a stateless workflow, which does not keep track of a current state of the workflow. Rather, the stateless workflow performs corresponding next steps or actions as events are received by the orchestrator 112. Use of a stateless workflow is generally suitable for asynchronous operation (discussed further below). A stateful workflow can be used with both a synchronous operation and asynchronous operation.
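  • As an illustrative sketch only, the following Python fragment shows a stateful workflow kept as a simple state machine: the orchestrator stores the current state of each workflow instance and, given a received event, transitions to a next state and determines a next action; the states, events, and actions are hypothetical.

      # Illustrative sketch of a stateful workflow: the orchestrator stores the
      # current state of each workflow instance and, given an incoming event,
      # transitions to the next state and determines the next action to perform.
      # The states, events, and actions below are hypothetical examples.

      TRANSITIONS = {
          ("ORDER_RECEIVED", "inventory.reserved"): ("BILLING", "call billing_app"),
          ("BILLING", "billing.charged"): ("SHIPPING", "call shipping_app"),
          ("SHIPPING", "shipping.dispatched"): ("DONE", "notify requester"),
      }

      class WorkflowInstance:
          def __init__(self, instance_id):
              self.instance_id = instance_id
              self.state = "ORDER_RECEIVED"   # current state is kept per instance

          def on_event(self, event):
              key = (self.state, event)
              if key not in TRANSITIONS:
                  return None                 # event not relevant in the current state
              self.state, action = TRANSITIONS[key]
              return action                   # next action, e.g. invoking an application

      wf = WorkflowInstance("wf-1")
      print(wf.on_event("inventory.reserved"))  # -> "call billing_app"; state is BILLING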
  • The events (e.g. results, responses, etc.) received by the orchestrator 112 can be provided by applications that are invoked in the workflow or from another source, such as through an interface 115 (e.g. an application programming interface (API)) of the message broker 114. The message broker 114 can also direct an event to a particular workflow instance (note that there can be multiple workflow instances executing concurrently). If the workflow instance is a stateful workflow, then an event can be provided to a state of the workflow.
  • An external entity can communicate with the message broker 114 using the API 115, such as to trigger a workflow (enterprise process or use case) or make progress (or step through) the workflow. The API 115 of the message broker can also be used to communicate a status update of a workflow.
  • The message broker 114 can include queues for temporarily storing information to be forwarded to target components, and can include information forwarding logic that is able to determine a destination of a unit of information based on identifiers and/or addresses contained in the unit of information.
  • In some examples, the message broker 114 can employ an Advanced Message Queuing Protocol (AMQP), which is an open standard application layer protocol for message-oriented middleware. AMQP is described in a specification provided by the Organization for the Advancement of Structured Information Standards (OASIS). An example of a message broker that employs AMQP is RabbitMQ, which is an open source message broker application.
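  • For illustration, a minimal sketch using the open source pika client for RabbitMQ is shown below; the broker host, queue name, and message body are assumptions, and an actual service exchange would exchange canonical-model messages between the orchestrator and adapters rather than plain strings.

      # Minimal RabbitMQ (AMQP) sketch using the pika client library.
      # The broker host and queue name are illustrative assumptions.
      import pika

      connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
      channel = connection.channel()
      channel.queue_declare(queue="service.exchange.calls", durable=True)

      # Publish an abstract application call as a message.
      channel.basic_publish(exchange="",
                            routing_key="service.exchange.calls",
                            body='{"call": "incident.create", "args": {"summary": "demo"}}')

      # Consume one message, e.g. on the adapter side.
      method, properties, body = channel.basic_get(queue="service.exchange.calls",
                                                   auto_ack=True)
      print("received:", body)
      connection.close()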
  • In other examples, other types of message brokers that employ other messaging or information exchange protocols can be used.
  • The information exchanged using the message broker 114 can include information sent by the orchestrator 112, where the information sent by the orchestrator 112 can include applications calls and/or data. An “application call” can refer to a command (or commands) or any other type of message that is issued to cause an instance of a respective application to execute to perform a requested task (or tasks).
  • The information exchanged using the message broker 114 can also include information sent by the applications. For example, the information sent by an application can include response information that is responsive to a respective application call. The information sent by the applications can also include information sent autonomously by an application without a corresponding request from the orchestrator 112. Information from an application can be included in an event sent by the application, where an “event” can refer to a representation of a unit of information. The event can include a response, a result, or any other information. Note that an event from an application can be in response to a synchronous call or asynchronous call. A synchronous call to an application by the orchestrator 112 is performed for a synchronous operation. In a synchronous operation, a workflow waits for a response to be received before proceeding further (in other words, the workflow blocks on the response). An asynchronous operation of a workflow refers to an operation in which the workflow does not wait for a response from an application in response to a call to the application.
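  • The following is an illustrative Python sketch of the difference between a synchronous call (the workflow blocks on the response) and an asynchronous call (the workflow continues and handles the response later as an event); the call_application stand-in is hypothetical.

      # Illustrative sketch of synchronous vs. asynchronous application calls.
      # call_application() is a hypothetical stand-in for an adapter-mediated call.
      from concurrent.futures import ThreadPoolExecutor
      import time

      def call_application(name, payload):
          time.sleep(0.1)                       # simulate work in the application
          return {"app": name, "result": "done", "payload": payload}

      executor = ThreadPoolExecutor(max_workers=2)

      # Synchronous operation: the workflow blocks until the response arrives.
      response = call_application("billing_app", {"order": 1})
      print("sync response:", response)

      # Asynchronous operation: the workflow does not wait; the response is handled
      # later as an event (here via a callback on the future).
      future = executor.submit(call_application, "shipping_app", {"order": 1})
      future.add_done_callback(lambda f: print("async event:", f.result()))
      print("workflow continues without waiting")
      executor.shutdown(wait=True)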
  • In other examples, an event from an application can be due to something else occurring at the application level or in the environment (e.g. a support agent closes a ticket when using the application). Such an event can be sent to the workflow, such as the workflow for an incident case exchange use case (explained further below).
  • An event or call can also be received through the API 115 of the message broker 114 from another source.
  • The message broker 114 is able to respond to a call (such as an API call from the orchestrator 112) by making a corresponding call to the API of the respective instance of an application that is executing in a particular workflow instance. Adapters 116 may register with the message broker 114, and the message broker 114 can use the registration to determine how to direct a call, and how events (e.g. results, responses, etc.) are tagged or associated to a workflow instance. In some cases, it is possible that a message (a call or event) may be addressed to several workflow instances, in which case the message broker 114 can direct the message to the several workflow instances.
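  • As a simplified, illustrative sketch (not the patented design), the following Python fragment shows how a broker could use adapter registrations to route an abstract call to the correct adapter and tag the resulting event with the originating workflow instance; all names are hypothetical.

      # Illustrative sketch of how a message broker could use adapter registrations
      # to route an abstract call to the right adapter, and tag events with the
      # workflow instance they belong to. All names are hypothetical.

      class Broker:
          def __init__(self):
              self.adapters = {}          # application name -> adapter callable

          def register_adapter(self, application, adapter):
              self.adapters[application] = adapter

          def route_call(self, application, call, workflow_instance_id):
              # Tag the call so the resulting event can be associated with the
              # originating workflow instance.
              event = self.adapters[application]({"call": call,
                                                  "workflow": workflow_instance_id})
              return event

      broker = Broker()
      broker.register_adapter("ticketing_app",
                              lambda msg: {"event": "ticket.created", **msg})
      print(broker.route_call("ticketing_app", "incident.create", "wf-42"))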
  • When performing a workflow based on flow logic executed by the orchestrator 112, the orchestrator 112 can issue application (synchronous or asynchronous) calls to the message broker 114 for invoking the applications at corresponding points in the workflow. A call can also be made by the orchestrator as part of throwing an event (which refers to the workflow deciding to communicate the event as a result of some specified thing occurring).
  • The flow logic for a respective workflow can be written abstractly using a canonical data model (CDM) 117. Although the canonical data model 117 is depicted as being inside the message broker 114, it is noted that the canonical data model 117 can be separate from the message broker 114 in other examples.
  • The canonical data model 117 can be used to express application calls to be issued by the orchestrator 112 to the message broker 114. The canonical data model 117 can also be used to express arguments (e.g. messages) for use in the calls, as well as the logic to be performed. The application calls can be abstract calls. The canonical data model 117 can be expressed in a specific language, such as a markup language or in another form.
  • More generally, a flow logic written according to the canonical data model 117 can represent the following: arguments that are being exchanged in interactions of the applications, the functions that are called to support the interactions, the events (e.g. responses, results, or other events) that can result, any errors that can arise, and states of the use case executed across the applications. In general, ad-hoc data models can be used, but they may change whenever a new use case is introduced or when an application changes. According to implementations of the present disclosure, the canonical data model 117 has been defined across a large number of use cases representative of the relevant interactions that can take place in a particular domain (such as IT management or another domain) and across a wide set of applications that can be used to support subsets of the use cases. Thus, in general, a canonical data model can be shared across use cases of a particular domain. A different canonical data model can be used for use cases of another domain. If a use case involves applications in different domains, then a canonical data model can be expanded to support the other domain, or multiple canonical data models may be used.
  • The information representing interactions between applications and the information representing the states of the applications can be used to track a current state of a workflow (assuming a stateful workflow). The information regarding the errors in the canonical data model 117 can be used for handling errors that arise during execution of the applications. The information regarding the errors can be used to map an error of an application to an error of the workflow that is being performed by the orchestrator 112.
  • By using the canonical data model 117, the development of flow logic that is valid across large sets of applications can be achieved. Sharing a data model across the flow logic can facilitate combining the flow logic and/or customizing the flow logic, and also allows for adapters to be changed or modified to replace applications.
  • In other implementations, the service exchange 110 does not employ the canonical data model 117, but rather development of the flow logic can be ad-hoc (such as by use of the ad-hoc models noted above) for each use case and/or set of applications.
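  • To illustrate the canonical approach described above, the following Python sketch shows a small, hypothetical fragment of a canonical data model (an abstract entity, an abstract call, shared events, and an error mapping); it is an assumption-laden example, not the actual model of the disclosure.

      # Illustrative sketch of a fragment of a canonical data model: abstract
      # entities, operations, events, and errors shared across applications of a
      # domain. The field and operation names are hypothetical.
      from dataclasses import dataclass, field

      @dataclass
      class Incident:                      # canonical (application-neutral) entity
          identifier: str
          summary: str
          severity: str = "medium"

      @dataclass
      class AbstractCall:                  # canonical operation used by flow logic
          operation: str                   # e.g. "incident.create"
          arguments: dict = field(default_factory=dict)

      CANONICAL_EVENTS = {"incident.created", "incident.closed"}
      CANONICAL_ERRORS = {"incident.not_found": "workflow.retry_or_abort"}

      call = AbstractCall("incident.create",
                          {"incident": Incident("INC-1", "disk failure")})
      print(call)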
  • The application calls issued by the orchestrator 112 can be sent through an interface between the orchestrator 112 and the message broker 114. In this way, the expression of the flow logic does not have to be concerned with specific data models or interfaces employed by the applications, which simplifies the design of the orchestrator 112.
  • Also, the orchestrator 112 does not have to know specific locations of the applications—the applications can be distributed across multiple different systems in disparate geographic locations. The message broker 114 is responsible for routing the application calls to the respective adapters 116.
  • Information communicated between the message broker 114 and the adapters 116 is also in an abstract form according to the canonical data model. For example, the message broker 114 can forward an abstract application call from the orchestrator 112 to a respective adapter. Similarly, an adapter can send an event from an application to the message broker in an abstract form according to the canonical data model.
  • The adapters 116 perform protocol translations between the protocol of the abstract API of the message broker 114, and the protocols to which the interfaces exposed by the corresponding applications are bound. As an example, the protocol of the abstract API of the message broker 114 can be according to a Representational State Transfer (REST) protocol or some other protocol. The protocol of an interface exposed by an application can include Simple Object Access Protocol (SOAP), Remote Procedure Call (RPC), Session Initiation Protocol (SIP), and so forth.
  • Each adapter 116 can also transform the data model of a message (e.g. message carrying an event) and an abstract API call to the data model and specific API call exposed by a particular application (e.g. instance or release of the particular application). Stated differently, the adapter 116 performs interface adaptation or interface translation by converting the abstract message or abstract API to a message or API call that conforms to the API of the target application. The reverse conversion is performed in the reverse direction, where the result, response, event, message or API call from an application is converted to an abstract message or abstract API call that can be passed through the message broker 114 to the orchestrator 112.
  • Each adapter 116 can also perform address translation between an address in the address space used by the orchestrator 112 and the message broker 114, and an address in the address space of an application.
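  • The following Python fragment is an illustrative sketch of an adapter performing interface, data model, and address translation for one hypothetical application; the endpoint URL, field mapping, and operation name are assumptions.

      # Illustrative sketch of an adapter that translates an abstract, canonical
      # call into the concrete REST call of one particular application. The target
      # URL, field names, and mapping are hypothetical assumptions.
      import json

      APP_BASE_URL = "https://ticketing.example.internal/api/v2"   # address translation

      FIELD_MAP = {"summary": "short_description", "severity": "priority"}  # data model

      def translate_call(abstract_call):
          if abstract_call["operation"] != "incident.create":
              raise ValueError("unsupported operation for this adapter")
          body = {FIELD_MAP[k]: v for k, v in abstract_call["arguments"].items()}
          # Interface translation: abstract operation -> application-specific endpoint.
          return {"method": "POST",
                  "url": APP_BASE_URL + "/incidents",
                  "headers": {"Content-Type": "application/json"},
                  "body": json.dumps(body)}

      print(translate_call({"operation": "incident.create",
                            "arguments": {"summary": "disk failure", "severity": "high"}}))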
  • The service exchange 110 provides for a multi-point orchestrated integration across multiple applications. The multi-point orchestrated integration can include the applications 118 associated with the data center 100 as well as applications or services of the other data centers 102 and 104.
  • In the example according to FIG. 1A, the workflow 113 executed by the orchestrator 112 of the service exchange 110 can thus involve both processes of applications 118 associated with the data center 100, as well as processes of applications of the data center 102 and services 140 (e.g. SaaS services) in the data center 104.
  • The data center 102 also includes a service exchange 130 that can have a similar arrangement as the service exchange 110 of the data center 100.
  • The service exchange 130 can include or be associated with applications 138. Execution of the applications 138 can provide the services 120 of the data center 102 shown in FIG. 1A.
  • The service exchange 130 includes an orchestrator 132 that can execute a workflow 133, a message broker 134 that includes a message confirmation engine (MCE) 139 and a canonical data model 137 (similar to the message confirmation engine 119 and the canonical data model 117 in the message broker 114), and adapters 136. The message broker 134 also has an interface 135 similar to the interface 115 of the message broker 114.
  • The workflow 133 executed by the orchestrator 132 in the service exchange 130 of the data center 102 can also involve applications and services across multiple data centers.
  • In some examples, the data center 104 does not include a service exchange similar to service exchange 110 or 130, but instead includes a different infrastructure for deploying the services 140. In general, a service exchange does not exist in any data center (including services and/or applications to be orchestrated) that is provided by a provider different from the enterprise that provides the data centers 100 and 102, for example. In such situations, techniques as discussed further above where the remote data center is without a gateway can be applied.
  • In examples according to FIG. 1A, an orchestrator 112 or 132 can orchestrate execution of a workflow that includes selected applications or services, including the applications 118, the applications 138, and the SaaS services 140.
  • As further shown in FIG. 1A, the data center 100 and data center 102 each includes a respective gateway 142 and 144. Although the gateways 142 and 144 are shown outside the respective service exchanges 110 and 130, it is noted that the gateways 142 and 144 can also be considered to be part of the respective service exchanges 110 and 130. Each gateway 142 or 144 includes a bridge between communications of the respective service exchange 110 or 130 (more specifically the communications of the respective message broker 114 or 134) and communications over a network 146, which can be a public network such as the Internet.
  • Communications over the network 146 can be according to a specified protocol, such as the Hypertext Transfer Protocol (HTTP), WebSocket protocol, Representational State Transfer (REST) protocol, or any other protocol.
  • As further shown in FIG. 1B, each gateway 142 or 144 includes a protocol translator 150 that can convert between the protocol used by the respective message broker 114 or 134, and the protocol used over the network 146.
  • For orchestration of applications within a single data center (such as 100 or 102), information exchange can be accomplished using the respective message broker 114 or 134, without involving the gateway 142 or 144. Moreover, for communications within just one data center, packet loss and delay and/or security of messages may not be a concern, since the communications occur within the same data center.
  • However, for communications among different data centers (e.g. across clouds) or over the Internet, message loss and delay and/or message security can become a concern. The gateways 142 and 144 are provided to address the foregoing issues and possibly issues associated with confirmation of message delivery and message processing commit, as discussed above.
  • In some examples, the message confirmation engine 119 or 139 of the message broker 114 or 134 can be used to ensure that a message is delivered to a target by checking for confirmation of receipt of the message by the target, and to ensure that the target has returned a confirmation of commit. In other examples, the gateway 142 or 144 can include a message confirmation engine 154 to perform the foregoing tasks, and can perform resending of a message in response to not receiving a confirmation of receipt of the message, and resending the message or indicating an error in response to not receiving a confirmation of commit.
  • As further discussed above, the message confirmation engine 119 or 139 in the message broker 114 or 134 can also ensure that messages are delivered in a managed manner; in other words, messages are delivered without loss and with acceptable delays (delays within specified target levels). The message broker 114 or 134 can also perform remediation if message loss or delays occur. Delivery times for messages can be monitored, and messages that are lost or excessively delayed (delayed longer than a specified target goal) are re-sent. Also, management systems can be notified as appropriate.
  • The message confirmation engine 154 in the gateway 142 or 144 can perform managed delivery of messages in lieu of or in addition to that performed by the message confirmation engine 119 or 139 of the message broker 114 or 134.
  • As noted above, the message confirmation engine 119 or 139 can also perform secure communication of messages with endpoints, such as by employing encryption of messages, mutual authentication of messages, or use of certificates.
  • In addition to or in lieu of performing security for messages by the message broker 114 or 134 across the network 146, a security engine 152 of the gateway 142 or 144 can perform the respective tasks, which can include message encryption, mutual authentication, or security using a certificate.
  • The gateway 142 or 144 can implement protocol changes and capabilities to perform the foregoing. For example, the gateway 142 or 144 can implement a mechanism to number and timestamp messages or packets (carrying the messages) that are sent to target endpoints over the network 146. For example, sequence numbers that monotonically increase can be assigned to messages (or packets) as they are sent to the target endpoint. The sequence numbers can be used to identify which data units (message or packets) were not received (i.e. lost). The gateway 142 or 144 can also determine the time for delivery and receipt of messages sent to the target endpoints. If the time taken to deliver a data unit (message or packet) exceeds a target goal, then the data unit can be re-sent. In some examples, bi-directional Hypertext Transfer Protocol (HTTP) communications (such as according to the WebSocket protocol) can be established between gateways 142 and 144 with packet numbering and timestamps (that indicate when a packet was sent). The bi-directional HTTP communications include a return channel through which a receiver is able to provide feedback regarding delays or data loss. The WebSocket protocol supports adding extensions such as packet numbers and timestamps in a standardized manner.
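  • As an illustrative sketch of the numbering and timestamping described above, the following Python fragment assigns monotonically increasing sequence numbers and send timestamps to outbound messages and, on the receiving side, identifies lost data units (sequence gaps) and excessively delayed ones; the delay target is a hypothetical value.

      # Illustrative sketch of numbering and timestamping outbound messages at a
      # gateway, and detecting loss (missing sequence numbers) and excessive delay
      # at the receiving side. The delay target is a hypothetical value.
      import time

      DELAY_TARGET_SECONDS = 1.0

      def stamp(messages):
          # Assign monotonically increasing sequence numbers and send timestamps.
          return [{"seq": i, "sent_at": time.time(), "body": m}
                  for i, m in enumerate(messages)]

      def check_received(received):
          seen = {m["seq"] for m in received}
          expected = set(range(max(seen) + 1)) if seen else set()
          lost = sorted(expected - seen)                      # gaps => lost data units
          delayed = [m["seq"] for m in received
                     if time.time() - m["sent_at"] > DELAY_TARGET_SECONDS]
          return lost, delayed                                # candidates for resending

      outbound = stamp(["call-a", "call-b", "call-c"])
      inbound = [outbound[0], outbound[2]]                    # message 1 was lost
      print(check_received(inbound))                          # -> ([1], [])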
  • For the data center 104, managed communications of messages can be accomplished in the following manner. In some examples, the SaaS services 140 can expose APIs and protocols that are compatible with the use of packet numbering and timestamping by the gateways 142 and 144. For example, the SaaS services 140 can employ a Websocket protocol that employs packet numbering and timestamps, or some other mechanism. In this way, the gateways 142, 144 can interact with the SaaS services 140 to perform managed delivery of messages.
  • In other examples, managed delivery of messages with the SaaS services 140 is accomplished using just the gateway 142 or 144 on one side. If the remote data center 104 does not support deployment of a gateway and service exchange (e.g. because the data center belongs to another entity), then the gateway 142 on the enterprise service exchange side can only do as much as it can with available protocols. In alternative examples, the remote data center 104 can support the same protocols or mechanisms of the gateway 142. An example of implementing the same behavior at the SaaS level would be if the SaaS APIs were bound to the WebSocket protocol with the same extensions. In other examples, a gateway and service exchange of another data center that is geographically close to or collocated with the data center 104 can be used to interact with the data center 104.
  • To implement security, the gateway 142 or 144 can perform security-related tasks for messages in addition to or in lieu of the security-related tasks performed by the message broker 114 or 134 discussed above. As discussed above, these security-related tasks include encryption of messages, mutual authentication of messages, or use of certificates. For communications with the SaaS services 140 where just one gateway 142 or 144 is present, then encryption and authentication as supported by SaaS services 140 can be employed.
  • In other examples, a respective gateway can also be included in the data center 104.
  • The latency associated with communications over the network 146 can cause delays in use case progress and impact the user experience and the QoS experienced. Also, faults or errors in the network 146 may cause certain information to be lost, so that reliable communications may not be readily available over the network 146. A workflow provided by the service exchange 110 or 130 may be associated with a target performance goal, or more simply, a target goal. Examples of a target goal can include any of the following: target maximum response time from a request, target maximum usage of resources, target maximum error rate, and so forth. The target goal can be specified in a service level agreement (SLA) and can specify a maximum allowable delay between a call from the orchestrator and a response from a target application or service. In other examples, the target goal can be associated with a QoS level that is specified for the workflow, either by agreement or some other mechanism.
  • FIG. 2 shows an example of the service exchange 110 with a management engine 202 according to some implementations. The management engine 202 can be implemented with a combination of machine-executable instructions and processing hardware, or with just processing hardware. The management engine 202 can be separate from the message broker 114, or can be part of the message broker 114.
  • The service exchange 110 includes the orchestrator 112 and message broker 114 as discussed above. Also, the service exchange 110 includes adapters 116-1 to 116-N (N>1) that are provided between the message broker 114 and respective applications 118-1 to 118-N.
  • The management engine 202 includes a performance monitor 204 that can monitor the performance of a workflow that is executed by the orchestrator 112. The management engine 202 also is able to access a database 206 that stores information relating to target goals 208 associated with respective workflows that can be executed by the orchestrator 112, where the target goals can be specified by an SLA or a QoS level. The database 206 can also store metrics and thresholds.
  • According to the examples described above, the performance monitor 204 can detect a time when a request is received to initiate a workflow. In response to the request, the performance monitor 204 can measure the amount of time that has passed since the time the request was received. The performance monitor 204 can retrieve information relating to a target goal associated with the workflow. The performance monitor 204 can compare the elapsed time with the target goal to determine whether execution of the workflow will satisfy a target maximum time duration specified by the respective target goal. If not, the performance monitor 204 can issue an indication to a handler 210 of the management engine 202 to handle the potential violation of the target goal to perform remediation.
  • In some examples, in a workflow that includes multiple tasks, the target goal can specify the maximum time duration from when task i (of an application of any of multiple data centers) begins to when task i is expected to complete. The performance monitor 204 is able to compare the elapsed execution time for task i with the maximum time duration of the target goal. If violation of the maximum time duration has or is predicted to occur, then the performance monitor 204 issues an indication to the handler 210, which can take action to resolve the issue. For example, the handler 210 can cause additional computing resources to be allocated to the workflow, so that the workflow can execute at a faster rate to meet the target goal.
  • The performance monitor 204 can also monitor for other events associated with the workflow. For example, the performance monitor 204 can determine the error rate associated with execution of the workflow, or can determine the amount of resources used by the execution of the workflow. If the error rate or resource usage exceeds a specified threshold (e.g. a threshold error rate or a threshold resource consumption), then the performance monitor 204 can issue a respective indication to cause the handler 210 to take a corresponding action, such as to allocate a different set of resources to execute the workflow (if a currently allocated set of resources is causing an excessive error rate), or to reduce an allocation of resources (if the workflow is consuming an excessive amount of resources).
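  • For illustration only, the following Python sketch shows a performance monitor comparing elapsed workflow time and error rate against target goals and signaling a handler to perform remediation; the thresholds and remediation actions are hypothetical.

      # Illustrative sketch of a performance monitor comparing elapsed workflow time
      # and error rate against target goals, and signaling a handler for remediation.
      # The threshold values and handler actions are hypothetical.
      import time

      TARGETS = {"max_duration_s": 5.0, "max_error_rate": 0.05}

      def monitor(workflow_started_at, errors, calls, handler):
          elapsed = time.time() - workflow_started_at
          error_rate = errors / calls if calls else 0.0
          if elapsed > TARGETS["max_duration_s"]:
              handler("allocate additional resources to the workflow")
          if error_rate > TARGETS["max_error_rate"]:
              handler("allocate a different set of resources")

      monitor(time.time() - 7.0, errors=1, calls=100,
              handler=lambda action: print("remediation:", action))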
  • FIG. 3 shows an example in which the service exchange 110 according to some implementations of the present disclosure is able to interact with a legacy integration framework 302, such as an ESB framework (as discussed above), a Schools Interoperability Framework (SIF), or any other integration framework that is a different type of integration framework from the service exchange 110. SIF includes a specification for modeling data, and a Service-Oriented Architecture (SOA) specification for sharing data between institutions.
  • The legacy integration framework 302 integrates applications 304, such as according to enterprise application integration (EAI). A gateway 306 is provided between the legacy integration framework 302 and the service exchange 110 to perform protocol translation 308 and interface translation 310 between the interface 115 of the service exchange 110 and the interface 312 to the legacy integration framework 302.
  • The protocol translation 308 and interface translation 310 can be similar to the protocol and interface translations applied by the adapters 116 of the service exchange 110, except that the protocol translation 308 and interface translation 310 are to provide adaptation to the legacy integration framework 302 and to the applications 304 integrated by the legacy integration framework 302.
  • Although not shown, a bridge can also be provided between the service exchange 110 and the legacy integration framework 302 if the service exchange 110 and the legacy integration framework 302 are in different data centers. The bridge can include gateways similar to the gateways 142 and 144 at each end discussed above.
  • FIG. 3 also shows a portal 314. Note that a portal can also be provided in implementations according to FIGS. 1 and 2. The portal 314 is an example of an entity interacting with the API 105 for triggering workflows or orchestrated applications. Although FIG. 3 shows the portal 314 as using the message broker 114, it is noted that the portal 314 can also be one of the applications orchestrated through a respective adapter 116.
  • In some examples, the portal 314 can present a user interface (UI). The portal 314 can include machine-executable instructions or a combination of machine-executable instructions and processing hardware. The portal 314 can be at a computer (e.g. client computer) that can be remote from the service exchange 110. The UI allows a user to interact with the service exchange 110.
  • A user can perform an action in the UI that triggers the execution of a flow logic (of multiple different flow logic) by the orchestrator 112 to perform a workflow.
  • An indication of user action in the UI (e.g. an action to order an item or service) can be communicated to the orchestrator 112 and the corresponding workflow by the portal 314 and the message broker 114. The indication can be communicated using the API 105 (e.g. REST API) of the message broker 114.
  • This indication of user action received by the message broker 114 can be communicated to the orchestrator 112, which invokes execution of the corresponding flow logic to perform the requested workflow.
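  • As an illustrative sketch (not a documented API of the disclosure), the following Python fragment shows how a portal or other external entity might trigger a workflow by posting to a REST API of the message broker using the requests library; the endpoint URL and request body schema are assumptions.

      # Illustrative sketch of a portal (or other external entity) triggering a
      # workflow through the message broker's REST API. The endpoint URL and the
      # request body schema are hypothetical assumptions.
      import requests

      BROKER_API = "https://service-exchange.example.internal/api/v1"

      def trigger_workflow(use_case, arguments):
          response = requests.post(f"{BROKER_API}/workflows",
                                   json={"use_case": use_case, "arguments": arguments},
                                   timeout=10)
          response.raise_for_status()
          return response.json()          # e.g. a workflow instance id / status

      # Example (not executed here): trigger_workflow("order_fulfillment",
      #                                               {"item": "widget", "quantity": 2})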
  • FIG. 4 is a flow diagram of a process performed by a service exchange (e.g. 110 or 130).
  • An orchestrator (e.g. 112 or 132) of the service exchange in a first data center executes (at 402) a workflow that is associated with a target performance goal, where the workflow includes tasks of applications of the first data center and of a second data center.
  • During the executing of the workflow, information is communicated (at 404) between the orchestrator and the processes of the applications through a message broker (e.g. 114 or 134) of the service exchange.
  • Adapters (e.g. 116 or 136) of the service exchange perform (at 406) protocol and interface translations for information communicated between the message broker and the applications in the first data center.
  • A management engine (e.g. 202) monitors (at 408) performance of the workflow to determine whether the executing of the workflow is able to meet the target performance goal.
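The flow of FIG. 4 can be sketched roughly as follows, with illustrative stand-ins for the orchestrator loop, the broker/adapter path, and the management engine; a time-duration goal is used here as just one example of a target performance goal.

```python
# Illustrative sketch of the FIG. 4 flow: an orchestrator runs workflow tasks,
# communicates through a broker/adapter path, and a management engine checks
# elapsed time against a target performance goal. Names are assumptions.
import time
from typing import Callable, List


class ManagementEngine:
    def __init__(self, target_duration_s: float):
        self.target_duration_s = target_duration_s
        self.start = time.monotonic()

    def on_track(self) -> bool:
        """True while the executing workflow can still meet its time goal."""
        return (time.monotonic() - self.start) <= self.target_duration_s


def execute_workflow(tasks: List[Callable[[], None]],
                     send_via_broker: Callable[[str], None],
                     monitor: ManagementEngine) -> bool:
    for task in tasks:                                   # 402: execute workflow tasks
        send_via_broker(f"invoke {task.__name__}")       # 404/406: broker + adapters
        task()
        if not monitor.on_track():                       # 408: monitor against goal
            send_via_broker("alert: workflow at risk of missing its goal")
            return False
    return True
```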
  • Content of the service exchange platform (which includes the orchestrator, the message broker, and the adapters) can be changed, such as from an administration system coupled to the service exchange. Applications can be changed, flow logic can be changed, and use cases can be created.
  • Any given application can be updated or replaced, simply by replacing or modifying the corresponding adapter. For example, if an enterprise wishes to upgrade or replace a given application (with a new application or an updated version of the given application), then the corresponding adapter to which the given application is coupled can be replaced or updated to support the updated or replaced application. In some cases, replacing the given application can involve replacing a first application supplied by a first vendor with a second application supplied by a different vendor. In other cases, replacing the given application can involve replacing a first application supplied by a vendor with another application supplied by the same vendor. As yet another example, replacing the given application can include upgrading the application to a new release.
  • Changing a given adapter can involve removing a representation of the adapter (which can be in the form of program code, a markup language file, or some other representation), and replacing the removed representation of the adapter with a new representation of a different adapter. Changing the given adapter can alternatively involve modifying the given adapter or modifying a configuration of the given adapter to support the different application. The changing of the given adapter can be performed by a machine or by program code, either autonomously (such as in response to detection of a replacement of an application) or in response to user input.
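A minimal sketch of this adapter-swapping pattern, assuming a hypothetical canonical ticketing interface and two vendor-specific adapters, is shown below; the code that consults the adapter registry does not change when the application is replaced.

```python
# Sketch of replacing an application by swapping only the adapter registered
# for it; the orchestrator and flow logic stay the same. The adapter interface
# and vendor classes are hypothetical.
from abc import ABC, abstractmethod


class TicketingAdapter(ABC):
    """Canonical interface the orchestrator calls, regardless of vendor."""
    @abstractmethod
    def open_ticket(self, summary: str) -> str: ...


class VendorAAdapter(TicketingAdapter):
    def open_ticket(self, summary: str) -> str:
        return f"vendor-a:{hash(summary) & 0xffff}"   # stand-in for a real API call


class VendorBAdapter(TicketingAdapter):
    def open_ticket(self, summary: str) -> str:
        return f"vendor-b:{hash(summary) & 0xffff}"


adapters: dict[str, TicketingAdapter] = {"ticketing": VendorAAdapter()}

# Replacing the ticketing application: only the registered adapter changes.
adapters["ticketing"] = VendorBAdapter()
ticket_id = adapters["ticketing"].open_ticket("disk full on host-42")
```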
  • Changing an application may also involve moving from one instance of the application to another instance, or from one location to another location. The respective adapter can be updated, or a configuration of the adapter can be changed (with the adapter itself remaining unchanged), to refer to the other application instance or to an instance of the application at the other location.
  • When changing an application to a new or updated application, it may be possible that certain functionality of the previous application is no longer available from the new or updated application. In this case, the respective adapter can delegate to, or orchestrate with, another application (or web service) that provides the missing functionality. Alternatively, the workflow can be modified to take into account the loss of functionality in the use case.
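The sketch below illustrates both situations with a hypothetical inventory adapter: re-pointing the same adapter to another application instance by configuration only, and delegating an operation that the replacement application no longer provides to another service.

```python
# Illustrative adapter that (a) can be re-pointed to a different application
# instance by configuration only, and (b) delegates functionality that the new
# application no longer provides to another service. Names and endpoints are
# assumptions made for the example.
class InventoryAdapter:
    def __init__(self, endpoint: str, reporting_service=None):
        self.endpoint = endpoint                  # (a) configurable instance/location
        self.reporting_service = reporting_service

    def reconfigure(self, new_endpoint: str) -> None:
        """Point the same adapter at another application instance."""
        self.endpoint = new_endpoint

    def stock_report(self, sku: str) -> dict:
        # (b) the replacement application dropped its reporting API, so the
        # adapter delegates to a separate service that fills the gap.
        if self.reporting_service is not None:
            return self.reporting_service.report(sku)
        return {"sku": sku, "source": self.endpoint, "level": None}
```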
  • Also, if new functionality is provided by the new or upgraded application, the workflow can be modified to use the new functionality.
  • In accordance with some implementations, a workflow can be modified relatively easily by replacing the respective flow logic with a different flow logic (a modified version of the flow logic or a new flow logic). The different flow logic can then be loaded onto the orchestrator to implement the modified workflow. By using the service exchange, workflows can be easily customized by providing new or modified flow logic to the orchestrator. Nothing else has to be changed unless a new use case specifies use of new calls and data not covered by the current adapters (e.g. an adapter is able to call just a subset of the APIs of the application) or by the canonical data model. In this latter case, the canonical data model can be updated and the adapters can be updated to be able to make the calls, or new adapters can be provided.
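As an illustration of flow logic treated as loadable content, the following sketch (with hypothetical adapter and operation names) shows a workflow being modified simply by loading a different flow logic onto an orchestrator, with adapters and the canonical data model left untouched.

```python
# Sketch of swapping flow logic on an orchestrator: the workflow is described
# declaratively, and customizing it means loading different flow logic. Step
# and adapter names are illustrative only.
class Orchestrator:
    def __init__(self):
        self.flow_logic: list[dict] = []

    def load_flow_logic(self, flow_logic: list[dict]) -> None:
        self.flow_logic = flow_logic

    def run(self, call_adapter) -> None:
        for step in self.flow_logic:
            call_adapter(step["adapter"], step["operation"], step.get("args", {}))


original_flow = [
    {"adapter": "crm", "operation": "create_customer"},
    {"adapter": "billing", "operation": "create_invoice"},
]
modified_flow = original_flow + [{"adapter": "email", "operation": "send_receipt"}]

orchestrator = Orchestrator()
orchestrator.load_flow_logic(modified_flow)   # only the flow logic changes
orchestrator.run(lambda adapter, op, args: print(adapter, op, args))
```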
  • New use cases can also be created, and corresponding flow logic and adapters can be provided. In addition, the canonical data model may be updated accordingly.
  • The content changes noted above can be performed using any of various tools, such as a Software Development Kit (SDK) tool or another type of tool used to create applications and other program code. A content pack can be updated using the tool, and the modified content pack can be loaded using an administration system. The administration system can configure the adapters to point to the correct instance of an application. A new use case and the respective content can also be created with an SDK tool. Note also that when the canonical data model 107 is updated, the canonical data model 107 remains backwards compatible with content packs of existing use cases.
  • FIG. 5 is a block diagram of an example computer system 500 according to some implementations, which can be used to implement the service exchange 110 or 130 according to some implementations. The computer system 500 can include one computer or multiple computers coupled over a network. The computer system 500 includes a processor (or multiple processors) 502. A processor can include a microprocessor, a microcontroller, a physical processor module or subsystem, a programmable integrated circuit, a programmable gate array, or another physical control or computing device.
  • The processor(s) 502 can be coupled to a non-transitory machine-readable or computer-readable storage medium 504, which can store various machine-executable instructions. The machine-executable instructions can include orchestration instructions 506 to implement the orchestrator 112 or 132, message broker instructions 508 to implement the message broker 114 or 134 (including the message broker application 410 and event handlers 414 shown in FIG. 3), adapter instructions 510 to implement the adapters 116, management engine instructions 512 to implement the management engine 202 (including the performance monitor 204 and the handler 210), and message confirmation instructions 514 to implement the message confirmation engine 119 or 139.
  • The storage medium (or storage media) 504 can include one or multiple forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; optical media such as compact disks (CDs) or digital video disks (DVDs); or other types of storage devices. Note that the instructions discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes. Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.
  • In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.

Claims (21)

What is claimed is:
1. A system comprising:
a service exchange comprising:
an orchestrator to execute a workflow that involves a plurality of applications and services of a plurality of data centers;
a message broker to exchange messages that comprise a call from the orchestrator to at least one of the applications and the services, and an event or call from the at least one application or service to which the orchestrator is to react; and
adapters to perform protocol and interface translations for information communicated between at least some of the applications and the message broker.
2. The system of claim 1, wherein the message broker is to check for confirmation of receipt of a given message from a target to which the given message is sent, and to cause resending of the given message in response to failure to receive the confirmation of receipt.
3. The system of claim 2, wherein the message broker is to receive the confirmation of receipt of the given message from one of the adapters.
4. The system of claim 1, wherein the message broker is to check for confirmation of commit of processing of a given message from a target to which the given message is sent, and to cause resending of the given message or indication of an error in response to failure to receive the confirmation of receipt.
5. The system of claim 4, wherein the message broker is to further perform rollback or notification of at least one management system in response to failure to receive the confirmation of receipt.
6. The system of claim 1, wherein the message broker is to check for loss or delay of a given message, and to perform remediation in response to detecting the loss or the delay of the given message.
7. The system of claim 1, wherein the message broker is to apply security to a message communicated to a target.
8. The system of claim 1, wherein the service exchange is part of a first data center of the plurality of data centers, and the service exchange further comprising a gateway to communicate over a network with a second data center of the plurality of data centers, the gateway to convert between a protocol used by the message broker and a protocol used over the network.
9. The system of claim 8, wherein the gateway is to check for loss or delay of a given message sent over the network, and to perform remediation in response to detecting the loss or the delay of the given message.
10. The system of claim 9, wherein the gateway is to add numbers and timestamps to messages or packets according to extensions supported by a WebSocket protocol.
11. The system of claim 8, wherein the gateway is to apply security to a message communicated to a target over the network.
12. The system of claim 1, wherein the message broker is to communicate through a gateway with an integration framework.
13. The system of claim 12, wherein the integration framework is selected from among an Enterprise Service Bus (ESB) framework and a Schools Interoperability Framework (SIF).
14. The system of claim 1, wherein the service exchange is part of a first data center of the plurality of data centers, and wherein services of a second data center of the plurality of data centers comprise software as a service (SaaS) services.
15. The system of claim 1, further comprising a management engine to manage a target goal associated with communications with the applications and the services across the plurality of data centers.
16. The system of claim 15, wherein the target goal specifies a target time duration goal for communication with an application or service.
17. The system of claim 15, wherein the target goal is specified by a service level agreement or a quality of service.
18. A method comprising:
executing, by an orchestrator of a service exchange in a first data center, a workflow that is associated with a target performance goal, the workflow comprising tasks of applications of the first data center and of a second data center;
communicating, during the executing of the workflow, information between the orchestrator and the processes of the applications through a message broker of the service exchange;
performing, by adapters of the service exchange, protocol and interface translations for information communicated between the message broker and the processes of the applications in the first data center; and
monitoring, by a management engine, performance of the workflow to determine whether the executing of the workflow is able to meet the target performance goal.
19. The method of claim 18, wherein monitoring the performance of the workflow comprises determining whether the executing of the workflow is able to satisfy a time duration goal of the workflow.
20. The method of claim 18, wherein monitoring the performance of the workflow comprises determining whether the executing of the workflow is able to satisfy an error rate goal or a resource consumption goal of the workflow.
21. An article comprising at least one non-transitory machine-readable storage medium storing instructions that upon execution cause a system to:
execute, by an orchestrator of a service exchange in a first data center, a workflow, the workflow comprising tasks of applications across a plurality of data centers, at least one of the data centers comprising a cloud of resources;
communicate, during the executing of the workflow, information between the orchestrator and the processes of the applications through a message broker of the service exchange;
check, by the message broker, for confirmation of receipt and for confirmation of commit of processing of a given message sent to a target;
perform a remediation action in response to failing to receive the confirmation of receipt or the confirmation of commit; and
perform, by adapters of the service exchange, protocol and interface translations for information communicated between the message broker and at least some of the applications.
US14/563,331 2013-12-09 2014-12-08 Execution of a workflow that involves applications or services of data centers Abandoned US20150163179A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/563,331 US20150163179A1 (en) 2013-12-09 2014-12-08 Execution of a workflow that involves applications or services of data centers

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361913799P 2013-12-09 2013-12-09
US14/563,331 US20150163179A1 (en) 2013-12-09 2014-12-08 Execution of a workflow that involves applications or services of data centers

Publications (1)

Publication Number Publication Date
US20150163179A1 true US20150163179A1 (en) 2015-06-11

Family

ID=53271269

Family Applications (4)

Application Number Title Priority Date Filing Date
US14/563,327 Active US9229795B2 (en) 2013-12-09 2014-12-08 Execution of end-to-end processes across applications
US14/563,331 Abandoned US20150163179A1 (en) 2013-12-09 2014-12-08 Execution of a workflow that involves applications or services of data centers
US14/563,552 Active 2040-01-31 US11126481B2 (en) 2013-12-09 2014-12-08 Fulfilling a request based on catalog aggregation and orchestrated execution of an end-to-end process
US14/958,617 Active US9311171B1 (en) 2013-12-09 2015-12-03 Execution of end-to-end-processes across applications

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/563,327 Active US9229795B2 (en) 2013-12-09 2014-12-08 Execution of end-to-end processes across applications

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/563,552 Active 2040-01-31 US11126481B2 (en) 2013-12-09 2014-12-08 Fulfilling a request based on catalog aggregation and orchestrated execution of an end-to-end process
US14/958,617 Active US9311171B1 (en) 2013-12-09 2015-12-03 Execution of end-to-end-processes across applications

Country Status (1)

Country Link
US (4) US9229795B2 (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9229795B2 (en) 2013-12-09 2016-01-05 Hewlett Packard Enterprise Development Lp Execution of end-to-end processes across applications
US10129078B2 (en) 2014-10-30 2018-11-13 Equinix, Inc. Orchestration engine for real-time configuration and management of interconnections within a cloud-based services exchange
US10356222B2 (en) 2015-03-30 2019-07-16 International Business Machines Corporation Reusable message flow between applications of a message broker integrated systems environment
EP3304285A1 (en) * 2015-06-03 2018-04-11 Telefonaktiebolaget LM Ericsson (publ) Implanted agent within a first service container for enabling a reverse proxy on a second container
US10261985B2 (en) 2015-07-02 2019-04-16 Microsoft Technology Licensing, Llc Output rendering in dynamic redefining application
US10198252B2 (en) 2015-07-02 2019-02-05 Microsoft Technology Licensing, Llc Transformation chain application splitting
US10198405B2 (en) 2015-07-08 2019-02-05 Microsoft Technology Licensing, Llc Rule-based layout of changing information
US20170024787A1 (en) * 2015-07-24 2017-01-26 Microsoft Technology Licensing, Llc Omnichannel services platform
US10277582B2 (en) * 2015-08-27 2019-04-30 Microsoft Technology Licensing, Llc Application service architecture
US10362146B1 (en) * 2015-09-30 2019-07-23 Open Text Corporation Method and system for enforcing governance across multiple content repositories using a content broker
CN106612188A (en) * 2015-10-21 2017-05-03 中兴通讯股份有限公司 Method and device for extending software function based on micro service architecture
US10108688B2 (en) 2015-12-22 2018-10-23 Dropbox, Inc. Managing content across discrete systems
CN106934680A (en) * 2015-12-29 2017-07-07 阿里巴巴集团控股有限公司 A kind of method and device for business processing
US9614781B1 (en) 2016-03-01 2017-04-04 Accenture Global Solutions Limited Data defined infrastructure
US10261891B2 (en) 2016-08-05 2019-04-16 International Business Machines Corporation Automated test input generation for integration testing of microservice-based web applications
US10528995B2 (en) 2016-11-04 2020-01-07 Micro Focus Llc Use of marketplace platform instances for reselling
US10574736B2 (en) * 2017-01-09 2020-02-25 International Business Machines Corporation Local microservice development for remote deployment
US11171892B2 (en) * 2017-02-27 2021-11-09 Ncr Corporation Service assistance and integration
US20180365087A1 (en) 2017-06-15 2018-12-20 International Business Machines Corporation Aggregating requests among microservices
CN107566153B (en) * 2017-07-21 2020-09-25 哈尔滨工程大学 Self-management micro-service implementation method
US10289525B2 (en) * 2017-08-21 2019-05-14 Amadeus S.A.S. Multi-layer design response time calculator
US11138539B2 (en) * 2017-08-25 2021-10-05 Target Brands, Inc. Robtic business process automation system utilizing reusable task-based microbots
CN107608804B (en) * 2017-09-21 2020-06-12 浪潮云信息技术有限公司 Task processing system and method
US10778797B2 (en) * 2018-04-05 2020-09-15 International Business Machines Corporation Orchestration engine facilitating management of operation of resource components
US10362097B1 (en) * 2018-06-05 2019-07-23 Capital One Services, Llc Processing an operation with a plurality of processing steps
US10671462B2 (en) * 2018-07-24 2020-06-02 Cisco Technology, Inc. System and method for message management across a network
US11093620B2 (en) * 2018-11-02 2021-08-17 ThreatConnect, Inc. Ahead of time application launching for cybersecurity threat intelligence of network security events
US11194766B2 (en) * 2018-11-06 2021-12-07 Dropbox, Inc. Technologies for integrating cloud content items across platforms
US11538091B2 (en) 2018-12-28 2022-12-27 Cloudblue Llc Method of digital product onboarding and distribution using the cloud service brokerage infrastructure
US11429440B2 (en) 2019-02-04 2022-08-30 Hewlett Packard Enterprise Development Lp Intelligent orchestration of disaggregated applications based on class of service
WO2020222724A1 (en) 2019-04-27 2020-11-05 Hewlett-Packard Development Company, L.P. Microservices data aggregation search engine updating
US11586470B2 (en) 2019-08-07 2023-02-21 International Business Machines Corporation Scalable workflow engine with a stateless orchestrator
US11863573B2 (en) * 2020-03-06 2024-01-02 ThreatConnect, Inc. Custom triggers for a network security event for cybersecurity threat intelligence
US20220327006A1 (en) * 2021-04-09 2022-10-13 Nb Ventures, Inc. Dba Gep Process orchestration in enterprise application of codeless platform
US11818117B2 (en) 2021-07-20 2023-11-14 Bank Of America Corporation Multi-party exchange platform

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7082426B2 (en) * 1993-06-18 2006-07-25 Cnet Networks, Inc. Content aggregation method and apparatus for an on-line product catalog
US20020184101A1 (en) * 2001-03-02 2002-12-05 Gidadhubli Rajiv Raghavendrarao Method and apparatus for integrating with multiple application systems
US20020147656A1 (en) * 2001-04-04 2002-10-10 Tam Richard K. E-commerce using a catalog
GB0315190D0 (en) * 2003-06-28 2003-08-06 Ibm Methods, apparatus and computer programs for visualization and management of data organisation within a data processing system
US7380039B2 (en) 2003-12-30 2008-05-27 3Tera, Inc. Apparatus, method and system for aggregrating computing resources
US20060178893A1 (en) * 2004-12-30 2006-08-10 Mccallie David P Jr System and method for brokering requests for collaboration
US20070074225A1 (en) 2005-09-22 2007-03-29 Frends Technology Inc. Apparatus, method and computer program product providing integration environment having an integration control functionality coupled to an integration broker
US8688816B2 (en) 2009-11-19 2014-04-01 Oracle International Corporation High availability by letting application session processing occur independent of protocol servers
US8301718B2 (en) 2009-11-25 2012-10-30 Red Hat, Inc. Architecture, system and method for a messaging hub in a real-time web application framework
US8635123B2 (en) * 2010-04-17 2014-01-21 Sciquest, Inc. Systems and methods for managing supplier information between an electronic procurement system and buyers' supplier management systems
US8543508B2 (en) 2010-07-09 2013-09-24 Visa International Service Association Gateway abstraction layer
US20120078731A1 (en) * 2010-09-24 2012-03-29 Richard Linevsky System and Method of Browsing Electronic Catalogs from Multiple Merchants
US9098830B2 (en) 2010-11-30 2015-08-04 Sap Se System and method for a process broker and backend adapter based process integration
EP2798784B1 (en) 2011-12-27 2019-10-23 Cisco Technology, Inc. System and method for management of network-based services
US9497095B2 (en) 2012-03-22 2016-11-15 International Business Machines Corporation Dynamic control over tracing of messages received by a message broker
US10185985B1 (en) * 2013-09-27 2019-01-22 Amazon Technologies, Inc. Techniques for item procurement
US20160253722A1 (en) * 2013-10-31 2016-09-01 Hewlett-Packard Development Company, L.P. Aggregating, presenting and fulfilling a number of catalogs
US9229795B2 (en) 2013-12-09 2016-01-05 Hewlett Packard Enterprise Development Lp Execution of end-to-end processes across applications

Patent Citations (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020147858A1 (en) * 2001-02-14 2002-10-10 Ricoh Co., Ltd. Method and system of remote diagnostic, control and information collection using multiple formats and multiple protocols with verification of formats and protocols
US9015324B2 (en) * 2005-03-16 2015-04-21 Adaptive Computing Enterprises, Inc. System and method of brokering cloud computing resources
US20080005155A1 (en) * 2006-04-11 2008-01-03 University Of Southern California System and Method for Generating a Service Oriented Data Composition Architecture for Integrated Asset Management
US20080133646A1 (en) * 2006-12-04 2008-06-05 Bea Systems, Inc. System and method for fully distributed network with agents
US20080177879A1 (en) * 2007-01-19 2008-07-24 Mayilraj Krishnan Transactional application processing in a distributed environment
US9274811B1 (en) * 2007-02-16 2016-03-01 Bladelogic, Inc. System and method for cloud provisioning and application deployment
US20080235366A1 (en) * 2007-03-21 2008-09-25 Inetco Systems Limited Method and system for monitoring messages passed over a network
US20090044201A1 (en) * 2007-08-08 2009-02-12 Lee Van H Using An Event Manager To Effect A Library Function Call
US20090158242A1 (en) * 2007-12-18 2009-06-18 Kabira Technologies, Inc., Library of services to guarantee transaction processing application is fully transactional
US20090217311A1 (en) * 2008-02-13 2009-08-27 Robert Kocyan Apparatus, system, and method for facilitating data flow between a first application programming interface and a second application programming
US20100027552A1 (en) * 2008-06-19 2010-02-04 Servicemesh, Inc. Cloud computing gateway, cloud computing hypervisor, and methods for implementing same
US20170180324A1 (en) * 2008-06-19 2017-06-22 Servicemesh, Inc. Cloud computing gateway, cloud computing hypervisor, and methods for implementing same
US9658868B2 (en) * 2008-06-19 2017-05-23 Csc Agility Platform, Inc. Cloud computing gateway, cloud computing hypervisor, and methods for implementing same
US20120185913A1 (en) * 2008-06-19 2012-07-19 Servicemesh, Inc. System and method for a cloud computing abstraction layer with security zone facilities
US20100125624A1 (en) * 2008-11-19 2010-05-20 International Business Machines Corporation Coupling state aware systems
US20110231899A1 (en) * 2009-06-19 2011-09-22 ServiceMesh Corporation System and method for a cloud computing abstraction layer
US20130041707A1 (en) * 2009-11-05 2013-02-14 Subhra Bose Apparatuses, methods and systems for an incremental container user interface workflow optimizer
US20110145326A1 (en) * 2009-12-11 2011-06-16 Electronics and Telecommunication Research Institute WORKFLOW CUSTOMIZATION METHOD IN SaaS ENVIRONMENT
US20110179162A1 (en) * 2010-01-15 2011-07-21 Mayo Mark G Managing Workloads and Hardware Resources in a Cloud Resource
US20110238458A1 (en) * 2010-03-24 2011-09-29 International Business Machines Corporation Dynamically optimized distributed cloud computing-based business process management (bpm) system
US20130024567A1 (en) * 2010-03-31 2013-01-24 British Telecommunications Public Limited Company Network monitor
US20110296391A1 (en) * 2010-05-28 2011-12-01 Albrecht Gass Systems and Methods for Dynamically Replacing Code Objects Via Conditional Pattern Templates
US20120030689A1 (en) * 2010-07-29 2012-02-02 Oracle International Corporation Business application integration adapters management system
US20120158821A1 (en) * 2010-12-15 2012-06-21 Sap Ag Service delivery framework
US20120246287A1 (en) * 2011-02-04 2012-09-27 Opnet Technologies, Inc. Correlating input and output requests between client and server components in a multi-tier application
US20160072727A1 (en) * 2011-03-08 2016-03-10 Rackspace Us, Inc. Pluggable Allocation in a Cloud Computing System
US20130232497A1 (en) * 2012-03-02 2013-09-05 Vmware, Inc. Execution of a distributed deployment plan for a multi-tier application in a cloud infrastructure
US20130232480A1 (en) * 2012-03-02 2013-09-05 Vmware, Inc. Single, logical, multi-tier application blueprint used for deployment and management of multiple physical applications in a cloud environment
US20130232498A1 (en) * 2012-03-02 2013-09-05 Vmware, Inc. System to generate a deployment plan for a cloud infrastructure according to logical, multi-tier application blueprint
US20130291121A1 (en) * 2012-04-26 2013-10-31 Vlad Mircea Iovanov Cloud Abstraction
US20130312012A1 (en) * 2012-05-17 2013-11-21 Go Daddy Operating Company, Llc. Updating and Consolidating Events in Computer Systems
US20140006581A1 (en) * 2012-07-02 2014-01-02 Vmware, Inc. Multiple-cloud-computing-facility aggregation
US20140006580A1 (en) * 2012-07-02 2014-01-02 Vmware, Inc. Multi-tenant-cloud-aggregation and application-support system
US20140006482A1 (en) * 2012-07-02 2014-01-02 Vmware, Inc. Method and system for providing inter-cloud services
US20140059226A1 (en) * 2012-08-21 2014-02-27 Rackspace Us, Inc. Multi-Level Cloud Computing System
US20140075034A1 (en) * 2012-09-07 2014-03-13 Oracle International Corporation Customizable model for throttling and prioritizing orders in a cloud environment
US20140082131A1 (en) * 2012-09-14 2014-03-20 Ca, Inc. Automatically configured management service payloads for cloud it services delivery
US20140082156A1 (en) * 2012-09-14 2014-03-20 Ca, Inc. Multi-redundant switchable process pooling for cloud it services delivery
US20140108645A1 (en) * 2012-10-15 2014-04-17 Oracle International Corporation System and method for supporting a selection service in a server environment
US20140109115A1 (en) * 2012-10-16 2014-04-17 Daryl Low Hybrid applications
US20140108665A1 (en) * 2012-10-16 2014-04-17 Citrix Systems, Inc. Systems and methods for bridging between public and private clouds through multilevel api integration
US20140129698A1 (en) * 2012-11-05 2014-05-08 Red Hat, Inc. Method and system for event notification
US20140136680A1 (en) * 2012-11-09 2014-05-15 Citrix Systems, Inc. Systems and methods for appflow for datastream

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10031824B2 (en) * 2014-03-17 2018-07-24 Renesas Electronics Corporation Self-diagnosis device and self-diagnosis method
US9813486B2 (en) * 2014-04-04 2017-11-07 Ca, Inc. Assessment of cloud hosting suitability for multiple applications
US20150288575A1 (en) * 2014-04-04 2015-10-08 Ca, Inc. Application-specific assessment of hosting suitability of multiple clouds
US20150288579A1 (en) * 2014-04-04 2015-10-08 Ca, Inc. Assessment of hosting suitability of multiple applications in a cloud
US20150288577A1 (en) * 2014-04-04 2015-10-08 Ca, Inc. Assessment of service level agreement compliance
US20150288746A1 (en) * 2014-04-04 2015-10-08 Ca, Inc. Assessment of cloud hosting suitability for multiple applications
US9800652B2 (en) * 2014-04-04 2017-10-24 Ca, Inc. Definition of a multi-node synthetic application for assessment of cloud-hosting suitability
US20150288614A1 (en) * 2014-04-04 2015-10-08 Ca, Inc. Definition of a multi-node synthetic application for assessment of cloud-hosting suitability
US9781194B2 (en) * 2014-04-04 2017-10-03 Ca, Inc. Application-specific assessment of hosting suitability of multiple clouds
US20150288743A1 (en) * 2014-04-04 2015-10-08 Allan D. Clarke Application-specific assessment of cloud hosting suitability
US9813487B2 (en) * 2014-04-04 2017-11-07 Ca, Inc. Assessment of service level agreement compliance
US9906585B2 (en) * 2014-04-04 2018-02-27 Ca, Inc. Assessment of hosting suitability of multiple applications in a cloud
US9800651B2 (en) * 2014-04-04 2017-10-24 Ca, Inc. Application-specific assessment of cloud hosting suitability
US11232497B2 (en) 2014-11-03 2022-01-25 Hewlett Packard Enterprise Development Lp Fulfillment of cloud service using marketplace system
US10296952B2 (en) 2014-11-03 2019-05-21 Hewlett Packard Enterprise Development Lp Fulfillment of cloud service using marketplace system
US20160198003A1 (en) * 2015-01-02 2016-07-07 Siegfried Luft Architecture and method for sharing dedicated public cloud connectivity
US20160241596A1 (en) * 2015-02-16 2016-08-18 International Business Machines Corporation Enabling an on-premises resource to be exposed to a public cloud application securely and seamlessly
US10038721B2 (en) * 2015-02-16 2018-07-31 International Business Machines Corporation Enabling an on-premises resource to be exposed to a public cloud application securely and seamlessly
US10044756B2 (en) * 2015-02-16 2018-08-07 International Business Machines Corporation Enabling an on-premises resource to be exposed to a public cloud application securely and seamlessly
US20160241633A1 (en) * 2015-02-16 2016-08-18 International Business Machines Corporation Enabling an on-premises resource to be exposed to a public cloud application securely and seamlessly
US20160248836A1 (en) * 2015-02-20 2016-08-25 International Business Machines Corporation Scalable self-healing architecture for client-server operations in transient connectivity conditions
US10609155B2 (en) * 2015-02-20 2020-03-31 International Business Machines Corporation Scalable self-healing architecture for client-server operations in transient connectivity conditions
US9736219B2 (en) 2015-06-26 2017-08-15 Bank Of America Corporation Managing open shares in an enterprise computing environment
US20160381027A1 (en) * 2015-06-29 2016-12-29 Location Sentry Corp System and method for detecting and reporting surreptitious usage
CN106331024A (en) * 2015-06-30 2017-01-11 中兴通讯股份有限公司 Method and device for accessing cloud data
WO2017000616A1 (en) * 2015-06-30 2017-01-05 中兴通讯股份有限公司 Method and device for accessing cloud data, and storage medium
US10033604B2 (en) 2015-08-05 2018-07-24 Suse Llc Providing compliance/monitoring service based on content of a service controller
US10623276B2 (en) 2015-12-29 2020-04-14 International Business Machines Corporation Monitoring and management of software as a service in micro cloud environments
US10412168B2 (en) * 2016-02-17 2019-09-10 Latticework, Inc. Implementing a storage system using a personal user device and a data distribution device
US10893104B2 (en) 2016-02-17 2021-01-12 Latticework, Inc. Implementing a storage system using a personal user device and a data distribution device
US10855625B1 (en) * 2016-05-11 2020-12-01 Workato, Inc. Intelligent, adaptable, and trainable bot that orchestrates automation and workflows across multiple applications
US11368415B2 (en) * 2016-05-11 2022-06-21 Workato, Inc. Intelligent, adaptable, and trainable bot that orchestrates automation and workflows across multiple applications
US11467868B1 (en) * 2017-05-03 2022-10-11 Amazon Technologies, Inc. Service relationship orchestration service
CN107992552A (en) * 2017-11-28 2018-05-04 南京莱斯信息技术股份有限公司 A kind of data interchange platform and method for interchanging data
US10516601B2 (en) * 2018-01-19 2019-12-24 Citrix Systems, Inc. Method for prioritization of internet traffic by finding appropriate internet exit points
US11349751B2 (en) 2018-01-19 2022-05-31 Citrix Systems, Inc. Method for prioritization of internet traffic by finding appropriate internet exit points
JP2021517288A (en) * 2018-03-27 2021-07-15 オラクル・フィナンシャル・サービシーズ・ソフトウェア・リミテッドOracle Financial Services Software Limited Computerized control of the execution pipeline
JP7461883B2 (en) 2018-03-27 2024-04-04 オラクル・フィナンシャル・サービシーズ・ソフトウェア・リミテッド Computerized control of the execution pipeline
WO2019186585A1 (en) * 2018-03-27 2019-10-03 Oracle Financial Services Software Limited Computerized control of execution pipelines
US11635993B2 (en) 2018-03-27 2023-04-25 Oracle Financial Services Software Limited Computerized control of execution pipelines
CN111512287A (en) * 2018-03-27 2020-08-07 甲骨文金融服务软件有限公司 Computerized control of execution pipelines
US10831550B2 (en) 2018-03-27 2020-11-10 Oracle Financial Services Software Limited Computerized control of execution pipelines
US10803010B2 (en) * 2018-05-04 2020-10-13 EMC IP Holding Company LLC Message affinity in geographically dispersed disaster restart systems
US20190340043A1 (en) * 2018-05-04 2019-11-07 EMC IP Holding Company LLC Message affinity in geographically dispersed disaster restart systems
US10917358B1 (en) * 2019-10-31 2021-02-09 Servicenow, Inc. Cloud service for cross-cloud operations
US11398989B2 (en) 2019-10-31 2022-07-26 Servicenow, Inc. Cloud service for cross-cloud operations
US11531564B2 (en) * 2020-07-09 2022-12-20 Vmware, Inc. Executing multi-stage distributed computing operations with independent rollback workflow
US20220012091A1 (en) * 2020-07-09 2022-01-13 Vmware, Inc. System and method for executing multi-stage distributed computing operations with independent rollback workflow
CN112422582A (en) * 2020-12-02 2021-02-26 天翼电子商务有限公司 Heterogeneous protocol application access method
US20220269596A1 (en) * 2021-02-24 2022-08-25 Capital One Services, Llc Methods, systems, and media for accessing data from a settlement file
US11775420B2 (en) * 2021-02-24 2023-10-03 Capital One Services, Llc Methods, systems, and media for accessing data from a settlement file
CN114006883A (en) * 2021-10-15 2022-02-01 南京三眼精灵信息技术有限公司 Cross-network data penetration interaction method, device, equipment and storage medium
WO2023144572A1 (en) * 2022-01-27 2023-08-03 Pittway Sarl Systems configured with a network communications architecture for electronic messaging and methods of use thereof

Also Published As

Publication number Publication date
US9311171B1 (en) 2016-04-12
US9229795B2 (en) 2016-01-05
US20150160989A1 (en) 2015-06-11
US20150161681A1 (en) 2015-06-11
US20160092283A1 (en) 2016-03-31
US11126481B2 (en) 2021-09-21

Similar Documents

Publication Publication Date Title
US20150163179A1 (en) Execution of a workflow that involves applications or services of data centers
US11050848B2 (en) Automatically and remotely on-board services delivery platform computing nodes
EP3455728B1 (en) Orchestrator for a virtual network platform as a service (vnpaas)
Petcu et al. Experiences in building a mOSAIC of clouds
US10044756B2 (en) Enabling an on-premises resource to be exposed to a public cloud application securely and seamlessly
JP6326417B2 (en) Multiple customer in-cloud identity management system based on LDAP
US9817657B2 (en) Integrated software development and deployment architecture and high availability client-server systems generated using the architecture
JP2018512084A (en) Highly scalable and fault-tolerant remote access architecture and how to connect to it
EP2630576B1 (en) Goal state communication in computer clusters
Strauch et al. ESBMT: A multi-tenant aware enterprise service bus
Seo et al. Cloud computing for ubiquitous computing on M2M and IoT environment mobile application
Zhou et al. CloudsStorm: A framework for seamlessly programming and controlling virtual infrastructure functions during the DevOps lifecycle of cloud applications
del Castillo et al. Openstack federation in experimentation multi-cloud testbeds
Sharma et al. Getting Started with Istio Service Mesh: Manage Microservices in Kubernetes
Sandru et al. Building an open-source platform-as-a-service with intelligent management of multiple cloud resources
EP3739452B1 (en) Enterprise messaging using a virtual message broker
US20130275489A1 (en) Integration of web services with a clustered actor based model
Kouki et al. RightCapacity: SLA-driven Cross-Layer Cloud Elasticity Management.
Rocha et al. CNS-AOM: design, implementation and integration of an architecture for orchestration and management of cloud-network slices
Petcu et al. Cloud resource orchestration within an open‐source component‐based platform as a service
Sharma et al. Getting Started with Istio Service Mesh
Cannarella Multi-Tenant federated approach to resources brokering between Kubernetes clusters
Pham et al. Flexible deployment of component-based distributed applications on the Cloud and beyond
Nair et al. Agent with Rule Engine: The" Glue'for Web Service Oriented Computing Applied to Network Management
Toolchain et al. D7. 3-INITIAL BULK DEPLOYMENT TOOL

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAES, STEPHANE HERMAN;KIM, WOONG JOSEPH;DESAI, ANKIT ASHOK;AND OTHERS;SIGNING DATES FROM 20150106 TO 20150218;REEL/FRAME:034992/0137

AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001

Effective date: 20151027

AS Assignment

Owner name: ENTIT SOFTWARE LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP;REEL/FRAME:042746/0130

Effective date: 20170405

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., DELAWARE

Free format text: SECURITY INTEREST;ASSIGNORS:ENTIT SOFTWARE LLC;ARCSIGHT, LLC;REEL/FRAME:044183/0577

Effective date: 20170901

Owner name: JPMORGAN CHASE BANK, N.A., DELAWARE

Free format text: SECURITY INTEREST;ASSIGNORS:ATTACHMATE CORPORATION;BORLAND SOFTWARE CORPORATION;NETIQ CORPORATION;AND OTHERS;REEL/FRAME:044183/0718

Effective date: 20170901

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

AS Assignment

Owner name: MICRO FOCUS LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:ENTIT SOFTWARE LLC;REEL/FRAME:050004/0001

Effective date: 20190523

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC), CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0577;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:063560/0001

Effective date: 20230131

Owner name: NETIQ CORPORATION, WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.), WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: ATTACHMATE CORPORATION, WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: SERENA SOFTWARE, INC, CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS (US), INC., MARYLAND

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: BORLAND SOFTWARE CORPORATION, MARYLAND

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC), CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131