US20220121545A1 - System and Method for Continuous Low-Overhead Monitoring of Distributed Applications Running on a Cluster of Data Processing Nodes - Google Patents


Info

Publication number
US20220121545A1
Authority
US
United States
Prior art keywords
application
data processing
data
processes
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/412,832
Inventor
Niall Joseph Dalton
Trevor Robinson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Silicon Valley Bank Inc
III Holdings 2 LLC
Original Assignee
III Holdings 2 LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by III Holdings 2 LLC filed Critical III Holdings 2 LLC
Priority to US17/412,832
Assigned to III HOLDINGS 2, LLC reassignment III HOLDINGS 2, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SILICON VALLEY BANK
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CALXEDA, INC.
Assigned to CALXEDA, INC. reassignment CALXEDA, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DALTON, NIALL JOSEPH, ROBINSON, TREVOR
Publication of US20220121545A1
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3089 Monitoring arrangements determined by the means or processing involved in sensing the monitored data, e.g. interfaces, connectors, sensors, probes, agents
    • G06F11/3096 Monitoring arrangements determined by the means or processing involved in sensing the monitored data, e.g. interfaces, connectors, sensors, probes, agents wherein the means or processing minimize the use of computing system or of computing system component resources, e.g. non-intrusive monitoring which minimizes the probe effect: sniffing, intercepting, indirectly deriving the monitored data from other directly available data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3024 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a central processing unit [CPU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3065 Monitoring arrangements determined by the means or processing involved in reporting the monitored data
    • G06F11/3072 Monitoring arrangements determined by the means or processing involved in reporting the monitored data where the reporting involves data filtering, e.g. pattern matching, time or event triggered, adaptive or policy-based reporting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/76 Architectures of general purpose stored program computers
    • G06F15/78 Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807 System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/76 Architectures of general purpose stored program computers
    • G06F15/78 Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807 System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F15/7814 Specially adapted for real time processing, e.g. comprising hardware timers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 Interprocessor communication
    • G06F15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17306 Intercommunication techniques
    • G06F15/17331 Distributed shared memory [DSM], e.g. remote direct memory access [RDMA]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/509 Offload
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F2213/0038 System on Chip
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • Embodiments of the present invention relate to activity tracing and resource consumption monitoring in data processing systems. More specifically, embodiments of the present invention relate to systems and methods for continuous low-overhead monitoring of distributed applications running within a cluster of data processing nodes.
  • Typical distributed application monitoring generally involves two or more independent mechanisms.
  • a first example of such a mechanism is applications that are instrumented with tracing calls to an event logging application programming interface (API).
  • a second example of such a mechanism is resource monitoring that is performed by a program or process running on each computing node and which is invoked to perform an intended task. Such a program or process is commonly referred to as a daemon.
  • the logging API may store event data in multiple locations.
  • the most common locations are a) per-process, plain text log files stored on a local disk drive and b) an operating system event log (Unix syslogd or Windows Event Log).
  • most events are disabled (or only enabled for statistical sampling) by default.
  • an operator may enable various subsets of events temporarily. The subsets are usually selected by specifying a severity threshold (e.g. error, warning, info, debug1, debug2) and/or a set of software modules. Often, enabling or disabling log messages requires restarting the application.
  • the daemon (i.e., a resource monitor) can be configured to monitor overall hardware utilization (e.g. CPUs, disk drives, and network) and/or per-process activity. Metrics are gathered at a fixed interval and then stored on disk or sent via the network to an aggregating daemon. Because the resource monitor runs on the node being monitored, some amount of resource utilization overhead is incurred by the daemon itself. A visualization application may then produce charts using the aggregated data. Generally, the resource monitor has no visibility into the specific operations being performed by the monitored applications, and therefore cannot correlate resource utilization with specific application operations.
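  • For illustration only, the conventional in-band daemon described above reduces to a simple fixed-interval sampling loop. The following minimal C++ sketch assumes hypothetical helpers sample_cpu_utilization() and send_to_aggregator(), which stand in for platform-specific metric collection and network transport:

        #include <chrono>
        #include <thread>

        // Hypothetical stand-ins for platform-specific metric collection and
        // network transport; the names are illustrative, not a real API.
        static double sample_cpu_utilization() { return 0.0; /* stub */ }
        static void send_to_aggregator(double /*cpu_util*/) { /* stub */ }

        int main() {
            const auto interval = std::chrono::seconds(10); // fixed gathering interval
            for (;;) {
                auto next = std::chrono::steady_clock::now() + interval;
                // The daemon runs on the very node it monitors, so even this
                // small amount of work is in-band overhead charged to the node.
                send_to_aggregator(sample_cpu_utilization());
                std::this_thread::sleep_until(next);
            }
        }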
  • Embodiments of the present invention provide an improvement over known approaches for monitoring of and taking action on observations associated with distributed applications.
  • Application event reporting and application resource monitoring are unified in a manner that significantly reduces storage and aggregation overhead.
  • embodiments of the present invention can employ hardware and/or software support that reduces storage and aggregation overhead.
  • embodiments of the present invention can also provide for decentralized filtering, statistical analysis, and derived data streaming.
  • embodiments of the present invention are securely implemented (e.g., for use solely under the control of an operator) and can use a separate security domain for network traffic.
  • embodiments of the present invention offer a number of advantageous and beneficial functionalities.
  • One such functionality is a remotely observable, controllable, and programmable hardware and activity resource monitor that runs out of band on separate dedicated hardware, observing, filtering, aggregating, and reporting operator- or programmer-defined metrics or events.
  • Another such functionality is that metrics and events generated by the resource monitor or by applications (either explicitly or by usage of instrumented APIs) are sent to a messaging endpoint such as an administrative console or to a remote memory for diagnostic or profiling use.
  • Another such functionality is that all events are time-stamped with extremely low overhead using a timestamp register that is automatically synchronized across the cluster using dedicated hardware.
  • Still another such functionality is an operator having the ability to change a subset of reported events or their destination using an administrative tool.
  • a method of monitoring application-driven activity in an application central processing unit of a data processing node comprises a plurality of operations.
  • An application monitoring services module of a data processing node performs an operation for receiving at least one resource monitor command.
  • a management processor unit of the data processing node comprises the application monitoring services module and is coupled to an application central processing unit of the data processing node.
  • the application monitoring services module performs an operation for configuring an assessment protocol thereof dependent upon a resource assessment specification provided in the at least one monitor command.
  • in accordance with the assessment protocol, the application monitoring services module performs an operation for assessing activity of the application central processing unit that arises from execution of an application running thereon and for outputting information derived from the activity to a recipient.
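  • As a concrete illustration of this command flow, the following C++ sketch models a resource monitor command carrying a resource assessment specification and an application monitoring services module configuring its assessment protocol from it. All type and field names here are assumptions made for illustration, not interfaces defined by this disclosure:

        #include <functional>
        #include <string>
        #include <vector>

        // Illustrative stand-in for the resource assessment specification
        // carried by a resource monitor command; field names are hypothetical.
        struct ResourceAssessmentSpec {
            std::vector<std::string> metrics;  // e.g. "cpu", "memory", "network"
            double sample_hz = 1.0;            // assessment rate
            std::string recipient;             // where derived information is sent
        };

        struct ResourceMonitorCommand {
            ResourceAssessmentSpec spec;
        };

        class AppMonitoringServices {
        public:
            // Configure the assessment protocol from the specification
            // provided in the received command.
            void on_command(const ResourceMonitorCommand& cmd) { spec_ = cmd.spec; }

            // Assess application-CPU activity per the protocol and output
            // information derived from it to the configured recipient.
            void assess_once(
                const std::function<double(const std::string&)>& read_metric,
                const std::function<void(const std::string&, double)>& output) const {
                for (const auto& m : spec_.metrics)
                    output(spec_.recipient, read_metric(m));
            }
        private:
            ResourceAssessmentSpec spec_;
        };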
  • a data processing node comprises a plurality of application central processing units each having a respective application running thereon and a management processor unit coupled to each one of the application central processing units.
  • the management processor unit comprises an application monitoring services module including a resource assessor and an event reporter.
  • the management processor unit comprises dedicated system resources with respect to the application central processing units such that processes implemented by the application monitoring services module are out-of-band of application processes carried out on each one of the application central processing units.
  • the application monitoring services module is configured to selectively implement one or more processes for assessing activity of a particular one of the application central processing units that arises from execution of the respective application running thereon and is configured to selectively implement one or more processes for outputting events generated by a particular one of the application central processing units that arise from execution of the respective application running thereon.
  • a data processing system comprises a plurality of data processing nodes coupled to each other through an interconnect fabric.
  • Each one of the data processing nodes comprises an application central processing unit and a management processor unit coupled to the application central processing unit.
  • the application central processing unit of each one of the data processing nodes has an instance of a particular application running thereon.
  • the management processor unit of each one of the data processing nodes comprises an application monitoring services module.
  • the application monitoring services module of each one of the data processing nodes outputs a respective stream of time-stamped events that arise from execution of the instance of the particular application running on the application central processing unit thereof.
  • a target node, which can be one of the data processing nodes or an external node (e.g., an operator interface console), receives the respective stream of time-stamped events from each one of the data processing nodes and generates a composite stream of events from the time-stamped events of at least a portion of the respective streams thereof.
  • the composite stream of events is time-sequenced dependent upon global time-stamp information of each one of the time-stamped events.
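  • By way of example, the target node can produce the composite, time-sequenced stream with an ordinary k-way merge keyed on the global timestamp. A minimal C++ sketch, with an assumed (hypothetical) event layout:

        #include <cstddef>
        #include <cstdint>
        #include <functional>
        #include <queue>
        #include <tuple>
        #include <vector>

        struct Event {           // assumed minimal event layout
            uint64_t global_ts;  // globally synchronized timestamp
            uint32_t node_id;    // originating data processing node
        };

        // Merge per-node, already time-ordered event streams into a single
        // composite stream ordered by global timestamp (standard k-way merge).
        std::vector<Event> merge_streams(const std::vector<std::vector<Event>>& streams) {
            using Cursor = std::tuple<uint64_t, size_t, size_t>; // (ts, stream, index)
            std::priority_queue<Cursor, std::vector<Cursor>, std::greater<>> heap;
            for (size_t s = 0; s < streams.size(); ++s)
                if (!streams[s].empty()) heap.emplace(streams[s][0].global_ts, s, size_t{0});
            std::vector<Event> out;
            while (!heap.empty()) {
                auto [ts, s, i] = heap.top();
                (void)ts; // ordering key only
                heap.pop();
                out.push_back(streams[s][i]);
                if (i + 1 < streams[s].size())
                    heap.emplace(streams[s][i + 1].global_ts, s, i + 1);
            }
            return out;
        }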
  • FIG. 1 is a diagrammatic view of a data processing node configured in accordance with an embodiment of the present invention.
  • FIG. 2 is a diagrammatic view showing an arrangement of a resource monitor within the data processing node of FIG. 1 .
  • FIG. 3 is a diagrammatic view showing a management processor implementation of an event reporter within the data processing node of FIG. 1 .
  • FIG. 4 is a diagrammatic view showing an embedded library implementation of an event reporter within the data processing node of FIG. 1 .
  • FIG. 5 is a diagrammatic view showing an embodiment of a process for implementing a data recorder within the data processing node of FIG. 1 .
  • FIG. 1 shows a data processing node 1 having a system on a chip (SOC) 10 configured in accordance with an embodiment of the present invention.
  • the SOC 10 has a management subsystem 12 and an application CPU subsystem 14 coupled to the management subsystem 12 .
  • Application monitoring services 16 are implemented as one or more processes that reside in the management subsystem 12 and run on a management processor unit (MPU) 18.
  • User applications 20, which can be different applications, different instances of the same application, etc., reside in the application CPU subsystem 14 and run on a respective one or more of a plurality of application central processing units (CPUs) 22.
  • Each one of the application CPUs 22 includes one or more application processors and dedicated system resources (e.g., memory, operating system, etc.).
  • the MPU 18 includes one or more dedicated management processors and associated dedicated system resources (e.g., memory, software, utilities, status registers, UARTs, network MACs, SOC configuration information, etc.) that execute management software for providing initialization and ongoing management (e.g., both autonomic and remotely controlled) of the SOC 10.
  • application monitoring services 16 and portions of the MPU 18 utilized for carrying out processes of the application monitoring services 16 are referred to herein as an application monitoring services module.
  • the application monitoring services 16 include a resource assessor 24 and an event reporter 26 .
  • For example, a command issued to the resource assessor 24 or the event reporter 26 can include a resource assessment specification upon which an assessment protocol used by the application monitoring services module is configured.
  • in effect, the resource assessment specification includes information defining a manner in which events and activities are to be monitored and/or reported, and the assessment protocol is a framework in which the application monitoring services module applies such information in performing the monitoring and/or reporting.
  • The resource assessor 24 takes action based on observations associated with the distributed applications (e.g., as a first daemon process) and the event reporter 26 reports such events (e.g., as a second daemon process).
  • the event reporter and the resource assessor are independent, but related services.
  • the observations used by the resource assessor 24 may be events reported by the event reporter 26 . But, the observations will generally be periodic measurements gathered from a common component that both the resource assessor 24 and event reporter 26 use as a data source.
  • the resource assessor 24 and the event reporter 26 provide for an improvement over known approaches for monitoring of events associated with distributed applications and taking action on observations associated with the distributed applications.
  • Such a distributed application can be distributed across application CPUs of a plurality of data processing nodes, which can be on a common node card or a plurality of different node cards.
  • Nodes of the node cards can be interconnected by a fabric or other type of node interconnect structure. Further details of interconnecting nodes by a fabric are described in U.S. Non-Provisional patent application Ser. No. 13/705,340 filed on Apr. 23, 2012 and entitled “SYSTEM AND METHOD FOR HIGH PERFORMANCE, LOW-POWER DATA CENTER INTERCONNECT FABRIC”, which is incorporated herein in its entirety by reference.
  • the resource assessor 24 is implemented as an out-of-band management process on each data processing node of a system (e.g., a cluster of nodes including data processing node 1). This management process is out-of-band because it runs on the MPU core 18 of the management subsystem 12 and, therefore, does not consume resources of the application CPUs 22 . By transparently observing the application CPUs 22 , resource assessor 24 can notify an operator or other entity if resource consumption (e.g., CPU, memory, network, etc.) exhibits a condition that warrants such notification.
  • the resource assessor 24 is implemented in conjunction with an agent running within an operating system (OS) of an application CPU. For example, this agent may be necessary to measure application CPU utilization because the resource assessor 24 may not be able to distinguish the OS idle loop from actual work.
  • the resource assessor 24 runs within the MPU 18 on the data processing node 1 (and on all or a portion of other nodes connected thereto).
  • the resource assessor 24 is remotely accessible by an operator interface 30 (i.e., event/information target).
  • Although the target of the events is illustrated as an external operator (i.e., the operator interface 30), it may in fact be a peer node in a cluster of nodes rather than an external target.
  • some set of nodes may choose to observe the operation of one another to ensure correct operation, as the mechanism provides a generally accessible and programmable tracing feature.
  • Resource assessor commands 32 (i.e., a form of a resource monitor command) are provided from the operator interface 30 to the resource assessor 24 .
  • Resource assessor information 34 such as, for example, derived data, alerts and the like is provided from the resource assessor 24 to the operator interface 30 in response to the resource assessor commands 32 .
  • the resource assessor 24 may observe the execution and actions of user application processes each running within a respective application CPU 22 of the data processing node 1.
  • The resource operating limits, which are used by the resource assessor 24 to identify operating/behavior changes, can be operator defined or can be statistically derived from data being monitored by the resource assessor 24.
  • the operator may program the resource assessor 24 using the operator interface to histogram typical message sizes. Statistically significant deviations in such sizes are typically indicative of misbehavior of the observed processes.
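  • For instance, the message-size histogram that the operator programs into the resource assessor might look like the following C++ sketch, which also flags sizes far from the running mean as candidate misbehavior. The bucket scheme and the k-standard-deviations threshold are illustrative assumptions:

        #include <array>
        #include <cmath>
        #include <cstddef>
        #include <cstdint>

        // Running histogram of observed message sizes, with Welford's
        // online mean/variance used to flag statistical outliers.
        class MessageSizeHistogram {
        public:
            void observe(uint64_t bytes) {
                ++buckets_[bucket_for(bytes)];
                ++n_;
                double d = double(bytes) - mean_;
                mean_ += d / double(n_);
                m2_ += d * (double(bytes) - mean_);
            }
            // Sizes more than k standard deviations from the mean are treated
            // as statistically significant deviations worth reporting.
            bool is_anomalous(uint64_t bytes, double k = 4.0) const {
                if (n_ < 2) return false;
                double stddev = std::sqrt(m2_ / double(n_ - 1));
                return std::fabs(double(bytes) - mean_) > k * stddev;
            }
        private:
            static size_t bucket_for(uint64_t bytes) { // power-of-two buckets
                size_t b = 0;
                while (bytes >>= 1) ++b;
                return b;
            }
            std::array<uint64_t, 64> buckets_{};
            uint64_t n_ = 0;
            double mean_ = 0.0, m2_ = 0.0;
        };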
  • Because the resource assessor 24 is a programmable process, it can also be used to implement filtering of data, statistical analysis of the data stream to reduce the data volume, and streaming of the original or derived data to other nodes in the cluster continuously, periodically, or when anomalies are identified.
  • In this regard, in response to receiving a resource monitor command, the application monitoring services module configures an assessment protocol dependent upon a resource assessment specification provided in the command (e.g., to histogram typical message sizes) and, in accordance with the assessment protocol, assesses activity of the application central processing unit(s) that arises from execution of a user application running thereon and outputs information derived from the activity (e.g., histogram(s)) to a target recipient (e.g., the operator interface 30).
  • the resource assessor 24 offers the following capabilities and functionalities.
  • Application execution and use of machine resources can be directly observed in a manner requiring no changes to the user operating system or application. No cooperation or knowledge of the user application is required.
  • Fine-grained continuous on-node monitoring is provided using CPU cores and hardware peripherals of a management subsystem (i.e., resources that are isolated from an application CPU subsystem connected to the management subsystem), which minimizes overhead on the user application and exposes micro-bursting behavior, which is otherwise difficult to observe.
  • Programmable computations are performed on collected data, allowing the operator to push monitoring code toward each node so that resource assessment scales with the cluster size.
  • the application monitoring services 16 include an event reporter 26 .
  • the event reporter 26 executes on MPU 18 of a plurality of data processing nodes (i.e., node 1 to node N, which can be coupled to each other via fabric 59 ) in a manner that is isolated from their application CPUs 22 .
  • Resource assessor commands 42 (i.e., a form of a resource monitor command) are provided from the operator interface 30 to the nodes 1 to N for enabling events to be traced and reported on.
  • Upon receipt of the commands, the event reporter 26 produces resource event information 40 in the form of a stream of time-stamped events that is provided from the respective data processing node to the operator interface 30.
  • the stream of time-stamped events (i.e., trace data) is provided to the operator interface console 30, a remote memory location, or both (i.e., target nodes) until a STOP command is received.
  • An operator can use commands to enable all tracing events to be produced, select some subset, or provide expressions used to evaluate whether any given event should be produced. This mechanism is independent of the user operating system or application and may be used to trace system provided event sources.
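  • An operator-supplied expression of this kind amounts to a per-event predicate evaluated before an event is produced. A hedged C++ sketch (the event shape and the predicate representation are assumptions):

        #include <cstdint>
        #include <functional>
        #include <utility>

        struct TraceEvent {     // assumed minimal trace-event shape
            uint32_t source_id; // event source / software module
            uint32_t severity;  // higher value = more severe
            uint64_t payload;
        };

        // Tracing can be enabled for all events, disabled entirely, or gated
        // by an operator-provided expression evaluated per event.
        class EventGate {
        public:
            void enable_all()  { mode_ = Mode::All; }
            void disable_all() { mode_ = Mode::None; }
            void set_predicate(std::function<bool(const TraceEvent&)> p) {
                predicate_ = std::move(p);
                mode_ = Mode::Predicate;
            }
            bool should_emit(const TraceEvent& e) const {
                switch (mode_) {
                    case Mode::All:       return true;
                    case Mode::None:      return false;
                    case Mode::Predicate: return predicate_ && predicate_(e);
                }
                return false;
            }
        private:
            enum class Mode { None, All, Predicate } mode_ = Mode::None;
            std::function<bool(const TraceEvent&)> predicate_;
        };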
  • the event reporter 26 uses resources of the MPU 18 to manage trace data collection and can also use remote memory (via coarse-grained, large-block RDMA or fine-grained, cache line-sized access) and shared memory ring buffers for collection and aggregation.
  • the event reporter can be configured to immediately transmit events to a remote node such that they are retained and available even if a source node of the events becomes inaccessible (i.e., a data recorder).
  • the event reporter 26 can leverage and/or be built upon functionalities such as, for example, shared ring buffers, remote memory, and/or node-to-node time synchronization. Further details of implementing the node-to-node time synchronization functionality are described in U.S. Non-Provisional patent application Ser. No. 13/899,751 filed on May 22, 2013 and entitled “TIME SYNCHRONIZATION BETWEEN NODES OF A SWITCHED INTERCONNECT FABRIC”, which is incorporated herein in its entirety by reference. Further details of implementing shared ring buffer functionality are described in U.S. Non-Provisional patent application Ser. No. 13/959,428 filed on Aug. 5, 2013 and entitled “REMOTE MEMORY RING BUFFERS IN A CLUSTER OF DATA PROCESSING NODES”, which is incorporated herein in its entirety by reference.
  • In FIG. 4, the same flow of commands and data streams as shown above in reference to FIG. 3 is shown, except with the producer of resource event information 40 being node services library (NS Lib) code 50 of the application CPU subsystem 14 as opposed to the MPU 18 of the management subsystem 12.
  • Applications can use a suitable application programming interface (API) to emit tracing events that are aggregated in remote memory or sent to an aggregating node via a messaging API (feature provided by the NS Lib code 50 ).
  • the messaging API may also be configured to emit its own tracing events.
  • An API that includes the ability to emit tracing events containing arbitrary diagnostic information can be provided for languages such as, for example, C, C++, etc.
  • These events reported by the event reporter 26 are time-stamped via a suitable time-stamp register that is synchronized across all nodes in a cluster (e.g., node 1 to node N).
  • This synchronization of time-stamping across all nodes in a cluster results in the events reported by the event reporter 26 being time stamped in accordance with time information that is global with respect to all of the nodes (i.e., global timestamp information).
  • the time stamp applied to each one of the events can be based upon a global time (t (G)) to which a local time (t (L)) of each node in a cluster of nodes is synchronized.
  • This global time stamping enables a recipient of the events to correlate the events (e.g., generate a time-sequenced stream of events therefrom) and to analyze the events generated on a multitude of nodes in a meaningful fashion. It has extremely low overhead, and so it is possible to permanently or selectively enable it across an entire cluster, unlike the existing state of the art.
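  • Conceptually, each event's global timestamp is the node's local reading plus a hardware-maintained offset, i.e. t(G) = t(L) + offset. The C++ sketch below models this; the register read is emulated with a steady clock, since the real facility is a dedicated, hardware-synchronized timestamp register:

        #include <atomic>
        #include <chrono>
        #include <cstdint>

        // Stand-in for the dedicated time-stamp register; on the actual SOC
        // this would be a single low-overhead register read.
        static uint64_t read_local_timestamp_register() {
            using namespace std::chrono;
            return uint64_t(steady_clock::now().time_since_epoch().count());
        }

        // Offset to the cluster-global timebase, maintained by the fabric's
        // time-synchronization hardware (modeled here as an atomic variable).
        static std::atomic<int64_t> g_sync_offset_ticks{0};

        // t(G) = t(L) + offset: one read and one add, cheap enough that event
        // time-stamping can remain permanently enabled across the cluster.
        inline uint64_t global_timestamp() {
            return read_local_timestamp_register() +
                   uint64_t(g_sync_offset_ticks.load(std::memory_order_relaxed));
        }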
  • the operator or programmer may dynamically inject filtering logic to implement filtering or other analysis prior to event generation. This may be used to minimize the amount of events generated by identifying the most important or anomalous ones.
  • filtering is used to limit the volume of trace data to avoid overwhelming the system or the administrators.
  • An example of filtering is to associate a severity, such as error, warning, or informational, with each trace event and to retain only events above a specified threshold. Also, different subsystems can be assigned different severity thresholds.
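  • A per-subsystem severity threshold reduces to a small lookup applied before each event is retained, as in this C++ sketch (the severity ordering and the string-keyed subsystem map are illustrative assumptions):

        #include <cstdint>
        #include <string>
        #include <unordered_map>

        enum class Severity : uint8_t { Debug2, Debug1, Info, Warning, Error };

        // Retain an event only if its severity meets the threshold configured
        // for its subsystem, falling back to a global default threshold.
        class SeverityFilter {
        public:
            void set_default_threshold(Severity s) { default_ = s; }
            void set_subsystem_threshold(const std::string& name, Severity s) {
                per_subsystem_[name] = s;
            }
            bool retain(const std::string& subsystem, Severity s) const {
                auto it = per_subsystem_.find(subsystem);
                Severity threshold = (it == per_subsystem_.end()) ? default_ : it->second;
                return s >= threshold;
            }
        private:
            Severity default_ = Severity::Warning;
            std::unordered_map<std::string, Severity> per_subsystem_;
        };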
  • As disclosed above, the event reporter 26 can be configured to transmit events to a remote node immediately such that they are retained and available even if the source node becomes inaccessible. Accordingly, if a monitored machine crashes, the monitoring data up to the point of the crash is safely stored elsewhere.
  • Such an implementation of the event reporter 26 is referred to herein as a data recorder.
  • the underlying functionality of the data recorder involves using hardware mechanisms such as, for example, remote memory and/or shared ring buffers to gather monitoring data in real time with low overhead. Because these remote memories and ring buffers are hardware managed, the overhead for their use by the application is very low, allowing events to be generated continuously if desired. Furthermore, preferred implementations of remote memory and shared ring buffers operate in a non-blocking mode such that an application initiates a remote memory transfer without waiting for the transaction to complete. For example, use of node fabric hardware to perform the transfer in the background without application CPU intervention ensures that forward progress of the application is not blocked.
  • the buffers of events may then be observed continuously or on-demand by the operator or programmer to debug, profile, or investigate the execution of the system, including processes running on many different nodes targeting the same event buffer.
  • As disclosed above, further details of implementing shared ring buffer functionality are described in U.S. Non-Provisional patent application Ser. No. 13/959,428 filed on Aug. 5, 2013 and entitled “REMOTE MEMORY RING BUFFERS IN A CLUSTER OF DATA PROCESSING NODES”, which is incorporated herein in its entirety by reference, and further details of implementing remote memory functionality are described in U.S. Non-Provisional patent application Ser. No. 13/935,108 filed Jul. 3, 2013 and entitled “IMPLEMENTING REMOTE TRANSACTION FUNCTIONALITIES BETWEEN DATA PROCESSING NODES OF A SWITCHED INTERCONNECT FABRIC”, which is incorporated herein in its entirety by reference.
  • the data recorder provides a novel way to observe and investigate the operation of the cluster with data collected before, during, and after normal or anomalous execution.
  • the remote memory or ring buffers may be sized appropriately to capture the last N events, or the typical number of events in a certain period of time.
  • the events may not be spoofed or observed by users of the cluster. This implements an irrevocable log of actions by the processes being traced. If the events captured by the data recorder are emitted in a separate security domain, then for safety or security reasons, event data tracing may be monitored by a process/person without permission to interact with the application itself.
  • Such a separate security domain can be implemented, for example, at a particular node of a cluster of data processing nodes or at a node (i.e., apparatus) external to the cluster of data processing nodes (e.g., an operator interface console).
  • A plurality of nodes 1-n within a cluster, which can be connected to each other via an interconnect fabric 59, are streaming events to a remote target 60 (e.g., a remote memory or remote memory ring buffer) within the cluster.
  • a command and control process at an operator interface 30, which may be internal or external to the cluster, issues commands 40 to the nodes 1-n.
  • One or more of the nodes (e.g., node 1) may generate events in a system context (e.g., via the event reporter 26), while one or more other nodes (e.g., node n) may generate events in an application level context (e.g., via the node service library NS Lib 50).
  • the same remote memory or ring buffer may be the target of the events, even though they are being generated in different contexts (i.e., system context vs. user context).
  • While the decision to enable tracing may be made at the operator interface 30 (e.g., by a control process thereof), it could also be made by either the event reporter 26 (i.e., a management processor process) or the node service library NS Lib 50 (i.e., user application code).
  • the events may also be data generated by the user applications.
  • Suitable remote memory ring buffers can provide low-overhead, non-blocking transmission of tracing events to a remote aggregation node.
  • an application tracing library initially writes events to a per-thread circular queue in local memory. The events in the queue are consumed by a separate (asynchronous) thread or process that merges the events in chronological order based on their timestamps.
  • Because each queue has a single producer (e.g., guaranteed by being per-thread) and a single consumer (e.g., a constraint enforced by the software), it can utilize low-overhead, wait-free synchronization between the producer and the consumer. Wait-freedom is a term of art that describes the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with starvation-freedom for all threads of execution.
  • Single-producer, single-consumer synchronized queues are a well-known, simple data structure that can be implemented without special atomic primitives.
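  • A textbook realization needs only two atomic indices with acquire/release ordering; a C++ sketch of such a wait-free single-producer, single-consumer ring follows (capacity handling and element type are illustrative):

        #include <atomic>
        #include <cstddef>

        // Wait-free single-producer/single-consumer ring buffer: the producer
        // (one application thread) writes only head_, the consumer (the merge
        // thread) writes only tail_, so neither ever blocks the other.
        template <typename T, size_t N> // N must be a power of two
        class SpscQueue {
        public:
            bool push(const T& v) { // producer side
                size_t h = head_.load(std::memory_order_relaxed);
                if (h - tail_.load(std::memory_order_acquire) == N)
                    return false;   // full: caller may drop the event or retry
                buf_[h & (N - 1)] = v;
                head_.store(h + 1, std::memory_order_release);
                return true;
            }
            bool pop(T& out) {      // consumer side
                size_t t = tail_.load(std::memory_order_relaxed);
                if (t == head_.load(std::memory_order_acquire))
                    return false;   // empty
                out = buf_[t & (N - 1)];
                tail_.store(t + 1, std::memory_order_release);
                return true;
            }
        private:
            T buf_[N];
            std::atomic<size_t> head_{0}, tail_{0};
        };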
  • a data processing system configured in accordance with the present invention can provide numerous types of event trace (i.e., event reporter) buffer consumers.
  • Each of these consumers can run in numerous places such as, for example, one or more applications of a node generating the events, an event reporter running on one or more application cores of one or more nodes, or the event reporter running on the management processing unit of one or more nodes.
  • Management interfaces in each event reporter process provide for dynamic configuration of consumers.
  • One example of such a trace buffer consumer is a process merger that runs in a background thread, merging per-thread buffers into a per-process buffer, annotating each event with its thread of origin.
  • the destination buffer can be in one or more remote memories, providing fault tolerance and redundancy/fan-out.
  • Another example of such a trace buffer consumer is a system merger that runs in a separate process, potentially on one of the dedicated management cores, merging per-thread or per-process buffers into a system-wide buffer and annotating each event with its thread and process of origin. This consumer requires the source buffers to be in shared local memory. As with the process merger, the destination buffer can be in remote memory.
  • Another example of such a trace buffer consumer is a formatter that transforms binary trace events stored in a thread, process, or system buffer into human-readable text.
  • Another example of such a trace buffer consumer is a message sender that sends buffer contents or formatter output to one or more messaging endpoints using suitable messaging functionality (i.e., node messaging functionality), which automatically chooses the fastest transport available, such as remote direct memory access (RDMA).
  • Another example of such a trace buffer consumer is a message receiver that receives buffer contents or formatter output from a message sender.
  • Another example of such a trace buffer consumer is a disk writer that writes buffer contents or formatter output to non-volatile storage.
  • Still another example of such a trace buffer consumer is a shared ring buffer writer that writes buffer contents or formatter output to a remote memory ring buffer. Shared ring buffers provide hardware-accelerated aggregation/fan-in from multiple trace sources.
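  • These consumers share a common shape, suggested by the hypothetical C++ interface below; it is a sketch of how the mergers, formatter, senders, and writers described above could be expressed uniformly, not an interface defined by this disclosure:

        #include <cstddef>
        #include <cstdint>

        // Hypothetical common interface for the trace buffer consumers
        // described above (process/system mergers, formatter, message
        // sender/receiver, disk writer, shared ring buffer writer).
        struct TraceBufferConsumer {
            virtual ~TraceBufferConsumer() = default;
            // Consume a batch of raw trace-event bytes from a source buffer;
            // concrete consumers merge, format, transmit, or persist them.
            virtual void consume(const uint8_t* events, size_t len) = 0;
        };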
  • As disclosed above, a resource assessor and an event reporter configured in accordance with embodiments of the present invention can be implemented on a data processing node.
  • a preferred implementation is on a data processing node comprising a system on a chip (SOC).
  • a system on a chip refers to integration of one or more processors, one or more memory controllers, and one or more I/O controllers onto a single silicon chip.
  • a SOC configured in accordance with the present invention can be specifically implemented in a manner to provide functionalities definitive of a server.
  • a SOC in accordance with the present invention can be referred to as a server on a chip.
  • a server on a chip configured in accordance with the present invention can include a server memory subsystem, server I/O controllers, and a server node interconnect.
  • this server on a chip will include a multi-core CPU, one or more memory controllers that support ECC, and one or more volume server I/O controllers that minimally include Ethernet and SATA controllers.
  • the server on a chip can be structured as a plurality of interconnected subsystems, including a CPU subsystem, a peripherals subsystem, a system interconnect subsystem, and a management subsystem.
  • An exemplary embodiment of a server on a chip that is configured in accordance with the present invention is the ECX-1000 Series server on a chip offered by Calxeda, Inc.
  • the ECX-1000 Series server on a chip includes a SOC architecture that provides reduced power consumption and reduced space requirements.
  • the ECX-1000 Series server on a chip is well suited for computing environments such as, for example, scalable analytics, webserving, media streaming, infrastructure, cloud computing and cloud storage.
  • a node card configured in accordance with the present invention can include a node card substrate having a plurality of the ECX-1000 Series server on a chip instances (i.e., each a server on a chip unit) mounted on the node card substrate and connected to electrical circuitry of the node card substrate.
  • An electrical connector of the node card enables communication of signals between the node card and one or more other instances of the node card.
  • the ECX-1000 Series server on a chip includes a CPU subsystem (i.e., a processor complex) that uses a plurality of ARM brand processing cores (e.g., four ARM Cortex brand processing cores), which offer the ability to seamlessly turn on-and-off up to several times per second.
  • the CPU subsystem is implemented with server-class workloads in mind and comes with an ECC L2 cache to enhance performance and reduce energy consumption by reducing cache misses.
  • Complementing the ARM brand processing cores is a host of high-performance server-class I/O controllers via standard interfaces such as SATA and PCI Express interfaces. Table 3 below shows technical specification for a specific example of the ECX-1000 Series server on a chip.
  • Table 3 (partial; only the following entries are recoverable from the source text): ... network proxy support to maintain network presence even with node powered off. Management Engine: 1. Separate embedded processor dedicated for systems management; 2. Advanced power management with dynamic power capping; 3. Dedicated Ethernet MAC for out-of-band communication. Memory Controller: 1. 72-bit DDR controller with ECC support; 2. 32-bit physical memory addressing. PCIe: 1. Four (4) integrated Gen2 PCIe controllers.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more non-transitory computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the C programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the textual descriptions, flowchart illustrations and/or block diagrams, and combinations thereof.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the function/acts specified in the textual descriptions, flowchart illustrations and/or block diagrams, and combinations thereof.

Abstract

Embodiments of the present invention provide an improvement over known approaches for monitoring of and taking action on observations associated with distributed applications. Application event reporting and application resource monitoring are unified in a manner that significantly reduces storage and aggregation overhead. For example, embodiments of the present invention can employ hardware and/or software support that reduces storage and aggregation overhead. In addition to providing for fine-grained, continuous, decentralized monitoring of application activity and resource consumption, embodiments of the present invention can also provide for decentralized filtering, statistical analysis, and derived data streaming. Furthermore, embodiments of the present invention are securely implemented (e.g., for use solely under the control of an operator) and can use a separate security domain for network traffic.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This patent application claims priority from co-pending U.S. Provisional Patent Application having Ser. No. 61/747,022, filed 28 Dec. 2012, entitled “FLEET SERVICE SOLUTIONS”, having a common applicant herewith and being incorporated herein in its entirety by reference.
  • BACKGROUND 1. Field of the Invention
  • The embodiments of the present invention relate to activity tracing and resource consumption monitoring in data processing systems. More specifically, embodiments of the present invention relate to systems and methods for continuous low-overhead monitoring of distributed applications running within a cluster of data processing nodes.
  • 2. Description of Related Art
  • Typical distributed application monitoring generally involves two or more independent mechanisms. A first example of such a mechanism is applications that are instrumented with tracing calls to an event logging application programming interface (API). A second example of such a mechanism is resource monitoring that is performed by a program or process running on each computing node and which is invoked to perform an intended task. Such a program or process is commonly referred to as a daemon.
  • With regard to applications that are instrumented with tracing calls to an event logging API, the logging API may store event data in multiple locations. The most common locations are a) per-process, plain text log files stored on a local disk drive and b) an operating system event log (Unix syslogd or Windows Event Log). To avoid CPU and storage overhead from formatting and storing event messages, most events are disabled (or only enabled for statistical sampling) by default. When troubleshooting functionality or performance problems, an operator may enable various subsets of events temporarily. The subsets are usually selected by specifying a severity threshold (e.g. error, warning, info, debug1, debug2) and/or a set of software modules. Often, enabling or disabling log messages requires restarting the application. Unfortunately, the need to enable logging after observing a problem requires the problem to be reproduced, which isn't always easy or even feasible. Due to the overhead of enabling tracing, which may incur thread serialization (e.g. locking) in a multi-threaded program, the application may experience timing changes which alter its behavior from that previously observed with tracing disabled.
  • With regard to resource monitoring that is performed by a daemon running on each computing node, the daemon (i.e., a resource monitor) can be configured to monitor overall hardware utilization (e.g. CPUs, disk drives, and network) and/or per-process activity. Metrics are gathered at a fixed interval and then stored on disk or sent via the network to an aggregating daemon. Because the resource monitor runs on the node being monitored, some amount of resource utilization overhead is incurred by the daemon itself. A visualization application may then produce charts using the aggregated data. Generally, the resource monitor has no visibility into the specific operations being performed by the monitored applications, and therefore cannot correlate resource utilization with specific application operations.
  • SUMMARY
  • Embodiments of the present invention provide an improvement over known approaches for monitoring of and taking action on observations associated with distributed applications. Application event reporting and application resource monitoring are unified in a manner that significantly reduces storage and aggregation overhead. For example, embodiments of the present invention can employ hardware and/or software support that reduces storage and aggregation overhead. In addition to providing for fine-grained, continuous, decentralized monitoring of application activity and resource consumption, embodiments of the present invention can also provide for decentralized filtering, statistical analysis, and derived data streaming. Furthermore, embodiments of the present invention are securely implemented (e.g., for use solely under the control of an operator) and can use a separate security domain for network traffic.
  • In view of the disclosure made herein, a skilled person will appreciate that embodiments of the present invention offer a number of advantageous and beneficial functionalities. One such functionality is a remotely observable, controllable, and programmable hardware and activity resource monitor that runs out of band on separate dedicated hardware, observing, filtering, aggregating, and reporting operator- or programmer-defined metrics or events. Another such functionality is that metrics and events generated by the resource monitor or by applications (either explicitly or by usage of instrumented APIs) are sent to a messaging endpoint such as an administrative console or to a remote memory for diagnostic or profiling use. Another such functionality is that all events are time-stamped with extremely low overhead using a timestamp register that is automatically synchronized across the cluster using dedicated hardware. Still another such functionality is an operator having the ability to change a subset of reported events or their destination using an administrative tool.
  • In one embodiment, a method of monitoring application-driven activity in an application central processing unit of a data processing node comprises a plurality of operations. An application monitoring services module of a data processing node performs an operation for receiving at least one resource monitor command. A management processor unit of the data processing node comprises the application monitoring services module and is coupled to an application central processing unit of the data processing node. In response to receiving the at least one monitor command, the application monitoring services module performs an operation for configuring an assessment protocol thereof dependent upon a resource assessment specification provided in the at least one monitor command. In accordance with the assessment protocol, the application monitoring services module performs an operation for assessing activity of the application central processing unit that arises from execution of an application running thereon and for outputting information derived from the activity to a recipient.
  • In another embodiment, a data processing node comprises a plurality of application central processing units each having a respective application running thereon and a management processor unit coupled to each one of the application central processing units. The management processor unit comprises an application monitoring services module including a resource assessor and an event reporter. The management processor unit comprises dedicated system resources with respect to the application central processing units such that processes implemented by the application monitoring services module are out-of-band of application processes carried out on each one of the application central processing units. The application monitoring services module is configured to selectively implement one or more processes for assessing activity of a particular one of the application central processing units that arises from execution of the respective application running thereon and is configured to selectively implement one or more processes for outputting events generated by a particular one of the application central processing units that arise from execution of the respective application running thereon.
  • In another embodiment, a data processing system comprises a plurality of data processing nodes coupled to each other through an interconnect fabric. Each one of the data processing nodes comprises an application central processing unit and a management processor unit coupled to the application central processing unit. The application central processing unit of each one of the data processing nodes has an instance of a particular application running thereon. The management processor unit of each one of the data processing nodes comprises an application monitoring services module. The application monitoring services module of each one of the data processing nodes outputs a respective stream of time-stamped events that arise from execution of the instance of the particular application running on the application central processing unit thereof. A target node, which can be one of the data processing nodes or an external node (e.g., an operator interface console), receives the respective stream of time-stamped events from each one of the data processing nodes and generates a composite stream of events from the time-stamped events of at least a portion of the respective streams thereof. The composite stream of events is time-sequenced dependent upon global time-stamp information of each one of the time-stamped events.
  • These and other objects, embodiments, advantages and/or distinctions of the present invention will become readily apparent upon further review of the following specification, associated drawings and appended claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagrammatic view of a data processing node configured in accordance with an embodiment of the present invention.
  • FIG. 2 is a diagrammatic view showing an arrangement of a resource monitor within the data processing node of FIG. 1.
  • FIG. 3 is a diagrammatic view showing a management processor implementation of an event reporter within the data processing node of FIG. 1.
  • FIG. 4 is a diagrammatic view showing an embedded library implementation of an event reporter within the data processing node of FIG. 1.
  • FIG. 5 is a diagrammatic view showing an embodiment of a process for implementing a data recorder within the data processing node of FIG. 1.
  • DETAILED DESCRIPTION
  • As shown in FIG. 1, a data processing node 1 has a system on a chip (SOC) 10 configured in accordance with an embodiment of the present invention. The SOC 10 has a management subsystem 12 and an application CPU subsystem 14 coupled to the management subsystem 12. Application monitoring services 16 are implemented as one or more processes that reside in the management subsystem 12 and run on a management processor unit (MPU) 18. User applications 20, which can be different applications, different instances of the same application, etc., reside in the application CPU subsystem 14 and run on a respective one or more of a plurality of application central processing units (CPUs) 22. Each one of the application CPUs 22 includes one or more application processors and dedicated system resources (e.g., memory, operating system, etc.). The MPU 18 includes one or more dedicated management processors and associated dedicated system resources (e.g., memory, software, utilities, status registers, UARTs, network MACs, SOC configuration information, etc.) that execute management software for providing initialization and ongoing management (e.g., both autonomic and remotely controlled) of the SOC 10. In this regard, the application monitoring services 16 and portions of the MPU 18 utilized for carrying out processes of the application monitoring services 16 are referred to herein as an application monitoring services module.
  • The application monitoring services 16 include a resource assessor 24 and an event reporter 26. As will be appreciated from the following disclosures, embodiments of the present invention provide for application monitoring services to be implemented in a programmable manner. Such programmability enables monitoring and reporting of activities and events to be selectively configured by an operator or other entity. For example, a command issued to the resource assessor 24 or the event reporter 26 can include a resource assessment specification upon which an assessment protocol used by the application monitoring services module is configured. In effect, the resource assessment specification includes information defining a manner in which events and activities are to be monitored and/or reported, and the assessment protocol is a framework in which the application monitoring services module applies such information in performing the monitoring and/or reporting.
  • The resource assessor 24 takes action based on observations associated with the distributed applications (e.g., as a first daemon process) and the event reporter 26 reports such events (e.g., as a second daemon process). In this regard, the event reporter and the resource assessor are independent, but related, services. The observations used by the resource assessor 24 may be events reported by the event reporter 26. But, the observations will generally be periodic measurements gathered from a common component that both the resource assessor 24 and event reporter 26 use as a data source. As will be discussed below in greater detail, the resource assessor 24 and the event reporter 26 provide for an improvement over known approaches for monitoring of events associated with distributed applications and taking action on observations associated with the distributed applications. Such a distributed application can be distributed across application CPUs of a plurality of data processing nodes, which can be on a common node card or a plurality of different node cards. Nodes of the node cards can be interconnected by a fabric or other type of node interconnect structure. Further details of interconnecting nodes by a fabric are described in U.S. Non-Provisional patent application Ser. No. 13/705,340 filed on Apr. 23, 2012 and entitled “SYSTEM AND METHOD FOR HIGH PERFORMANCE, LOW-POWER DATA CENTER INTERCONNECT FABRIC”, which is incorporated herein in its entirety by reference.
  • The resource assessor 24 is implemented as an out-of-band management process on each data processing node of a system (e.g., a cluster of nodes including data processing node 1). This management process is out-of-band because it runs on the MPU 18 of the management subsystem 12 and, therefore, does not consume resources of the application CPUs 22. By transparently observing the application CPUs 22, the resource assessor 24 can notify an operator or other entity if resource consumption (e.g., CPU, memory, network, etc.) exhibits a condition that warrants such notification. Examples of such conditions include, but are not limited to, a change in resource consumption that exceeds one or more resource operating limits of the node (e.g., a preset rate of change, a sustained excursion outside a preset limit, or the like). In some implementations, the resource assessor 24 is implemented in conjunction with an agent running within an operating system (OS) of an application CPU. For example, this agent may be necessary to measure application CPU utilization because the resource assessor 24 may not be able to distinguish the OS idle loop from actual work.
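  • By way of non-limiting illustration only, the following C++ sketch shows one possible form of such an out-of-band sampling loop; the hook functions read_cpu_utilization() and notify_operator() are hypothetical names introduced for this sketch (stubbed here so it is self-contained) and are not defined by the present disclosure.

    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <string>
    #include <thread>

    // Hypothetical platform hooks; stubbed so the sketch compiles and runs.
    double read_cpu_utilization(int cpu) { (void)cpu; return 0.5; }
    void notify_operator(const std::string& alert) { std::puts(alert.c_str()); }

    // Periodically sample an application CPU and alert on a sustained
    // excursion outside a preset limit or on an excessive rate of change.
    void assess(int cpu, double limit, double max_delta) {
        double prev = read_cpu_utilization(cpu);
        for (int i = 0; i < 100; ++i) {   // bounded here; continuous in practice
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            double cur = read_cpu_utilization(cpu);
            if (cur > limit)                        // excursion outside a limit
                notify_operator("cpu " + std::to_string(cpu) + ": above limit");
            if (std::abs(cur - prev) > max_delta)   // rate-of-change condition
                notify_operator("cpu " + std::to_string(cpu) + ": micro-burst");
            prev = cur;
        }
    }

    int main() { assess(0, 0.9, 0.3); return 0; }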
  • Referring now to FIG. 2, the resource assessor 24 runs within the MPU 18 on the data processing node 1 (and on all or a portion of the other nodes connected thereto). The resource assessor 24 is remotely accessible by an operator interface 30 (i.e., an event/information target). Although the target of the events is illustrated as an external operator (i.e., the operator interface 30), it may in fact be a peer node in a cluster of nodes rather than an external target. In fact, some set of nodes may choose to observe the operation of one another to ensure correct operation, as the mechanism provides a generally accessible and programmable tracing feature.
  • Resource assessor commands 32 (i.e., a form of a resource monitor command) are provided from the operator interface 30 to the resource assessor 24. Resource assessor information 34 such as, for example, derived data, alerts and the like is provided from the resource assessor 24 to the operator interface 30 in response to the resource assessor commands 32. The resource assessor 24 may observe the execution and actions of user application processes each running within a respective application CPU 22 of the data processing node 1.
  • The resource operating limits, which are used by the resource assessor 24 to identify operating/behavior changes, can be operator defined or can be statistically derived from data being monitored by the resource assessor 24. For example, the operator may program the resource assessor 24, using the operator interface, to histogram typical message sizes. Statistically significant deviations in such sizes are typically indicative of misbehavior of the observed processes. Because the resource assessor 24 is a programmable process, it can also be used to implement filtering of data, statistical analysis of the data stream to reduce the data volume, and streaming of the original or derived data to other nodes in the cluster continuously, periodically, or when anomalies are identified. In this regard, in response to receiving a resource monitor command, the application monitoring services module configures an assessment protocol dependent upon a resource assessment specification provided in the command (e.g., to histogram typical message sizes) and, in accordance with the assessment protocol, assesses activity of the application central processing unit(s) that arises from execution of a user application running thereon and outputs information derived from the activity (e.g., histogram(s)) to a target recipient (e.g., the operator interface 30).
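  • As a non-limiting sketch of such a programmed assessment protocol, the following C++ fragment histograms message sizes and flags statistically significant deviations using running (Welford) statistics; the HistogramSpec fields stand in for a resource assessment specification and are assumptions of this example only.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct HistogramSpec {            // hypothetical assessment specification
        double bucket_width;          // message-size bucket width (bytes)
        double deviation_threshold;   // z-score deemed statistically significant
    };

    class MessageSizeAssessor {
    public:
        explicit MessageSizeAssessor(const HistogramSpec& spec)
            : spec_(spec), buckets_(64, 0) {}

        // Record one observed message size; return true when the size deviates
        // from the running distribution enough to warrant reporting.
        bool observe(double size) {
            std::size_t b = std::min(buckets_.size() - 1,
                static_cast<std::size_t>(size / spec_.bucket_width));
            ++buckets_[b];
            ++count_;                                 // Welford running stats
            double delta = size - mean_;
            mean_ += delta / count_;
            m2_ += delta * (size - mean_);
            double sd = count_ > 1 ? std::sqrt(m2_ / (count_ - 1)) : 0.0;
            return sd > 0.0 &&
                   std::abs(size - mean_) / sd > spec_.deviation_threshold;
        }
    private:
        HistogramSpec spec_;
        std::vector<long> buckets_;
        long count_ = 0;
        double mean_ = 0.0, m2_ = 0.0;
    };

    int main() {
        MessageSizeAssessor a({/*bucket_width=*/64.0, /*deviation_threshold=*/3.0});
        for (int i = 0; i < 1000; ++i) a.observe(512.0);    // typical traffic
        std::printf("anomalous? %d\n", a.observe(65536.0)); // large outlier
        return 0;
    }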
  • In view of the disclosures made herein, a skilled person will appreciate that the resource assessor 24 offers the following capabilities and functionalities. Application execution and use of machine resources can be directly observed in a manner requiring no changes to the user operating system or application. No cooperation or knowledge of the user application is required. Fine-grained continuous on-node monitoring is provided using CPU cores and hardware peripherals of a management subsystem (i.e., resources that are isolated from an application CPU subsystem connected to the management subsystem), which minimizes overhead on the user application and exposes micro-bursting behavior that is otherwise difficult to observe. Programmable computations are performed on collected data, allowing the operator to push monitoring code to each node so that resource assessment scales with the cluster size.
  • As disclosed above in reference to FIG. 1, the application monitoring services 16 include an event reporter 26. For applications running in a single-node or multiple-node (distributed) manner, fine-grained insight into their execution is required for operational, debugging, and profiling/tuning reasons. As shown in FIG. 3, the event reporter 26 executes on the MPU 18 of a plurality of data processing nodes (i.e., node 1 to node N, which can be coupled to each other via a fabric 59) in a manner that is isolated from their application CPUs 22. Resource assessor commands 42 (i.e., a form of a resource monitor command) are provided from the operator interface 30 to the nodes for enabling events to be traced and reported on. Upon receipt of the commands, the event reporter 26 produces resource event information 40 in the form of a stream of time-stamped events sent from the respective data processing node to the operator interface 30. The stream of time-stamped events (i.e., trace data) is provided to the operator interface console 30, a remote memory location, or both (i.e., target nodes) until a STOP command is received. An operator can use commands to enable all tracing events to be produced, select some subset, or provide expressions used to evaluate whether any given event should be produced. This mechanism is independent of the user operating system or application and may be used to trace system-provided event sources. The event reporter 26 uses resources of the MPU 18 to manage trace data collection and can also use remote memory (via coarse-grained, large-block RDMA or fine-grained, cache line-sized access) and shared memory ring buffers for collection and aggregation. The event reporter can be configured to immediately transmit events to a remote node such that they are retained and available even if a source node of the events becomes inaccessible (i.e., a data recorder).
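  • The following C++ sketch illustrates, purely by way of example, the command-driven nature of this flow: a START command selects which event sources are traced, and events stream until a STOP command is received. The command names and the bitmask encoding of event sources are hypothetical, introduced only for this sketch.

    #include <atomic>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    std::atomic<bool> g_tracing{false};
    std::atomic<uint64_t> g_enabled{0};   // bit i enables event source i

    // Hypothetical command handler mirroring the flow described above.
    void handle_command(const char* cmd, uint64_t source_mask) {
        if (std::strcmp(cmd, "START") == 0) {
            g_enabled = source_mask;
            g_tracing = true;
        } else if (std::strcmp(cmd, "STOP") == 0) {
            g_tracing = false;
        }
    }

    // Called by the event reporter for each candidate event.
    void maybe_emit(unsigned source, uint64_t ts, const char* msg) {
        if (g_tracing && (g_enabled.load() & (1ull << source)))
            std::printf("%llu src=%u %s\n",
                        (unsigned long long)ts, source, msg);
    }

    int main() {
        handle_command("START", 0x2);       // trace only event source 1
        maybe_emit(0, 100, "not enabled");  // filtered out
        maybe_emit(1, 101, "streamed");     // emitted until STOP
        handle_command("STOP", 0);
        maybe_emit(1, 102, "after stop");   // not emitted
        return 0;
    }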
  • As disclosed above, the event reporter 26 can leverage and/or be built upon functionalities such as, for example, shared ring buffers, remote memory, and/or node-to-node time synchronization. Further details of implementing the node-to-node time synchronization functionality are described in U.S. Non-Provisional patent application Ser. No. 13/899,751 filed on May 22, 2013 and entitled “TIME SYNCHRONIZATION BETWEEN NODES OF A SWITCHED INTERCONNECT FABRIC”, which is incorporated herein in its entirety by reference. Further details of implementing shared ring buffer functionality are described in U.S. Non-Provisional patent application Ser. No. 13/959,428 filed on Aug. 5, 2013 and entitled “REMOTE MEMORY RING BUFFERS IN A CLUSTER OF DATA PROCESSING NODES”, which is incorporated herein in its entirety by reference. Further details of implementing remote memory functionality are described in U.S. Non-Provisional patent application Ser. No. 13/935,108 filed Jul. 3, 2013 and entitled “IMPLEMENTING REMOTE TRANSACTION FUNCTIONALITIES BETWEEN DATA PROCESSING NODES OF A SWITCHED INTERCONNECT FABRIC”, which is incorporated herein in its entirety by reference.
  • Referring now to FIG. 4, the same flow of commands and data streams as shown above in reference to FIG. 3 is shown, except with the producer of resource event information 40 being node services library (NS Lib) code 50 of the application CPU subsystem 14 as opposed to the MPU 18 of the management subsystem 12. Applications can use a suitable application programming interface (API) to emit tracing events that are aggregated in remote memory or sent to an aggregating node via a messaging API (a feature provided by the NS Lib code 50). The messaging API may also be configured to emit its own tracing events. An API that includes the ability to emit tracing events containing arbitrary diagnostic information can be provided for languages such as, for example, C, C++, etc.
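  • Purely as an illustration of what such an API might look like for C/C++ applications, the following sketch defines a printf-style event-emitting call. The name ns_trace_emit() and its parameters are hypothetical and are not an API defined by the present disclosure; a console print stands in for the remote-memory or messaging sink.

    #include <cstdarg>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical call: append a time-stamped event carrying a printf-style
    // diagnostic payload to remote memory or to an aggregating node.
    void ns_trace_emit(uint32_t subsystem_id, uint32_t severity,
                       const char* fmt, ...) {
        va_list args;
        va_start(args, fmt);
        std::printf("[subsys=%u sev=%u] ", subsystem_id, severity); // stand-in sink
        std::vprintf(fmt, args);
        std::printf("\n");
        va_end(args);
    }

    int main() {
        // An application emits an event containing arbitrary diagnostic data.
        ns_trace_emit(/*subsystem_id=*/7, /*severity=*/2,
                      "request %d completed in %d us", 42, 137);
        return 0;
    }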
  • These events reported by the event reporter 26 are time-stamped via a suitable time-stamp register that is synchronized across all nodes in a cluster (e.g., node 1 to node N). This synchronization of time-stamping across all nodes in a cluster (i.e., via node-to-node time synchronization) results in the events reported by the event reporter 26 being time-stamped in accordance with time information that is global with respect to all of the nodes (i.e., global time-stamp information). For example, the time stamp applied to each one of the events can be based upon a global time t(G) to which a local time t(L) of each node in a cluster of nodes is synchronized. This global time stamping enables a recipient of the events to correlate the events (e.g., generate a time-sequenced stream of events therefrom) and to analyze the events generated on a multitude of nodes in a meaningful fashion. It has extremely low overhead, so it is possible to permanently or selectively enable this across an entire cluster, unlike the existing state of the art. As disclosed above, further details of implementing the node-to-node time synchronization functionality are described in U.S. Non-Provisional patent application Ser. No. 13/899,751 filed on May 22, 2013 and entitled “TIME SYNCHRONIZATION BETWEEN NODES OF A SWITCHED INTERCONNECT FABRIC”, which is incorporated herein in its entirety by reference.
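  • A minimal sketch of this relationship follows, assuming only that node-to-node time synchronization yields a per-node offset such that t(G) = t(L) + offset; the offset-maintenance mechanism itself is outside the sketch.

    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    int64_t local_time_ns() {   // t(L): the node's local monotonic clock
        using namespace std::chrono;
        return duration_cast<nanoseconds>(
            steady_clock::now().time_since_epoch()).count();
    }

    int64_t sync_offset_ns = 0; // maintained by the (assumed) fabric time sync

    int64_t global_timestamp_ns() {   // t(G) applied to each reported event
        return local_time_ns() + sync_offset_ns;
    }

    int main() {
        std::printf("event timestamp: %lld ns\n",
                    static_cast<long long>(global_timestamp_ns()));
        return 0;
    }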
  • In either of the disclosed implementations of the event reporter 26 (i.e., the management processor implementation shown and discussed in reference to FIG. 3 or the embedded library implementation shown and discussed in reference to FIG. 4), the operator or programmer may dynamically inject filtering logic to implement filtering or other analysis prior to event generation. This may be used to minimize the number of events generated by identifying the most important or anomalous ones. In at least one embodiment, filtering is used to limit the volume of trace data to avoid overwhelming the system or the administrators. An example of filtering is to associate a severity, such as error, warning, or informational, with each trace event and to retain only events at or above a specified threshold. Also, different subsystems can be assigned different severity thresholds.
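  • The following C++ sketch shows one way such injected filtering logic might look, with per-subsystem severity thresholds; the Severity values and the subsystem count are assumptions made for the sketch only.

    #include <array>
    #include <cstddef>
    #include <cstdio>

    enum Severity { INFO = 0, WARNING = 1, ERROR = 2 };

    constexpr std::size_t kSubsystems = 8;
    std::array<Severity, kSubsystems> g_thresholds; // per-subsystem thresholds

    // Evaluated prior to event generation: emit only events at or above the
    // threshold assigned to their subsystem.
    bool should_emit(std::size_t subsystem, Severity sev) {
        return sev >= g_thresholds[subsystem];
    }

    int main() {
        g_thresholds.fill(WARNING);   // default: drop informational events
        g_thresholds[3] = INFO;       // subsystem 3 is under investigation
        std::printf("subsys 0 INFO emitted? %d\n", should_emit(0, INFO)); // 0
        std::printf("subsys 3 INFO emitted? %d\n", should_emit(3, INFO)); // 1
        return 0;
    }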
  • Presented now is a discussion regarding an implementation of the event reporter 26 in which it is configured to transmit events to a remote node immediately such that they are retained and available even if the source node becomes inaccessible. Accordingly, if a monitored machine crashes, the monitoring data up to the point of the crash is safely stored elsewhere. Such an implementation of the event reporter 26 is referred to herein as a data recorder.
  • The underlying functionality of the data recorder involves using hardware mechanisms such as, for example, remote memory and/or shared ring buffers to gather monitoring data in real-time with low overhead. Because these remote memories and ring buffers are hardware managed, the overhead for their use by the application is very low, allowing events to be generated continuously if desired. Furthermore, preferred implementations of remote memory and shared ring buffers operate in a non-blocking mode such that an application initiates a remote memory transfer without waiting for the transaction to complete. For example, use of node fabric hardware to perform the transfer in the background without application CPU intervention ensures that forward progress of the application is not blocked. The buffers of events may then be observed continuously or on-demand by the operator or programmer to debug, profile, or investigate the execution of the system, including processes running on many different nodes targeting the same event buffer. As disclosed above, further details of implementing shared ring buffer functionality are described in U.S. Non-Provisional patent application Ser. No. 13/959,428 filed on Aug. 5, 2013 and entitled “REMOTE MEMORY RING BUFFERS IN A CLUSTER OF DATA PROCESSING NODES”, which is incorporated herein in its entirety by reference, and further details of implementing remote memory functionality are described in U.S. Non-Provisional patent application Ser. No. 13/935,108 filed Jul. 3, 2013 and entitled “IMPLEMENTING REMOTE TRANSACTION FUNCTIONALITIES BETWEEN DATA PROCESSING NODES OF A SWITCHED INTERCONNECT FABRIC”, which is incorporated herein in its entirety by reference.
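  • For illustration only, the following C++ sketch models the non-blocking posting pattern described above: the application fills a descriptor, hands it to the fabric hardware, and continues without waiting for completion. The function post_rdma_write() stands in for an assumed hardware doorbell and does not correspond to any actual driver interface.

    #include <cstdint>

    struct RdmaDescriptor {
        uint64_t remote_addr;   // target slot in the remote ring buffer
        const void* local_buf;  // event payload to transfer
        uint32_t length;
    };

    // Hypothetical hardware doorbell: enqueue the descriptor and return at
    // once; the node fabric performs the copy in the background, so the
    // application CPU never blocks waiting on the transfer.
    void post_rdma_write(const RdmaDescriptor& d) { (void)d; }

    void record_event(uint64_t ring_slot_addr, const void* event, uint32_t len) {
        RdmaDescriptor d{ring_slot_addr, event, len};
        post_rdma_write(d);     // initiate and move on: forward progress kept
    }

    int main() {
        const char payload[] = "trace event";
        record_event(0x1000, payload, sizeof payload);
        return 0;
    }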
  • The data recorder provides a novel way to observe and investigate the operation of the cluster with data collected before, during, and after normal or anomalous execution. The remote memory or ring buffers may be sized appropriately to capture the last N events, or the typical number of events in a certain period of time. Optionally, if the events are emitted in a separate security domain (e.g., a particular node of a cluster of nodes), the events may not be spoofed or observed by users of the cluster. This implements an irrevocable log of actions by the processes being traced. If the events captured by the data recorder are emitted in a separate security domain, then for safety or security reasons, event data tracing may be monitored by a process/person without permission to interact with the application itself. For example, systems operators may observe the correct operation of a production application without interacting with it. Such a separate security domain can be implemented, for example, at a particular node of a cluster of data processing nodes or at a node (i.e., apparatus) external to the cluster of data processing nodes (e.g., an operator interface console).
  • Referring now to FIG. 5, an embodiment of a process for implementing the data recorder is shown. A plurality of nodes 1-n within a cluster, which can be connected to each other via an interconnect fabric 59, are streaming events to a remote target 60 (e.g., a remote memory or remote memory ring buffer) within the cluster. A command and control process at an operator interface 30, which may be internal or external to the cluster, issues commands 42 to the nodes 1-n. One or more of the nodes (e.g., node 1) is streaming events 40 from the event reporter 26 while one or more other nodes (e.g., node n) is streaming events 40 from an application-level context (e.g., the node service library NS Lib 50). In both cases, the same remote memory or ring buffer may be the target of the events, even though they are being generated in different contexts (i.e., system context vs. user context). Also, although the operator interface 30 (e.g., a control process thereof) is shown as an external entity, the decision to enable tracing could also be made by either the event reporter 26 (i.e., a management processor process) or the node service library NS Lib 50 (i.e., user application code). For example, if a particular user application encounters an error situation, user application code can cause the particular user application to begin to generate events into remote memory for later investigation. Besides debug and profiling information, the events may also be data generated by the user applications.
  • It has been disclosed herein that suitable remote memory ring buffers can provide low-overhead, non-blocking transmission of tracing events to a remote aggregation node. However, there are limits to the frequency at which events can be transmitted within a node, between nodes, and/or to an operator interface. To accommodate applications that generate many events in sporadic bursts, an application tracing library initially writes events to a per-thread circular queue in local memory. The events in the queue are consumed by a separate (asynchronous) thread or process that merges the events in chronological order based on their timestamps. Because each queue has a single producer (e.g., guaranteed by being per-thread) and a single consumer (e.g., a constraint enforced by the software), it can utilize low-overhead, wait-free synchronization between the producer and the consumer. Wait-freedom is a term of art that describes the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with starvation-freedom for all threads of execution. Single-producer, single-consumer synchronized queues are a well-known, simple data structure that can be implemented without special atomic primitives.
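  • A minimal single-producer/single-consumer queue of this kind is sketched below in C++; it is wait-free for both sides and, consistent with the discussion above, uses only plain loads and stores with acquire/release ordering rather than special atomic read-modify-write primitives. The capacity and element type are arbitrary choices for the sketch.

    #include <atomic>
    #include <cstddef>
    #include <cstdio>

    template <typename T, std::size_t N>   // N must be a power of two
    class SpscQueue {
    public:
        bool push(const T& v) {            // producer thread only
            std::size_t h = head_.load(std::memory_order_relaxed);
            if (h - tail_.load(std::memory_order_acquire) == N)
                return false;              // queue full
            buf_[h & (N - 1)] = v;
            head_.store(h + 1, std::memory_order_release);
            return true;
        }
        bool pop(T& out) {                 // consumer thread only
            std::size_t t = tail_.load(std::memory_order_relaxed);
            if (head_.load(std::memory_order_acquire) == t)
                return false;              // queue empty
            out = buf_[t & (N - 1)];
            tail_.store(t + 1, std::memory_order_release);
            return true;
        }
    private:
        T buf_[N];
        std::atomic<std::size_t> head_{0}, tail_{0};
    };

    int main() {
        SpscQueue<int, 8> q;
        q.push(1);
        q.push(2);
        int v;
        while (q.pop(v)) std::printf("%d\n", v);
        return 0;
    }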
  • It has been disclosed herein that a data processing system (e.g., a server) configured in accordance with the present invention can provide numerous types of event trace (i.e., event reporter) buffer consumers. Each of these consumers can run in numerous places such as, for example, one or more applications of a node generating the events, an event reporter running on one or more application cores of one or more nodes, or the event reporter running on the management processing unit of one or more nodes. Management interfaces in each event reporter process provide for dynamic configuration of consumers. One example of such a trace buffer consumer is a process merger that runs in a background thread, merging per-thread buffers into a per-process buffer and annotating each event with its thread of origin. The destination buffer can be in one or more remote memories, providing fault tolerance and redundancy/fan-out. Another example is a system merger that runs in a separate process, potentially on one of the dedicated management cores, merging per-thread or per-process buffers into a system-wide buffer and annotating each event with its thread and process of origin. This consumer requires the source buffers to be in shared local memory. As with the process merger, the destination buffer can be in remote memory. Another example is a formatter that transforms binary trace events stored in a thread, process, or system buffer into human-readable text. Another example is a message sender that sends buffer contents or formatter output to one or more messaging endpoints using suitable node messaging functionality, which automatically chooses the fastest transport available, such as remote direct memory access (RDMA). Writing to multiple remote endpoints provides fault tolerance and redundancy/fan-out. Another example is a message receiver that receives buffer contents or formatter output from a message sender. Another example is a disk writer that writes buffer contents or formatter output to non-volatile storage. Still another example is a shared ring buffer writer that writes buffer contents or formatter output to a remote memory ring buffer. Shared ring buffers provide hardware-accelerated aggregation/fan-in from multiple trace sources.
  • When composed into an event distribution, aggregation, and storage network, these consumers provide a highly customizable means of handling a large amount of monitoring data in real-time. While most consumers run asynchronously, waiting for a signal from the producer (which may in fact be another consumer), they can also run synchronously when composed within the same process. For example, a system merger that gathers events generated asynchronously can synchronously invoke a formatter for each merged event, and that formatter could in turn synchronously invoke a disk writer to store the generated text on a local disk. At any stage, including the event producer, trace events can be filtered or aggregated using system- and user-defined rules. Examples of system-defined rules include filtering by source subsystem, thread, or process ID, or by event severity, such as errors, warnings, or configuration changes.
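  • By way of example only, the following C++ sketch composes a merger, a formatter, and a disk writer synchronously within one process, in the manner just described; the Event layout and the consumer signatures are assumptions made for the sketch.

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Event { uint64_t ts; int thread_id; std::string msg; };

    // Formatter consumer: binary event to human-readable text.
    std::string format_event(const Event& e) {
        return std::to_string(e.ts) + " [t" + std::to_string(e.thread_id) +
               "] " + e.msg;
    }

    // Disk-writer consumer: persist one formatted line.
    void write_line(std::FILE* f, const std::string& line) {
        std::fprintf(f, "%s\n", line.c_str());
    }

    // Merger consumer: combine per-thread buffers chronologically (events here
    // already carry their thread of origin), then synchronously drive the
    // formatter, which in turn drives the disk writer.
    void merge_and_store(std::vector<std::vector<Event>> per_thread,
                         std::FILE* f) {
        std::vector<Event> merged;
        for (auto& buf : per_thread)
            merged.insert(merged.end(), buf.begin(), buf.end());
        std::sort(merged.begin(), merged.end(),
                  [](const Event& a, const Event& b) { return a.ts < b.ts; });
        for (const Event& e : merged)
            write_line(f, format_event(e));   // formatter -> disk writer
    }

    int main() {
        merge_and_store({{{20, 0, "send"}, {40, 0, "recv"}},
                         {{30, 1, "poll"}}}, stdout);
        return 0;
    }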
  • As presented above, a resource assessor configured in accordance with the present invention and an event reporter configured in accordance with an embodiment of the present invention (i.e., application monitoring services) can be implemented on a data processing node. Furthermore, it has been disclosed that a preferred implementation is on a data processing node comprising a system on a chip (SOC). However, in view of the disclosures made herein, a skilled person will appreciate that implementation of application monitoring services is not limited to a particular type or configuration of data processing node or data processing apparatus.
  • In view of the disclosures made herein, a skilled person will appreciate that a system on a chip (SOC) refers to integration of one or more processors, one or more memory controllers, and one or more I/O controllers onto a single silicon chip. Furthermore, in view of the disclosures made herein, the skilled person will also appreciate that a SOC configured in accordance with the present invention can be specifically implemented in a manner to provide functionalities definitive of a server. In such implementations, a SOC in accordance with the present invention can be referred to as a server on a chip. In view of the disclosures made herein, the skilled person will appreciate that a server on a chip configured in accordance with the present invention can include a server memory subsystem, server I/O controllers, and a server node interconnect. In one specific embodiment, this server on a chip will include a multi-core CPU, one or more memory controllers that support ECC, and one or more volume server I/O controllers that minimally include Ethernet and SATA controllers. The server on a chip can be structured as a plurality of interconnected subsystems, including a CPU subsystem, a peripherals subsystem, a system interconnect subsystem, and a management subsystem.
  • An exemplary embodiment of a server on a chip that is configured in accordance with the present invention is the ECX-1000 Series server on a chip offered by Calxeda, Inc. The ECX-1000 Series server on a chip includes a SOC architecture that provides reduced power consumption and reduced space requirements. The ECX-1000 Series server on a chip is well suited for computing environments such as, for example, scalable analytics, webserving, media streaming, infrastructure, cloud computing, and cloud storage. A node card configured in accordance with the present invention can include a node card substrate having a plurality of the ECX-1000 Series server on a chip instances (i.e., each a server on a chip unit) mounted on the node card substrate and connected to electrical circuitry of the node card substrate. An electrical connector of the node card enables communication of signals between the node card and one or more other instances of the node card.
  • The ECX-1000 Series server on a chip includes a CPU subsystem (i.e., a processor complex) that uses a plurality of ARM brand processing cores (e.g., four ARM Cortex brand processing cores), which offer the ability to be seamlessly turned on and off up to several times per second. The CPU subsystem is implemented with server-class workloads in mind and comes with an ECC L2 cache to enhance performance and reduce energy consumption by reducing cache misses. Complementing the ARM brand processing cores is a host of high-performance server-class I/O controllers accessed via standard interfaces such as SATA and PCI Express. Table 3 below shows the technical specifications for a specific example of the ECX-1000 Series server on a chip.
  • TABLE 3
    Example of ECX-1000 Series server on a chip technical specification

    Processor Cores
    1. Up to four ARM® Cortex™-A9 cores @ 1.1 to 1.4 GHz
    2. NEON® technology extensions for multimedia and SIMD processing
    3. Integrated FPU for floating point acceleration
    4. TrustZone® technology for enhanced security
    5. Individual power domains per core to minimize overall power consumption

    Cache
    1. 32 KB L1 instruction cache per core
    2. 32 KB L1 data cache per core
    3. 4 MB shared L2 cache with ECC

    Fabric Switch
    1. Integrated 80 Gb (8 x 8) crossbar switch with through-traffic support
    2. Five (5) 10 Gb external channels, three (3) 10 Gb internal channels
    3. Configurable topology capable of connecting up to 4096 nodes
    4. Dynamic Link Speed Control from 1 Gb to 10 Gb to minimize power and maximize performance
    5. Network Proxy Support to maintain network presence even with node powered off

    Management Engine
    1. Separate embedded processor dedicated for systems management
    2. Advanced power management with dynamic power capping
    3. Dedicated Ethernet MAC for out-of-band communication
    4. Supports IPMI 2.0 and DCMI management protocols
    5. Remote console support via Serial-over-LAN (SoL)

    Integrated Memory Controller
    1. 72-bit DDR controller with ECC support
    2. 32-bit physical memory addressing
    3. Supports DDR3 (1.5 V) and DDR3L (1.35 V) at 800/1066/1333 MT/s
    4. Single and dual rank support with mirroring

    PCI Express
    1. Four (4) integrated Gen2 PCIe controllers
    2. One (1) integrated Gen1 PCIe controller
    3. Support for up to two (2) PCIe x8 lanes
    4. Support for up to four (4) PCIe x1, x2, or x4 lanes

    Networking Interfaces
    1. Support for 1 Gb and 10 Gb Ethernet
    2. Up to five (5) XAUI 10 Gb ports
    3. Up to six (6) 1 Gb SGMII ports (multiplexed w/XAUI ports)
    4. Three (3) 10 Gb Ethernet MACs supporting IEEE 802.1Q VLANs, IPv4/6 checksum processing, and TCP/UDP/ICMP checksum offload
    5. Support for shared or private management LAN

    SATA Controllers
    1. Support for up to five (5) SATA disks
    2. Compliant with Serial ATA 2.0, AHCI Revision 1.3, and eSATA specifications
    3. SATA 1.5 Gb/s and 3.0 Gb/s speeds supported

    SD/eMMC Controller
    1. Compliant with SD 3.0 Host and MMC 4.4 (eMMC) specifications
    2. Supports 1 and 4-bit SD modes and 1/4/8-bit MMC modes
    3. Read/write rates up to 832 Mbps for MMC and up to 416 Mbps for SD

    System Integration Features
    1. Three (3) I2C interfaces
    2. Two (2) SPI (master) interfaces
    3. Two (2) high-speed UART interfaces
    4. 64 GPIO/Interrupt pins
    5. JTAG debug port
  • As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more non-transitory computer readable medium(s) having computer readable program code embodied thereon.
  • Any combination of one or more computer readable medium(s) (e.g., non-transitory computer readable medium(s)) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the C programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • Aspects of the present invention are/can be described herein with reference to textual descriptions, flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present invention. It will be understood that portions of the textual descriptions, flowchart illustrations and/or block diagrams, and combinations thereof can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the function/acts specified in the textual descriptions, flowchart illustrations and/or block diagrams, and combinations thereof. These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the textual descriptions, flowchart illustrations and/or block diagrams, and combinations thereof. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the function/acts specified in the textual descriptions, flowchart illustrations and/or block diagrams, and combinations thereof.
  • While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the disclosure, the scope of which is defined by the appended claims.

Claims (33)

1-20. (canceled)
21. A method for monitoring one or more distributed applications running on a cluster comprising a plurality of interconnected data processing nodes, the plurality of data processing nodes comprising respective ones of a plurality of application processing modules each comprising a respective application running thereon, the method comprising:
receiving, at an application monitoring module of the cluster, at least one monitor command, wherein the application monitoring module is in data communication with each of the plurality of application processing modules via at least a management plane;
configuring the application monitoring module to selectively implement one or more processes for assessing activity of a particular one of the plurality of application processing modules that arise from execution of the respective application running thereon; and
using at least the configured application monitoring module, causing the application monitoring module to selectively implement the one or more processes for the assessing activity of a particular one of the plurality of application processing modules that arise from execution of the respective application running thereon;
wherein, the assessing activity is performed via the management plane such that the one or more processes implemented by the application monitoring module are out-of-band relative to processes of the respective applications running on each one of the application processing modules.
22. The method of claim 21, wherein:
the configuring the application monitoring module further comprises configuring the application monitoring module to selectively implement one or more processes for outputting event data generated by the particular one of the plurality of application processing modules; and
the using at least the configured application monitoring module further comprises using the application monitoring module to selectively implement the one or more processes for outputting event data generated by the particular one of the plurality of application processing modules.
23. The method of claim 22, wherein the assessing comprises using at least one filter function to manipulate execution of the respective application running thereon to configure at least one aspect of the generation of the event data.
24. The method of claim 22, wherein the outputting the event data comprises applying a time stamp to the event data based upon a global time to which a local time of each one of the nodes is synchronized.
25. The method of claim 22, further comprising implementing, by one or more of the plurality of data processing nodes, a first security domain not accessible by an external entity which issued the at least one monitor command, and implementing, by another one of the plurality of data processing nodes, a second security domain accessible by the external entity; and
wherein outputting the event data includes transmitting the event data to the another node, thereby enabling the events to be monitored by the external entity without allowing the external entity to interact with the respective application by which the events were generated.
26. The method of claim 21, wherein the configuring is responsive to receiving the command.
27. The method of claim 21, wherein the assessing activity includes alerting an external entity of resource consumption by a particular one of the plurality of application processing modules exceeding a prescribed limit.
28. The method of claim 27, wherein the application monitoring module comprises a resource assessor process, and is present on the management plane.
29. The method of claim 27, further comprising implementing, by one or more of the plurality of data processing nodes, a first security domain not accessible by the external entity, and implementing, by another one of the plurality of data processing nodes, a second security domain accessible by the external entity.
30. The method of claim 21, wherein the management plane comprises one or more dedicated system resources.
31. The method of claim 30, wherein the one or more dedicated system resources comprise one or more dedicated system resources with respect to the application processing modules; and
wherein the one or more dedicated system resources do not consume resources dedicated to the application processing modules.
32. A non-transitory computer readable apparatus comprising a storage medium, the storage medium comprising at least one computer program, the at least one computer program configured to, when executed on an out-of-band application monitoring system of a cluster, the cluster comprising a plurality of data processing nodes interconnected by at least the out-of-band application monitoring system, monitor one or more distributed applications running on the plurality of interconnected data processing nodes by at least:
receipt, at the application monitoring system of the cluster, of at least one monitor command; and
responsive to the received at least one monitor command, configuration of the application monitoring system to cause implementation of one or more processes for assessing activity of an application processing module associated with at least one of the plurality of data processing nodes, the activity resulting from execution of an application running on the at least one data processing node; and
wherein, the assessing activity is performed via the out-of-band application monitoring system such that the implemented one or more processes are out-of-band relative to processes of the application running on the at least one data processing node.
33. The non-transitory computer readable apparatus of claim 32, wherein:
the configuration of the application monitoring system further comprises configuration of the application monitoring system to implement one or more processes for output of event data generated by the application processing module associated with at least one of the plurality of data processing nodes; and
the at least one computer program is further configured to, when executed, use at least the configured application monitoring system to selectively implement the one or more processes for output of the event data generated by the application processing module associated with at least one of the plurality of data processing nodes to a domain accessible by a computerized user process which issued the at least one monitor command.
34. The non-transitory computer readable apparatus of claim 33, wherein the assessing activity comprises use of at least one filter function to manipulate execution of the application running on the at least one data processing node to configure at least one aspect of the generation of the event data.
35. The non-transitory computer readable apparatus of claim 33, wherein the output of the event data comprises application of time stamp data to the event data based upon a global time to which a local time of each one of the plurality of nodes is synchronized.
36. The non-transitory computer readable apparatus of claim 33, wherein the at least one computer program is further configured to, when executed, implement, by one or more of the plurality of data processing nodes, a first security domain not accessible by the computerized user process, and implementing, by another one of the plurality of data processing nodes, a second security domain accessible by the computerized user process; and
wherein outputting the event data includes transmitting the event data to the another node, thereby enabling the events to be monitored by the computerized user process without allowing the computerized user process to interact with the application running on the at least one data processing node.
37. The non-transitory computer readable apparatus of claim 32, wherein the assessing activity includes alert of a computerized user process external to the cluster of resource consumption by a particular one of the application processing module associated with the at least one of the plurality of data processing nodes exceeding a prescribed limit.
38. The non-transitory computer readable apparatus of claim 37, wherein the at least one computer program is further configured to, when executed, implement a resource assessor process, the resource assessor process utilizing at least a portion of the out-of-band management system to assess resource utilization of the at least one data processing node relating to execution of the application running on the at least one data processing node.
39. The non-transitory computer readable apparatus of claim 37, wherein the at least one computer program is further configured to, when executed, implement, by one or more of the plurality of data processing nodes, a first security domain not accessible by a computerized user process which issued the at least one monitor command, and implementing, by another one of the plurality of data processing nodes, a second security domain accessible by the computerized user process.
40. The non-transitory computer readable apparatus of claim 32, wherein the out-of-band management system comprises one or more dedicated system resources.
41. The non-transitory computer readable apparatus of claim 40, wherein:
the one or more dedicated system resources comprise one or more dedicated system resources with respect to at least the application processing module; and
the one or more dedicated system resources do not consume resources dedicated to the application processing module.
42. A computerized system comprising:
a cluster of interconnected data processing nodes, the plurality of data processing nodes comprising respective ones of a plurality of central processing units each configured to execute a respective application thereon, and
an out-of-band monitoring system, the out-of-band monitoring system comprising an application monitoring module in data communication with each of the plurality of central processing units, the application monitoring module configured to provide one or more application monitoring services out-of-band of processes executed on each one of the central processing units;
wherein the application monitoring module is configured to:
selectively implement, responsive to one or more commands received thereby and issued by a computerized user process external to the system, one or more processes to assess activity of a particular one of the plurality of central processing units that arise from execution of the respective application thereof; and
based at least on the assessment of the activity, cause forwarding of generated event data related to the assessment to the computerized user process.
43. The system of claim 42, wherein the provision of the one or more application monitoring services out-of-band of processes executed on each one of the central processing units is enabled at least in part via configuration of at least the application monitoring module to use dedicated hardware and software resources for provision of the application monitoring services, the dedicated hardware and software resources not being part of the cluster of interconnected data processing nodes.
44. The system of claim 43, wherein the one or more commands comprise resource assessment specification data, the resource assessment specification data configured to enable configuration of a protocol, the protocol used in the assessment of the activity.
45. The system of claim 43, wherein the respective applications comprise at least a portion of a common distributed application.
46. The system of claim 42, wherein the assessment of the activity comprises assessment of consumption of one or more compute resources within at least the particular one of the plurality of central processing units.
47. The system of claim 43, wherein the assessment of the consumption of one or more compute resources within at least the particular one of the plurality of central processing units comprises assessment relative to an operator-defined limit.
48. The system of claim 47, wherein the one or more commands comprise resource assessment specification data, the resource assessment specification data configured to enable configuration of a protocol, the protocol used in the assessment of the activity, the resource assessment specification data comprising data indicative of the operator-defined limit.
49. The system of claim 43, wherein the assessment of the consumption of one or more compute resources within at least the particular one of the plurality of central processing units comprises assessment relative to a limit computed or derived by the out-of-band monitoring system itself.
50. The system of claim 42, wherein the one or more application monitoring services out-of-band of processes executed on each one of the central processing units comprise a plurality of decentralized services that can be provided to respective ones of a plurality of computerized user processes disposed on decentralized computerized apparatus external to the cluster.
51. The system of claim 48, wherein the out-of-band monitoring system further comprises at least one module configured to implement an event data store or log which is inaccessible to the plurality of computerized user processes.
52. The system of claim 49, wherein the implementation of the event data store or log which is inaccessible to the plurality of computerized user processes comprises use of a first security domain wherein the event data store or log is maintained and is inaccessible to the plurality of computerized user processes, and at least one second security domain whereby one or more of the computerized user processes may obtain at least portions of the event data without interaction of the application via the forwarded generated event data.
US17/412,832 2012-12-28 2021-08-26 System and Method for Continuous Low-Overhead Monitoring of Distributed Applications Running on a Cluster of Data Processing Nodes Pending US20220121545A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/412,832 US20220121545A1 (en) 2012-12-28 2021-08-26 System and Method for Continuous Low-Overhead Monitoring of Distributed Applications Running on a Cluster of Data Processing Nodes

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201261747022P 2012-12-28 2012-12-28
US14/137,921 US11132277B2 (en) 2012-12-28 2013-12-20 System and method for continuous low-overhead monitoring of distributed applications running on a cluster of data processing nodes
US17/412,832 US20220121545A1 (en) 2012-12-28 2021-08-26 System and Method for Continuous Low-Overhead Monitoring of Distributed Applications Running on a Cluster of Data Processing Nodes

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/137,921 Continuation US11132277B2 (en) 2012-12-28 2013-12-20 System and method for continuous low-overhead monitoring of distributed applications running on a cluster of data processing nodes

Publications (1)

Publication Number Publication Date
US20220121545A1 true US20220121545A1 (en) 2022-04-21

Family

ID=51018524

Family Applications (5)

Application Number Title Priority Date Filing Date
US14/137,921 Active 2034-03-20 US11132277B2 (en) 2012-12-28 2013-12-20 System and method for continuous low-overhead monitoring of distributed applications running on a cluster of data processing nodes
US14/137,940 Expired - Fee Related US10311014B2 (en) 2012-12-28 2013-12-20 System, method and computer readable medium for offloaded computation of distributed application protocols within a cluster of data processing nodes
US16/429,658 Active US11188433B2 (en) 2012-12-28 2019-06-03 System, method and computer readable medium for offloaded computation of distributed application protocols within a cluster of data processing nodes
US17/412,832 Pending US20220121545A1 (en) 2012-12-28 2021-08-26 System and Method for Continuous Low-Overhead Monitoring of Distributed Applications Running on a Cluster of Data Processing Nodes
US17/508,661 Pending US20220114070A1 (en) 2012-12-28 2021-10-22 System, Method and Computer Readable Medium for Offloaded Computation of Distributed Application Protocols within a Cluster of Data Processing Nodes

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US14/137,921 Active 2034-03-20 US11132277B2 (en) 2012-12-28 2013-12-20 System and method for continuous low-overhead monitoring of distributed applications running on a cluster of data processing nodes
US14/137,940 Expired - Fee Related US10311014B2 (en) 2012-12-28 2013-12-20 System, method and computer readable medium for offloaded computation of distributed application protocols within a cluster of data processing nodes
US16/429,658 Active US11188433B2 (en) 2012-12-28 2019-06-03 System, method and computer readable medium for offloaded computation of distributed application protocols within a cluster of data processing nodes

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/508,661 Pending US20220114070A1 (en) 2012-12-28 2021-10-22 System, Method and Computer Readable Medium for Offloaded Computation of Distributed Application Protocols within a Cluster of Data Processing Nodes

Country Status (1)

Country Link
US (5) US11132277B2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220114070A1 (en) * 2012-12-28 2022-04-14 Iii Holdings 2, Llc System, Method and Computer Readable Medium for Offloaded Computation of Distributed Application Protocols within a Cluster of Data Processing Nodes
US11658916B2 (en) 2005-03-16 2023-05-23 Iii Holdings 12, Llc Simple integration of an on-demand compute environment
US11656907B2 (en) 2004-11-08 2023-05-23 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US11765101B2 (en) 2005-04-07 2023-09-19 Iii Holdings 12, Llc On-demand access to compute resources
US11960937B2 (en) 2004-03-13 2024-04-16 Iii Holdings 12, Llc System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8782654B2 (en) 2004-03-13 2014-07-15 Adaptive Computing Enterprises, Inc. Co-allocating a reservation spanning different compute resources types
US20070266388A1 (en) 2004-06-18 2007-11-15 Cluster Resources, Inc. System and method for providing advanced reservations in a compute environment
US8176490B1 (en) 2004-08-20 2012-05-08 Adaptive Computing Enterprises, Inc. System and method of interfacing a workload manager and scheduler with an identity manager
US8863143B2 (en) 2006-03-16 2014-10-14 Adaptive Computing Enterprises, Inc. System and method for managing a hybrid compute environment
US8041773B2 (en) 2007-09-24 2011-10-18 The Research Foundation Of State University Of New York Automatic clustering for self-organizing grids
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9361202B2 (en) 2013-07-18 2016-06-07 International Business Machines Corporation Filtering system noises in parallel computer systems during thread synchronization
IN2014MU00662A (en) 2014-02-25 2015-10-23 Tata Consultancy Services Ltd
CN112422291B (en) * 2014-08-12 2022-01-28 艾高特有限责任公司 Social network engine based on zero-knowledge environment
US9984337B2 (en) * 2014-10-08 2018-05-29 Nec Corporation Parallelized machine learning with distributed lockless training
US10802998B2 (en) * 2016-03-29 2020-10-13 Intel Corporation Technologies for processor core soft-offlining
US10148619B1 (en) 2016-06-24 2018-12-04 EMC IP Holding Company LLC Identity-based application-level filtering of network traffic
KR102460416B1 (en) * 2016-10-24 2022-10-28 삼성에스디에스 주식회사 System and method for managing container-based distributed application
US10348604B2 (en) * 2017-02-01 2019-07-09 International Business Machines Corporation Monitoring a resource consumption of an application
CN108429625B (en) * 2017-02-13 2021-10-15 中兴通讯股份有限公司 Method and device for realizing fault diagnosis
JP2018157340A (en) * 2017-03-16 2018-10-04 沖電気工業株式会社 Radio communication device, program, and method
US10848375B2 (en) 2018-08-13 2020-11-24 At&T Intellectual Property I, L.P. Network-assisted raft consensus protocol
US10893005B2 (en) * 2018-09-17 2021-01-12 Xilinx, Inc. Partial reconfiguration for Network-on-Chip (NoC)
US10725946B1 (en) * 2019-02-08 2020-07-28 Dell Products L.P. System and method of rerouting an inter-processor communication link based on a link utilization value
US11119803B2 (en) * 2019-05-01 2021-09-14 EMC IP Holding Company LLC Method and system for offloading parity processing
US11294702B2 (en) 2019-05-01 2022-04-05 EMC IP Holding Company LLC Method and system for processing data using a processing pipeline and processing units
US11119802B2 (en) 2019-05-01 2021-09-14 EMC IP Holding Company LLC Method and system for offloading parallel processing of multiple write requests
US11139991B2 (en) * 2019-09-28 2021-10-05 Intel Corporation Decentralized edge computing transactions with fine-grained time coordination
US11204711B2 (en) 2019-10-31 2021-12-21 EMC IP Holding Company LLC Method and system for optimizing a host computing device power down through offload capabilities
CN111338705B (en) * 2020-02-13 2021-03-26 北京房江湖科技有限公司 Data processing method, device and storage medium
US11487683B2 (en) * 2020-04-15 2022-11-01 AyDeeKay LLC Seamlessly integrated microcontroller chip
CN112749056A (en) * 2020-12-30 2021-05-04 广州品唯软件有限公司 Application service index monitoring method and device, computer equipment and storage medium
CN113190498B (en) * 2021-04-09 2023-04-28 浙江毫微米科技有限公司 Frequency adjustment method and device and electronic equipment
CN114143814B (en) * 2021-12-13 2024-01-23 华北电力大学(保定) Multi-task unloading method and system based on heterogeneous edge cloud architecture
US11809512B2 (en) * 2021-12-14 2023-11-07 Sap Se Conversion of user interface events
US11842226B2 (en) * 2022-04-04 2023-12-12 Ambiq Micro, Inc. System for generating power profile in low power processor

Family Cites Families (128)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4850891A (en) 1988-04-04 1989-07-25 Augat Inc. Memory module socket
US5495533A (en) 1994-04-29 1996-02-27 International Business Machines Corporation Personal key archive
US5801985A (en) 1995-07-28 1998-09-01 Micron Technology, Inc. Memory system having programmable control parameters
US5732077A (en) 1995-11-13 1998-03-24 Lucent Technologies Inc. Resource allocation system for wireless networks
US6182139B1 (en) * 1996-08-05 2001-01-30 Resonate Inc. Client-side resource-based load-balancing with delayed-resource-binding using TCP state migration to WWW server farm
US6189111B1 (en) * 1997-03-28 2001-02-13 Tandem Computers Incorporated Resource harvesting in scalable, fault tolerant, single system image clusters
US5930167A (en) 1997-07-30 1999-07-27 Sandisk Corporation Multi-state non-volatile flash memory capable of being its own two state write cache
WO1999015999A1 (en) 1997-09-24 1999-04-01 Microsoft Corporation System and method for designing responses for electronic billing statements
US6735716B1 (en) * 1999-09-27 2004-05-11 Cisco Technology, Inc. Computerized diagnostics and failure recovery
US6757897B1 (en) * 2000-02-29 2004-06-29 Cisco Technology, Inc. Apparatus and methods for scheduling and performing tasks
US7720908B1 (en) * 2000-03-07 2010-05-18 Microsoft Corporation System and method for multi-layered network communications
US7979880B2 (en) 2000-04-21 2011-07-12 Cox Communications, Inc. Method and system for profiling iTV users and for providing selective content delivery
JP2002057645A (en) * 2000-08-10 2002-02-22 Ntt Docomo Inc Method for data transfer and mobile unit server
US7991633B2 (en) * 2000-12-12 2011-08-02 On Time Systems, Inc. System and process for job scheduling to minimize construction costs
US7706017B2 (en) * 2001-01-11 2010-04-27 Sharp Laboratories Of America, Inc. Systems and methods for providing load balance rendering for direct printing
US7231368B2 (en) * 2001-04-19 2007-06-12 Hewlett-Packard Development Company, L.P. E-ticket validation protocol
US6996822B1 (en) * 2001-08-01 2006-02-07 Unisys Corporation Hierarchical affinity dispatcher for task management in a multiprocessor computer system
US20030046330A1 (en) * 2001-09-04 2003-03-06 Hayes John W. Selective offloading of protocol processing
US7113980B2 (en) * 2001-09-06 2006-09-26 Bea Systems, Inc. Exactly once JMS communication
US7107578B1 (en) * 2001-09-24 2006-09-12 Oracle International Corporation Techniques for debugging computer programs involving multiple programming languages
US7107589B1 (en) * 2001-09-28 2006-09-12 Siebel Systems, Inc. Infrastructure for the automation of the assembly of schema maintenance scripts
US7225260B2 (en) * 2001-09-28 2007-05-29 Symbol Technologies, Inc. Software method for maintaining connectivity between applications during communications by mobile computer terminals operable in wireless networks
US6990662B2 (en) * 2001-10-31 2006-01-24 Hewlett-Packard Development Company, L.P. Method and system for offloading execution and resources for resource-constrained networked devices
US7127633B1 (en) 2001-11-15 2006-10-24 Xiotech Corporation System and method to failover storage area network targets from one interface to another
US20030126013A1 (en) 2001-12-28 2003-07-03 Shand Mark Alexander Viewer-targeted display system and method
US7640547B2 (en) 2002-02-08 2009-12-29 Jpmorgan Chase & Co. System and method for allocating computing resources of a distributed computing system
US7035854B2 (en) 2002-04-23 2006-04-25 International Business Machines Corporation Content management system and methodology employing non-transferable access tokens to control data access
US20030216927A1 (en) * 2002-05-17 2003-11-20 V. Sridhar System and method for automated safe reprogramming of software radios
US7076781B2 (en) 2002-05-31 2006-07-11 International Business Machines Corporation Resource reservation for large-scale job scheduling
US7480312B2 (en) * 2002-08-19 2009-01-20 Tehuti Networks Ltd. Network traffic accelerator system and method
US7496494B2 (en) * 2002-09-17 2009-02-24 International Business Machines Corporation Method and system for multiprocessor emulation on a multiprocessor host system
US7133915B2 (en) * 2002-10-10 2006-11-07 International Business Machines Corporation Apparatus and method for offloading and sharing CPU and RAM utilization in a network of machines
CN100463469C (en) 2002-10-25 2009-02-18 国际商业机器公司 Method, device and system for sharing applied program conversation information on multichannels
US7243351B2 (en) 2002-12-17 2007-07-10 International Business Machines Corporation System and method for task scheduling based upon the classification value and probability
WO2004081762A2 (en) * 2003-03-12 2004-09-23 Lammina Systems Corporation Method and apparatus for executing applications on a distributed computer system
US20040210663A1 (en) * 2003-04-15 2004-10-21 Paul Phillips Object-aware transport-layer network processing engine
US7451197B2 (en) * 2003-05-30 2008-11-11 Intel Corporation Method, system, and article of manufacture for network protocols
US7568199B2 (en) 2003-07-28 2009-07-28 Sap Ag. System for matching resource request that freeing the reserved first resource and forwarding the request to second resource if predetermined time period expired
US7376945B1 (en) 2003-12-02 2008-05-20 Cisco Technology, Inc. Software change modeling for network devices
US7526515B2 (en) * 2004-01-21 2009-04-28 International Business Machines Corporation Method and system for a grid-enabled virtual machine with movable objects
US20050160424A1 (en) * 2004-01-21 2005-07-21 International Business Machines Corporation Method and system for grid-enabled virtual machines with distributed management of applications
US8584129B1 (en) * 2004-02-20 2013-11-12 Oracle America, Inc. Dispenser determines responses to resource requests for a single respective one of consumable resource using resource management policy
EP1738258A4 (en) * 2004-03-13 2009-10-28 Cluster Resources Inc System and method for providing object triggers
US8782654B2 (en) * 2004-03-13 2014-07-15 Adaptive Computing Enterprises, Inc. Co-allocating a reservation spanning different compute resources types
US7200716B1 (en) * 2004-04-30 2007-04-03 Network Appliance, Inc. Method and apparatus to offload operations in a networked storage system
US20060048157A1 (en) 2004-05-18 2006-03-02 International Business Machines Corporation Dynamic grid job distribution from any resource within a grid environment
US20070266388A1 (en) * 2004-06-18 2007-11-15 Cluster Resources, Inc. System and method for providing advanced reservations in a compute environment
US7702779B1 (en) * 2004-06-30 2010-04-20 Symantec Operating Corporation System and method for metering of application services in utility computing environments
US7930422B2 (en) * 2004-07-14 2011-04-19 International Business Machines Corporation Apparatus and method for supporting memory management in an offload of network protocol processing
CN101014691A (en) 2004-09-08 2007-08-08 宝洁公司 Laundry treatment compositions with improved odor
DE602005025900D1 (en) * 2004-09-30 2011-02-24 Boehringer Ingelheim Pharma ALKYNIL-BASED DERIVATIVES OF BENZOPHENONES AS NON-NUCLEOSIDE REVERSE TRANSCRIPTASE INHIBITORS
CA2586763C (en) * 2004-11-08 2013-12-17 Cluster Resources, Inc. System and method of providing system jobs within a compute environment
US7596618B2 (en) 2004-12-07 2009-09-29 Hewlett-Packard Development Company, L.P. Splitting a workload of a node
US7827435B2 (en) * 2005-02-15 2010-11-02 International Business Machines Corporation Method for using a priority queue to perform job scheduling on a cluster based on node rank and performance
US7698430B2 (en) * 2005-03-16 2010-04-13 Adaptive Computing Enterprises, Inc. On-demand compute environment
US8863143B2 (en) * 2006-03-16 2014-10-14 Adaptive Computing Enterprises, Inc. System and method for managing a hybrid compute environment
US9015324B2 (en) * 2005-03-16 2015-04-21 Adaptive Computing Enterprises, Inc. System and method of brokering cloud computing resources
EP3203374B1 (en) * 2005-04-07 2021-11-24 III Holdings 12, LLC On-demand access to compute resources
US7949766B2 (en) 2005-06-22 2011-05-24 Cisco Technology, Inc. Offload stack for network, block and file input and output
WO2007006146A1 (en) * 2005-07-12 2007-01-18 Advancedio Systems Inc. System and method of offloading protocol functions
US7899864B2 (en) * 2005-11-01 2011-03-01 Microsoft Corporation Multi-user terminal services accelerator
KR100789425B1 (en) * 2006-04-10 2007-12-28 삼성전자주식회사 Method for sharing contents using digital living network alliance network
US8510430B2 (en) * 2006-08-03 2013-08-13 International Business Machines Corporation Intelligent performance monitoring based on resource threshold
US7428629B2 (en) 2006-08-08 2008-09-23 International Business Machines Corporation Memory request / grant daemons in virtual nodes for moving subdivided local memory space from VN to VN in nodes of a massively parallel computer system
US20080065835A1 (en) * 2006-09-11 2008-03-13 Sun Microsystems, Inc. Offloading operations for maintaining data coherence across a plurality of nodes
US7707185B1 (en) * 2006-10-19 2010-04-27 Vmware, Inc. Accessing virtual data storage units to offload operations from a computer system hosting a virtual machine to an offload server
CN100579072C (en) * 2006-12-22 2010-01-06 华为技术有限公司 Method and system for communication between IP devices
US20080196043A1 (en) * 2007-02-08 2008-08-14 David Feinleib System and method for host and virtual machine administration
US7929418B2 (en) * 2007-03-23 2011-04-19 Hewlett-Packard Development Company, L.P. Data packet communication protocol offload method and system
CA2687530C (en) * 2007-05-17 2013-04-23 Fat Free Mobile Inc. Method and system for transcoding web pages by limiting selection through direction
US9727440B2 (en) 2007-06-22 2017-08-08 Red Hat, Inc. Automatic simulation of virtual machine performance
US9354960B2 (en) 2010-12-27 2016-05-31 Red Hat, Inc. Assigning virtual machines to business application service groups based on ranking of the virtual machines
US8488444B2 (en) 2007-07-03 2013-07-16 Cisco Technology, Inc. Fast remote failure notification
US7840703B2 (en) * 2007-08-27 2010-11-23 International Business Machines Corporation System and method for dynamically supporting indirect routing within a multi-tiered full-graph interconnect architecture
US20090063690A1 (en) * 2007-09-05 2009-03-05 Motorola, Inc. Continuing an application session using a different device from one that originally initiated the application session while preserving session state and data
US8041773B2 (en) * 2007-09-24 2011-10-18 The Research Foundation Of State University Of New York Automatic clustering for self-organizing grids
US7849223B2 (en) * 2007-12-07 2010-12-07 Microsoft Corporation Virtually synchronous Paxos
US7751401B2 (en) * 2008-06-30 2010-07-06 Oracle America, Inc. Method and apparatus to provide virtual toe interface with fail-over
US8166146B2 (en) * 2008-09-29 2012-04-24 International Business Machines Corporation Providing improved message handling performance in computer systems utilizing shared network devices
US8225074B2 (en) * 2008-10-02 2012-07-17 Nec Laboratories America, Inc. Methods and systems for managing computations on a hybrid computing platform including a parallel accelerator
CN102246489B (en) * 2008-10-08 2014-05-28 思杰系统有限公司 Systems and methods for connection management for asynchronous messaging over http
US8341262B2 (en) 2008-11-07 2012-12-25 Dell Products L.P. System and method for managing the offload type for offload protocol processing
US7970830B2 (en) * 2009-04-01 2011-06-28 Honeywell International Inc. Cloud computing for an industrial automation and manufacturing system
US8219676B2 (en) * 2009-06-22 2012-07-10 Citrix Systems, Inc. Systems and methods for web logging of trace data in a multi-core system
US8612439B2 (en) * 2009-06-30 2013-12-17 Commvault Systems, Inc. Performing data storage operations in a cloud storage environment, including searching, encryption and indexing
US8458324B2 (en) * 2009-08-25 2013-06-04 International Business Machines Corporation Dynamically balancing resources in a server farm
US9537957B2 (en) 2009-09-02 2017-01-03 Lenovo (Singapore) Pte. Ltd. Seamless application session reconstruction between devices
US20110103391A1 (en) 2009-10-30 2011-05-05 Smooth-Stone, Inc. C/O Barry Evans System and method for high-performance, low-power data center interconnect fabric
US9465771B2 (en) * 2009-09-24 2016-10-11 Iii Holdings 2, Llc Server on a chip and node cards comprising one or more of same
US10877695B2 (en) * 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9110860B2 (en) * 2009-11-11 2015-08-18 Mellanox Technologies Tlv Ltd. Topology-aware fabric-based offloading of collective functions
US9389895B2 (en) 2009-12-17 2016-07-12 Microsoft Technology Licensing, Llc Virtual storage target offload techniques
US8161494B2 (en) * 2009-12-21 2012-04-17 Unisys Corporation Method and system for offloading processing tasks to a foreign computing environment
US20110153953A1 (en) 2009-12-23 2011-06-23 Prakash Khemani Systems and methods for managing large cache services in a multi-core system
US8346935B2 (en) 2010-01-15 2013-01-01 Joyent, Inc. Managing hardware resources by sending messages amongst servers in a data center
US8826270B1 (en) * 2010-03-16 2014-09-02 Amazon Technologies, Inc. Regulating memory bandwidth via CPU scheduling
WO2011127055A1 (en) * 2010-04-05 2011-10-13 Huawei Technologies, Co. Ltd. Method for dynamic discovery of control plane resources and services
US8493851B2 (en) 2010-05-07 2013-07-23 Broadcom Corporation Method and system for offloading tunnel packet processing in cloud computing
US8285800B2 (en) * 2010-06-25 2012-10-09 Compuware Corporation Service model creation using monitored data of the performance management tool
CN107608755A (en) 2010-07-01 2018-01-19 纽戴纳公司 Split process between cluster by process type to optimize the use of cluster particular configuration
US8627135B2 (en) * 2010-08-14 2014-01-07 Teradata Us, Inc. Management of a distributed computing system through replication of write ahead logs
US8924560B2 (en) * 2010-11-29 2014-12-30 At&T Intellectual Property I, L.P. Optimized game server relocation environment
US8886742B2 (en) * 2011-01-28 2014-11-11 Level 3 Communications, Llc Content delivery network with deep caching infrastructure
JP5839032B2 (en) * 2011-02-24 2016-01-06 日本電気株式会社 Network system, controller, and flow control method
US8533720B2 (en) * 2011-02-25 2013-09-10 International Business Machines Corporation Offloading work from one type to another type of processor based on the count of each type of service call instructions in the work unit
US9450875B1 (en) * 2011-09-23 2016-09-20 Google Inc. Cooperative fault tolerance and load balancing
US20130086298A1 (en) * 2011-10-04 2013-04-04 International Business Machines Corporation Live Logical Partition Migration with Stateful Offload Connections Using Context Extraction and Insertion
US9135741B2 (en) * 2012-01-23 2015-09-15 Nec Laboratories America, Inc. Interference-driven resource management for GPU-based heterogeneous clusters
US8862727B2 (en) * 2012-05-14 2014-10-14 International Business Machines Corporation Problem determination and diagnosis in shared dynamic clouds
US20140165196A1 (en) * 2012-05-22 2014-06-12 Xockets IP, LLC Efficient packet handling, redirection, and inspection using offload processors
WO2013177310A2 (en) * 2012-05-22 2013-11-28 Xockets IP, LLC Offloading of computation for rack level servers and corresponding methods and systems
US11023088B2 (en) * 2012-06-18 2021-06-01 Hewlett-Packard Development Company, L.P. Composing the display of a virtualized web browser
WO2014000274A1 (en) * 2012-06-29 2014-01-03 Intel Corporation Methods and systems to identify and migrate threads among system nodes based on system performance metrics
US9135048B2 (en) 2012-09-20 2015-09-15 Amazon Technologies, Inc. Automated profiling of resource usage
US8764555B2 (en) * 2012-10-02 2014-07-01 Nextbit Systems Inc. Video game application state synchronization across multiple devices
GB2508161A (en) * 2012-11-21 2014-05-28 Ibm Monitoring applications executing on a virtual machine and allocating the required resources to the virtual machine.
US11132277B2 (en) * 2012-12-28 2021-09-28 Iii Holdings 2, Llc System and method for continuous low-overhead monitoring of distributed applications running on a cluster of data processing nodes
US9250954B2 (en) * 2013-01-17 2016-02-02 Xockets, Inc. Offload processor modules for connection to system memory, and corresponding methods and systems
US20140348182A1 (en) * 2013-05-22 2014-11-27 Iii Holdings 2, Llc Time synchronization between nodes of a switched interconnect fabric
US9459957B2 (en) * 2013-06-25 2016-10-04 Mellanox Technologies Ltd. Offloading node CPU in distributed redundant storage systems
US20150012679A1 (en) 2013-07-03 2015-01-08 Iii Holdings 2, Llc Implementing remote transaction functionalities between data processing nodes of a switched interconnect fabric
US9304896B2 (en) 2013-08-05 2016-04-05 Iii Holdings 2, Llc Remote memory ring buffers in a cluster of data processing nodes
US9262257B2 (en) * 2014-04-21 2016-02-16 Netapp, Inc. Providing boot data in a cluster network environment
US9417968B2 (en) * 2014-09-22 2016-08-16 Commvault Systems, Inc. Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations
US10257089B2 (en) * 2014-10-30 2019-04-09 At&T Intellectual Property I, L.P. Distributed customer premises equipment
US20160378570A1 (en) * 2015-06-25 2016-12-29 Igor Ljubuncic Techniques for Offloading Computational Tasks between Nodes
US10306808B2 (en) * 2015-10-28 2019-05-28 International Business Machines Corporation Rack housings having an adjustable air volume
US20220317692A1 (en) * 2022-06-23 2022-10-06 Intel Corporation Computational task offloading

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050039171A1 (en) * 2003-08-12 2005-02-17 Avakian Arra E. Using interceptors and out-of-band data to monitor the performance of Java 2 enterprise edition (J2EE) applications
US20050038808A1 (en) * 2003-08-15 2005-02-17 Kutch Patrick G. System and method for utilizing a modular operating system (OS) resident agent allowing an out-of-band server management
US20060250971A1 (en) * 2005-04-19 2006-11-09 Alcatel Context controlled data tap utilizing parallel logic for integrated link monitoring
US7610266B2 (en) * 2005-05-25 2009-10-27 International Business Machines Corporation Method for vertical integrated performance and environment monitoring
US20120167094A1 (en) * 2007-06-22 2012-06-28 Suit John M Performing predictive modeling of virtual machine relationships
US20150263913A1 (en) * 2007-12-20 2015-09-17 Amazon Technologies, Inc. Monitoring of services
US20120158925A1 (en) * 2010-12-17 2012-06-21 Microsoft Corporation Monitoring a model-based distributed application

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11960937B2 (en) 2004-03-13 2024-04-16 Iii Holdings 12, Llc System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter
US11656907B2 (en) 2004-11-08 2023-05-23 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11709709B2 (en) 2004-11-08 2023-07-25 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11762694B2 (en) 2004-11-08 2023-09-19 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11861404B2 (en) 2004-11-08 2024-01-02 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11886915B2 (en) 2004-11-08 2024-01-30 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11658916B2 (en) 2005-03-16 2023-05-23 Iii Holdings 12, Llc Simple integration of an on-demand compute environment
US11765101B2 (en) 2005-04-07 2023-09-19 Iii Holdings 12, Llc On-demand access to compute resources
US11831564B2 (en) 2005-04-07 2023-11-28 Iii Holdings 12, Llc On-demand access to compute resources
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US20220114070A1 (en) * 2012-12-28 2022-04-14 Iii Holdings 2, Llc System, Method and Computer Readable Medium for Offloaded Computation of Distributed Application Protocols within a Cluster of Data Processing Nodes

Also Published As

Publication number Publication date
US20220114070A1 (en) 2022-04-14
US20140189104A1 (en) 2014-07-03
US20140189039A1 (en) 2014-07-03
US20190286610A1 (en) 2019-09-19
US10311014B2 (en) 2019-06-04
US11188433B2 (en) 2021-11-30
US11132277B2 (en) 2021-09-28

Similar Documents

Publication Publication Date Title
US20220121545A1 (en) System and Method for Continuous Low-Overhead Monitoring of Distributed Applications Running on a Cluster of Data Processing Nodes
US11070452B1 (en) Network dashboard with multifaceted utilization visualizations
US10560309B1 (en) Identifying a root cause of alerts within virtualized computing environment monitoring system
EP4270190A1 (en) Monitoring and policy control of distributed data and control planes for virtual nodes
US9916232B2 (en) Methods and systems of distributed tracing
EP4254199A2 (en) Multi-cluster dashboard for distributed virtualization infrastructure element monitoring and policy control
US20140068134A1 (en) Data transmission apparatus, system, and method
US9703944B2 (en) Debug architecture
US10691576B1 (en) Multiple reset types in a system
US9218258B2 (en) Debug architecture
CN102929769B (en) Virtual machine internal-data acquisition method based on agency service
US20150127994A1 (en) Trace Data Export to Remote Memory Using Remotely Generated Reads
US20150377965A1 (en) Debug architecture
US11816052B2 (en) System, apparatus and method for communicating telemetry information via virtual bus encodings
US9612934B2 (en) Network processor with distributed trace buffers
Suo et al. vNetTracer: Efficient and programmable packet tracing in virtualized networks
Islam et al. Can parallel replication benefit Hadoop Distributed File System for high performance interconnects?
US20230195597A1 (en) Matchmaking-based enhanced debugging for microservices architectures
Stefanov et al. A review of supercomputer performance monitoring systems
JP5642725B2 (en) Performance analysis apparatus, performance analysis method, and performance analysis program
US10949321B1 (en) Operational management of a device
US10284435B2 (en) Method to visualize end user response time
US8788644B2 (en) Tracking data processing in an application carried out on a distributed computing system
US10713188B2 (en) Inter-process signaling system and method
Koop et al. Reducing network contention with mixed workloads on modern multicore clusters

Legal Events

Date Code Title Description
AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CALXEDA, INC.;REEL/FRAME:057302/0516

Effective date: 20140701

Owner name: CALXEDA, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DALTON, NIALL JOSEPH;ROBINSON, TREVOR;REEL/FRAME:057302/0457

Effective date: 20131217

Owner name: III HOLDINGS 2, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:057302/0630

Effective date: 20140630

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER