GB2514584A - Methods and apparatus for monitoring conditions prevailing in a distributed system - Google Patents


Info

Publication number
GB2514584A
GB2514584A GB1309604.5A GB201309604A GB2514584A GB 2514584 A GB2514584 A GB 2514584A GB 201309604 A GB201309604 A GB 201309604A GB 2514584 A GB2514584 A GB 2514584A
Authority
GB
United Kingdom
Prior art keywords
measurement
logging
measurement data
client application
services
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1309604.5A
Other versions
GB201309604D0 (en)
Inventor
Mark Patrick Henry Eastman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ADVANCED BUSINESS SOFTWARE AND SOLUTIONS Ltd
Original Assignee
ADVANCED BUSINESS SOFTWARE AND SOLUTIONS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ADVANCED BUSINESS SOFTWARE AND SOLUTIONS Ltd filed Critical ADVANCED BUSINESS SOFTWARE AND SOLUTIONS Ltd
Priority to GB1309604.5A priority Critical patent/GB2514584A/en
Publication of GB201309604D0 publication Critical patent/GB201309604D0/en
Priority to GB1409563.2A priority patent/GB2516357B/en
Publication of GB2514584A publication Critical patent/GB2514584A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466Performance evaluation by tracing or monitoring
    • G06F11/3476Data logging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466Performance evaluation by tracing or monitoring
    • G06F11/3495Performance evaluation by tracing or monitoring for systems

Abstract

Client application container 22-1 comprises application 24-1 accessed by client device 14-1. The container is on a different physical machine or a virtual machine in the same location compared to statistics server 16 where data are sent for persistence. Logging of performance measurements is started by application 24-1 calling local performance logger 30-1, which generates records comprising measurement results and metadata. Local logger 30-1 determines whether to store the result remotely on server 16 and passes results and metadata to asynchronous FIFO queue 33-1 via remote logger 32-1. Messages are then sent to primary logging environment 16 when sufficient resources are available without having a significant impact on execution of application 24-1 (e.g. when the application is not particularly busy). Message transfers are through a representational state transfer (REST) service (RS) uniform API. Queue 33-1 uses processing threads of lower priority than those of application 24-1. Undue overheads in persisting logs are avoided.

Description

Methods and apparatus for monitoring conditions prevailing in a distributed system The present invention relates to apparatus and associated methods for monitoring conditions prevailing in a distributed system. The invention has particular although not exclusive relevance to apparatus and associated methods for monitoring conditions prevailing in a client-server based computer system.
When implementing distributed systems, such as client-server based systems that implement a distributed application structure in which one or more servers provide one or more client machines with access to resources and/or services, there is often a need to provide mechanisms by which system performance and the like can be measured, monitored, analysed, viewed and recorded. Such mechanisms may be required, for example, to provide an early, or even advanced, indication of potential technical issues arising in the system such as latency in one or more distributed applications exceeding an acceptable level, a client machine or application monopolising resources to the detriment of performance elsewhere in the system, communication bottlenecks arising, a significant reduction in the ability of an end user to navigate an application efficiently and effectively, outright system failure, or the like. Thus, such mechanisms can beneficially allow appropriate corrective or preventative action to be taken promptly. Such mechanisms may also be required to allow the provider of a particular service or range of services, via the distributed system, to monitor system performance levels or the like against predetermined criteria such as acceptable latency levels, acceptable resource provision levels, acceptable resource usage levels, acceptable application navigation speeds or the like. These criteria may, for example, represent levels of performance agreed, in advance, with an end user and/or may represent levels of performance dictated by operational constraints such as communication bandwidths, resource availability, or the like.
However, the very act of measuring and monitoring system performance, and recording measurement data in a common location for analysis purposes, can add to the overall work of the application and therefore decrease the overall performance of the system because measuring and monitoring performance, and communicating the results for storing in a common location, requires system resources. In some cases this negative impact on performance can cause the results of a particular performance measurement to appear worse than they otherwise would and even to fail to meet a predetermined performance criterion that, in the absence of monitoring, would have been met.
Moreover, measuring and monitoring system performance can be particularly difficult in a distributed system in which a range of different distributed applications may be provided each of which may need performance information to be captured in a different way and/or each of which may be implemented using a different software platform/framework.
Accordingly, preferred embodiments of the present invention aim to provide methods and apparatus which overcome or at least alleviate one or more of the above issues.
In one aspect of the invention there is provided apparatus for monitoring conditions prevailing in a distributed system in which at least one client application is provided for access by a client device, the apparatus comprising: a client application environment in which the at least one client application and a measurement logging entity are provided; wherein the measurement logging entity comprises: an interface via which the measurement logging entity can receive, from each client application, measurement data representing a respective measure of performance for that client application; means for determining that said measurement data should be logged remotely from the local environment; and means for queuing, in a message queue, a message comprising said measurement data for sending to a primary logging environment for logging in a measurement database when said determining means determines that said measurement data should be logged remotely from the local environment; wherein said means for queuing is configured to send said message comprising said measurement data, from said message queue, to said primary logging environment at a time when sufficient resources are available to send said message without having a significant impact on execution of said at least one client application.
The measurement logging entity may comprise a first ('local') measurement logging part and a second ('remote') measurement logging part wherein: the first measurement logging part may comprise means for receiving measurement data via said interface, said means for determining that said measurement data should be logged remotely from the local environment, means for generating said message comprising said measurement data, and/or means for sending the generated message to the second measurement logging part; and the second measurement logging part may comprise means for receiving said generated message from said first measurement logging part, and/or said means for queuing said message.
The determining means may be configured for determining whether said measurement data should be logged remotely from the local environment or logged within the local environment.
The apparatus may comprise means for logging said measurement data locally when it is determined that said measurement data should be logged within the local environment.
The measurement logging entity may be configured to receive, from a client application, an indication that said measurement data should be logged locally and said determining means may be configured for determining that said measurement data should be logged, for that client application, within the local environment responsive to receipt of said indication that said measurement data should be logged locally.
The measurement logging entity may be configured to receive, from a client application, an indication that remote logging of said measurement data should be suspended and said determining means may be configured for determining that said measurement data, for that client application, should be logged within the local environment responsive to receipt of said indication that remote logging of said measurement data should be suspended.
The measurement logging entity may be configured to receive, from a client application, an indication that logging of said measurement data should cease and/or to disable logging of measurement data for that client application responsive to receipt of said indication that logging of said measurement data should cease.
The queuing means may be configured to send said message comprising said measurement data to said primary logging environment via an interface that may be independent of a software platform or framework used to provide said client application.
The interface that may be independent of a software platform or framework may be a uniform application programming interface (API).
The API may be a representational state transfer (REST) service (RS) API.
The at least one client application may comprise a plurality of client applications and the measurement logging entity may be configured to receive respective measurement data from each said client application.
The apparatus may further comprise a plurality of further client application environments, each further client application environment comprising a respective measurement logging entity.
The apparatus may comprise the primary logging environment. The primary logging environment may comprise means for receiving said message comprising measurement data from said queuing means and/or means for logging said measurement data accordingly.
The receiving means of said primary logging environment may be configured to receive a message comprising measurement data from the respective measurement logging entity of each of a plurality of client application environments.
The primary logging environment may further comprise a viewer entity for generating a visual display of stored measurement data.
The viewer entity may be configured to provide an alert when said measurement data indicates that a predetermined criterion has been, or is about to be, met.
The means for queuing may be configured to operate a processing thread having a lower priority than a processing thread that the at least one client application uses whereby said message comprising said measurement data may be sent to said primary logging environment at a time when sufficient resources are available to send said message without having a significant impact on execution of said at least one client application.
The means for queuing may comprise a scheduler that uses a background processing thread to process each message added to said message queue wherein said processing thread may have a lower scheduling priority than that of a general execution thread used by the at least one application whereby said message comprising said measurement data may be sent to said primary logging environment at a time when sufficient resources are available to send said message without having a significant impact on execution of said at least one client application.
The means for queuing may be configured to operate said message queue as a first in first out (FIFO) message queue.
The apparatus may be configured for use in a mobile execution environment.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting community health and/or social care services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting mobile health and/or social care services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services in a care recipient's home.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services in a residential care home.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services of an urgent and unplanned nature.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting mental health and/or social care services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting palliative, hospice or end of life health and/or social care services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services, for those with learning disabilities, in a school and/or care home.
The apparatus may be configured for monitoring conditions prevailing in a distributed system to track response times of external integrations.
The apparatus may be configured for monitoring conditions prevailing in a distributed system to track internal response times of subroutines and/or data retrieval.
The apparatus may be configured for monitoring conditions prevailing in a distributed system to track user decision making speed.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting employer services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system by means of a centralised statistics depository for performance measurements across a range of said employer services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for providing at least one of financial and accounting employer services, human resources employer services, payroll employer services, procurement employer services, document management employer services, supply chain management employer services, business analytics employer services, and business intelligence employer services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting employer services in the public service sector.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting employer services in the private sector.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting employer services in the not-for-profit or voluntary sector.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting managed services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting managed services comprising cloud computing services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting managed services comprising data centre services.
The apparatus may be configured for monitoring conditions prevailing in a distributed system for supporting electronic learning services.
According to one aspect of the present invention there is provided an application configured to operate as the client application of the apparatus of any preceding claim, the application comprising: means for configuring said client application to perform a measurement of performance for the client application; means for performing a measurement of performance configured by said configuring means; and means for passing at least one result of the measurement of performance performed by said measurement performing means, as at least part of said measurement data, to said measurement logging entity for logging.
The configuring means may be operable to configure at least one start point and at least one end point for said measurement of performance.
The at least one result may comprise an elapsed time beginning at said at least one start point and ending at said at least one end point.
The results passing means may be configured for passing said result with associated metadata relating to said measurement, as at least part of said measurement data, to said measurement logging entity for logging.
The metadata may comprise at least one of: information identifying the client application to which the measurement data relates; information identifying an operation or group of operations for which the measurement was performed; information identifying a time at which the measurement was performed; and/or information indicating an approximate magnitude of the operation or group of operations to which the measurement relates.
The rnetadata may comprise information for identifying a data type of said measurement data (e.g. string, numeric, date and/or time related).
According to one aspect of the present invention there is provided a method for monitoring conditions prevailing in a distributed system in which at least one client application is provided for access by a client device, the method comprising: a logging entity: receiving, via an interface, from a client application, measurement data representing a respective measure of performance for that client application; determining that said measurement data should be logged remotely from the local environment; and queuing, in a message queue, a message comprising said measurement data for sending to a primary logging environment for logging in a measurement database when said determining step determines that said measurement data should be logged remotely from the local environment; and sending said message comprising said measurement data, from said queue, to said primary logging environment at a time when sufficient resources are available to send said message without having a significant impact on execution of said at least one client application.
According to one aspect of the present invention there is provided a method performed by a client application configured to operate as part of the apparatus referred to in an earlier aspect, the method comprising: performing a measurement of performance in accordance with a measurement configuration; and passing at least one result of the measurement of performance, as at least part of said measurement data, to said measurement logging entity for logging.
According to one aspect of the present invention there is provided a computer program product comprising computer implementable instructions which, when executed on a computer processing apparatus, cause said computer processing apparatus to become configured as an apparatus as referred to earlier or as an application referred to earlier.
According to one aspect of the present invention there is provided a computer program product comprising computer implementable instructions which, when executed on a computer processing apparatus, cause said computer processing apparatus to perform a method referred to earlier.
Aspects of the invention extend to computer program products such as computer readable storage media having instructions stored thereon which are operable to program a programmable processor to carry out a method as described in the aspects and possibilities set out above or recited in the claims and/or to program a suitably adapted computer to provide the apparatus recited in any of the claims.
Each feature disclosed in this specification (which term includes the claims) and/or shown in the drawings may be incorporated in the invention independently of (or in combination with) any other disclosed and/or illustrated features. In particular but without limitation the features of any of the claims dependent from a particular independent claim may be introduced into that independent claim in any combination or individually.
Embodiments of the invention will now be described by way of example only with reference to the attached figures in which: Figure 1 schematically illustrates a distributed system; Figure 2 shows a simplified flow chart illustrating typical steps performed by the client application to record a result of a performance measurement; Figure 3 shows a simplified sequence diagram illustrating typical steps performed by various entities to transfer a result of a performance measurement; and Figure 4 shows a simplified sequence diagram illustrating typical steps performed by a statistics runner to queue and transfer a result of a performance measurement to a statistics server.
Overview Figure 1 schematically illustrates a distributed system 10 comprising a server environment 12 and a number of client entities 14-1, 14-2, 14-3.
The server environment 12 comprises a statistics server entity 16, a database server entity 18, a viewer entity 20, a number of distinct client application 'containers' 22-1, 22-2, 22-3, and a browser entity 23. Each client application container 22 comprises a respective execution environment in which an associated client application 24-1, 24-2, 24-3 may run, on behalf of a client entity 14. In the present example, each client application container 22 is implemented on a different physical machine to the statistics server entity 16. It will be appreciated, however, that each client application container 22 may be implemented on a virtual machine in a common location with the statistics server entity 16 (e.g. on a common physical machine).
Each client application 24, in this example, runs on a different respective software platform/framework. In this example, one client application 24-1 runs on a Java software platform, one client application 24-2 runs on a .Net software platform, and the other client application 24-3 runs on a different software platform. The other software platform may, for example, comprise an embedded database system where the execution logic is contained within stored procedures.
Each of the client applications 24 is also provided with a statistics service 26-1, 26-2, 26-3 which can be used by the client application 24 to record performance related measurement data. Each statistics service 26 comprises a respective local performance logger 30-1, 30-2, 30-3 for logging performance statistics locally and for isolating the calling client application 24 from the specifics of working with the remote logging process, and a remote performance logger 32-1, 32-2, 32-3 for logging performance statistics remotely via the statistics server entity 16. Accordingly, each statistics service 26 effectively provides a gateway to the statistics server entity 16 for any application deployed in the same container 22.
To aid each client application 24 to capture performance statistics with a minimum impact on wider system performance, a dedicated application programming interface (API) is provided between the client application 24 and the statistics service 26. This allows the client application 24 to log statistics 'virtually' without direct involvement in the actual delivery of the statistics to the statistics server entity 16, and hence without any undue overhead in the process of persisting the measurement data. In the case of the application 24-1 running on a Java platform, for example, the API may be provided by means of a small Java class and, as those skilled in the art will appreciate, the respective API for each other software platform may be provided in an appropriate manner for that software platform.
Beneficially, in order to measure the performance statistics, each client application 24 is configurable (and reconfigurable) with a number of 'checkpoints' for triggering the start and the stop of corresponding performance measurements. A client application 24 may, for example, have a 'start' checkpoint and an 'end' checkpoint configured at the respective start and end of a particular operation or set of operations for which performance statistics are required. In this example, therefore, a timer can be initiated at the start checkpoint and terminated at the end checkpoint thereby giving a measure of the time period taken for the operation or set of operations to complete.
The resulting time period represents measurement data which can then be logged appropriately. Similarly, a plurality of 'start' and 'end' checkpoint pairs can be configured for starting and stopping an individual timer in order to produce a measured cumulative time period representing a sum of the time periods between each respective pair of start and stop checkpoints. Similarly, a plurality of 'start' and 'end' checkpoint pairs can be configured, each pair configured for starting and stopping a different respective timer to produce a corresponding measured time period.
The ability to configure (and reconfigure) the start and end checkpoints may be provided using any suitable means, for example by means of a class library that could be added to an application so that the application can start and stop the relevant checkpoints that need to be measured.
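By way of illustration only, a checkpoint mechanism of this kind might be sketched in Java as follows; the class and method names here are assumptions for illustration, not taken from the patent:

```java
// Illustrative sketch of a checkpoint timer (names are assumptions).
import java.util.HashMap;
import java.util.Map;

class Checkpoint {
    private final Map<String, Long> startTimes = new HashMap<>();
    private final Map<String, Long> elapsed = new HashMap<>();

    // Record a 'start' checkpoint for the named operation.
    void start(String operation) {
        startTimes.put(operation, System.nanoTime());
    }

    // Record an 'end' checkpoint and return the cumulative elapsed time
    // (in nanoseconds), so that repeated start/end pairs sum together.
    long end(String operation) {
        long delta = System.nanoTime() - startTimes.get(operation);
        elapsed.merge(operation, delta, Long::sum);
        return elapsed.get(operation);
    }
}
```

A client application would call start() and end() around the operation of interest; repeated pairs against the same operation name accumulate into a cumulative elapsed time, as described above.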
Measurement data is typically logged with associated metadata providing additional information about the measurement. The metadata typically comprises, for example: information identifying the client application to which the measurement data relates; information identifying the operation or group of operations for which the measurement was performed; information identifying the time at which the measurement was performed; information indicating an approximate magnitude of the operation or group of operations to which the measurement relates (e.g. against which to correlate a measured time period); and/or other such data.
In the case of the information indicating an approximate magnitude of the operation or group of operations this may, for example, be representative of the amount of data/information that requires processing in a particular operation or groups of operations (e.g. the number of rows of information that need processing), the number of separate operations in a set of one or more operations and/or the like. By way of illustration, if a measured time period is used to provide an indication of the length of time it is taking to process a particular data set of variable length, then metadata indicating an approximate magnitude of the operation can be used to provide an indication of the length of the data set thereby allowing this to be taken into account during analysis of a number of measured time periods for processing data sets of different lengths.
The metadata may also be 'self-describing' of the data to which it relates, comprising, for example, a definition of the type of data to which it relates to allow the storage of any type of statistical data, be it numeric, string, date or time related.
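As an illustrative sketch only (the field names are assumptions, not taken from the patent), a measurement record carrying such self-describing metadata might look like:

```java
// Illustrative measurement record; field names are assumptions.
import java.time.Instant;

class MeasurementRecord {
    final String applicationId; // which client application the data relates to
    final String operation;     // the operation or group of operations measured
    final Instant timestamp;    // when the measurement was performed
    final long magnitude;       // approximate size, e.g. rows processed
    final String dataType;      // self-describing type: "numeric", "string", "date", "time"
    final String value;         // the measurement result itself

    MeasurementRecord(String applicationId, String operation,
                      long magnitude, String dataType, String value) {
        this.applicationId = applicationId;
        this.operation = operation;
        this.timestamp = Instant.now();
        this.magnitude = magnitude;
        this.dataType = dataType;
        this.value = value;
    }
}
```

The dataType field is what allows the server side to store and later interpret any type of statistical data, as the passage above describes.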
The performance statistics measured by the application can, thus, reflect any performance measure that the application is configured to log and any number of additional attributes can be included with the measurement data that represents the performance statistic.
Each local performance logger 30 is able to record the performance related measurement data locally (e.g. in a plain text file using appropriate delimiters such as commas or tabs (or 'flat file')) or to send it to the remote performance logger 32 for remote persistence via the database server entity 18. The location (local or remote) to which the performance measurements are sent is configured externally to the client application 24 and can be flipped during execution if required.
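A minimal sketch of such delimiter-based local logging, assuming hypothetical class and method names:

```java
// Illustrative sketch of comma-delimited local logging (names assumed).
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class LocalFlatFileLogger {
    // Join the fields of one measurement into a comma-delimited line.
    static String toCsvLine(String appId, String operation, long elapsedMillis) {
        return String.join(",", appId, operation, Long.toString(elapsedMillis));
    }

    // Append a line to the flat file, creating the file if necessary.
    static void append(Path file, String line) throws IOException {
        Files.writeString(file, line + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```

Because the delimiter is fixed, the resulting flat file can be re-parsed later or imported into the central database if remote logging was temporarily unavailable.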
The performance loggers 30, 32 of each statistics service 26 are arranged to receive requests to log measurement data from the associated application 24 and to place the request into a respective asynchronous pooled queue 33-1, 33-2, 33-3 so that the client application 24, for which performance is being measured, can continue execution without any further overhead associated with capturing the statistic.
Client applications 24 are also provided with the ability to instruct the local performance logger 30 to initiate a suspension of sending measurement result messages to the statistics server entity 16 during which the measurement result messages are simply queued up for sending later. At some later point the client application 24 can instruct the release of the queued measurement result messages thereby allowing them to be sent to the server. This facility can be used beneficially when a client application 24 requires as little impact on performance as possible.
Client applications 24 are also provided with the ability to turn off measurement logging capability completely for that application. Client applications 24 are further provided with the ability to instruct the local performance logger 30 to log some or all measurement results to a local file system without queuing them for sending to the statistics server entity 16 or to log some or all measurement results to the statistics server entity 16 via the remote performance logger 32.
It can be seen, therefore, that this represents an asynchronous messaging system that can take the measured performance statistics from the application 24 and add them to a separate queue 33. It is then this queue 33 that is responsible for sending the measured performance statistics data to the statistics server entity 16 for persistence at a time when the act of sending the data will not impact the execution of the application (or when any such impact will be minimised).
Beneficially, the remote performance logger 32 is operable to send measurement data from the asynchronous queue 33 at times when the transfer of the measurement data will have a minimal impact on the performance of the wider system (e.g. when the client application to which the measurement data relates is not particularly busy).
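The queuing arrangement described above might be sketched in Java as follows. This is an illustrative sketch only: the class names, and the use of a single minimum-priority daemon thread to drain the queue, are assumptions for illustration, not the patented implementation.

```java
// Illustrative sketch of the asynchronous FIFO queue; the single
// minimum-priority daemon thread is an assumption for illustration.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

class StatisticsQueue {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(); // FIFO order

    StatisticsQueue(Consumer<String> sender) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    sender.accept(queue.take()); // blocks while the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setPriority(Thread.MIN_PRIORITY); // below the application's threads
        worker.setDaemon(true);                  // never blocks JVM shutdown
        worker.start();
    }

    // Called by the local logger; returns immediately so the client
    // application continues execution with no further overhead.
    void enqueue(String message) {
        queue.offer(message);
    }
}
```

Because the worker thread runs at a lower scheduling priority than the application's general execution threads, messages tend to be transferred when the application is otherwise idle, which is the effect the passage above describes.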
The statistics server entity 16 comprises a service logger 34 for receiving measurement data, from each remote performance logger 32, for persistence to the database server entity 18. Advantageously, despite the differences in the different respective software platforms/frameworks on which the client applications 24 are implemented, the distributed system 10 uses a uniform API between each remote performance logger 32 and the service logger 34 on the statistics server entity 16. In this example, the uniform API comprises a representational state transfer (REST) service (RS) API which allows connection from applications using any language and application stack, via the hypertext transfer protocol (http), using appropriate request messages.
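By way of illustration only, a record sent over such a uniform http API might be marshalled into a simple JSON body before being posted to the service logger. The sketch below shows one possible marshalling step; the field names and endpoint path mentioned in the comments are hypothetical, not taken from the described system.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: one way a measurement record might be
// marshalled for a REST (http) logging API. Field names and the
// endpoint path are hypothetical assumptions for this example.
public class RestPayloadSketch {

    // Render the record as a minimal JSON object. The simple string and
    // numeric values used here require no escaping.
    public static String toJson(Map<String, Object> record) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> e : record.entrySet()) {
            if (!first) sb.append(",");
            first = false;
            sb.append("\"").append(e.getKey()).append("\":");
            Object v = e.getValue();
            if (v instanceof Number) sb.append(v);
            else sb.append("\"").append(v).append("\"");
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("application", "payroll-web");        // hypothetical app name
        record.put("function", "renderSummaryScreen");   // hypothetical unit of work
        record.put("elapsedMillis", 142);
        record.put("rowCount", 1000);
        System.out.println(toJson(record));
        // A real client would POST this body, over http, to the service
        // logger's REST endpoint (e.g. a hypothetical /statistics path).
    }
}
```

Because the body is plain http, any language or application stack that can issue an http request can participate, which is the point of the uniform API described above.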
The service logger 34 is operable to log measurement data received from the remote performance loggers 32 to the database server 18. The service logger 34 also provides a number of enquiry functions to allow remote or local applications to interrogate the metadata for definitions associated with the stored measurement data.
The database server 18 comprises a relational database in which the measurement data and associated metadata is stored. The database server 18, in this example, utilises the so-called 'Hibernate' framework, although it will be appreciated that any suitable framework may be used to provide mapping to the relational database.
In this embodiment, the viewer entity 20 comprises a separate standalone application, running in its own execution environment, which provides a viewer for the statistics held in the relational database of the database server 18. The viewer entity is configured to allow an authorised user, after logging into the system, to review the current statistics, view appropriate statistical graphs, view trends, analyse the data, perform comparisons or the like. Any metadata stored with the measurement data can be extracted and used, for the purposes of reviewing the statistics, to inform an authorised user of particular pertinent information relating to a measurement and/or to perform secondary analysis/manipulation on the measurement data (e.g. to present measured data on a graph against the time at which the data was collected, to normalise measured time periods against data set size, etc.).
The viewer entity 20, in this embodiment, is connected to the browser entity 23 via which the user can access the viewer entity 20 by means of a web browser or any other suitable viewer. This viewer entity 20 allows an authorised administrator to maintain the configuration of the system and monitor the overall statistics being logged into the database by means of the browser entity 23. In this example, the service logger 34 is shown as being deployed, as part of the statistics server entity 16, in conjunction with the viewer entity 20. It will be appreciated, however, that the viewer entity 20 and statistic server entity 16 may be deployed in isolation from one another.
It can be seen, therefore, that the proposed methods and apparatus for monitoring conditions prevailing in a distributed system provide a flexible way of capturing and monitoring performance statistics relating to the execution of various tasks by distributed client applications, thereby aiding in the tracking of system performance measurements against appropriate criteria (e.g. criteria agreed in a service level agreement) with minimal impact on the operation of the underlying client applications and hence on the operation of the wider system.
For example, having a central statistics server entity 16, and database server 18, helps to ensure that when there is a need to analyse statistics the resulting measurement results are all held in a common location even if individual components doing the logging are deployed to multiple disparate systems (possibly at different geographic locations).
Of particular benefit is the asynchronous queuing mechanism at the client application side that helps to ensure that the measuring of a particular performance statistic and the logging of that performance statistic have as small an impact on the performance of that client application as possible.
The ability to provide self-describing metadata together with the statistics to which the metadata relates is particularly beneficial because it provides additional flexibility for suitable measurements to be defined, at the client application side, without significant reconfiguring at the statistics server entity side. The ability to provide self-describing metadata together with the statistics also allows additional information to be stored that allows improved comparison of one set of measurement data with another. For instance, storing information indicating an approximate magnitude of the operation or group of operations to which a particular measured time period relates allows a better comparison of the measured time period with other measured time periods for an operation, or group of operations, with a different magnitude.
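The kind of secondary analysis described above can be illustrated with a minimal sketch: normalising a measured elapsed time against the magnitude recorded in the metadata (here a row count) yields a size-independent figure that can be compared across measurements of different magnitudes. The class and method names below are illustrative only.

```java
// Illustrative sketch: normalising measured elapsed times against the
// magnitude metadata (e.g. a row count) so that measurements taken over
// data sets of different sizes can be compared. Names are hypothetical.
public class StatNormaliser {

    // Milliseconds per processed row: a size-independent figure of merit.
    public static double perRowMillis(long elapsedMillis, long rowCount) {
        if (rowCount <= 0) {
            throw new IllegalArgumentException("rowCount must be positive");
        }
        return (double) elapsedMillis / rowCount;
    }

    public static void main(String[] args) {
        // 500 ms over 1000 rows vs 900 ms over 3000 rows: the second run
        // is slower in absolute terms but faster per row.
        System.out.println(perRowMillis(500, 1000));   // 0.5 ms per row
        System.out.println(perRowMillis(900, 3000));   // 0.3 ms per row
    }
}
```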
Measurement and Recordal Procedure - Client Application

A procedure employed by a client application 24 to log a particular performance measurement result will now be described, by way of example only, with reference to Figure 2 which shows a simplified flow chart illustrating typical steps performed by the client application 24 to record the result of the performance measurement.
In this example, the client application is a web application that is configured to store server side timings for responding to key screens. The client application 24 is a servlet based application that responds to browser interactions by processing the normal http 'get' and 'post' commands.
In this example the client application is configured to store server side timings for a number of the key screens that have been defined as a measurable criterion for assessing client application / system performance.
When a measurement is to be performed the client application 24 first creates an instance of the local performance logger 30 by instigating an associated call at S210.
This effectively extends the basic underlying statistics measuring capability and supports this underlying capability by providing a shortcut for the client application 24 to 'store' measured performance data (e.g. elapsed time statistics) locally by passing captured measurement data to the local performance logger 30. It is this local performance logger 30, rather than the client application itself, that determines how best to log the performance statistics.
After creating the instance of the performance logging statistics service 26, the client application 24 identifies the current time at S212 immediately before starting the unit of work (i.e. the operation or group of operations) which is to be timed at S214. Once the operation or group of operations which is to be timed is completed, the client application 24 once again identifies the current time at S216.
At S218, the client application calls the local performance logger 30 in order to initiate generation, locally, of an appropriate statistic record for the unit of work that has just been completed including any associated metadata as required for that measurement. The statistics record includes, in this example: information identifying the client application 24 to which the measurement data relates; information identifying the function/unit of work being performed; information indicating an approximate magnitude of the operation or group of operations to which the measurement relates (such as a measure of the size of the dataset being processed, for example a measure of the number of bytes, rows, columns, pages, or the like that require processing); information identifying the time at which the measurement was performed (e.g. in the form of a timestamp or the like); and/or any other such metadata that the client application has been configured to log for the specific measurement being carried out and/or the specific client application 24 performing the measurement.
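The client-side steps described above (S212-S218) can be sketched as follows: the application captures the time either side of the unit of work and hands the elapsed time, together with the metadata of the statistic record, to a logger. The logger interface and all names here are hypothetical stand-ins for the local performance logger 30.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the client-side measurement steps (S212-S218):
// capture the time either side of a unit of work and pass the elapsed
// time, plus metadata, to a logger. The PerformanceLogger interface is
// a hypothetical stand-in for the local performance logger 30.
public class MeasurementSketch {

    public interface PerformanceLogger {
        void log(Map<String, Object> statisticRecord);
    }

    public static Map<String, Object> buildRecord(String application,
            String function, long startMillis, long endMillis, long rowCount) {
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("application", application);     // which client application
        record.put("function", function);           // the unit of work measured
        record.put("elapsedMillis", endMillis - startMillis);
        record.put("rowCount", rowCount);           // approximate magnitude
        record.put("measuredAt", startMillis);      // timestamp metadata
        return record;
    }

    public static void main(String[] args) {
        PerformanceLogger logger = record -> System.out.println(record);
        long start = System.currentTimeMillis();    // S212
        // ... the unit of work being timed runs here (S214) ...
        long end = System.currentTimeMillis();      // S216
        logger.log(buildRecord("payroll-web", "renderSummaryScreen",
                start, end, 1000));                 // S218
    }
}
```

As the description notes, the application's only responsibility ends at the `log` call; how and where the record is persisted is a matter for the logger.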
As far as the client application 24 is concerned this is all that is necessary to measure a performance statistic and persist the results, via the statistics server entity 16, at the database server entity 18. The location and configuration of the statistics server entity 16 are externalised from the client application 24 itself.
Transfer of Measurement Data - Client Application / Statistics Service

A procedure for transferring measurement data from the client application 24 to the asynchronous queue 33, for logging a particular performance measurement result, will now be described in more detail, by way of example only, with reference to Figure 3, which shows a simplified sequence diagram illustrating typical steps performed by the various entities to transfer the result of the performance measurement.
The procedure starts, at S310, when the client application 24 initiates the logging of a measurement result by calling the local performance logger 30 (e.g. at S218 in Figure 2). The local performance logger 30 generates a measurement record comprising the measurement result and associated metadata and decides whether to store the information locally (e.g. in a flat file) or to send it to the remote performance logger 32 for remote persistence via the statistics server entity 16 and the database server entity 18.
The performance logger 30 determines whether the results should be stored locally or remotely. If, as shown in Figure 3, the measurements are to be stored remotely then a measurement result message comprising the measurement result(s) and any metadata is passed to the remote performance logger 32 at S312, where, at S314, the measurement result message is passed to a so-called 'statistics runner', which comprises the measurement queue 33 onto which the measurement result message is placed until it is sent to the statistics server entity 16.
The statistics runner is, in effect, a separate scheduler (or 'scheduler class') that uses a single background processing thread to perform the processing of each measurement result message added to it. It queues the incoming messages and processes them sequentially in a first in first out (FIFO) order. Advantageously, the statistics runner uses a processing thread that has deliberately had its scheduling priority lowered below that of general execution threads (e.g. those used by the client application 24) to help ensure that it yields to higher priority processing threads. The operating system will thus schedule this thread for execution when higher priority threads are not processing data and when the transmission of a measurement result message will therefore have minimal impact on performance.
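The scheduler pattern described above can be sketched in a few lines: a single background thread, deliberately set below normal priority, drains a FIFO queue of measurement result messages while the calling application returns immediately from the enqueue. The send step is simulated with a println; a real runner would transmit to the statistics server entity 16.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of the 'statistics runner' pattern: one low-priority
// background thread drains a FIFO queue of measurement result messages.
// The actual transmission to the statistics server is simulated here.
public class StatisticsRunnerSketch {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Thread worker;

    public StatisticsRunnerSketch() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    String message = queue.take();    // FIFO order
                    System.out.println("sending: " + message);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // allow shutdown
            }
        }, "statistics-runner");
        worker.setDaemon(true);                       // never blocks JVM exit
        worker.setPriority(Thread.MIN_PRIORITY);      // yield to app threads
        worker.start();
    }

    // Called by the client application: enqueue and return immediately.
    public void log(String message) {
        queue.offer(message);
    }

    // Number of messages still awaiting transmission.
    public int pending() {
        return queue.size();
    }

    public static void main(String[] args) throws InterruptedException {
        StatisticsRunnerSketch runner = new StatisticsRunnerSketch();
        runner.log("elapsed=142ms function=renderSummaryScreen");
        Thread.sleep(200);                            // let the worker drain
    }
}
```

Lowering the worker's priority is what lets the operating system schedule the transmission only when the application's own threads are idle, which is the behaviour the description attributes to the statistics runner.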
Statistics Runner

The statistics runner 33 will now be described in more detail, by way of example only, with reference to Figure 4, which shows a simplified sequence diagram illustrating typical steps performed by the statistics runner 33 to queue and transfer the result of the performance measurement to the statistics server 16.
The statistics runner 33 creates a performance measurement storage object (termed 'StatisticsVO') 40 to hold details of the performance measurement (S410). The statistics runner 33 then populates this object 40 with at least some of the key attributes which apply to any performance measurement, including, for example, a group name (in the example set to 'PERF') (at S412) and a type name (in the example set to 'SUMMARY') (at S414) identifying a type of measurement to which the measurement data relates. A map object (termed 'HashMap') 42 is then created to hold all specific attributes associated with that measurement type (e.g. the metadata associated with the measurement), such as: information identifying the client application to which the measurement data relates (e.g. at S416); information identifying the function/unit of work being performed (e.g. at S418); information indicating an approximate magnitude of the operation or group of operations to which the measurement relates (e.g. a row count as illustrated at S420); information indicating the measured elapsed time (e.g. S421); and information identifying the time at which the measurement was performed (e.g. at S422). The specific attributes associated with that measurement type are held along with additional details associated with the captured measurement data (at S424). The map object 42 is then passed to the performance measurement storage object 40 (at S426).
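The Figure 4 steps can be mirrored in a short sketch: a storage object holds the attributes common to every measurement (group name, type name), while a 'HashMap' carries the attributes specific to the measurement type. The field and method names below are illustrative assumptions, not the actual 'StatisticsVO' definition.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the Figure 4 steps: a storage object (standing
// in for 'StatisticsVO') holds attributes common to every measurement,
// and a 'HashMap' holds the measurement-type-specific attributes.
public class StatisticsVO {

    private String groupName;                // e.g. "PERF"     (S412)
    private String typeName;                 // e.g. "SUMMARY"  (S414)
    private Map<String, Object> attributes;  // type-specific   (S426)

    public void setGroupName(String g) { this.groupName = g; }
    public void setTypeName(String t) { this.typeName = t; }
    public void setAttributes(Map<String, Object> a) { this.attributes = a; }
    public String getGroupName() { return groupName; }
    public String getTypeName() { return typeName; }
    public Map<String, Object> getAttributes() { return attributes; }

    public static void main(String[] args) {
        StatisticsVO vo = new StatisticsVO();            // S410
        vo.setGroupName("PERF");                         // S412
        vo.setTypeName("SUMMARY");                       // S414
        Map<String, Object> attrs = new HashMap<>();
        attrs.put("application", "payroll-web");         // S416 (hypothetical)
        attrs.put("function", "renderSummaryScreen");    // S418 (hypothetical)
        attrs.put("rowCount", 1000L);                    // S420
        attrs.put("elapsedMillis", 142L);                // S421
        attrs.put("measuredAt", System.currentTimeMillis()); // S422
        vo.setAttributes(attrs);                         // S426
        System.out.println(vo.getGroupName() + "/" + vo.getTypeName()
                + " " + vo.getAttributes());
    }
}
```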
The statistics runner 33 is now ready to send these statistics across the associated API to the service logger 34 for persistence into the database of the database server 18.
To perform this action the statistics runner 33 gets a client connection 44 (referred to as 'Statistics Client' in this example) to the statistics server entity 16 to instantiate (at S428), by means of an associated factory class (in this example a REST service client factory class), a helper entity 46 (or 'class', referred to as 'StatisticsRs' in this example) that will facilitate performance of the transfer service. This instance of the helper class 46 is then used to send the performance measurement storage object across the associated API to the service logger 34, via the http protocol, for persistence into the database of the database server 18 using an insert statistic service (at S430). The helper entity 46 is essentially responsible for marshalling all the data in the performance measurement storage object 40 into the necessary http protocol format ready for transmission to the server 16. If the system detects errors during this process the system will attempt to retry and send the request again. After a number of attempts (e.g. three or any other appropriate number) the system will pause sending the messages to the service logger 34 for a predetermined length of time to help ensure that the processing thread used by the statistics runner does not waste resources attempting to send measurement reports when the server is probably not contactable.
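The retry behaviour described above can be sketched as a bounded retry loop whose failure result tells the caller to pause deliveries for a period rather than keep burning the runner's processing thread. The `Sender` interface is hypothetical; a real sender would perform the http transfer to the service logger 34.

```java
// Illustrative sketch of the retry behaviour: attempt a send a fixed
// number of times and report whether it succeeded, so the caller can
// pause deliveries when the server appears to be uncontactable.
// The Sender interface is a hypothetical stand-in for the http transfer.
public class RetrySketch {

    public interface Sender {
        boolean trySend(String message);   // true on success
    }

    // Returns true if any of maxAttempts succeeded.
    public static boolean sendWithRetry(Sender sender, String message,
            int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (sender.trySend(message)) {
                return true;
            }
        }
        // On false, the caller would pause the queue for a predetermined
        // length of time before trying further messages.
        return false;
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        // A sender that fails twice and then succeeds.
        Sender flaky = m -> ++calls[0] >= 3;
        System.out.println(sendWithRetry(flaky, "stat", 3));   // true
        System.out.println("attempts: " + calls[0]);           // 3
    }
}
```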
At the statistics server entity end, the system waits for incoming requests to log measurement results and will process the insert statistic service by inserting the necessary records into the database of the database server 18. In this example, an Object Relational Mapping library called Hibernate is used to map the performance measurement object 40 into the necessary database table records.
Modifications and alternatives

In the above embodiments, a number of software modules were described.
It will be appreciated that whilst the distributed system is described as comprising a number of distinct entities that may be distributed geographically, all elements of the distributed system may be implemented in a single apparatus, for example having a number of autonomous processes corresponding to the different entities that interact with one another by means of message passing. Similarly the client entities may be located on one or more client machines that are separate to a server machine on which the various server entities are provided.
It will be appreciated that the service 26 may be shared between multiple client applications 24 residing or executing within the same container 22. This means that if multiple client applications 24 are deployed to a single container 22, then they all share the same single instance of the service 26 thereby providing additional benefits in terms of minimising the overall impact of the performance logging on the wider system.
It will be appreciated that a plurality of service loggers 34 may be provided on different virtual or physical machines each of which writes to the same database server 18. Generally there will only be one viewer entity 20 even if a plurality of loggers 34 are deployed (although a plurality of viewers is not precluded). Providing a plurality of deployments of the service logger 34 can beneficially be used to provide greater load balancing and concurrency of updates to the database server 18.
It will be appreciated that the viewer entity or other similar entity may be configured to provide automated (e.g. real-time) alerts in dependence on the statistics being logged in the relational database. An alert may be issued when the data being accumulated in the database indicates that a particular system performance related issue has arisen or is about to arise. For example, an alert may be issued when the performance data being logged indicates that latency (or a similar parameter) in the system has exceeded -or is approaching -a predefined trigger level. The system may also be configured to make an automated response when the data being accumulated in the database indicates that a particular system performance related issue has arisen or is about to arise. The response may, for example, include taking preventative or corrective action such as providing more resources to an application that appears to be (or is about to be) experiencing a performance related issue and/or removing resources from (or shutting down) lower priority tasks or applications.
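A minimal sketch of the alerting idea described above: compare a logged latency figure against a predefined trigger level, with a lower 'approaching' level derived from it. The levels, fraction, and method names are hypothetical.

```java
// Illustrative sketch of the alerting idea: classify a logged latency
// against a predefined trigger level and a lower 'approaching' level.
// The trigger value and warn fraction are hypothetical examples.
public class LatencyAlertSketch {

    public enum Status { OK, APPROACHING, EXCEEDED }

    public static Status classify(double latencyMillis,
            double triggerMillis, double warnFraction) {
        if (latencyMillis >= triggerMillis) return Status.EXCEEDED;
        if (latencyMillis >= triggerMillis * warnFraction) {
            return Status.APPROACHING;
        }
        return Status.OK;
    }

    public static void main(String[] args) {
        // Trigger at 500 ms, warn from 80% of the trigger level.
        System.out.println(classify(120, 500, 0.8));   // OK
        System.out.println(classify(450, 500, 0.8));   // APPROACHING
        System.out.println(classify(600, 500, 0.8));   // EXCEEDED
    }
}
```

An `EXCEEDED` result would raise the alert; an `APPROACHING` result gives the early indication that allows preventative action of the kind described above.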
It will be appreciated that the service logger component may be enhanced to additionally support other messaging technologies such as Extensible Markup Language ('XML') Simple Object Access Protocol ('SOAP') messages, the Java Message Service ('JMS'), the Extensible Messaging and Presence Protocol ('XMPP'), etc. This will allow greater flexibility as it allows a developer to choose the most appropriate transport to exchange the performance statistic details.
It will be appreciated that techniques may be provided for adding statistical values derived from the execution of the code without actually modifying the source code of an application. One such example in the Java world is to use cross-cutting Aspect Oriented techniques to allow external definitions to be 'injected' into the executing code.
A store and forward mechanism could be provided to speed up the client application even further. This may comprise storing a local cache of the statistics on the client prior to delivery of them to the statistics server. Another store and forward approach may be to utilise a distributed database, such as a NoSQL database that provides eventual consistency. This means the database is responsible for the exchange and eventual update of a server database ready for analysis via the viewer 20.
The set of language stacks that sit upon the uniform (e.g. REST) API may be expanded so that developers within that stack can more readily utilise the statistic gathering functionality.
The browser based statistics viewer may be adapted to incorporate alerts and alarms so that an early indication can be provided as to potential problems occurring within running applications or the wider system.
It will be appreciated that the principles and concepts disclosed herein may be extended to a mobile execution environment such that mobile applications can centrally capture and store mobile performance statistics with minimal impact on the mobile application to which the performance statistics relate and hence any mobile device on which the mobile application is provided.
As those skilled in the art will appreciate, the software modules may be provided in compiled or un-compiled form and may be supplied to the different devices (e.g. client machines and/or server machines) as a signal over a computer network, or on a recording medium. Further, the functionality performed by part or all of this software may be performed using one or more dedicated hardware circuits.
As those skilled in the art will appreciate, the apparatus and methods disclosed herein have many different applications to provide technical benefits in any of a number of distinct fields.
The apparatus and/or methods disclosed herein may, for example, be provided for monitoring conditions prevailing in a distributed system for supporting health and/or social care services. Such services may comprise, for example, at least one of: community health and/or social care services; mobile health and/or social care services; health and/or social care services in a care recipient's home; health and/or social care services in a residential care home; health and/or social care services of an urgent and unplanned nature (e.g. accident and emergency and/or telephone services such as 111 or the like); mental health and/or social care services; palliative, hospice or end of life health and/or social care services; and health and/or social care services for those with learning disabilities (e.g. in a school and/or care home).

The apparatus and/or methods disclosed herein may, for example, be provided for monitoring conditions prevailing in a distributed system to: track response times of external integrations (e.g. integrations with external systems provided by other parties or by other groups or departments within the same organisation); track internal response times of subroutines and/or data retrieval; and/or track user decision making speed.
The apparatus and/or methods disclosed herein may, for example, be provided for monitoring conditions prevailing in a distributed system for supporting employer services (e.g. business services).
The apparatus and/or methods disclosed herein may, for example, be provided for monitoring conditions prevailing in a distributed system by means of a centralised statistics depository for performance measurements across a range of employer services.
The employer services may, for example, comprise any of financial and accounting employer services, human resources employer services, payroll employer services, procurement employer services, document management employer services, supply chain management employer services, business analytics employer services, and/or business intelligence employer services.
The apparatus and/or methods disclosed herein may, for example, be provided for monitoring conditions prevailing in a distributed system for supporting employer services in any of a number of different sectors, for example: the public service sector; the private sector; and/or the not-for-profit or voluntary sector.
The apparatus and/or methods disclosed herein may, for example, be provided for monitoring conditions prevailing in a distributed system for supporting managed services. The managed services may, for example, comprise: cloud computing services and/or data centre services.
The apparatus and/or methods disclosed herein may, for example, be provided for monitoring conditions prevailing in a distributed system for supporting electronic learning services.
Various other modifications will be apparent to those skilled in the art and will not be described in further detail here.

Claims (52)

  1. Apparatus for monitoring conditions prevailing in a distributed system in which at least one client application is provided for access by a client device, the apparatus comprising: a client application environment in which the at least one client application and a measurement logging entity are provided; wherein the measurement logging entity comprises: an interface via which the measurement logging entity can receive, from each client application, measurement data representing a respective measure of performance for that client application; means for determining that said measurement data should be logged remotely from the local environment; and means for queuing, in a message queue, a message comprising said measurement data for sending to a primary logging environment for logging in a measurement database when said determining means determines that said measurement data should be logged remotely from the local environment; wherein said means for queuing is configured to send said message comprising said measurement data, from said message queue, to said primary logging environment at a time when sufficient resources are available to send said message without having a significant impact on execution of said at least one client application.
  2. Apparatus as claimed in claim 1 wherein the measurement logging entity comprises a first ('local') measurement logging part and a second ('remote') measurement logging part wherein: the first measurement logging part comprises means for receiving measurement data via said interface, said means for determining that said measurement data should be logged remotely from the local environment, means for generating said message comprising said measurement data, and means for sending the generated message to the second measurement logging part; and the second measurement logging part comprises means for receiving said generated message from said first measurement logging part, and said means for queuing said message.
  3. Apparatus as claimed in claim 1 or 2 wherein the determining means is configured for determining whether said measurement data should be logged remotely from the local environment or logged within the local environment.
  4. Apparatus as claimed in claim 3 comprising means for logging said measurement data locally when it is determined that said measurement data should be logged within the local environment.
  5. Apparatus as claimed in any preceding claim wherein the measurement logging entity is configured to receive, from a client application, an indication that said measurement data should be logged locally and said determining means is configured for determining that said measurement data should be logged, for that client application, within the local environment responsive to receipt of said indication that said measurement data should be logged locally.
  6. Apparatus as claimed in any preceding claim wherein the measurement logging entity is configured to receive, from a client application, an indication that remote logging of said measurement data should be suspended and said determining means is configured for determining that said measurement data, for that client application, should be logged within the local environment responsive to receipt of said indication that remote logging of said measurement data should be suspended.
  7. Apparatus as claimed in any preceding claim wherein the measurement logging entity is configured to receive, from a client application, an indication that logging of said measurement data should cease and to disable logging of measurement data for that client application responsive to receipt of said indication that logging of said measurement data should cease.
  8. Apparatus as claimed in any preceding claim wherein said queuing means is configured to send said message comprising said measurement data to said primary logging environment via an interface that is independent of a software platform or framework used to provide said client application.
  9. Apparatus as claimed in claim 8 wherein said interface that is independent of a software platform or framework is a uniform application programming interface (API).
  10. Apparatus as claimed in claim 9 wherein said API is a representational state transfer (REST) service (RS) API.
  11. Apparatus as claimed in any preceding claim wherein said at least one client application comprises a plurality of client applications and wherein said measurement logging entity is configured to receive respective measurement data from each said client application.
  12. Apparatus as claimed in any preceding claim further comprising a plurality of further client application environments, each further client application environment comprising a respective measurement logging entity.
  13. Apparatus as claimed in any preceding claim further comprising the primary logging environment wherein said primary logging environment comprises means for receiving said message comprising measurement data from said queuing means and means for logging said measurement data accordingly.
  14. Apparatus as claimed in claim 13 wherein said receiving means of said primary logging environment is configured to receive a message comprising measurement data from the respective measurement logging entity of each of a plurality of client application environments.
  15. Apparatus as claimed in claim 13 or 14 wherein said primary logging environment further comprises a viewer entity for generating a visual display of stored measurement data.
  16. Apparatus as claimed in any of claims 13 to 15 wherein said viewer entity is configured to provide an alert when said measurement data indicates that a predetermined criterion has been, or is about to be, met.
  17. Apparatus as claimed in any preceding claim wherein said means for queuing is configured to operate a processing thread having a lower priority than a processing thread that the at least one client application uses whereby said message comprising said measurement data is sent to said primary logging environment at a time when sufficient resources are available to send said message without having a significant impact on execution of said at least one client application.
  18. Apparatus as claimed in any preceding claim wherein said means for queuing comprises a scheduler that uses a background processing thread to process each message added to said message queue wherein said processing thread has a lower scheduling priority than that of a general execution thread used by the at least one application whereby said message comprising said measurement data is sent to said primary logging environment at a time when sufficient resources are available to send said message without having a significant impact on execution of said at least one client application.
  19. Apparatus as claimed in any preceding claim wherein said means for queuing is configured to operate said message queue as a first in first out (FIFO) message queue.
  20. Apparatus as claimed in any preceding claim configured for use in a mobile execution environment.
  21. Apparatus as claimed in any of claims 1 to 20 configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services.
  22. Apparatus as claimed in claim 21 configured for monitoring conditions prevailing in a distributed system for supporting community health and/or social care services.
  23. Apparatus as claimed in claim 21 configured for monitoring conditions prevailing in a distributed system for supporting mobile health and/or social care services.
  24. Apparatus as claimed in claim 21 configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services in a care recipient's home.
  25. Apparatus as claimed in claim 21 configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services in a residential care home.
  26. Apparatus as claimed in claim 21 configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services of an urgent and unplanned nature.
  27. Apparatus as claimed in claim 21 configured for monitoring conditions prevailing in a distributed system for supporting mental health and/or social care services.
  28. Apparatus as claimed in claim 21 configured for monitoring conditions prevailing in a distributed system for supporting palliative, hospice or end of life health and/or social care services.
  29. Apparatus as claimed in claim 21 configured for monitoring conditions prevailing in a distributed system for supporting health and/or social care services, for those with learning disabilities, in a school and/or care home.
  30. Apparatus as claimed in any of claims 21 to 29 configured for monitoring conditions prevailing in a distributed system to track response times of external integrations.
  31. Apparatus as claimed in any of claims 21 to 30 configured for monitoring conditions prevailing in a distributed system to track internal response times of subroutines and/or data retrieval.
  32. Apparatus as claimed in any of claims 21 to 31 configured for monitoring conditions prevailing in a distributed system to track user decision making speed.
  33. Apparatus as claimed in any of claims 1 to 20 configured for monitoring conditions prevailing in a distributed system for supporting employer services.
  34. Apparatus as claimed in claim 33 configured for monitoring conditions prevailing in a distributed system by means of a centralised statistics depository for performance measurements across a range of said employer services.
  35. Apparatus as claimed in claim 33 or 34 configured for monitoring conditions prevailing in a distributed system for providing at least one of financial and accounting employer services, human resources employer services, payroll employer services, procurement employer services, document management employer services, supply chain management employer services, business analytics employer services, and business intelligence employer services.
  36. Apparatus as claimed in any of claims 33 to 35 configured for monitoring conditions prevailing in a distributed system for supporting employer services in the public service sector.
  37. Apparatus as claimed in any of claims 33 to 36 configured for monitoring conditions prevailing in a distributed system for supporting employer services in the private sector.
  38. Apparatus as claimed in any of claims 33 to 36 configured for monitoring conditions prevailing in a distributed system for supporting employer services in the not-for-profit or voluntary sector.
  39. Apparatus as claimed in any of claims 1 to 20 configured for monitoring conditions prevailing in a distributed system for supporting managed services.
  40. Apparatus as claimed in claim 39 configured for monitoring conditions prevailing in a distributed system for supporting managed services comprising cloud computing services.
  41. Apparatus as claimed in claim 39 or 40 configured for monitoring conditions prevailing in a distributed system for supporting managed services comprising data centre services.
  42. Apparatus as claimed in any of claims 1 to 20 configured for monitoring conditions prevailing in a distributed system for supporting electronic learning services.
  43. An application configured to operate as the client application of the apparatus of any preceding claim, the application comprising: means for configuring said client application to perform a measurement of performance for the client application; means for performing a measurement of performance configured by said configuring means; and means for passing at least one result of the measurement of performance performed by said measurement performing means, as at least part of said measurement data, to said measurement logging entity for logging.
  44. An application as claimed in claim 43 wherein said configuring means is operable to configure at least one start point and at least one end point for said measurement of performance.
  45. An application as claimed in claim 44 wherein said at least one result comprises an elapsed time beginning at said at least one start point and ending at said at least one end point.
  46. An application as claimed in any of claims 43 to 45 wherein said results passing means is configured for passing said result with associated metadata relating to said measurement, as at least part of said measurement data, to said measurement logging entity for logging.
  47. An application as claimed in claim 46 wherein said metadata comprises at least one of: information identifying the client application to which the measurement data relates; information identifying an operation or group of operations for which the measurement was performed; information identifying a time at which the measurement was performed; and/or information indicating an approximate magnitude of the operation or group of operations to which the measurement relates.
  48. An application as claimed in claim 46 or 47 wherein said metadata comprises information for identifying a data type of said measurement data (e.g. string, numeric, date and/or time related).
  49. A method for monitoring conditions prevailing in a distributed system in which at least one client application is provided for access by a client device, the method comprising, at a logging entity: receiving via an interface, from a client application, measurement data representing a respective measure of performance for that client application; determining that said measurement data should be logged remotely from the local environment; queuing, in a message queue, a message comprising said measurement data for sending to a primary logging environment for logging in a measurement database when said determining step determines that said measurement data should be logged remotely from the local environment; and sending said message comprising said measurement data, from said queue, to said primary logging environment at a time when sufficient resources are available to send said message without having a significant impact on execution of said at least one client application.
  50. A method performed by a client application configured to operate as part of the apparatus of any of claims 1 to 42, the method comprising: performing a measurement of performance in accordance with a measurement configuration; and passing at least one result of the measurement of performance, as at least part of said measurement data, to said measurement logging entity for logging.
  51. A computer program product comprising computer implementable instructions which, when executed on a computer processing apparatus, cause said computer processing apparatus to become configured as the apparatus of any of claims 1 to 42 or as an application according to any of claims 43 to 48.
  52. A computer program product comprising computer implementable instructions which, when executed on a computer processing apparatus, cause said computer processing apparatus to perform a method according to claim 49 or 50.
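The client-side measurement means of claims 43 to 48 (a configured start point and end point yielding an elapsed time, passed to a measurement logging entity together with identifying metadata) could be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the class name `PerfMeasurement` and all field names are hypothetical.

```python
import time


class PerfMeasurement:
    """Hypothetical sketch of the measurement means of claims 43-48:
    a start/end point pair yields an elapsed time that is passed, with
    metadata, as measurement data to a logging entity."""

    def __init__(self, app_id, operation, magnitude=None):
        # Metadata of the kind enumerated in claim 47.
        self.metadata = {
            "app": app_id,            # identifies the client application
            "operation": operation,   # operation the measurement covers
            "magnitude": magnitude,   # approximate size of the operation
        }
        self._start = None

    def start(self):
        """Configured start point (claim 44)."""
        self._start = time.monotonic()

    def end(self, logging_entity):
        """Configured end point: compute the elapsed time (claim 45)
        and pass it, with metadata, to the logging entity (claim 43)."""
        elapsed = time.monotonic() - self._start
        self.metadata["timestamp"] = time.time()  # when measured
        logging_entity.log({"elapsed": elapsed, **self.metadata})
```

A client application would bracket an operation with `start()` and `end(logging_entity)`, where `logging_entity` is any object exposing a `log(measurement_data)` method.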
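The logging-entity method of claim 49 (receive measurement data, decide whether it should be logged remotely, queue it, and forward it to the primary logging environment only when resources permit) could be sketched like this. Again a hedged sketch under assumed names: `LoggingEntity`, `send_to_primary` and `log_remotely` are illustrative, and the resource check is reduced to an explicit `flush()` call.

```python
import queue


class LoggingEntity:
    """Hypothetical sketch of the method of claim 49: measurement data
    received from a client application is queued and sent to a primary
    logging environment only when sufficient resources are available,
    so sending does not impact execution of the client application."""

    def __init__(self, send_to_primary, log_remotely=lambda data: True):
        self._queue = queue.Queue()        # the message queue of claim 49
        self._send = send_to_primary       # writes to the measurement database
        self._log_remotely = log_remotely  # the "determining" step

    def receive(self, measurement_data):
        """Interface towards client applications."""
        if self._log_remotely(measurement_data):
            self._queue.put(measurement_data)

    def flush(self):
        """Invoked when resources permit; drains the queue to the
        primary logging environment."""
        while not self._queue.empty():
            self._send(self._queue.get())
```

The key design point mirrored from the claim is the decoupling: `receive` returns immediately after enqueueing, and the potentially slow network send happens later in `flush`.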
GB1309604.5A 2013-05-29 2013-05-29 Methods and apparatus for monitoring conditions prevailing in a distributed system Withdrawn GB2514584A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1309604.5A GB2514584A (en) 2013-05-29 2013-05-29 Methods and apparatus for monitoring conditions prevailing in a distributed system
GB1409563.2A GB2516357B (en) 2013-05-29 2014-05-29 Methods and apparatus for monitoring conditions prevailing in a distributed system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1309604.5A GB2514584A (en) 2013-05-29 2013-05-29 Methods and apparatus for monitoring conditions prevailing in a distributed system

Publications (2)

Publication Number Publication Date
GB201309604D0 GB201309604D0 (en) 2013-07-10
GB2514584A true GB2514584A (en) 2014-12-03

Family

ID=48784865

Family Applications (2)

Application Number Title Priority Date Filing Date
GB1309604.5A Withdrawn GB2514584A (en) 2013-05-29 2013-05-29 Methods and apparatus for monitoring conditions prevailing in a distributed system
GB1409563.2A Expired - Fee Related GB2516357B (en) 2013-05-29 2014-05-29 Methods and apparatus for monitoring conditions prevailing in a distributed system

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB1409563.2A Expired - Fee Related GB2516357B (en) 2013-05-29 2014-05-29 Methods and apparatus for monitoring conditions prevailing in a distributed system

Country Status (1)

Country Link
GB (2) GB2514584A (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10268714B2 (en) 2015-10-30 2019-04-23 International Business Machines Corporation Data processing in distributed computing
CN106708693A (en) * 2015-11-16 2017-05-24 亿阳信通股份有限公司 Alarm data processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040236764A1 (en) * 2003-02-28 2004-11-25 Hitachi, Ltd. Information processing system, method for outputting log data, and computer-readable medium storing a computer software program for the same
US20050028171A1 (en) * 1999-11-12 2005-02-03 Panagiotis Kougiouris System and method enabling multiple processes to efficiently log events
JP2006085372A (en) * 2004-09-15 2006-03-30 Toshiba Corp Information processing system
US20060167951A1 (en) * 2005-01-21 2006-07-27 Vertes Marc P Semantic management method for logging or replaying non-deterministic operations within the execution of an application process

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6785893B2 (en) * 2000-11-30 2004-08-31 Microsoft Corporation Operating system event tracker having separate storage for interrupt and non-interrupt events and flushing the third memory when timeout and memory full occur
US7895371B2 (en) * 2007-03-09 2011-02-22 Kabushiki Kaisha Toshiba System and method for on demand logging of document processing device status data


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105204972A (en) * 2015-09-09 2015-12-30 北京思特奇信息技术股份有限公司 Method and system for unified release and management of executable programs
EP3467660A3 (en) * 2017-10-06 2019-06-19 Chicago Mercantile Exchange, Inc. Dynamic tracer message logging based on bottleneck detection
US10416974B2 (en) 2017-10-06 2019-09-17 Chicago Mercantile Exchange Inc. Dynamic tracer message logging based on bottleneck detection
US10990366B2 (en) 2017-10-06 2021-04-27 Chicago Mercantile Exchange Inc. Dynamic tracer message logging based on bottleneck detection
US11520569B2 (en) 2017-10-06 2022-12-06 Chicago Mercantile Exchange Inc. Dynamic tracer message logging based on bottleneck detection
WO2023026086A1 (en) * 2021-08-25 2023-03-02 Sensetime International Pte. Ltd. Logging method and apparatus, electronic device, and computer-readable storage medium
AU2021240197A1 (en) * 2021-08-25 2023-03-16 Sensetime International Pte. Ltd. Logging method and apparatus, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
GB2516357B (en) 2015-08-19
GB201309604D0 (en) 2013-07-10
GB201409563D0 (en) 2014-07-16
GB2516357A (en) 2015-01-21

Similar Documents

Publication Publication Date Title
CN108874640B (en) Cluster performance evaluation method and device
US10467105B2 (en) Chained replication techniques for large-scale data streams
US10761829B2 (en) Rolling version update deployment utilizing dynamic node allocation
US10795905B2 (en) Data stream ingestion and persistence techniques
US10691716B2 (en) Dynamic partitioning techniques for data streams
US10412158B2 (en) Dynamic allocation of stateful nodes for healing and load balancing
US8745434B2 (en) Platform for continuous mobile-cloud services
US9471585B1 (en) Decentralized de-duplication techniques for largescale data streams
GB2514584A (en) Methods and apparatus for monitoring conditions prevailing in a distributed system
Sukhija et al. Towards a framework for monitoring and analyzing high performance computing environments using kubernetes and prometheus
US10635644B2 (en) Partition-based data stream processing framework
CN105653425B (en) Monitoring system based on complex event processing engine
CN111459763B (en) Cross-kubernetes cluster monitoring system and method
US7562138B2 (en) Shared memory based monitoring for application servers
US10331484B2 (en) Distributed data platform resource allocator
CN101719852A (en) Method and device for monitoring performance of middle piece
CN105069029B (en) A kind of real-time ETL system and method
Gibb et al. The technologies required for fusing hpc and real-time data to support urgent computing
US20180287914A1 (en) System and method for management of services in a cloud environment
CN109525422A (en) A kind of daily record data method for managing and monitoring
US11294704B2 (en) Monitoring and reporting performance of online services using a monitoring service native to the online service
González et al. HerdMonitor: monitoring live migrating containers in cloud environments
US9092282B1 (en) Channel optimization in a messaging-middleware environment
EP4066117B1 (en) Managing provenance information for data processing pipelines
CN111105314A (en) Insurance data clearing system

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)