US20140281720A1 - System and method of performing a health check on a process integration component

System and method of performing a health check on a process integration component

Info

Publication number
US20140281720A1
US20140281720A1 (application US13/801,849)
Authority
US
United States
Prior art keywords
checks
component
health check
integration
checking whether
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/801,849
Other versions
US9146798B2
Inventor
Vikas Gupta
Aby Jose
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SAP SE
Original Assignee
SAP SE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SAP SE
Priority to US13/801,849
Assigned to SAP AG (assignment of assignors interest). Assignors: GUPTA, VIKAS; JOSE, ABY
Assigned to SAP SE (change of name from SAP AG)
Publication of US20140281720A1
Application granted
Publication of US9146798B2
Legal status: Active
Expiration: Adjusted

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/006Identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0748Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a remote unit communicating with a single-box computer node experiencing an error/fault
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0709Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0751Error or fault detection not based on redundancy
    • G06F11/0763Error or fault detection not based on redundancy by bit configuration check, e.g. of formats or tags
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3051Monitoring arrangements for monitoring the configuration of the computing system or of the computing system component, e.g. monitoring the presence of processing resources, peripherals, I/O links, software programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems

Abstract

In an example embodiment, a method of performing a health check on a process integration (PI) component is provided. A PI health check scenario is loaded into the PI component, the PI health check scenario including a reference to a list of checks. The PI health check scenario is then executed using the PI component, causing one or more checks in the list of checks to be performed at a predetermined frequency. The system can then automatically determine if one or more of the one or more checks fail.

Description

    TECHNICAL FIELD
  • This document generally relates to systems and methods for use with process integration components. More specifically, this document relates to methods and systems for performing a health check on a process integration component.
  • BACKGROUND
  • Enterprise resource planning (ERP) systems allow for the integration of internal and external management information across an entire organization, including financial/accounting, manufacturing, sales and service, customer relationship management, and the like. The purpose of ERP is to facilitate the flow of information between business functions inside the organization and management connections to outside entities. One commonly used component in an ERP system is a process integration (PI) component. The PI component coordinates how various process components exchange data with one another.
  • BRIEF DESCRIPTION OF DRAWINGS
  • The present disclosure is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
  • FIG. 1 is a block diagram of an application system, in accordance with an example embodiment.
  • FIG. 2 is a diagram illustrating a process integration (PI) component, in accordance with an example embodiment.
  • FIG. 3 is a flow diagram illustrating a method, in accordance with an example embodiment, of performing a health check on a PI component.
  • FIG. 4 is a flow diagram illustrating a method, in accordance with an example embodiment, of performing a system landscape directory (SLD) connection test.
  • FIG. 5 is a flow diagram illustrating a method, in accordance with an example embodiment, of checking message processing in a PI system.
  • FIG. 6 is a flow diagram illustrating a method, in accordance with an example embodiment, of performing basic technical checks for a PI system.
  • FIG. 7 is a flow diagram illustrating a method, in accordance with an example embodiment, of checking technical aspects of a user.
  • FIG. 8 is a flow diagram illustrating a method, in accordance with an example embodiment, of automatically troubleshooting a problem with the internet communication management (ICM).
  • FIG. 9 is a flow diagram illustrating a method, in accordance with an example embodiment, of automatically troubleshooting a problem with a web service runtime.
  • FIG. 10 is an interaction diagram illustrating a method, in accordance with an example embodiment, of performing a PI health check.
  • FIG. 11 is a block diagram of a computer processing system at a server system, within which a set of instructions may be executed for causing the computer to perform any one or more of the methodologies discussed herein.
  • DETAILED DESCRIPTION
  • The description that follows includes illustrative systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques have not been shown in detail.
  • While the following description will describe various embodiments related to an enterprise resource planning (ERP) system, one of ordinary skill in the art will recognize that the claims should not be limited to merely ERP embodiments, as the solution described herein could apply to other systems such as Customer Relationship Management (CRM) systems, Supplier Relationship Management systems (SRM), or any other system having a process integration (PI) component.
  • FIG. 1 is a block diagram of an application system, in accordance with an example embodiment. The application system 100 comprises heterogeneous software and/or hardware components 102 to 116, which are connected to each other as shown by the solid lines in FIG. 1, and which may operate together in the application system 100 to process, for example, a business scenario. The application system 100 may comprise an enterprise resource planning (ERP) system 102. The ERP 102 may integrate internal and external management information across an entire organization, embracing different activities and/or services of an enterprise. The ERP system 102 automates the activities and/or services with an integrated computer-based application. The ERP system 102 can run on a variety of hardware and/or network configurations, typically employing a database to store its data. The ERP system 102 may be associated with (e.g., directly or indirectly connected to and/or in (networked) communication with) a business intelligence (BI) component 104, one or more third parties 106 and 108, a supply chain management (SCM) component 110, and/or a SRM component 112. The SRM component 112 and/or the SCM component 110 may further be associated with at least one proprietary service 114. Furthermore, at least one of the third parties 106 may also be associated with at least one proprietary service 116. The BI component 104 may provide historical, current, and predictive views of business processes and/or business scenarios, for example, performed on the ERP 102. Common functionality of business intelligence technologies may comprise reporting, online analytical processing, analytics, data mining, business performance management, benchmarking, text mining, and/or predictive analytics. The functionality may be used to support better decision making in the ERP system 102. The SCM component 110 may manage a network of interconnected businesses involved in the provision of product and/or service packages called for by end consumers such as the ERP system 102. The SCM component 110 may span movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption (also referred to as a supply chain). The SRM component 112 may specify collaborations with suppliers that are vital to the success of the ERP system 102 (e.g., to maximize the potential value of those relationships). All of these systems may be integrated via a PI component 118.
  • FIG. 2 is a diagram illustrating a PI component, in accordance with an example embodiment. The PI component 200 contains several sub-components, including an integration builder 202, integration server 204, system landscape directory 206, and central monitoring component 208. The integration builder 202 may define objects for an integration repository 210 at design or configuration time. The integration repository 210 may maintain various objects useful for providing process integration functions, such as business scenarios, business processes, message interfaces, message types, data types, message mappings, and interface mappings. The application developer refers to these objects in defining interactive flow between applications. This interactive flow is stored in the integration directory 212. When an application is executed, the messaging flow is drawn from the integration directory 212. During execution, depending upon the specified code, the integration server 204 can act in accordance with this defined flow, and thus, for example, send messages to multiple receivers, split a message into multiple messages, route a message based on content, etc.
  • The PI component 200 may manage the interactions between multiple entities 214 a-214 d. These entities may include, for example, an application 214 a operated by the same entity as the PI component 200, a third party application 214 b operated by a different entity than the PI component 200, a marketplace/business partner 214 c, and a third party middleware component 214 d.
  • The system landscape directory 206 is a directory of available installable software and updated data about systems already installed.
  • The central monitoring component 208 provides monitoring of many, if not all, aspects of the system. Some of the aspects monitored may trigger events in the integration server 204.
  • When the PI component 200 is not operating properly or operating at reduced efficiency, business messages may not be processed, which creates a big problem for a customer. Any downtime in a business setting could potentially cause significant monetary losses. While customers could detect the issues on their own and report them, the time delay in doing so costs money.
  • In an example embodiment, an automated process is provided that performs various health checks on the PI component 200 to ensure that any major problems are detected early and often. This can help the customer reduce the downtime associated with such detected problems. Automatic execution of this health check may be performed periodically. Alerts for any detected problems (or just the results of the health check) can be delivered via email, for example. Additionally, automated troubleshooting steps can be performed to resolve many common issues, thus reducing or even eliminating the need for the customer to participate in fixing any problems. Furthermore, a complete log of checks and any troubleshooting steps can be maintained, which can be used by technical support in case of future or unresolved problems.
  • In an example embodiment, the automated process may be embodied as a PI scenario which reads input from an Extensible Markup Language (XML) file. The input may include a list of checks to be executed, and the components against which to execute these checks. The PI scenario can execute one or more application programming interfaces (APIs) to determine a health status for various components, as described by the list of checks to be executed. After execution of the APIs, the PI scenario can send a notification email to a system administrator if any of the health checks fail. The frequency with which the PI scenario is repeated can be set so that the PI scenario is executed automatically at a specified frequency and sends early alerts for detected issues.
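  • The patent does not specify the XML schema for this input, so the element and attribute names in the following sketch (healthCheck, check, name, component, notify) are illustrative assumptions; it only shows how such a check list might be parsed before the checks are dispatched.

```python
# Hypothetical sketch only: the schema below is an assumption, not the patent's format.
import xml.etree.ElementTree as ET

SAMPLE_INPUT = """
<healthCheck notify="admin@example.com">
  <check name="sld_connection"     component="IntegrationServer"/>
  <check name="message_processing" component="IntegrationEngine"/>
  <check name="basic_technical"    component="IntegrationEngine"/>
  <check name="user_technical"     component="PIAPPLUSER"/>
</healthCheck>
"""

def parse_check_list(xml_text):
    """Return the notification address and a list of (check name, target component)."""
    root = ET.fromstring(xml_text)
    checks = [(c.get("name"), c.get("component")) for c in root.findall("check")]
    return root.get("notify"), checks

if __name__ == "__main__":
    notify, checks = parse_check_list(SAMPLE_INPUT)
    print(notify, checks)
```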
  • FIG. 3 is a flow diagram illustrating a method, in accordance with an example embodiment, of performing a health check on a PI component. Here, the method 300 utilizes a PI health check scenario 302 and a list of checks 304. At operation 306, the PI health check scenario 302 is executed. At operation 308, it is determined whether a timer is equal to a predefined value. If the timer is not equal to the predefined value, then the process loops until the timer is equal to the predefined value. Once the timer is equal to the predefined value, then at operation 310, one or more of the checks in the list of checks 304 are performed. At operation 312, it is determined if any of the checks are unsuccessful. If none of the executed checks are unsuccessful, then the process may continue to loop back to operation 308. If, however, one or more of the executed checks are unsuccessful, then at operation 314, automatic troubleshooting is performed and/or, at operation 316, an alert is sent to a system administrator.
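  • As a rough illustration of the loop in FIG. 3, the following sketch assumes each check is exposed as a callable returning True on success; run_troubleshooting and send_alert are placeholder hooks, not part of the patent.

```python
# Minimal scheduling sketch of the loop in FIG. 3 under the assumptions noted above.
import time

def run_health_check_scenario(checks, interval_seconds, send_alert, run_troubleshooting):
    while True:
        time.sleep(interval_seconds)                  # operation 308: wait for the timer
        # operations 310/312: execute the listed checks and collect failures
        failures = [name for name, check in checks.items() if not check()]
        for name in failures:
            if not run_troubleshooting(name):         # operation 314: try automatic fixes first
                send_alert(f"PI health check '{name}' failed")  # operation 316: alert the administrator
```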
  • In an example embodiment, there are four general categories of checks that can be automatically executed. The first category involves checking the system landscape directory (SLD) connection, which can include performing several tests, including determining if remote function call (RFC) connections relevant to the SLD are functioning (these RFC connections may include, for example, SAPSLDAPI and LCRSAPRFC), determining if the server-access settings in particular APIs (such as SLDAPICUST) are correct, determining if it is possible to read data from the SLD and an exchange profile, and determining if the integration server has a business system defined.
  • FIG. 4 is a flow diagram illustrating a method 400, in accordance with an example embodiment, of performing an SLD connection test. This method 400 may be performed for each application server, if more than one exist. The method 400 makes reference to sections of results presented in response to the SLD connection test. The method involves examining these sections and using these sections to make certain assumptions about the connection. At operation 402, it is determined if a user can log into the SLD. This determination may include using stored login information to make a trial attempt to log in. At operation 404, it is determined if there is an “RFC Ping Successful” message in a properties of RFC destination SAPSLDAPI section. At operation 406, it is determined if there is a statement “Function call terminated successfully” along with a list of one or more business systems in a “calling function LCR_LIST_BUSINESS_SYSTEMS” section. At operation 408, it is determined if there is a statement “Function call terminated successfully” and a business system of the integration server in a “Calling function LCR_GET_OWN_BUSINESS_SYSTEM” section.
  • At operation 410, it is determined if there is a statement “Function call terminated successfully” in a “Calling function LCR_GET_BS_DETAILS” section. At operation 412, it is determined if there is an http://<host>:500<sysnr>/sap/xi/engine?type=entry URL in the “Calling function LCR_GET_BS_DETAILS” section.
  • At operation 414, it is determined if there is a statement “RFC Ping successful” in a “Properties of RFC destination LCRSAPRFC” section.
  • At operation 416, it is determined if there is a statement “Function call terminated successfully” in a “Calling function EXCHANGE_PROFILE_GET_PARAMETER” section.
  • If any of these tests fail, then, at operation 418, the system administrator may be notified, for example, via email. As will be described later, however, in some embodiments automatic troubleshooting may take place.
  • Of course, these descriptions of statements and sections are merely illustrative and are not intended to be limiting. In other example embodiments, similar, but not identical, statements located in similar, but not identical, sections may be examined.
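  • A minimal sketch of the section scan used in the SLD connection test above might look as follows, assuming the test output is available as a mapping from section title to its text; the titles and success strings mirror those named in the preceding operations and are otherwise not normative.

```python
# Illustrative sketch only: scans each named result section for its expected success statement.
EXPECTED = {
    "Properties of RFC destination SAPSLDAPI": "RFC Ping Successful",
    "Calling function LCR_LIST_BUSINESS_SYSTEMS": "Function call terminated successfully",
    "Calling function LCR_GET_OWN_BUSINESS_SYSTEM": "Function call terminated successfully",
    "Calling function LCR_GET_BS_DETAILS": "Function call terminated successfully",
    "Properties of RFC destination LCRSAPRFC": "RFC Ping successful",
    "Calling function EXCHANGE_PROFILE_GET_PARAMETER": "Function call terminated successfully",
}

def failed_sld_sections(result_sections):
    """Return the section names whose output lacks the expected success statement."""
    return [section for section, needle in EXPECTED.items()
            if needle not in result_sections.get(section, "")]
```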
  • The second general category of checks involves checking message processing in the PI system, which can include performing tests for receiving exchange infrastructure (XI) messages from an XI monitor (e.g., SXI_MONI) and checking whether XI messages are being processed in the system. In this check, messages that have already been processed are extracted, and the system can then check the processing status of the most recent message to determine whether it has reached the integration engine.
  • FIG. 5 is a flow diagram illustrating a method 500, in accordance with an example embodiment, of checking message processing in a PI system. At operation 502, one or more messages are retrieved from an XI monitor. At operation 504, a most recent message of the one or more retrieved messages is identified. At operation 506, a message monitoring component is accessed to determine a processing status of the most recent of the one or more messages. At operation 508, it is determined if an error occurred in the processing of the most recent message of the retrieved one or more messages. This determination can be made by examining the message status. For example, the message status may be listed as a success. If not, it can be assumed that an error has occurred. If it is assumed that an error has occurred, then, at operation 510, the system administrator may be alerted. If no error is assumed or detected, the process may loop back to operation 502 and be repeated when a periodic check is next requested.
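  • The following sketch of method 500 assumes the XI monitor results can be represented as dictionaries with timestamp and status fields; fetch_xi_messages and alert_admin are placeholder hooks.

```python
# Sketch of FIG. 5 under the data-shape assumptions noted above.
def check_message_processing(fetch_xi_messages, alert_admin):
    messages = fetch_xi_messages()                        # operation 502: query the XI monitor
    if not messages:
        return True                                       # nothing processed yet; nothing to judge
    latest = max(messages, key=lambda m: m["timestamp"])  # operation 504: most recent message
    if latest["status"] != "SUCCESS":                     # operations 506/508: inspect its status
        alert_admin(f"Most recent XI message {latest.get('id')} not successful: {latest['status']}")
        return False                                      # operation 510: alert the administrator
    return True
```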
  • The third general category of checks may involve basic technical checks for the PI system, which can include performing several tests, including checking whether an internet communication management (ICM) is active in the system, checking whether a web service runtime is operating properly, checking whether a basic configuration for process agent framework is available, and checking whether an XI cache update is working.
  • FIG. 6 is a flow diagram illustrating a method 600, in accordance with an example embodiment, for performing basic technical checks for a PI system. At operation 602, the system may log onto the integration engine. At operation 604, the system may execute a Supply Chain Manager Internet Communication Manager (SMICM) transaction. At operation 606, the system may verify a traffic light to establish the status of the ICM. A traffic light is a software function that monitors a status and automatically indicates the status to another system or user. At operation 608, it is determined whether the status of the ICM is correct.
  • At operation 610, the system may call transaction se38 and execute the program SRT_ADMIN_CHECK. Transaction se38 calls an Advanced Business Application Programming (ABAP) editor, and SRT_ADMIN_CHECK runs a program for checking technical settings and returns a report. At operation 612, it may be determined if there are any errors with the web service runtime based on the results of the execution in operation 610.
  • At operation 614, it is determined whether a basic configuration for a process agent framework is available.
  • At operation 616, an SXI_CACHE transaction is called. SXI_CACHE shows the content of the cache. A cache refresh is then called at operation 618. At operation 620, it is determined if cache contents are up-to-date. If the contents of the cache are not up-to-date, then at operation 622 a system administrator may be alerted. If the contents of the cache are up-to-date, then the process may periodically repeat to operation 602.
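  • A condensed sketch of method 600 is shown below; because the real probes run through SAP transactions (SMICM, se38/SRT_ADMIN_CHECK, SXI_CACHE), each probe is passed in as a callable and only the overall decision flow is modeled.

```python
# Condensed sketch of FIG. 6; the callables abstract the transaction-based probes.
def basic_technical_checks(icm_status_ok, ws_runtime_ok, agent_framework_configured,
                           refresh_xi_cache, cache_up_to_date, alert_admin):
    problems = []
    if not icm_status_ok():                 # operations 602-608: ICM traffic-light status
        problems.append("ICM not active")
    if not ws_runtime_ok():                 # operations 610-612: SRT_ADMIN_CHECK report
        problems.append("web service runtime error")
    if not agent_framework_configured():    # operation 614: process agent framework configuration
        problems.append("process agent framework not configured")
    refresh_xi_cache()                      # operations 616-618: SXI_CACHE refresh
    if not cache_up_to_date():              # operation 620: verify cache contents
        problems.append("XI cache not up to date")
    if problems:
        alert_admin("; ".join(problems))    # operation 622: alert the administrator
    return not problems
```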
  • The fourth general category of checks involves checking technical aspects of a user, which may include performing tests for checking whether a user is able to log into a system, and whether a set of roles is assigned to the user. FIG. 7 is a flow diagram illustrating a method 700, in accordance with an example embodiment, for checking technical aspects of a user. At operation 702, a login is attempted. At operation 704, it is determined if the login attempt is successful. If the login attempt is not successful, then, at operation 706, the system administrator may be alerted. If the login attempt is successful, then at operation 708, a set of roles assigned to the user may be retrieved (such as for example by calling a function module BAPI_USER_GET_DETAIL). At operation 710, it is determined if a specific set of roles is part of the retrieved set of roles assigned to the user. If the specific set of roles is not part of the retrieved set of roles assigned to the user, then the process proceeds to operation 706. If the specific set of roles is part of the retrieved set of roles assigned to the user, then the process loops back to operation 702 and is repeated when a periodic check is next requested.
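  • A sketch of method 700 follows, assuming placeholder hooks for the logon attempt and the role retrieval (for example, via BAPI_USER_GET_DETAIL); the role names in REQUIRED_ROLES are illustrative only.

```python
# Sketch of FIG. 7; the hooks and the required-role set are assumptions.
REQUIRED_ROLES = {"SAP_XI_APPL_SERV_USER", "SAP_XI_IS_SERV_USER"}  # illustrative only

def check_user(user, attempt_login, get_assigned_roles, alert_admin):
    if not attempt_login(user):                               # operations 702-704: trial logon
        alert_admin(f"Login failed for PI user {user}")
        return False
    missing = REQUIRED_ROLES - set(get_assigned_roles(user))  # operations 708-710: role subset check
    if missing:
        alert_admin(f"User {user} is missing roles: {sorted(missing)}")
        return False
    return True
```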
  • It should be noted that in the above flow diagrams, a step of alerting a system administrator is performed if any of the checks detect a problem. In some embodiments, however, either in addition to or in lieu of alerting a system administrator, automatic troubleshooting steps may be performed. The following are example automatic troubleshooting processes that can be performed for various checks.
  • FIG. 8 is a flow diagram illustrating a method 800, in accordance with an example embodiment, for automatically troubleshooting a problem with the ICM. At operation 802, it is determined whether an ICM executable is missing. At operation 804, it is determined whether the ICM was stopped manually. At operation 806, it is determined whether an invalid (or outdated) version of the ICM is being used, for example, by comparing the version of the ICM with a predefined version number. If any of these are determined to be the issue, then, at operation 808, a system administrator may be alerted to the problem. If not, then, at operation 810, it is determined if a new network connection to the ICM can be created. This may be determined by, for example, examining the initial screen of an ICM monitor and determining if values for peak and maximum within a connections-used area are the same, which means that all the connections were used up at a point in time. If that is the case, then, at operation 812, the parameter icm/max_conn may be increased. This increases the maximum number of connections allowed for the ICM. At operation 814, it may be determined if the problem is resolved based on the occurrence of operation 812. If not, then the problem may be that the ICM queue for requests has overflowed. Thus, at operation 816, the number of threads is increased, which should solve the problem if there were too few threads configured. At operation 818, it is determined if the occurrence of operation 816 resolves the problem. If not, then, at operation 808, a system administrator may be alerted to check any hanging threads to determine the issue.
  • At operation 820, it is determined if the ICM has any remaining buffer. This may be determined by checking whether all message passing interface (MPI) buffers have been used, for example, by checking whether the peak buffer usage parameter has reached the total #MPI buffer parameter. If there is no buffer remaining, then, at operation 822, the buffer size may be increased.
  • At operation 824, it is determined if a port can be found. A port is found if a port has been defined for this connection. If no port can be found, then the method 800 proceeds to operation 808. If a port can be found, then the method 800 proceeds to operation 826. At operation 826, it may be determined if a Secure Sockets Layer (SSL) error has occurred. If a port cannot be found or an SSL error has occurred, then the process may alert the system administrator at operation 808. If it is determined at operation 826 that no SSL error has occurred, then the method 800 ends.
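  • The decision flow of method 800 could be sketched as follows, assuming the relevant ICM monitor values are available as a snapshot dictionary and that the increase_* callables stand in for profile-parameter changes such as raising icm/max_conn.

```python
# Decision-flow sketch of FIG. 8; the icm dict and the adjustment callables are assumptions.
def troubleshoot_icm(icm, increase_max_conn, increase_threads, increase_mpi_buffer, alert_admin):
    # Operations 802-806: conditions that always require an administrator.
    if icm["executable_missing"] or icm["stopped_manually"] or icm["version_outdated"]:
        alert_admin("ICM requires manual attention")
        return False
    # Operations 810-818: connection pool exhausted, then request-queue overflow.
    if icm["peak_connections"] == icm["max_connections"]:
        increase_max_conn()                      # raise icm/max_conn
        if icm["queue_overflow"]:
            increase_threads()                   # add worker threads if the queue overflowed
    # Operations 820-822: all MPI buffers used up.
    if icm["peak_buffer_usage"] >= icm["total_mpi_buffers"]:
        increase_mpi_buffer()
    # Operations 824-826: a missing port or SSL error still needs an administrator.
    if not icm["port_defined"] or icm["ssl_error"]:
        alert_admin("ICM port/SSL problem")
        return False
    return True
```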
  • FIG. 9 is a flow diagram illustrating a method 900, in accordance with an example embodiment, for automatically troubleshooting a problem with a web service runtime. In many cases, the result of the check will not only include an indication that the check has failed or passed, but also include an error message identifying the area of failure, in the case of a failure. The subsequent actions taken can depend on the error message. At operation 902, it is determined if the error message indicates a problem with background remote function call (bgRFC) settings. bgRFC allows applications to record data that is later received by a called application. The name of the bgRFC destination can be maintained in two different areas. One is in the general configuration of the bgRFC and the other is in the configuration of the WS runtime. It is possible that two different values were set for the bgRFC destination, which would result in problems when scheduling sequences. If such an error occurs, then, at operation 904, transaction SBGRFCCONF may be called. The transaction SBGRFCCONF allows for basic configuration tasks to be performed on bgRFC settings. At operation 906, it is determined if the name of the inbound destination displayed there corresponds to the name set for the bgRFC destination. If not, then at operation 908 a system administrator is alerted.
  • At operation 910, it is determined if the error message indicates a problem with the event handler. If the event handler is active, events that occur in connection with web services messaging are processed. Cancelling sequences with a sequence monitor is based on the event handler operating properly. If the event handler is not operating properly, then sequences may be cancelled. If it is determined at operation 910 that there is a problem with the event handler, the method 900 proceeds to operation 912. At operation 912, it is determined whether the service destination of a client has been set correctly. This may be determined by, for example, calling a report (e.g., RSEHCONFIG) and checking to see if the value 0 is in the maximum allowed processes field. If it is determined that the destination has not been set correctly, method 900 proceeds to operation 914, where this value can be changed. After operation 914, method 900 ends.
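  • A sketch of method 900 keyed off the error message might look as follows; the accessor callables abstract the SBGRFCCONF and RSEHCONFIG lookups described above, and the corrective value written for the event handler is an assumption, since the text only says the value can be changed.

```python
# Sketch of FIG. 9 under the assumptions noted above.
def troubleshoot_ws_runtime(error_message, get_inbound_destination, get_configured_destination,
                            get_max_allowed_processes, set_max_allowed_processes, alert_admin):
    if "bgRFC" in error_message:                               # operations 902-908: bgRFC settings
        if get_inbound_destination() != get_configured_destination():
            alert_admin("bgRFC inbound destination does not match the WS runtime configuration")
            return False
    elif "event handler" in error_message:                     # operations 910-914: event handler
        if get_max_allowed_processes() == 0:
            set_max_allowed_processes(1)   # assumed corrective value; the text only says "changed"
    return True
```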
  • FIG. 10 is an interaction diagram illustrating a method 1000, in accordance with an example embodiment, of performing a PI health check. The method 1000 may utilize various components and entities, including integration repository 1002, integration server 1004, PI component 1006, and system administrator 1008. At operation 1010, the integration repository 1002 sends a PI health check scenario to the integration server 1004. At operation 1012, the integration repository 1002 sends an XML list of checks to the integration server 1004. At operation 1014, the integration server 1004 executes the health check scenario, which performs the checks at operation 1016 on the PI component 1006. Once errors are discovered, at operation 1018, troubleshooting may be performed. At operation 1020, the PI component 1006 may alert the system administrator 1008 of the issue.
  • FIG. 11 is a block diagram of a computer processing system 1100 at a server system, within which a set of instructions may be executed for causing the computer to perform any one or more of the methodologies discussed herein.
  • Embodiments may also, for example, be deployed by Software-as-a-Service (SaaS), application service provider (ASP), or utility computing providers, in addition to being sold or licensed via traditional channels. The computer may be a server computer, a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), cellular telephone, or any processing device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, while only a single computer is illustrated, the term “computer” shall also be taken to include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The example computer processing system 1100 includes processor 1102 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), main memory 1104 and static memory 1106, which communicate with each other via bus 1108. The processing system 1100 may further include graphics display unit 1110 (e.g., a plasma display, a liquid crystal display (LCD) or a cathode ray tube (CRT)). The processing system 1100 also includes alphanumeric input device 1112 (e.g., a keyboard), a cursor control device 1114 (e.g., a mouse, touch screen, or the like), a storage unit 1116, a signal generation device 1118 (e.g., a speaker), and a network interface device 1120.
  • The storage unit 1116 includes machine-readable medium 1122 on which is stored one or more sets of instructions 1124 and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 1124 may also reside, completely or at least partially, within the main memory 1104 and/or within the processor 1102 during execution thereof by the processing system 1100, the main memory 1104 and the processor 1102 also constituting machine-readable, tangible media.
  • The instructions 1124 may further be transmitted or received over network 1126 via a network interface device 1120 utilizing any one of a number of well-known transfer protocols (e.g., HTTP).
  • While the machine-readable medium 1122 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions 1124. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the computer and that cause the computer to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
  • While the embodiments are described with reference to various implementations and exploitations, it will be understood that these embodiments are illustrative, and that the scope of claims provided below is not limited to the embodiments described herein. In general, the techniques described herein may be implemented with facilities consistent with any hardware system or hardware systems defined herein. Many variations, modifications, additions, and improvements are possible.
  • The term “machine-readable medium” is used generally to refer to media such as main memory, secondary memory, removable storage, hard disks, flash memory, disk drive memory, CD-ROM, and other forms of persistent memory. It should be noted that program storage devices, as may be used to describe storage devices containing executable computer code for operating various methods, shall not be construed to cover transitory subject matter, such as carrier waves or signals. “Program storage devices” and “machine-readable medium” are terms used generally to refer to media such as main memory, secondary memory, removable storage disks, hard disk drives, and other tangible storage devices or components.
  • Plural instances may be provided for components, operations, or structures described herein as a single instance. Finally, boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the claims. In general, structures and functionality presented as separate components in the exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the claims and their equivalents.

Claims (20)

What is claimed is:
1. A method of performing a health check on a process integration (PI) component, the method comprising:
loading a PI health check scenario into the PI component, the PI health check scenario defining functions including a reference to a list of checks to be performed on the PI component;
executing the PI health check scenario using the PI component, causing one or more checks in the list of checks to be performed at a predetermined frequency; and
automatically determining if one or more of the one or more checks fail.
2. The method of claim 1, wherein the one or more checks include checking a connection to a system landscape directory.
3. The method of claim 1, wherein the one or more checks include performing basic technical checks on the PI component.
4. The method of claim 3, wherein the basic technical checks include checking whether Internet Connection Management (ICM) is active.
5. The method of claim 3, wherein the basic technical checks include checking whether a web service runtime is operating properly.
6. The method of claim 3, wherein the basic technical checks include checking whether basic configuration for a process agent framework is available.
7. The method of claim 3, wherein the basic technical checks include checking whether an exchange integration cache update is operating properly.
8. The method of claim 1, wherein the one or more checks include checking whether a user is able to log in to the PI component.
9. The method of claim 8, wherein the one or more checks include checking whether a set of roles are assigned to the user.
10. A process integration component comprising:
an integration builder including:
an integration repository, and
an integration directory;
a central monitoring component;
a system landscape directory; and
an integration server configured to execute a PI health check scenario causing one or more checks in a list of checks in the PI health check scenario to be performed at a predetermined frequency, and automatically to determine if one or more of the one or more checks fail.
11. The process integration component of claim 10, coupled to one or more process components in an Enterprise Resource Planning (ERP) system.
12. The process integration component of claim 11, wherein the one or more process components include an application distributed by a party that distributes the process integration component.
13. The process integration component of claim 11, wherein the one or more process components include a third party application.
14. The process integration component of claim 11, wherein the one or more process components include a marketplace or business partner application.
15. The process integration component of claim 11, wherein the one or more process components include a third party middleware component.
16. A non-transitory computer-readable storage medium comprising instructions that, when executed by at least one processor of a machine, cause the machine to perform operations for performing a health check on a process integration (PI) component, the operations comprising:
loading a PI health check scenario into the PI component, the PI health check scenario including a reference to a list of checks;
executing the PI health check scenario using the PI component, causing one or more checks in the list of checks to be performed at a predetermined frequency; and
automatically determining if one or more of the one or more checks fail.
17. The non-transitory computer-readable storage medium of claim 16, wherein the one or more checks include checking a connection to a system landscape directory.
18. The non-transitory computer-readable storage medium of claim 16, wherein the one or more checks include performing basic technical checks on the PI component.
19. The non-transitory computer-readable storage medium of claim 18, wherein the basic technical checks include checking whether Internet Connection Management (ICM) is active.
20. The non-transitory computer-readable storage medium of claim 18, wherein the basic technical checks include checking whether a web service runtime is operating properly.
US13/801,849 2013-03-13 2013-03-13 System and method of performing a health check on a process integration component Active 2033-09-14 US9146798B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/801,849 US9146798B2 (en) 2013-03-13 2013-03-13 System and method of performing a health check on a process integration component

Publications (2)

Publication Number Publication Date
US20140281720A1 (en) 2014-09-18
US9146798B2 (en) 2015-09-29

Family

ID=51534196

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/801,849 Active 2033-09-14 US9146798B2 (en) 2013-03-13 2013-03-13 System and method of performing a health check on a process integration component

Country Status (1)

Country Link
US (1) US9146798B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11868206B2 (en) * 2021-05-11 2024-01-09 Sap Se Automated mass message processing

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030202088A1 (en) * 2002-04-25 2003-10-30 Knight Timothy D. Videoconference with a call center
US20080319808A1 (en) * 2004-02-17 2008-12-25 Wofford Victoria A Travel Monitoring
US20090106605A1 (en) * 2007-10-19 2009-04-23 Oracle International Corporation Health monitor
US7607040B2 (en) * 2005-02-18 2009-10-20 Hewlett-Packard Development Company, L.P. Methods and systems for conducting processor health-checks
US20090265139A1 (en) * 2008-04-22 2009-10-22 Klein Bradley Diagnostics for centrally managed computer system
US20100043004A1 (en) * 2008-08-12 2010-02-18 Ashwini Kumar Tambi Method and system for computer system diagnostic scheduling using service level objectives
US7787388B2 (en) * 2008-05-23 2010-08-31 Egenera, Inc. Method of and a system for autonomously identifying which node in a two-node system has failed
US7865606B1 (en) * 2002-12-13 2011-01-04 Sap Ag Adapter framework
US7930681B2 (en) * 2005-12-30 2011-04-19 Sap Ag Service and application management in information technology systems
US7954014B2 (en) * 2007-09-18 2011-05-31 Sap Ag Health check framework for enterprise systems
US8015039B2 (en) * 2006-12-14 2011-09-06 Sap Ag Enterprise verification and certification framework
US8060864B1 (en) * 2005-01-07 2011-11-15 Interactive TKO, Inc. System and method for live software object interaction
US20120054334A1 (en) * 2010-08-31 2012-03-01 Sap Ag Central cross-system pi monitoring dashboard
US20130247136A1 (en) * 2012-03-14 2013-09-19 International Business Machines Corporation Automated Validation of Configuration and Compliance in Cloud Servers

Also Published As

Publication number Publication date
US9146798B2 (en) 2015-09-29

Similar Documents

Publication Publication Date Title
US11042458B2 (en) Robotic optimization for robotic process automation platforms
CN111538634B (en) Computing system, method, and storage medium
US10726045B2 (en) Resource-efficient record processing in unified automation platforms for robotic process automation
US8762929B2 (en) System and method for exclusion of inconsistent objects from lifecycle management processes
US11456936B2 (en) Detection and cleanup of unused microservices
US9652744B2 (en) Smart user interface adaptation in on-demand business applications
US11900162B2 (en) Autonomous application management for distributed computing systems
US11381488B2 (en) Centralized, scalable, resource monitoring system
US10102239B2 (en) Application event bridge
US20160026698A1 (en) Enabling business process continuity on periodically replicated data
US20140258250A1 (en) Flexible Control Framework Featuring Standalone Rule Engine
US9632904B1 (en) Alerting based on service dependencies of modeled processes
US9141425B2 (en) Framework for critical-path resource-optimized parallel processing
US9146798B2 (en) System and method of performing a health check on a process integration component
US8621492B2 (en) Application level contexts
US8468529B2 (en) Correlating, logging and tracing messaging events between workflow instances with globally unique identifiers
CN113626379A (en) Research and development data management method, device, equipment and medium
CN114816477A (en) Server upgrading method, device, equipment, medium and program product
US11494713B2 (en) Robotic process automation analytics platform
US20210064389A1 (en) Software component configuration alignment
US11650847B2 (en) Auto-recovery job scheduling framework
US20230206144A1 (en) Methods, apparatuses and computer program products for generating an incident and change management user interface
US11223547B1 (en) Managing information technology infrastructure based on user experience
CN116701123A (en) Task early warning method, device, equipment, medium and program product
CN117193754A (en) Method, apparatus, electronic device and computer readable medium for processing service request

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAP AG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUPTA, VIKAS;JOSE, ABY;REEL/FRAME:030370/0492

Effective date: 20130322

AS Assignment

Owner name: SAP SE, GERMANY

Free format text: CHANGE OF NAME;ASSIGNOR:SAP AG;REEL/FRAME:033625/0223

Effective date: 20140707

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8