US20170116112A1 - Exploratory testing on multiple system landscapes - Google Patents
- Publication number
- US20170116112A1 (application US 14/918,828)
- Authority
- US
- United States
- Prior art keywords
- response
- received
- validated
- landscapes
- system landscape
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3692—Test management for test results analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
Definitions
- the present disclosure relates to systems, software, and computer-implemented methods for testing applications on multiple system landscapes.
- New and updated applications are developed based on a platform.
- software vendors, system integrators (SIs), and IT departments need to test and validate the new applications on all or at least the most prominent combinations of system landscapes of the platform.
- One computer-implemented method includes: identifying instructions to test a plurality of system landscapes; executing a test of a first system landscape from the plurality of system landscapes, including sending a request with a predefined input to an entry point of the first system landscape; validating a response received from the first system landscape by a user associated with the testing; executing tests of at least a subset of the remaining plurality of system landscapes, which includes sending requests including the predefined input to the entry point of each of the subset of the remaining plurality of system landscapes, receiving responses from the subset of the remaining plurality of system landscapes, and comparing each received response to the validated response from the first system landscape; and, in response to the comparison, generating a result set of the comparison of each received response to the validated response.
- FIG. 1 is a block diagram illustrating an example system for testing applications on multiple system landscapes.
- FIG. 2 is a flow diagram of an example interaction between the components of the example system.
- FIG. 3 is a flowchart of an example method for testing applications on multiple system landscapes.
- FIG. 4 is an example screenshot of a sample generated result.
- the present disclosure describes systems and tools for providing simultaneous and/or concurrent exploratory testing of applications on multiple system landscapes.
- a system landscape is a combination of all of the systems and components required to run the applications.
- software vendors, SIs, and IT departments need to test the applications on all or at least the most prominent combinations of system landscapes of the platform.
- This disclosure describes the process to test applications on multiple system landscapes by validating one test result of one system landscape and comparing test results of the rest of the system landscapes to the validated test result.
- New and updated applications are developed based on a platform, such as a platform-as-a-service.
- the platform includes a backend, a middleware, and a frontend.
- a Leave Request application may have a number of system landscape options based on various possible implementations.
- test information (e.g., test requests, test responses, the validated test response, and system landscapes) may be stored and reviewed; when an error or possible error is found for a particular system landscape, it is easy to later access the stored test information and validate a solution to the error after modifications are made to the particular system landscape.
- This testing mechanism will ensure consistency among platforms, while strictly manual testing may result in some inconsistencies. In addition, platforms with a performance issue at a particular step are more easily identified. Any suitable testing algorithm for multiple system landscapes will benefit from the solution.
- FIG. 1 is a block diagram illustrating an example system 100 for testing applications on multiple system landscapes.
- the illustrated system 100 includes or is communicably coupled with one or more servers 102 , a client 122 , a Multi-URL (Uniform Resource Locator) comparison system 140 , and a network 160 .
- functionality of two or more systems or servers may be provided by a single system or server.
- the functionality of one illustrated system or server may be provided by multiple systems or servers.
- server 102 is illustrated as a single server, server 102 is meant to represent any combination of systems, including front end, middleware, and backend systems, as appropriate.
- a UI of the application may reside on the front end
- the middleware may provide standard services for the application (e.g., in the form of oData protocol)
- the backend may perform the actual data processing. While not necessary for the described tools and advantages to be realized, such multi-tier architectures can result in a large number of system landscape combinations, as multiple front ends, middleware, and backend systems may be combined in various ways.
- server 102 may be any computer or processing device such as, for example, a blade server, general-purpose personal computer (PC), Mac®, workstation, UNIX-based workstation, or any other suitable device.
- although FIG. 1 illustrates server 102 as a single system, server 102 can be implemented using two or more systems, as well as computers other than servers, including a server pool.
- the present disclosure contemplates computers other than general-purpose computers, as well as computers without conventional operating systems.
- illustrated server 102 , client 122 , and Multi-URL comparison system 140 may each be adapted to execute any operating system, including Linux, UNIX, Windows, Mac OS®, Java™, Android™, or iOS.
- the illustrated systems may also include or be communicably coupled with a communication server, an e-mail server, a web server, a caching server, a streaming data server, and/or other suitable server or computer.
- server 102 may be any suitable computing server or system for running applications in response to requests for testing the applications.
- the server 102 is described herein in terms of responding to requests for testing applications from users at client 122 and other clients.
- the server 102 may, in some implementations, be a part of a larger system providing additional functionality.
- server 102 may be part of an enterprise business application or application suite providing one or more of enterprise relationship management, data management systems, customer relationship management, and others.
- server 102 may receive a request to execute a testing or validity-related operation, and can provide a response back to the appropriate requestor.
- the server 102 may be associated with a particular URL for web-based applications. The particular URL can trigger execution of a plurality of components and systems.
- server 102 includes an interface 104 , a processor 106 , a backend application 108 , and memory 110 .
- the server 102 is a simplified representation of one or more systems and/or servers that provide the described functionality, and is not meant to be limiting, but rather an example of the systems possible.
- the interface 104 is used by the server 102 for communicating with other systems in a distributed environment—including within the system 100 —connected to the network 160 , e.g., client 122 and other systems communicably coupled to the network 160 .
- the interface 104 comprises logic encoded in software and/or hardware in a suitable combination and operable to communicate with the network 160 .
- the interface 104 may comprise software supporting one or more communication protocols such that the network 160 or the interface's hardware is operable to communicate physical signals within and outside of the illustrated environment 100 .
- Network 160 facilitates wireless or wireline communications between the components of the environment 100 (i.e., between server 102 and client 122 , between server 102 and Multi-URL comparison system 140 , and among others), as well as with any other local or remote computer, such as additional clients, servers, or other devices communicably coupled to network 160 , including those not illustrated in FIG. 1 .
- the network 160 is depicted as a single network, but may be comprised of more than one network without departing from the scope of this disclosure, so long as at least a portion of the network 160 may facilitate communications between senders and recipients.
- one or more of the illustrated components may be included within network 160 as one or more cloud-based services or operations.
- the Multi-URL comparison system 140 may be a cloud-based service.
- the network 160 may be all or a portion of an enterprise or secured network, while in another instance, at least a portion of the network 160 may represent a connection to the Internet. In some instances, a portion of the network 160 may be a virtual private network (VPN). Further, all or a portion of the network 160 can comprise either a wireline or wireless link.
- Example wireless links may include 802.11a/b/g/n/ac/ad/af, 802.20, WiMax, LTE, and/or any other appropriate wireless link.
- the network 160 encompasses any internal or external network, networks, sub-network, or combination thereof operable to facilitate communications between various computing components inside and outside the illustrated system 100 .
- the network 160 may communicate, for example, Internet Protocol (IP) packets, Frame Relay frames, Asynchronous Transfer Mode (ATM) cells, voice, video, data, and other suitable information between network addresses.
- the network 160 may also include one or more local area networks (LANs), radio access networks (RANs), metropolitan area networks (MANs), wide area networks (WANs), all or a portion of the Internet, and/or any other communication system or systems at one or more locations.
- the server 102 includes a processor 106 . Although illustrated as a single processor 106 in FIG. 1 , two or more processors may be used according to particular needs, desires, or particular implementations of the environment 100 .
- Each processor 106 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another suitable component.
- the processor 106 executes instructions and manipulates data to perform the operations of the server 102 .
- the processor 106 executes the algorithms and operations described in the illustrated figures, including the operations performing the functionality associated with the server 102 generally, as well as the various software modules (e.g., the backend application 108 ), including the functionality for sending communications to and receiving transmissions from client 122 .
- the backend application 108 represents an application, set of applications, software, software modules, or combination of software and hardware used to perform operations related to testing applications in the server 102 .
- the backend application 108 can perform operations including receiving requests for testing applications in the server 102 , running tests for the applications, providing test response, and performing standard operations associated with the backend application 108 .
- the backend application 108 can include and provide various functionality to assist in the management and execution of testing applications.
- the backend application 108 may be an entry point associated with execution of a particular instance of an end-to-end or composite application, where, when execution is initiated at the backend application 108 , one or more additional applications and/or systems are executed.
- “software” includes computer-readable instructions, firmware, wired and/or programmed hardware, or any combination thereof on a tangible medium (transitory or non-transitory, as appropriate) operable when executed to perform at least the processes and operations described herein.
- each software component may be fully or partially written or described in any appropriate computer language including C, C++, JavaScript, Java™, Visual Basic, assembler, Perl®, any suitable version of 4GL, as well as others.
- server 102 includes memory 110 , or multiple memories 110 .
- the memory 110 may include any memory or database module and may take the form of volatile or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), removable media, or any other suitable local or remote memory component.
- the memory 110 may store various objects or data, including financial and/or business data, application information including URLs and settings, user information, behavior and access rules, administrative settings, password information, caches, backup data, repositories storing business and/or dynamic information, and any other appropriate information including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the backend application 108 and/or server 102 . Additionally, the memory 110 may store any other appropriate data, such as VPN applications, firmware logs and policies, firewall policies, a security or access log, print or other reporting files, as well as others.
- Client 122 may be any computing device operable to connect to or communicate with Multi-URL comparison system 140 , other clients (not illustrated), or other components via network 160 , as well as with the network 160 itself, using a wireline or wireless connection, and can include a desktop computer, a mobile device, a tablet, a server, or any other suitable computer device.
- client 122 comprises an electronic computer device operable to receive, transmit, process, and store any appropriate data associated with the system 100 of FIG. 1 .
- client 122 can be a particular thing within a group of the internet of things, such as a connected appliance or tool.
- client 122 includes an interface 124 , a processor 126 , a graphical user interface (GUI) 128 , a client application 130 , and memory 132 .
- Interface 124 and processor 126 may be similar to or different than the interface 104 and processor 106 described with regard to server 102 .
- processor 126 executes instructions and manipulates data to perform the operations of the client 122 .
- the processor 126 can execute some or all of the algorithms and operations described in the illustrated figures, including the operations performing the functionality associated with the client application 130 and the other components of client 122 .
- interface 124 provides the client 122 with the ability to communicate with other systems in a distributed environment—including within the system 100 —connected to the network 160 .
- Client 122 executes a client application 130 .
- the client application 130 may operate with or without requests to the server 102 —in other words, the client application 130 may execute its functionality without requiring the server 102 in some instances, such as by accessing data stored locally on the client 122 .
- the client application 130 may be operable to interact with the server 102 by sending requests via Multi-URL comparison system 140 to the server 102 for testing applications.
- the client application 130 may be a standalone web browser, while in others, the client application 130 may be an application with a built-in browser.
- the client application 130 can be a web-based application or a standalone application developed for the particular client 122 .
- the client application 130 can be a native iOS application for iPad, a desktop application for laptops, as well as others.
- where the client 122 is a particular thing (e.g., a device) within a group of the internet of things, the client application 130 may be software associated with the functionality of that thing or device.
- the client application 130 may be an application that requests application test results from the server 102 for presentation and/or execution on client 122 .
- client application 130 may be an agent or client-side version of the backend application 108 .
- Memory 132 may be similar to or different from memory 110 of the server 102 .
- memory 132 may store various objects or data, including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the client application 130 and/or client 122 .
- the memory 132 may store any other appropriate data, such as VPN applications, firmware logs and policies, firewall policies, a security or access log, print or other reporting files, as well as others.
- the illustrated client 122 is intended to encompass any computing device such as a desktop computer, laptop/notebook computer, mobile device, smartphone, personal data assistant (PDA), tablet computing device, one or more processors within these devices, or any other suitable processing device.
- the client 122 may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the client application 130 or the client 122 itself, including digital data, visual information, or a GUI 128 , as shown with respect to the client 122 .
- Multi-URL comparison system 140 may be any computing component operable to connect to or communicate with one or more servers 102 , client 122 , or other components via network 160 , as well as with the network 160 itself, using a wireline or wireless connection, and can include a browser extension (e.g., a browser plug-in), a network proxy, a cloud-based application, or any other suitable components. Although shown separately from the client 122 in FIG. 1 , in some implementations, Multi-URL comparison system 140 may be part of the client 122 as a browser extension to the client application 130 or alternatively, as part of the functionality of the client application 130 . In general, Multi-URL comparison system 140 comprises an electronic computer component operable to receive, transmit, process, and store any appropriate data associated with the system 100 of FIG. 1 . In some instances, Multi-URL comparison system 140 can be a particular thing within a group of the internet of things, such as a connected appliance or tool.
- Multi-URL comparison system 140 includes an interface 142 , a processor 144 , a system landscape management module 146 , an application testing module 148 , a response comparison module 150 , and memory 152 .
- the Multi-URL comparison system 140 may include additional and/or different components not shown in the block diagram. In some implementations, components may also be omitted from the block diagram. For example, when the Multi-URL comparison system 140 is implemented as a browser extension to the client application 130 , the processor 144 may be omitted.
- Interface 142 and processor 144 may be similar to or different than the interface 104 and processor 106 described with regard to server 102 .
- processor 144 executes instructions and manipulates data to perform the operations of the Multi-URL comparison system 140 .
- the processor 144 can execute some or all of the algorithms and operations described in the illustrated figures, including the operations performing the functionality associated with the system landscape management module 146 , the application testing module 148 , the response comparison module 150 , and the other components of Multi-URL comparison system 140 .
- interface 142 provides the Multi-URL comparison system 140 with the ability to communicate with other systems in a distributed environment—including within the system 100 —connected to the network 160 .
- the Multi-URL comparison system 140 includes one or more software and/or firmware components that implement the system landscape management module 146 .
- the system landscape management module 146 can provide functionality associated with managing system landscapes 156 and identifying URLs 158 for corresponding system landscapes 156 .
- Each system landscape in the system landscapes 156 represents or is associated with a combination of components for executing a particular version of an application.
- one system landscape includes a backend component of ERP 6.0 EhP2 SPS 05 (February 2009), an oData provisioning component of NetWeaver Gateway AddOn, and a frontend component of Fiori ABAP Frontend Server.
- an associated entry point is generated or identified for initiating execution.
- the entry point in one instance may be a URL identifying the associated system landscape, wherein requests sent to the URL result in a response from the associated system landscape after the application is executed there.
- the entry point is a URI (Uniform Resource Identifier) of the associated system landscape.
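As a sketch, a registry mapping landscape component combinations to URL entry points might look like the following; the component names echo the example in the text, but the identifiers and URL scheme are hypothetical assumptions:

```python
# Illustrative only: landscape identifiers and the URL pattern are
# hypothetical; the component names mirror the example in the text.
SYSTEM_LANDSCAPES = {
    "landscape-1": {
        "backend": "ERP 6.0 EhP2 SPS 05",
        "middleware": "NetWeaver Gateway AddOn",
        "frontend": "Fiori ABAP Frontend Server",
    },
}

def entry_point(landscape_id, host="landscapes.example.com"):
    """Return the URL entry point for a landscape; a request sent to this
    URL is answered by that landscape's component combination."""
    if landscape_id not in SYSTEM_LANDSCAPES:
        raise KeyError(f"unknown landscape: {landscape_id}")
    return f"https://{host}/{landscape_id}"
```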
- the system landscape management module 146 can perform landscape management using a technique different than the landscape management technique described herein.
- the Multi-URL comparison system 140 includes one or more software and/or firmware components that implement the application testing module 148 .
- the application testing module 148 can provide functionality associated with testing a particular version of an application against all or at least the most prominent combinations of system landscapes 156 .
- the application testing module 148 receives a testing request from the client 122 or from another system, including from an internal user of the Multi-URL comparison system 140 , and stores the testing request locally (e.g., in memory 152 ). For each tested system landscape, the application testing module 148 sends the same testing request to the associated URL identified by the system landscape management module 146 and corresponding to that particular system landscape, and subsequently receives a response from the URL to which the testing request was sent.
- one of the received responses is selected to be used for validation and, upon review and validation, as the validated response.
- the received responses, including the validated response, are stored locally (e.g., in memory 152 ) and may be used by the response comparison module 150 .
- the application testing module 148 can perform application testing using a technique different than the application testing technique described herein.
- the Multi-URL comparison system 140 also includes one or more software and/or firmware components that implement the response comparison module 150 .
- the response comparison module 150 can provide functionality associated with comparing test responses and generating a result set of the comparison of each received response to the validated response. It is assumed that the same testing request should generate the same response on all system landscapes for the comparison to be meaningful. For some network options, responses for the same testing request may include different headers. In those cases, responses excluding headers are used for validation and comparison. In some implementations, responses received from system landscapes (e.g., network responses) are compared without being rendered. In some implementations, responses are rendered (e.g., rendered responses) before being compared.
- response time (e.g., the time between sending a request and receiving a corresponding response) for each tested system landscape is gathered and compared by the Multi-URL comparison system 140 .
- the response comparison module 150 can perform response comparison using a technique different than the response comparison technique described herein.
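One way to realize the header-excluding comparison described above is to split each raw response at the header/body boundary and compare only the bodies. This is a sketch under the assumption of raw HTTP/1.x-style responses; the function names are illustrative:

```python
def split_response(raw):
    """Split a raw HTTP-style response into (headers, body). Only the body
    takes part in validation and comparison, since headers may legitimately
    differ across network options."""
    headers, _, body = raw.partition(b"\r\n\r\n")
    return headers, body

def compare_to_validated(validated_raw, received):
    """Build a result set comparing each received response body to the
    validated response body, ignoring headers on both sides."""
    _, validated_body = split_response(validated_raw)
    return [
        {"url": url, "match": split_response(raw)[1] == validated_body}
        for url, raw in received.items()
    ]
```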
- Multi-URL comparison system 140 includes memory 152 .
- Memory 152 may be similar to or different from memory 110 of the server 102 .
- memory 152 may store various objects or data, including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the system landscape management module 146 , the application testing module 148 , and/or the response comparison module 150 .
- illustrated memory 152 includes results 154 , system landscapes 156 , and URLs 158 .
- the results 154 may store generated result sets associated with different testing requests and/or different sets of system landscapes.
- the system landscapes 156 may store or reference a set of system landscapes that the Multi-URL comparison system 140 can access.
- the URLs 158 may store URLs and/or URIs corresponding to some or all of the system landscapes stored in the system landscapes 156 .
- While portions of the software elements illustrated in FIG. 1 are shown as individual modules that implement the various features and functionality through various objects, methods, or other processes, the software may instead include a number of sub-modules, third-party services, components, libraries, and such, as appropriate. Conversely, the features and functionality of various components can be combined into single components as appropriate.
- FIG. 2 is a flow diagram of an example interaction 200 for testing applications on multiple system landscapes.
- the interaction 200 may include additional and/or different components not shown in the flow diagram. Components may also be omitted from the interaction 200 , and additional messages may be added to the interaction 200 .
- the components illustrated in FIG. 2 may be similar to or different from those described in FIG. 1 .
- client 202 is connected to Multi-URL comparison system 204 .
- the Multi-URL comparison system 204 connects to a plurality of URLs (e.g., URL 1 to URLn), where each URL is associated with a different system landscape associated with a particular application for testing.
- network responses are compared to a validated response.
- rendered results may be compared to a validated rendered result instead.
- the Multi-URL comparison system 204 selects network response 214 of URL 1 206 to be validated manually.
- the Multi-URL comparison system 204 automatically compares network responses 218 of the remaining URLs 208 (e.g., URL 2 to URLn) to the validated network response.
- the client 202 transmits a request 210 to the Multi-URL comparison system 204 for testing an application on multiple system landscapes.
- the Multi-URL comparison system 204 stores the request, identifies a plurality of system landscapes used or potentially used to execute the application, and generates entry points for the identified plurality of system landscapes (e.g., URL 1 to URLn).
- the Multi-URL comparison system 204 then transmits a request 212 to entry point 206 (e.g., URL 1 ) and transmits requests 216 to entry points 208 (e.g., URL 2 to URLn).
- Network response 214 is received from URL 1 206 , while network responses 218 are received from URL 2 to URLn 208 .
- the request 212 and the request 216 are transmitted at the same time, or alternatively, are transmitted as a common set of requests.
- a request/response combination to be used for validation of a response set may be determined after the requests are sent, and after at least some of the responses are received. In other words, the request to be validated may not be selected until multiple responses have been received in response to the requests.
- the Multi-URL comparison system 204 transmits the request to one entry point and waits for a network response before transmitting the request to another entry point. In others, the requests can be sent concurrently and/or simultaneously to the various entry points 206 , 208 .
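The concurrent variant can be sketched with a thread pool; the `send` callable is again a hypothetical stand-in for the actual HTTP request:

```python
from concurrent.futures import ThreadPoolExecutor

def send_concurrently(send, entry_points, payload):
    """Dispatch the same test request to every entry point concurrently
    and collect the responses keyed by entry point."""
    with ThreadPoolExecutor(max_workers=len(entry_points)) as pool:
        futures = {url: pool.submit(send, url, payload) for url in entry_points}
        return {url: future.result() for url, future in futures.items()}
```

The sequential variant described first would simply loop over the entry points, awaiting each response before sending the next request.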
- the Multi-URL comparison system 204 transmits the network response from URL 1 220 to the client 202 for validation.
- a user of the client 202 validates the network response, e.g., manually.
- the validated network response 222 is transmitted to the Multi-URL comparison system 204 .
- the Multi-URL comparison system 204 compares the received network responses 218 to the validated network response 214 and generates a result set of the comparison of each received network response to the validated network response.
- headers in network responses are ignored when performing the comparison.
- a set of rendered results associated with the network responses may be compared instead of the content of the network responses itself.
- FIG. 3 is a flowchart of an example method 300 for testing applications on multiple system landscapes. It will be understood that method 300 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate.
- a client, a server, or other computing device can be used to execute method 300 and related methods and obtain any data from the memory of a client, the server, or the other computing device.
- the method 300 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1 .
- the method 300 and related methods can be executed by the Multi-URL comparison system 140 of FIG. 1 .
- Each system landscape includes a combination of components for executing a particular version of an application.
- Each system landscape is associated with an entry point for execution.
- the entry point is a version-specific URL or URI.
- the instructions to test the plurality of system landscapes are received from a client, and the testing is performed by a proxy system remote from the client.
- each system landscape includes a unique combination of components as compared to each of the other system landscapes.
- a test of a first system landscape from the plurality of system landscapes is executed.
- the first system landscape is selected randomly from the plurality of system landscapes.
- the first system landscape is selected from the plurality of system landscapes based on a ranking
- Executing the test of the first system landscape includes sending a request to the entry point of the first system landscape.
- the request includes a predefined input for the test, e.g., for which a set of expected or calculable results can be estimated or predicted.
- a response received from the first system landscape is validated by a user associated with the testing.
- validating the response received from the first system landscape includes validating a rendered result of the received response.
- validating the response includes validating contents of a network response received from the first system landscape.
- tests of at least a subset of the remaining plurality of system landscapes are executed without user input. In some instances, tests of at least the subset of the remaining plurality of system landscapes are executed in response to validating the response received from the first system landscape. In some instances, tests of at least the subset of the remaining plurality of system landscapes are executed concurrently with the execution of the test of the first system landscape. In some implementations, a response to be validated by the user associated with the testing may be randomly or manually selected from a plurality of responses received from the plurality of system landscapes. Executing the tests of the subset of the remaining plurality of system landscapes includes, for example, the three process actions described below.
- requests are sent to the entry point of each of the subset of the remaining plurality of system landscapes.
- the requests include the predefined input for the tests similar to the predefined input associated with the request sent to the first system landscape.
- responses from the subset of the remaining plurality of system landscapes are received.
- each received response is compared to the validated response from the first system landscape.
- comparing each received response to the validated response from the first system landscape includes comparing rendered results of each received response to the rendered result of the validated response.
- comparing each received response to the validated response includes comparing network responses of each received response to the validated network response.
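A minimal sketch of the network-response comparison in Python, assuming raw HTTP responses as byte strings. Because the disclosure notes elsewhere that responses for the same request may differ only in their headers, the sketch strips the header section before comparing; the function name is an illustrative assumption, not part of the disclosure:

```python
def comparable_form(raw_response: bytes) -> bytes:
    """Return the body of a raw HTTP response, dropping the header section,
    so that landscapes differing only in headers (dates, server names, etc.)
    still compare as equal. Assumes the conventional CRLF CRLF separator."""
    _, _, body = raw_response.partition(b"\r\n\r\n")
    return body
```

Two network responses would then be compared as `comparable_form(a) == comparable_form(b)`.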
- in an isolated shadow browser instance (e.g., the same browser on the same client), responses from relevant URLs are rendered and the resulting images are compared.
- a result set of the comparison of each received response to the validated response is generated in response to the comparison.
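The steps above (send the same predefined input to every entry point, treat the user-validated first response as the reference, compare the remaining responses to it, and collect a result set) can be sketched as follows. The function and parameter names, and the injected `send` helper, are illustrative assumptions rather than part of the disclosure:

```python
from typing import Callable, Dict

def run_landscape_tests(
    entry_points: Dict[str, str],      # landscape name -> entry-point URL
    send: Callable[[str, str], str],   # (url, predefined_input) -> response
    predefined_input: str,
    first: str,                        # landscape whose response is validated
) -> Dict[str, bool]:
    """Send the same predefined input to every entry point and compare each
    response to the (user-validated) response of the first landscape."""
    validated = send(entry_points[first], predefined_input)
    # In the described method, `validated` is confirmed by a user at this point.
    results = {first: True}
    for name, url in entry_points.items():
        if name == first:
            continue
        results[name] = send(url, predefined_input) == validated
    return results
```

A landscape mapped to `False` in the result set corresponds to a received response deviating from the validated response.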
- the generated result set is presented to a user associated with the testing.
- At 345, at least one received response deviating from the validated response is identified.
- the system landscape associated with the received response deviating from the validated response is identified as being associated with a possible error based on the failing or deviating response to the test.
- identifying the at least one received response deviating from the validated response is based on a determination that the at least one received response includes different contents as compared to the validated response, or that a rendered result from a particular system landscape does not match the rendered result associated with the validated response.
- an indication that at least one modification has been made to a system landscape associated with a received response deviating from the validated response is received.
- the indication may be received at a time after the initial testing of 320 through 340 is performed.
- a new testing request is sent to the system landscape at which the at least one modification is identified.
- the new request can include the predefined input to the entry point of the system landscape associated with the received response deviating from the validated response.
- the new request is a previously stored request including the predefined input.
- test information (e.g., the test request, test responses, validated test response, and system landscapes) may be stored for later review.
- a solution to the error (e.g., the modification) can then be validated.
- an updated response is received from the system landscape associated with the at least one modification.
- the updated received response is compared to the validated response. Based on the comparison, a determination as to whether the new response matches the validated response can be made, where if the responses match, it can be considered that the system landscape associated with the at least one modification has been corrected by those modifications.
- the validated response and each of the received responses are associated with a respective response time (e.g., the time between sending a request and receiving a corresponding response).
- a particular response time is determined to deviate from an average response time by more than a predetermined threshold time, threshold percentage, or by a number of standard deviations from the average response time, among others.
- the average response time is a calculated average time of all respective response times.
- a system landscape associated with the determined response time deviation is identified as associated with an issue in the generated result set.
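A sketch of the response-time check described above, assuming the fixed-threshold variant (the percentage and standard-deviation variants differ only in the deviation test); the names are illustrative:

```python
def flag_slow_landscapes(response_times: dict, threshold: float) -> set:
    """Return the landscapes whose response time deviates from the average
    of all respective response times by more than the threshold."""
    avg = sum(response_times.values()) / len(response_times)
    return {name for name, t in response_times.items() if abs(t - avg) > threshold}
```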
- FIG. 4 illustrates an example screenshot 400 of a sample generated result.
- network responses and response times are compared.
- URL 402 lists the URLs (e.g., URL 1 to URLn) that are tested.
- Results 404 lists the comparison result of the network responses.
- Response time 406 lists the comparison result of the response time.
- Flag 408 indicates whether a system landscape associated with a particular URL is identified as an error for failing the test.
- network response from URL 1 is identified as validated response.
- Network responses from URL 2 to URLn are compared to the validated response. For example, network response from URL 2 matches the validated response while network response from URL 3 does not match the validated response 410 .
- flag 420 identifies the system landscape corresponding with entry point URL 3 as being associated with an error for failing the test.
- Average response time T 412 is calculated by averaging all respective response times (e.g., t1 to tn).
- response time t 2 414 for URL 2 deviates from the average response time T by more than a predetermined threshold time 416 .
- deviations by a predefined percentage of, or a number of standard deviations from, the average response time T may alternatively identify a possible error.
- flag 418 identifies the system landscape corresponding with entry point URL 2 as being associated with an error for failing the test.
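The FIG. 4 result set (one row per tested URL, a response-comparison result, a response-time result, and a flag) could be assembled as in the following sketch; the row layout is an assumption modeled on the screenshot described above:

```python
def build_result_set(responses: dict, times: dict,
                     validated: str, threshold: float) -> list:
    """One row per tested URL: whether its network response matches the
    validated response, whether its response time stays within the
    threshold of the average, and a flag set when either check fails."""
    avg = sum(times.values()) / len(times)
    rows = []
    for url, response in responses.items():
        match = response == validated
        time_ok = abs(times[url] - avg) <= threshold
        rows.append({"url": url, "match": match, "time_ok": time_ok,
                     "flag": not (match and time_ok)})
    return rows
```

In the FIG. 4 example, URL3 would be flagged for a mismatched network response and URL2 for a deviating response time.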
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
Abstract
Description
- The present disclosure relates to systems, software, and computer-implemented methods for testing applications on multiple system landscapes.
- New and updated applications are developed based on a platform. To support the new and updated applications, software vendors, system integrators (SIs), and IT departments need to test and validate the new applications on all or at least the most prominent combinations of system landscapes of the platform.
- The present disclosure involves systems, software, and computer-implemented methods for testing applications on multiple system landscapes. One computer-implemented method includes: identifying instructions to test a plurality of system landscapes, executing a test of a first system landscape from the plurality of system landscapes, validating a response received from the first system landscape by a user associated with the testing, executing tests of at least a subset of the remaining plurality of system landscapes which includes sending requests including the predefined input to the entry point of each of the subset of the remaining plurality of system landscapes, receiving responses from the subset of the remaining plurality of system landscapes, and comparing each received response to the validated response from the first system landscape, and in response to the comparison, generating a result set of the comparison of each received response to the validated response.
- While generally described as computer-implemented software embodied on non-transitory, tangible media that processes and transforms the respective data, some or all of the aspects may be computer-implemented methods or further included in respective systems or other devices for performing this described functionality. The details of these and other aspects and embodiments of the present disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
- FIG. 1 is a block diagram illustrating an example system for testing applications on multiple system landscapes.
- FIG. 2 is a flow diagram of an example interaction between the components of the example system.
- FIG. 3 is a flowchart of an example method for testing applications on multiple system landscapes.
- FIG. 4 is an example screenshot of a sample generated result.
- The present disclosure describes systems and tools for providing simultaneous and/or concurrent exploratory testing of applications on multiple system landscapes. A system landscape is a combination of all of the systems and components required to run the applications. To support the applications on a platform, software vendors, SIs, and IT departments need to test the applications on all or at least the most prominent combinations of system landscapes of the platform. This disclosure describes a process to test applications on multiple system landscapes by validating one test result of one system landscape and comparing the test results of the rest of the system landscapes to the validated test result.
- New and updated applications (e.g., new components, solutions, versions of applications, etc.) are developed based on a platform, such as a platform-as-a-service. In some instances, the platform includes a backend, a middleware, and a frontend. To test the new applications in the platform, multiple system landscapes of the platform need to be generated and subsequently tested for validity. For example, a Leave Request application may have the following system landscape options based on various possible implementations:
- Backend
- ERP 6.0 EhP2 SPS 05 (February 2009)
- ERP 6.0 EhP3 SPS 05 (August 2009)
- ERP 6.0 EhP4 SPS 05 (November 2009)
- ERP 6.0 EhP4/NW 7.01—SPS 05 (November 2009)
- ERP 6.0 EhP5—SPS 03 (December 2010)
- ERP 6.0 EhP6—SPS 01 (November 2011)
- ERP 6.0 SPS 15 (February 2009)
- oData provisioning
- NetWeaver Gateway Hub
- NetWeaver Gateway AddOn
- HCl oData Provisioning
- HCl application edition
- Frontend
- Fiori ABAP Frontend Server (works with NetWeaver Gateway Add On or Hub only)
- Fiori cloud edition—Internal Access Point (works with NetWeaver Gateway Add On or Hub only)
- Fiori cloud edition—External Access Point
Overall, a total of 56 system landscapes may need to be tested in order to support the Leave Request application for all suitable permutations.
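The stated total of 56 follows from the compatibility notes above, under the assumption (not stated explicitly in the disclosure) that the External Access Point works with all four oData provisioning options: 7 backends × (2 frontends × 2 gateway options + 1 frontend × 4 options) = 7 × 8 = 56. A sketch enumerating the combinations:

```python
from itertools import product

backends = [f"backend {i}" for i in range(1, 8)]  # the 7 ERP 6.0 variants above
gateways = ["NetWeaver Gateway Hub", "NetWeaver Gateway AddOn"]
odata_options = gateways + ["HCl oData Provisioning", "HCl application edition"]
# frontend -> compatible oData provisioning options (the External Access
# Point's compatibility with all four options is an assumption)
frontends = {
    "Fiori ABAP Frontend Server": gateways,                   # AddOn or Hub only
    "Fiori cloud edition - Internal Access Point": gateways,  # AddOn or Hub only
    "Fiori cloud edition - External Access Point": odata_options,
}

landscapes = [(b, f, o)
              for b, (f, opts) in product(backends, frontends.items())
              for o in opts]
```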
- There are several approaches to performing application testing on multiple system landscapes, including manual testing, automated testing, and others. In manual testing, each system landscape is tested manually. As the number of system landscapes increases, the time required for the testing increases. In automated testing, inputs and expected outputs need to be predefined before running the automated test. Predefining expected output is expensive, especially for newly developed application capabilities and new user interfaces. This disclosure is focused on automatically testing multiple system landscapes, manually validating a test response of a single system landscape selected from the multiple system landscapes, and automatically comparing the rest of the test responses to the validated test response. The present solution may provide the following advantages, among others. Manually validating only one test response reduces the time that is required to manually test against multiple system landscapes. Automatically comparing the rest of the test responses allows the testing to be performed for all system landscapes in approximately the same time as testing a single system landscape. Since test information (e.g., test request, test responses, validated test response, system landscapes) may be stored and reviewed, when an error or possible error is found for a particular system landscape, it is easy to later access the stored test information and validate a solution to the error after modifications are made to the particular system landscape. This testing mechanism will ensure consistency among platforms, while strictly manual testing may result in some inconsistencies. In addition, platforms with performance issues at a particular step are also more easily identified. Any suitable testing algorithm for multiple system landscapes will benefit from the solution.
- Turning to the illustrated embodiment,
FIG. 1 is a block diagram illustrating an example system 100 for testing applications on multiple system landscapes. Specifically, the illustrated system 100 includes or is communicably coupled with one or more servers 102, a client 122, a Multi-URL (Uniform Resource Locator) comparison system 140, and a network 160. Although shown separately, in some implementations, functionality of two or more systems or servers may be provided by a single system or server. In some implementations, the functionality of one illustrated system or server may be provided by multiple systems or servers. While server 102 is illustrated as a single server, server 102 is meant to represent any combination of systems, including front end, middleware, and backend systems, as appropriate. In such implementations, a UI of the application may reside on the front end, the middleware may provide standard services for the application (e.g., in the form of oData protocol), and the backend may perform the actual data processing. While not necessary for the described tools and advantages to be realized, such multi-tier architectures can result in a large number of system landscape combinations, as multiple front ends, middleware, and backend systems may be combined in various ways. - As used in the present disclosure, the term "computer" is intended to encompass any suitable processing device. For example,
server 102 may be any computer or processing device such as, for example, a blade server, general-purpose personal computer (PC), Mac®, workstation, UNIX-based workstation, or any other suitable device. Moreover, although FIG. 1 illustrates server 102 as a single system, server 102 can be implemented using two or more systems, as well as computers other than servers, including a server pool. In other words, the present disclosure contemplates computers other than general-purpose computers, as well as computers without conventional operating systems. Further, illustrated server 102, client 122, and Multi-URL comparison system 140 may each be adapted to execute any operating system, including Linux, UNIX, Windows, Mac OS®, Java™, Android™, or iOS. According to one implementation, the illustrated systems may also include or be communicably coupled with a communication server, an e-mail server, a web server, a caching server, a streaming data server, and/or other suitable server or computer. - In general,
server 102 may be any suitable computing server or system for running applications in response to requests for testing the applications. The server 102 is described herein in terms of responding to requests for testing applications from users at client 122 and other clients. However, the server 102 may, in some implementations, be a part of a larger system providing additional functionality. For example, server 102 may be part of an enterprise business application or application suite providing one or more of enterprise relationship management, data management systems, customer relationship management, and others. For testing purposes, server 102 may receive a request to execute a testing or validity-related operation, and can provide a response back to the appropriate requestor. In some implementations, the server 102 may be associated with a particular URL for web-based applications. The particular URL can trigger execution of a plurality of components and systems. - As illustrated,
server 102 includes an interface 104, a processor 106, a backend application 108, and memory 110. In general, the server 102 is a simplified representation of one or more systems and/or servers that provide the described functionality, and is not meant to be limiting, but rather an example of the systems possible. - The
interface 104 is used by the server 102 for communicating with other systems in a distributed environment—including within the system 100—connected to the network 160, e.g., client 122 and other systems communicably coupled to the network 160. Generally, the interface 104 comprises logic encoded in software and/or hardware in a suitable combination and operable to communicate with the network 160. More specifically, the interface 104 may comprise software supporting one or more communication protocols associated with communications such that the network 160 or interface's hardware is operable to communicate physical signals within and outside of the illustrated environment 100. -
Network 160 facilitates wireless or wireline communications between the components of the environment 100 (i.e., between server 102 and client 122, between server 102 and Multi-URL comparison system 140, among others), as well as with any other local or remote computer, such as additional clients, servers, or other devices communicably coupled to network 160, including those not illustrated in FIG. 1. In the illustrated system, the network 160 is depicted as a single network, but may be comprised of more than one network without departing from the scope of this disclosure, so long as at least a portion of the network 160 may facilitate communications between senders and recipients. In some instances, one or more of the illustrated components may be included within network 160 as one or more cloud-based services or operations. For example, the Multi-URL comparison system 140 may be a cloud-based service. The network 160 may be all or a portion of an enterprise or secured network, while in another instance, at least a portion of the network 160 may represent a connection to the Internet. In some instances, a portion of the network 160 may be a virtual private network (VPN). Further, all or a portion of the network 160 can comprise either a wireline or wireless link. Example wireless links may include 802.11ac/ad/af/a/b/g/n, 802.20, WiMax, LTE, and/or any other appropriate wireless link. In other words, the network 160 encompasses any internal or external network, networks, sub-network, or combination thereof operable to facilitate communications between various computing components inside and outside the illustrated system 100. The network 160 may communicate, for example, Internet Protocol (IP) packets, Frame Relay frames, Asynchronous Transfer Mode (ATM) cells, voice, video, data, and other suitable information between network addresses.
The network 160 may also include one or more local area networks (LANs), radio access networks (RANs), metropolitan area networks (MANs), wide area networks (WANs), all or a portion of the Internet, and/or any other communication system or systems at one or more locations. - As illustrated in
FIG. 1, the server 102 includes a processor 106. Although illustrated as a single processor 106 in FIG. 1, two or more processors may be used according to particular needs, desires, or particular implementations of the environment 100. Each processor 106 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another suitable component. Generally, the processor 106 executes instructions and manipulates data to perform the operations of the server 102. Specifically, the processor 106 executes the algorithms and operations described in the illustrated figures, including the operations performing the functionality associated with the server 102 generally, as well as the various software modules (e.g., the backend application 108), including the functionality for sending communications to and receiving transmissions from client 122. - The backend application 108 represents an application, set of applications, software, software modules, or combination of software and hardware used to perform operations related to testing applications in the
server 102. In the present solution, the backend application 108 can perform operations including receiving requests for testing applications in the server 102, running tests for the applications, providing test responses, and performing standard operations associated with the backend application 108. The backend application 108 can include and provide various functionality to assist in the management and execution of testing applications. As noted, the backend application 108 may be an entry point associated with execution of a particular instance of an end-to-end or composite application, where when execution is initiated at the backend application 108, one or more additional applications and/or systems may be initiated and executed. - Regardless of the particular implementation, "software" includes computer-readable instructions, firmware, wired and/or programmed hardware, or any combination thereof on a tangible medium (transitory or non-transitory, as appropriate) operable when executed to perform at least the processes and operations described herein. In fact, each software component may be fully or partially written or described in any appropriate computer language including C, C++, JavaScript, Java™, Visual Basic, assembler, Perl®, any suitable version of 4GL, as well as others.
- As illustrated,
server 102 includes memory 110, or multiple memories 110. The memory 110 may include any memory or database module and may take the form of volatile or non-volatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), removable media, or any other suitable local or remote memory component. The memory 110 may store various objects or data, including financial and/or business data, application information including URLs and settings, user information, behavior and access rules, administrative settings, password information, caches, backup data, repositories storing business and/or dynamic information, and any other appropriate information including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the backend application 108 and/or server 102. Additionally, the memory 110 may store any other appropriate data, such as VPN applications, firmware logs and policies, firewall policies, a security or access log, print or other reporting files, as well as others. -
Client 122 may be any computing device operable to connect to or communicate with Multi-URL comparison system 140, other clients (not illustrated), or other components via network 160, as well as with the network 160 itself, using a wireline or wireless connection, and can include a desktop computer, a mobile device, a tablet, a server, or any other suitable computer device. In general, client 122 comprises an electronic computer device operable to receive, transmit, process, and store any appropriate data associated with the system 100 of FIG. 1. In some instances, client 122 can be a particular thing within a group of the internet of things, such as a connected appliance or tool. - As illustrated,
client 122 includes an interface 124, a processor 126, a graphical user interface (GUI) 128, a client application 130, and memory 132. Interface 124 and processor 126 may be similar to or different than the interface 104 and processor 106 described with regard to server 102. In general, processor 126 executes instructions and manipulates data to perform the operations of the client 122. Specifically, the processor 126 can execute some or all of the algorithms and operations described in the illustrated figures, including the operations performing the functionality associated with the client application 130 and the other components of client 122. Similarly, interface 124 provides the client 122 with the ability to communicate with other systems in a distributed environment—including within the system 100—connected to the network 160. -
Client 122 executes a client application 130. The client application 130 may operate with or without requests to the server 102—in other words, the client application 130 may execute its functionality without requiring the server 102 in some instances, such as by accessing data stored locally on the client 122. In others, the client application 130 may be operable to interact with the server 102 by sending requests via Multi-URL comparison system 140 to the server 102 for testing applications. In some implementations, the client application 130 may be a standalone web browser, while in others, the client application 130 may be an application with a built-in browser. The client application 130 can be a web-based application or a standalone application developed for the particular client 122. For example, the client application 130 can be a native iOS application for iPad, a desktop application for laptops, as well as others. In another example, the client application 130, where the client 122 is a particular thing (e.g., device) within a group of the internet of things, may be software associated with the functionality of the thing or device. In some instances, the client application 130 may be an application that requests application test results from the server 102 for presentation and/or execution on client 122. In some instances, client application 130 may be an agent or client-side version of the backend application 108. -
Memory 132 may be similar to or different from memory 110 of the server 102. In general, memory 132 may store various objects or data, including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the client application 130 and/or client 122. Additionally, the memory 132 may store any other appropriate data, such as VPN applications, firmware logs and policies, firewall policies, a security or access log, print or other reporting files, as well as others. - The illustrated
client 122 is intended to encompass any computing device such as a desktop computer, laptop/notebook computer, mobile device, smartphone, personal data assistant (PDA), tablet computing device, one or more processors within these devices, or any other suitable processing device. For example, the client 122 may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the client application 130 or the client 122 itself, including digital data, visual information, or a GUI 128, as shown with respect to the client 122. -
Multi-URL comparison system 140 may be any computing component operable to connect to or communicate with one or more servers 102, client 122, or other components via network 160, as well as with the network 160 itself, using a wireline or wireless connection, and can include a browser extension (e.g., a browser plug-in), a network proxy, a cloud-based application, or any other suitable components. Although shown separately from the client 122 in FIG. 1, in some implementations, Multi-URL comparison system 140 may be part of the client 122 as a browser extension to the client application 130 or alternatively, as part of the functionality of the client application 130. In general, Multi-URL comparison system 140 comprises an electronic computer component operable to receive, transmit, process, and store any appropriate data associated with the system 100 of FIG. 1. In some instances, Multi-URL comparison system 140 can be a particular thing within a group of the internet of things, such as a connected appliance or tool. - As illustrated,
Multi-URL comparison system 140 includes an interface 142, a processor 144, a system landscape management module 146, an application testing module 148, a response comparison module 150, and memory 152. In some implementations, the Multi-URL comparison system 140 may include additional and/or different components not shown in the block diagram. In some implementations, components may also be omitted from the block diagram. For example, when the Multi-URL comparison system 140 is implemented as a browser extension to the client application 130, the processor 144 may be omitted. -
Interface 142 and processor 144 may be similar to or different than the interface 104 and processor 106 described with regard to server 102. In general, processor 144 executes instructions and manipulates data to perform the operations of the Multi-URL comparison system 140. Specifically, the processor 144 can execute some or all of the algorithms and operations described in the illustrated figures, including the operations performing the functionality associated with the system landscape management module 146, the application testing module 148, the response comparison module 150, and the other components of Multi-URL comparison system 140. Similarly, interface 142 provides the Multi-URL comparison system 140 with the ability to communicate with other systems in a distributed environment—including within the system 100—connected to the network 160. - The
Multi-URL comparison system 140 includes one or more software and/or firmware components that implement the system landscape management module 146. The system landscape management module 146 can provide functionality associated with managing system landscapes 156 and identifying URLs 158 for corresponding system landscapes 156. Each system landscape in the system landscapes 156 represents or is associated with a combination of components for executing a particular version of an application. For example, for the Leave Request application described before, one system landscape includes a backend component of ERP 6.0 EhP2 SPS 05 (February 2009), an oData provisioning component of NetWeaver Gateway AddOn, and a frontend component of Fiori ABAP Frontend Server. For each system landscape, an associated entry point is generated or identified for initiating execution. For example, the entry point in one instance may be a URL identifying the associated system landscape, wherein requests sent to the URL result in a response from the associated system landscape after the application is executed there. In some instances, the entry point is a URI (Uniform Resource Identifier) of the associated system landscape. Note that, in some implementations, the system landscape management module 146 can perform landscape management using a technique different than the landscape management technique described herein. - The
Multi-URL comparison system 140 includes one or more software and/or firmware components that implement the application testing module 148. The application testing module 148 can provide functionality associated with testing a particular version of an application against all or at least the most prominent combinations of system landscapes 156. In some implementations, the application testing module 148 receives a testing request from the client 122 or from another system, including from an internal user of the Multi-URL comparison system 140, and stores the testing request locally (e.g., in memory 152). For each tested system landscape, the application testing module 148 sends the same testing request to the associated URL identified by the system landscape management module 146 and corresponding to that particular system landscape; the application testing module 148 subsequently receives a response from the associated URL to which the testing request was sent. In some implementations, one of the received responses is selected to be used for validation and, upon review and validation, as the validated response. The received responses, including the validated response, are stored locally (e.g., in memory 152) and may be used by the response comparison module 150. Note that, in some implementations, the application testing module 148 can perform application testing using a technique different than the application testing technique described herein. - The
Multi-URL comparison system 140 also includes one or more software and/or firmware components that implement the response comparison module 150. The response comparison module 150 can provide functionality associated with comparing test responses and generating a result set of the comparison of each received response to the validated response. It is assumed that the same testing request should generate the same response on all system landscapes for the comparison to be meaningful. For some network options, responses for the same testing request may include different headers. In those cases, responses excluding headers are used for validation and comparison. In some implementations, responses received from system landscapes (e.g., network responses) are compared without being rendered. In some implementations, responses are rendered (e.g., rendered responses) before being compared. In some implementations, response time (e.g., the time between sending a request and receiving a corresponding response) for each tested system landscape is gathered and compared by the Multi-URL comparison system 140. Note that, in some implementations, the response comparison module 150 can perform response comparison using a technique different than the response comparison technique described herein. - As illustrated,
Multi-URL comparison system 140 includes memory 152. Memory 152 may be similar to or different from memory 110 of the server 102. In general, memory 152 may store various objects or data, including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto associated with the purposes of the system landscape management module 146, the application testing module 148, and/or the response comparison module 150. For example, illustrated memory 152 includes results 154, system landscapes 156, and URLs 158. The results 154 may store generated result sets associated with different testing requests and/or different sets of system landscapes. The system landscapes 156 may store or reference a set of system landscapes that the Multi-URL comparison system 140 can access. The URLs 158 may store URLs and/or URIs corresponding to some or all of the system landscapes stored in the system landscapes 156. - While portions of the software elements illustrated in
FIG. 1 are shown as individual modules that implement the various features and functionality through various objects, methods, or other processes, the software may instead include a number of sub-modules, third-party services, components, libraries, and such, as appropriate. Conversely, the features and functionality of various components can be combined into single components as appropriate. -
FIG. 2 is a flow diagram of an example interaction 200 for testing applications on multiple system landscapes. In some implementations, the interaction 200 may include additional and/or different components not shown in the flow diagram. Components may also be omitted from the interaction 200, and additional messages may be added to the interaction 200. The components illustrated in FIG. 2 may be similar to or different from those described in FIG. 1. - As illustrated in
FIG. 2, client 202 is connected to Multi-URL comparison system 204. The Multi-URL comparison system 204 connects to a plurality of URLs (e.g., URL1 to URLn), where each URL is associated with a different system landscape associated with a particular application for testing. In this example interaction 200, network responses are compared to a validated response. In other instances, rendered results may be compared to a validated rendered result instead. The Multi-URL comparison system 204 selects network response 214 of URL1 206 to be validated manually. The Multi-URL comparison system 204 automatically compares network responses 218 of the remaining URLs 208 (e.g., URL2 to URLn) to the validated network response. - The
client 202 transmits a request 210 to the Multi-URL comparison system 204 for testing an application on multiple system landscapes. In response to receiving the request 210, the Multi-URL comparison system 204 stores the request, identifies a plurality of system landscapes used or potentially used to execute the application, and generates entry points for the identified plurality of system landscapes (e.g., URL1 to URLn). The Multi-URL comparison system 204 then transmits a request 212 to entry point 206 (e.g., URL1) and transmits requests 216 to entry points 208 (e.g., URL2 to URLn). Network response 214 is received from URL1 206, while network responses 218 are received from URL2 to URLn 208. In some implementations, the request 212 and the requests 216 are transmitted at the same time, or alternatively, are transmitted as a common set of requests. In those instances, a request/response combination to be used for validation of a response set may be determined after the requests are sent, and after at least some of the responses are received. In other words, the request to be validated may not be selected until multiple responses have been received in response to the requests. In some implementations, the Multi-URL comparison system 204 transmits the request to one entry point and waits for a network response before transmitting the request to another entry point. In others, the requests can be sent concurrently and/or simultaneously to the various entry points 206, 208. In the present illustration, the Multi-URL comparison system 204 transmits the network response from URL1 220 to the client 202 for validation. In some implementations, a user of the client 202 validates the network response, e.g., manually. After being validated, the validated network response 222 is transmitted to the Multi-URL comparison system 204.
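By way of illustration only, the fan-out of a common testing request to the various entry points can be sketched in Python as follows. The names (fan_out, fetch) are hypothetical and not part of the disclosed system; fetch stands in for whatever HTTP client actually issues the request to each landscape URL.

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(urls, payload, fetch):
    """Send the same testing request (payload) to every landscape entry
    point concurrently and collect the raw responses keyed by URL."""
    with ThreadPoolExecutor(max_workers=len(urls) or 1) as pool:
        # Each worker issues one request; the results are gathered into a
        # dict that can be stored locally for later validation/comparison.
        return dict(pool.map(lambda url: (url, fetch(url, payload)), urls))
```

The returned mapping corresponds to the locally stored responses from which one response is later selected for validation.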
In response to receiving the validated network response, the Multi-URL comparison system 204 compares the received network responses 218 to the validated network response 214 and generates a result set of the comparison of each received network response to the validated network response. In some implementations, headers in network responses are ignored when performing the comparison. In some instances, a set of rendered results associated with the network responses may be compared instead of the content of the network responses itself. -
FIG. 3 is a flowchart of an example method 300 for testing applications on multiple system landscapes. It will be understood that method 300 and related methods may be performed, for example, by any suitable system, environment, software, and hardware, or a combination of systems, environments, software, and hardware, as appropriate. For example, one or more of a client, a server, or other computing device can be used to execute method 300 and related methods and obtain any data from the memory of a client, the server, or the other computing device. In some implementations, the method 300 and related methods are executed by one or more components of the system 100 described above with respect to FIG. 1. For example, the method 300 and related methods can be executed by the Multi-URL comparison system 140 of FIG. 1. - At 305, instructions to test a plurality of system landscapes are identified. Each system landscape includes a combination of components for executing a particular version of an application. Each system landscape is associated with an entry point for execution. In some instances, the entry point is a version-specific URL or URI. In some instances, the instructions to test the plurality of system landscapes are received from a client, and the testing is performed by a proxy system remote from the client. In some instances, each system landscape includes a unique combination of components as compared to each of the other system landscapes.
- At 310, a test of a first system landscape from the plurality of system landscapes is executed. In some instances, the first system landscape is selected randomly from the plurality of system landscapes. In some instances, the first system landscape is selected from the plurality of system landscapes based on a ranking. Executing the test of the first system landscape includes sending a request to the entry point of the first system landscape. The request includes a predefined input for the test, e.g., one for which a set of expected or calculable results can be estimated or predicted.
- At 315, a response received from the first system landscape is validated by a user associated with the testing. In some instances, validating the response received from the first system landscape includes validating a rendered result of the received response. In some instances, validating the response includes validating contents of a network response received from the first system landscape.
- At 320, tests of at least a subset of the remaining plurality of system landscapes are executed without user input. In some instances, tests of at least the subset of the remaining plurality of system landscapes are executed in response to validating the response received from the first system landscape. In some instances, tests of at least the subset of the remaining plurality of system landscapes are executed concurrently with the execution of the test of the first system landscape. In some implementations, a response to be validated by the user associated with the testing may be randomly or manually selected from a plurality of responses received from the plurality of system landscapes. Executing the tests of the subset of the remaining plurality of system landscapes includes, for example, the three process actions described as follows. At 325, requests are sent to the entry point of each of the subset of the remaining plurality of system landscapes. The requests include the predefined input for the tests, similar to the predefined input associated with the request sent to the first system landscape. At 330, responses from the subset of the remaining plurality of system landscapes are received. At 335, each received response is compared to the validated response from the first system landscape. In some instances, comparing each received response to the validated response from the first system landscape includes comparing rendered results of each received response to the rendered result of the validated response. In some instances, comparing each received response to the validated response includes comparing network responses of each received response to the validated network response. In some instances, an isolated shadow browser instance (e.g., the same browser on the same client) may be implemented for every URL so that the comparison can be performed in a single browser. In some instances, responses from relevant URLs are rendered and the resulting images are compared.
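The header-excluding comparison described earlier can be sketched minimally as follows, assuming raw HTTP-style responses in which the header section ends at the first blank line; the helper names are illustrative and not part of the claimed method.

```python
def strip_headers(raw_response):
    """Return only the body of an HTTP-style response. Headers end at the
    first CRLF CRLF blank line and are excluded, since headers may
    legitimately differ between landscapes for the same testing request."""
    head, sep, body = raw_response.partition("\r\n\r\n")
    return body if sep else raw_response  # no header block found: compare as-is

def responses_match(validated, received):
    """Compare a received response to the validated one, ignoring headers."""
    return strip_headers(validated) == strip_headers(received)
```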
- At 340, a result set of the comparison of each received response to the validated response is generated in response to the comparison. In some instances, the generated result set is presented to a user associated with the testing.
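The generation of a result set at 340 can be illustrated with the following sketch, in which each received response is compared to the validated response and mismatching landscapes are flagged; the dictionary layout is an assumption for illustration, not the disclosed format.

```python
def build_result_set(validated_url, responses):
    """Compare each received response to the validated response and
    produce a result set flagging mismatching landscapes as possible
    errors. responses maps each URL to its (stored) response."""
    validated = responses[validated_url]
    result_set = {}
    for url, response in responses.items():
        if url == validated_url:
            continue  # the validated response is the reference, not a test subject
        matches = (response == validated)
        result_set[url] = {"result": "match" if matches else "mismatch",
                           "flagged": not matches}
    return result_set
```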
- In some implementations, further operations for testing applications on multiple system landscapes can be optionally performed. At 345, at least one received response deviating from the validated response is identified. The system landscape associated with the received response deviating from the validated response is identified as being associated with a possible error based on the failing or deviating response to the test. In some instances, the at least one received response deviating from the validated response is identified based on a determination that the at least one received response includes different contents as compared to the validated response or that a rendered result from a particular system landscape does not match a rendered result associated with the validated response.
- At 350, an indication that at least one modification has been made to a system landscape associated with a received response deviating from the validated response (e.g., one that failed the comparison with the validated response) is received. The indication may be received at a time after the initial testing of 320 through 340 is performed. At 355, a new testing request is sent to the system landscape at which the at least one modification is identified. The new request can include the predefined input to the entry point of the system landscape associated with the received response deviating from the validated response. In some instances, the new request is a previously stored request including the predefined input. Because the test information (e.g., test request, test responses, validated test response, system landscapes) is stored, the stored test information can easily be accessed later to validate a solution to the error (e.g., the modification) on the particular system landscape that failed the test. At 360, an updated response is received from the system landscape associated with the at least one modification. At 365, the updated received response is compared to the validated response. Based on the comparison, a determination as to whether the new response matches the validated response can be made, where if the responses match, it can be considered that the system landscape associated with the at least one modification has been corrected by those modifications.
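The re-test of a modified landscape (350 through 365) can be sketched as follows; retest_after_modification and fetch are hypothetical names, and the previously stored testing request is assumed to be replayable as-is.

```python
def retest_after_modification(url, stored_request, validated_response, fetch):
    """Re-send the previously stored testing request to the modified
    landscape and report whether the updated response now matches the
    validated response, i.e., whether the modification fixed the error."""
    updated_response = fetch(url, stored_request)
    return updated_response == validated_response
```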
- Additional process actions (not shown in
FIG. 3) may be added to improve the performance of the testing mechanism. In some instances, the validated response and each of the received responses are associated with a respective response time (e.g., the time between sending a request and receiving a corresponding response). A particular response time is determined to deviate from an average response time by more than a predetermined threshold time, a threshold percentage, or a number of standard deviations from the average response time, among others. In some instances, the average response time is a calculated average of all respective response times. A system landscape associated with the determined response time deviation is identified as associated with an issue in the generated result set. -
FIG. 4 illustrates an example screenshot 400 of a sample generated result. In this example, network responses and response times are compared. URL 402 lists the URLs (e.g., URL1 to URLn) that are tested. Results 404 lists the comparison results of the network responses. Response time 406 lists the comparison results of the response times. Flag 408 indicates whether a system landscape associated with a particular URL is identified as an error for failing the test. - In the
example screenshot 400, the network response from URL1 is identified as the validated response. Network responses from URL2 to URLn are compared to the validated response. For example, the network response from URL2 matches the validated response while the network response from URL3 does not match the validated response 410. As a result, flag 420 identifies the system landscape corresponding with entry point URL3 as being associated with an error for failing the test. - Average
response time T 412 is calculated by averaging all respective response times (e.g., t1 to tn). In this example, response time t2 414 for URL2 deviates from the average response time T by more than a predetermined threshold time 416. In other instances, deviations by a predefined percentage of, or a number of standard deviations from, the average response time T may alternatively identify a possible error. As a result, flag 418 identifies the system landscape corresponding with entry point URL2 as being associated with an error for failing the test. - Alternative methods of testing applications on multiple system landscapes may be used in other implementations. Those described herein are examples and are not meant to be limiting.
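A sketch of the response-time check, covering both the absolute-threshold and standard-deviation criteria mentioned above; the function and parameter names are illustrative, and the disclosure does not prescribe a particular formula for the deviation test.

```python
from statistics import mean, pstdev

def flag_slow_landscapes(response_times, threshold=None, num_stddev=None):
    """Flag every URL whose response time deviates from the average
    response time by more than an absolute threshold and/or a number of
    standard deviations. response_times maps URL -> time in seconds."""
    times = list(response_times.values())
    avg = mean(times)               # average response time T over t1..tn
    sd = pstdev(times)              # population standard deviation of t1..tn
    flagged = set()
    for url, t in response_times.items():
        if threshold is not None and abs(t - avg) > threshold:
            flagged.add(url)
        if num_stddev is not None and sd > 0 and abs(t - avg) > num_stddev * sd:
            flagged.add(url)
    return flagged
```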
- The preceding figures and accompanying description illustrate example systems, processes, and computer-implementable techniques. While the illustrated systems and processes contemplate using, implementing, or executing any suitable technique for performing these and other tasks, it will be understood that these systems and processes are for illustration purposes only and that the described or similar techniques may be performed at any appropriate time, including concurrently, individually, or in combination, or performed by alternative components or systems. In addition, many of the operations in these processes may take place simultaneously, concurrently, and/or in different orders than as shown. Moreover, the illustrated systems may use processes with additional operations, fewer operations, and/or different operations, so long as the methods remain appropriate.
- In other words, although this disclosure has been described in terms of certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure.
Claims (21)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/918,828 US9652367B1 (en) | 2015-10-21 | 2015-10-21 | Exploratory testing on multiple system landscapes |
US15/484,323 US10296450B2 (en) | 2015-10-21 | 2017-04-11 | Exploratory testing on multiple system landscapes |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/918,828 US9652367B1 (en) | 2015-10-21 | 2015-10-21 | Exploratory testing on multiple system landscapes |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/484,323 Continuation US10296450B2 (en) | 2015-10-21 | 2017-04-11 | Exploratory testing on multiple system landscapes |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170116112A1 true US20170116112A1 (en) | 2017-04-27 |
US9652367B1 US9652367B1 (en) | 2017-05-16 |
Family
ID=58558794
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/918,828 Active US9652367B1 (en) | 2015-10-21 | 2015-10-21 | Exploratory testing on multiple system landscapes |
US15/484,323 Active 2035-11-02 US10296450B2 (en) | 2015-10-21 | 2017-04-11 | Exploratory testing on multiple system landscapes |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/484,323 Active 2035-11-02 US10296450B2 (en) | 2015-10-21 | 2017-04-11 | Exploratory testing on multiple system landscapes |
Country Status (1)
Country | Link |
---|---|
US (2) | US9652367B1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11488114B2 (en) | 2020-02-20 | 2022-11-01 | Sap Se | Shared collaborative electronic events for calendar services |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107729244B (en) * | 2017-10-12 | 2020-12-11 | 北京元心科技有限公司 | Multi-system testing method and device, mobile terminal and testing equipment |
CN108399114B (en) * | 2018-03-21 | 2021-02-02 | 财付通支付科技有限公司 | System performance testing method and device and storage medium |
US10503437B2 (en) * | 2018-04-30 | 2019-12-10 | EMC IP Holding Company LLC | Distributed service level management with performance resilience objectives |
US11354115B2 (en) * | 2020-07-30 | 2022-06-07 | Ncr Corporation | Methods and a system for interface extensions |
Also Published As
Publication number | Publication date |
---|---|
US10296450B2 (en) | 2019-05-21 |
US20170220460A1 (en) | 2017-08-03 |
US9652367B1 (en) | 2017-05-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SAP PORTALS ISRAEL LTD, ISRAEL Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VAINER, VITALY;REEL/FRAME:036844/0452 Effective date: 20151021 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |