WO2021188097A1 - Output of a performance parameter to be satisfied for an application deployed in a deployment configuration


Info

Publication number: WO2021188097A1
Authority: WO (WIPO (PCT))
Application number: PCT/US2020/023174
Other languages: French (fr)
Prior art keywords: application, deployment configuration, performance, performance parameter, version
Inventors: Daniele Antunes PINHEIRO, Jhonny Marcos Acordi MERTZ, Leonardo Dias Marquezini, Matthias Oliveira de NUNES
Applicant and original assignee: Hewlett-Packard Development Company, L.P.
Priority application: PCT/US2020/023174
Publication: WO2021188097A1

Classifications

    • G06F11/3419: Recording or statistical evaluation of computer activity for performance assessment by assessing time
    • G06F11/3409: Recording or statistical evaluation of computer activity for performance assessment
    • G06F11/3452: Performance evaluation by statistical analysis
    • G06F8/60: Software deployment
    • G06F11/076: Error or fault detection, not based on redundancy, by exceeding a count or rate limit
    • G06F11/3664: Environments for testing or debugging software
    • G06F11/3688: Test management for test execution, e.g. scheduling of test suites
    • G06F2201/81: Threshold (indexing scheme relating to error detection, error correction, and monitoring)
    • G06F8/10: Requirements analysis; Specification techniques

Abstract

An apparatus receives a plurality of deployment configurations to deploy an application in a simulated environment. The apparatus executes a plurality of stress tests in the simulated environment with respect to the application as the application is deployed in each deployment configuration among the plurality of deployment configurations. The apparatus determines, when performance of the application during execution of a respective stress test reaches an upper threshold value, a performance limit of the application for the respective deployment configuration among the plurality of deployment configurations. The apparatus outputs a performance parameter for each deployment configuration which is to be satisfied for the application when the application is to be deployed in the corresponding deployment configuration in an actual environment. The performance parameters are determined based on the performance limit of the application determined for the respective deployment configuration.

Description

OUTPUT OF A PERFORMANCE PARAMETER TO BE SATISFIED FOR AN
APPLICATION DEPLOYED IN A DEPLOYMENT CONFIGURATION
BACKGROUND
[0001] During the development of software for an application to be hosted by a service provider, a commitment between the service provider and a client may define certain performance metrics to be satisfied by the service provider when hosting the application. For example, an agreement between the service provider and the client may include expected customer support reply times, an expected load capacity, and other service thresholds to be met. When the agreement between the service provider and the client is formed at a beginning stage of the software development, the performance metrics to be satisfied are defined based on limited information about the application and the users who will be using the application. Therefore, once the software is fully developed, the performance metrics outlined in the agreement may not be accurate or realistically achievable under the initial terms of the agreement. This may be due to, for example, technical limitations, higher escalation costs, or changes in the needs of the client.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] FIG. 1 is an example process by which a user can obtain a recommendation for a service level agreement;
[0003] FIG. 2 is an example system in which a user can obtain a recommendation for a service level agreement;
[0004] FIG. 3 is an example apparatus for obtaining a recommendation for a service level agreement;
[0005] FIG. 4 is an example workflow for obtaining a recommendation for a service level agreement;
[0006] FIG. 5 is an example chart illustrating performance limits of different features of a software application captured by stress tests executed over different software application releases;
[0007] FIG. 6 is an example chart illustrating performance limits of a specific feature of a software application, or of the software application overall, as captured by stress tests executed over different software application releases; and
[0008] FIG. 7 is an example table illustrating performance parameters for different features of a software application and for different versions of the software application.
DETAILED DESCRIPTION
[0009] Various examples of the disclosure will now be described with reference to the accompanying drawings, wherein like reference characters denote like elements. The examples explained in the following may be modified and implemented in various different forms.
[0010] When it is stated in the disclosure that one element is "connected to" or "coupled to" another element, the expression encompasses an example of a direct connection or direct coupling, as well as a connection with another element interposed therebetween. Further, when it is stated herein that one element "includes" another element, unless otherwise stated explicitly, this means that yet another element may be further included rather than excluded.
[0011] When beginning a new software project, stakeholders may struggle to define an appropriate service level agreement (SLA). An SLA generally encompasses a commitment between a service provider and a client in which the terms of the service are defined. For example, the terms of the service may include an expected performance quality, availability of the service, and responsibilities of the parties. In the context of internet-related services relating to an application, a service provider and a client may have an SLA with terms regarding mean time between failures, mean time to repair, mean time to recovery, throughput, load, response time, and other measurable parameters which are to be satisfied by the service provider. At the beginning of the software project, the parties to the SLA may have limited information about the software application and the anticipated users of the software application.
Thus, an SLA may be suggested based on previous or similar software projects, or on what the parties envision in terms of workload for the software system being developed. However, such an ad-hoc definition may not be reliable because it does not consider real-world information about the software being developed, such as business complexities, the technologies involved, or various other constraints. Therefore, the initial SLA may be adjusted significantly as the release deadline for the software application approaches. The revision and adjustment process for the SLA is time-consuming and can also be error-prone because it is not known whether the terms of the SLA, which define the parameters to be satisfied by the service provider for the given application, are achievable. That is, it is not known whether the terms set forth in the SLA are adequate or realistic for the software application.
[0012] According to the examples provided in this disclosure, the continuous integration and delivery process used to develop the software application may include the asynchronous execution of stress tests that assess the performance limits of the software application under different deployment configurations (or deployment setups). For example, the stress tests may be automatically generated. Based on the results of the stress tests, aggregated data can be obtained about the application performance and a user can be provided with a recommended SLA. The user can review and define an accurate SLA that reflects the software application specifications and the practical realities of deploying the software application in a real-world environment by a service provider. The user can be provided with various recommendations for different SLAs for a single deployment configuration or for different deployment configurations. The user can select a recommended SLA according to various preferences or criteria, for example, budget constraints, application performance goals, user expectations, and the like.
[0013] The examples disclosed herein provide a way for the user to simulate and predict the performance of a software application hosted by a service provider, for example in terms of the costs and resource consumption of different deployment configurations, so that an SLA may be appropriately defined.
[0014] The examples described herein provide ways to integrate performance and/or stress tests into a continuous integration/continuous deployment (CI/CD) process, evaluate variations on different deployment setups automatically, aggregate historical results, and generate information to allow a user to define performance parameters which are accurate and realistically reflect how an application may be expected to perform in a given deployment configuration. In the examples described herein, performance metrics of the software during various stages of development, as applied to different deployment configurations, can be collected. For example, analyzing the collected performance metrics can assist a user in forming a service level agreement between a service provider and a client.
[0015] Referring now to FIG. 1, an example process by which a user can obtain a recommendation for an SLA is illustrated.
[0016] At operation 110, an apparatus receives an input from a user. The input from the user at operation 110 may include the software application (code) that is to be tested.
[0017] The input from the user at operation 110 may also include an input selecting the performance and/or stress tests to be applied to the software application according to various deployment configurations. As another example, the performance and/or stress tests to be applied to the software application may be set by default.
[0018] A stress test refers to a type of software test that verifies the stability and reliability of a software system. The stress test can be used to measure or evaluate the robustness and error-handling capabilities of the software under extremely heavy load conditions. For example, stress tests may be used to find the limits of the software application in terms of an acceptable error rate and acceptable response times, i.e., without "crashing." As an example, an input can be made to characterize a stress test in which the number of requests to the software application constantly increases during execution of the performance test. As an example, the software application may be stressed by increasing the number of requests to the software application until a specified error rate or degraded response time is reached, i.e., until the software application crashes. The number of requests to the application may be increased at a constant rate, or at a variable rate, for example.
[0019] A performance limit of the software application, or upper limit of the software application, may be the number of requests at, or just prior to, the time when the software application crashes. As another example, the performance limit, or upper limit, of the software application may be the number of requests at, or just prior to, the time that the software application crashes, with respect to the deployment configuration being utilized during the stress test, as well as the version of the software application being tested. Another example of the performance limit, or upper limit, of the software application may be the response time of the software application at, or just prior to, the time when the software application crashes. As another example, the performance limit, or upper limit, of the software application may be the response time of the software application at, or just prior to, the time that the software application crashes, with respect to the deployment configuration being utilized during the stress test, as well as the version of the software application being tested.
[0020] The stress test ends when the software application is determined to have crashed. The software application may be determined to have crashed when an error rate and/or response time of the software application exceeds an upper threshold value, which may be predefined by a user.
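By way of a non-limiting illustration (and not as part of the disclosed examples), the ramp-up logic of paragraphs [0018] through [0020] could be sketched in Python as follows. The send_requests driver, the threshold values, and the ramp step are hypothetical placeholders introduced only for this sketch.

```python
from dataclasses import dataclass

@dataclass
class StressResult:
    requests_per_second: int   # last stable load level before the crash condition
    median_response_ms: float  # response time observed at that load level

def run_stress_test(send_requests, max_error_rate=0.05, max_response_ms=5000.0,
                    start_rps=10, step_rps=10):
    """Increase the request rate until the error rate or the response time
    exceeds its upper threshold; the previous rate is the performance limit."""
    last_good = None
    rps = start_rps
    while True:
        # send_requests(rps) is a hypothetical driver that applies the load for a
        # fixed interval and returns (observed error rate, median response time in ms).
        error_rate, median_ms = send_requests(rps)
        if error_rate > max_error_rate or median_ms > max_response_ms:
            # The application is considered to have "crashed" at this load level.
            return last_good
        last_good = StressResult(requests_per_second=rps, median_response_ms=median_ms)
        rps += step_rps  # constant ramp; a variable ramp could be used instead
```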
[0021] The input from the user at operation 110 may also include an input selecting or defining the infrastructure, or resources which are deployed, for each of the different deployment configurations.
[0022] A deployment configuration refers to the set of resources or infrastructure used, for example by a service provider, to deploy a software application in a software system. For example, the set of resources or infrastructure may specify a number of processors to be used, a type of processor to be used, a type of memory to be used, a size of memory, an operating system, and the like. Each deployment configuration may have an associated cost to deploy the software application.
[0023] For example, the deployment configurations may be determined automatically by a default setting, manually set by a user, or selected from existing deployment configurations that are offered by various service providers. Each deployment configuration can have an associated cost or expenditure. That is, a service provider may charge a fee for deploying the software application with respect to a particular deployment configuration when hosting the software application.
[0024] In Table 1 illustrated below, example deployment configurations 1, 2, and 3 are illustrated. For example, in Table 1 deployment configuration 1 includes an infrastructure having 512 megabytes of RAM, a processor speed of 2.5 GHz, and a disk size of 500 gigabytes, at a rate of $5 per hour. Deployment configuration 2 includes an infrastructure having 1 gigabyte of RAM, a processor speed of 4 GHz, and a disk size of 1.5 terabytes, at a rate of $10 per hour. Finally, deployment configuration 3 includes an infrastructure having 3 gigabytes of RAM, a processor speed of 4 GHz, and a disk size of 2.5 terabytes, at a rate of $15 per hour.
[0025] Table 1
Deployment configuration | RAM | Processor speed | Disk size | Rate
1 | 512 megabytes | 2.5 GHz | 500 gigabytes | $5 per hour
2 | 1 gigabyte | 4 GHz | 1.5 terabytes | $10 per hour
3 | 3 gigabytes | 4 GHz | 2.5 terabytes | $15 per hour
[0026] These are merely example deployment configurations; other deployment configurations are possible, and the infrastructure of a deployment configuration can include other features or resources. For example, the number of processors may be variable, the rate may be charged on a monthly basis rather than an hourly basis, and the type of memory may be further specified.
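As a non-limiting sketch, the deployment configurations of Table 1 could be represented as simple records; the dataclass and its field names are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DeploymentConfiguration:
    name: str
    ram_gb: float
    cpu_ghz: float
    disk_gb: int
    rate_per_hour_usd: float

# Values taken from Table 1 above.
CONFIGURATIONS = [
    DeploymentConfiguration("config-1", ram_gb=0.5, cpu_ghz=2.5, disk_gb=500,  rate_per_hour_usd=5.0),
    DeploymentConfiguration("config-2", ram_gb=1.0, cpu_ghz=4.0, disk_gb=1500, rate_per_hour_usd=10.0),
    DeploymentConfiguration("config-3", ram_gb=3.0, cpu_ghz=4.0, disk_gb=2500, rate_per_hour_usd=15.0),
]
```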
[0027] At operation 120, the software application may be deployed in each of the different deployment configurations, and the stress tests may be executed with respect to the software application as it is deployed in each of the different deployment configurations. The software application may be stress-tested until a number of errors or a response time exceeds an upper threshold. The stress tests may be performed in a simulated environment with respect to the software application as the software application is deployed in the various deployment configurations.
[0028] At operation 130, information regarding the performance of the software application, based upon the stress tests applied to the software application in each of the different deployment configurations, may be collected and stored in a storage medium, for example, a database.
[0029] In Table 2 illustrated below, example results of the stress tests applied to the software application in each of deployment configurations 1, 2, and 3 are illustrated.
[0030] Table 2
Deployment configuration | Performance limit (requests per second)
1 | 50
2 | 150
3 | 300
[0031] Referring to Table 2, in deployment configuration 1, during the stress test the number of requests was increased until the software application crashed. At or just prior to the software application crashing, the number of requests per second was 50, which can be considered a performance limit of the software application with respect to deployment configuration 1. In deployment configuration 2, during the stress test the number of requests was increased until the software application crashed. At or just prior to the software application crashing, the number of requests per second was 150, which can be considered a performance limit of the software application with respect to deployment configuration 2. And in deployment configuration 3, during the stress test the number of requests was increased until the software application crashed. At or just prior to the software application crashing, the number of requests per second was 300, which can be considered a performance limit of the software application with respect to deployment configuration 3. Other performance limits can be obtained from the stress test. For example, average or median response times can also be determined based on the execution of the stress test.
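As a further non-limiting illustration, operations 120 and 130 could be sketched as a loop that deploys the application in each deployment configuration in the simulated environment, runs the stress test, and records the resulting performance limit. The deploy_in_simulation and run_stress_test helpers are hypothetical and stand in for the simulated deployment and the ramp-up loop sketched earlier.

```python
def collect_performance_limits(app_version, configurations,
                               deploy_in_simulation, run_stress_test):
    """Execute a stress test for the application deployed in each configuration
    (operation 120) and record the performance limit per configuration (operation 130)."""
    limits = {}
    for config in configurations:
        simulated_app = deploy_in_simulation(app_version, config)  # hypothetical simulated deployment
        result = run_stress_test(simulated_app.send_requests)
        limits[config.name] = result.requests_per_second
    return limits

# With the Table 2 results this would yield:
# {"config-1": 50, "config-2": 150, "config-3": 300}
```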
[0032] At operation 140, the collected information may be analyzed and used to determine performance parameters of the software application in each of the different deployment configurations, based upon the performance limits ascertained via the various stress tests.
[0033] A performance parameter refers to a performance metric or criterion that is to be satisfied, for example by a service provider, when the software application is deployed in an actual or real-world environment in a corresponding deployment configuration. The performance parameter may be a response time, for example, a median response time. As an example, when the performance parameter is a median response time and the software application is deployed in a corresponding deployment configuration, the software application is expected to respond to a request within the median response time at least half of the time. The response time may be associated with the software application as a whole, or with a particular web page, for example.
[0034] As another example, the performance parameter may be a load level, for example a load that is to be satisfied for some predefined duration of time. As an example, when the performance parameter is a number of requests per year and the software application is deployed in a corresponding deployment configuration, the software application is expected to accommodate that number of requests per year without crashing. The load level may also be a number of requests per month or per second, for example. The load level may be associated with the software application as a whole, or with a particular web page, for example.
[0035] A performance parameter may be determined for the software application based on performance limits of the software application obtained in previous stress tests, for example. A performance parameter for a current version of the software application may be determined based on performance parameters of previous versions of the software application. For example, performance parameters of more recent versions of the software application may be weighted more heavily than performance parameters of older versions of the software application. For example, a decaying average of performance parameters from previous versions of the software application may be used to determine the performance parameter of the current version of the software application.
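One non-limiting way to realize the decaying average described above is an exponentially weighted mean in which more recent releases carry larger weights; the decay factor below is an arbitrary illustrative choice, not a disclosed value.

```python
def decayed_performance_parameter(limits_by_version, decay=0.5):
    """limits_by_version: performance limits ordered oldest -> newest
    (e.g. requests/s observed for successive releases). Newer entries
    receive exponentially larger weights."""
    weighted_sum = 0.0
    weight_total = 0.0
    n = len(limits_by_version)
    for i, limit in enumerate(limits_by_version):
        weight = decay ** (n - 1 - i)   # the newest release gets weight 1.0
        weighted_sum += weight * limit
        weight_total += weight
    return weighted_sum / weight_total

# Example: releases observed at 40, 45 and 50 requests/s give a recommendation near 47.
print(decayed_performance_parameter([40, 45, 50]))
```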
[0036] At operation 150, recommendations for a service level agreement may be generated and output based on the determined performance parameters of the software application in each of the different deployment configurations.
[0037] For example, aggregated information regarding the performance limits of the software application and possible scale costs may be generated and output to the user. The output may be displayed on a screen, for example, or may be in the form of a report which is printed or transmitted electronically, for example, by email. The output may include charts and tables with summarized data and recommendations which are readily understandable by a user.
[0038] In Table 3 illustrated below, example recommendations for an SLA are illustrated.
[0039] Table 3
Deployment configuration | Rate | Performance limit (requests per second) | Cost per 100 requests per second
1 | $5 per hour | 50 | $10 per hour
2 | $10 per hour | 150 | $6.67 per hour
3 | $15 per hour | 300 | $5 per hour
[0040] Referring to Table 3, if a client has a limited budget and cannot afford a rate of $10 per hour, based on the results of the stress tests applied to the software application in the deployment environment of deployment configuration 1 (which has a rate of $5 per hour), the terms of the SLA should define a limit of 50 requests per second without crashing. As another example, if a client wants the software application to be able to handle 600 requests per second, the client may scale deployment configuration 3 (which has a rate of $15 per hour) such that the total rate would be $30 in the terms of the SLA. While deployment configurations 1 and 2 could be similarly scaled, the total rate would be higher than that of deployment configuration 3. Finally, according to the results of the stress tests applied to the software application in each of the deployment configurations, deployment configuration 3 is optimal in terms of processing requests per second compared to the hourly rate. That is, deployment configuration 3 can handle 100 requests per second for every $5 expended per hour. In contrast, deployment configuration 1 can handle 100 requests per second for every $10 expended per hour, and deployment configuration 2 can handle 100 requests per second for every $6.67 expended per hour. Thus, a user may elect to have a service provider implement deployment configuration 3 with respect to the software application, and define the terms of the SLA according to the performance limits identified in a stress test applied with respect to the software application as it was deployed in deployment configuration 3. For example, a performance parameter for an SLA may include the expectation that the service provider can handle 300 requests per second without crashing when the SLA having deployment configuration 3 is selected. The SLA may further extrapolate such information to define other performance parameters in terms of requests per month or requests per year. A performance parameter for an SLA may also include the expectation that the service provider can respond to a request to the software application within a specified period of time, for example an average or median response time of 1000 ms. A response time specified in the SLA may be determined according to a performance limit identified in a stress test. The response time specified in the SLA may be a median response time expected to be satisfied by the service provider and/or a response time expected to be satisfied by the service provider a predetermined percentage of time (e.g., 95% of the time), for example.
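The cost reasoning above can be made explicit with a short, non-limiting calculation using the figures from Tables 1 and 2: the cost per 100 requests per second for each deployment configuration, and the total hourly rate needed to reach a target load such as 600 requests per second under an assumed linear horizontal-scaling model.

```python
import math

# (hourly rate in USD, performance limit in requests/s) taken from Tables 1 and 2
CONFIGS = {"config-1": (5.0, 50), "config-2": (10.0, 150), "config-3": (15.0, 300)}

def cost_per_100_rps(rate, limit):
    return rate * 100.0 / limit

def cheapest_for_target(target_rps):
    """Assume linear scaling: run enough instances of a configuration to cover the target."""
    options = []
    for name, (rate, limit) in CONFIGS.items():
        instances = math.ceil(target_rps / limit)
        options.append((instances * rate, name, instances))
    return min(options)  # lowest total hourly rate

# cost_per_100_rps: config-1 -> $10.00, config-2 -> $6.67, config-3 -> $5.00
# cheapest_for_target(600) -> (30.0, 'config-3', 2), matching the $30 example above.
```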
[0041] If the software application is updated or changed, for example during development of the software application, operations 110 through 140 may be performed again by deploying the updated or changed software application in each of the different deployment configurations, and the stress tests may be executed with respect to the updated or changed software application as it is deployed in each of the different deployment configurations. The information regarding the performance of the updated or changed software application, based upon the stress tests applied to the updated or changed software application in each of the different deployment configurations, may also be collected and stored in the storage medium. The collected information may be analyzed and used to determine the performance limits of the updated or changed software application in each of the different deployment configurations.
[0042] Thus, by performing operations 110 through 150 again as a version of the software application changes, the results of the various performance and stress tests executed with respect to the different versions of the software application may be continuously stored and tracked over time. In an example, a user may manually decide to have operations 110 through 150 performed again when a software application is updated or changed; however, the disclosure is not so limited. For example, any of operations 110 through 150 may be performed automatically when a software application is updated or changed.
[0043] The historical results from multiple stress test executions corresponding to different versions of the software application, deployed in different deployment configurations, can be aggregated to provide a summarized report of the software application performance over time and a recommendation of an appropriate SLA. This information may be presented with the support of charts and tables to allow the analysis of the correlation between feature evolution, hardware cost/consumption, response time (throughput), and scalability in terms of application instances.
[0044] For example, an appropriate SLA may be recommended based on aggregating and calculating a decaying average, which considers observations over time and recognizes that a recent score is more representative of the current performance of the software application, and thus assigns a higher weight to a recent score than to a less recent score. An appropriate SLA may be recommended or generated based on various constraints or criteria of the client. For example, the recommended SLA may be generated according to criteria which specify a budget constraint of the user, a minimum number of requests per second, a user profile of the user, an optimal deployment configuration in terms of performance of the software application relative to cost, or combinations thereof.
[0045] The above-described operations can be applied with respect to the software application as a whole, or with respect to components or features of the software application, for example a home page, a client profile page, and the like. Therefore, performance parameters of the software application can be determined for the software application as a whole and/or for components or features of the software application. For example, an SLA that may be generated and recommended according to the above-described operations is illustrated below in Table 4.
[0046] Table 4
Feature | Requests per year | Requests per month | Requests per second | Median response time | Response time (95% of requests)
Home page | 300,000,000 | 25,000,000 | 50 | < 1000 ms | < 1500 ms
Client profile page | 12,000,000 | 1,000,000 | 2 | < 3000 ms | < 4500 ms
[0047] Referring to Table 4, performance parameters of the home page for the software application specify that the service provider, according to a specified deployment configuration determined according to the above-described operations and based upon results of stress tests applied to the software application in the specified deployment configuration, should be expected, in an actual or real-world environment, to handle 300,000,000 requests to the home page per year, 25,000,000 requests to the home page per month, and 50 requests to the home page per second, and to have a median response time of less than 1000 ms and a response time of less than 1500 ms 95 percent of the time. Likewise, the service provider, according to the specified deployment configuration determined according to the above-described operations and based upon results of stress tests applied to the software application, should be expected to handle 12,000,000 requests to the client profile page per year, 1,000,000 requests to the client profile page per month, and 2 requests to the client profile page per second, and to have a median response time of less than 3000 ms and a response time of less than 4500 ms 95 percent of the time.
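As a non-limiting sketch, the median and 95th-percentile response times of the kind shown in Table 4 could be derived from the response-time samples collected during a stress test; the sketch assumes the samples are available as a plain list of values in milliseconds.

```python
import statistics

def response_time_parameters(samples_ms):
    """Return the median and the 95th-percentile response time from
    stress-test samples, e.g. for a single feature such as the home page."""
    cut_points = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {
        "median_ms": statistics.median(samples_ms),
        "p95_ms": cut_points[94],   # 95th percentile
    }

# Example: if half of the home-page samples fall under 1000 ms and 95% fall under
# 1500 ms, the resulting parameters match the first row of Table 4.
```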
[0048] Referring now to FIG. 2, an example system in which a user can obtain a recommendation for an SLA is illustrated. The example system 2000 includes a client device 200 and a service provider 300.
[0049] The client device 200 includes a processor 210, a stress test application 220, a storage 230, a communication interface 240, a user interface 250, a display 260, and a service level agreement generator application 270. The client device 200 may include a computer, for example, a desktop or laptop computer, a smartphone, a tablet, and the like.
[0050] The service provider 300 includes a processor 310, a storage 320, and a communication interface 330. The service provider 300 may include a server, for example. For example, the service provider 300 may be a server which hosts a software application for a client device 200, where the software application is accessible by various users, for example by an external device (not shown). The external device may be similar to the client device and include a desktop or laptop computer, a smartphone, a tablet, and the like. The client device 200, service provider 300, and external device may be connected with one another in a wired and/or wireless manner to exchange information.
[0051] The processor 210 of the client device 200 may execute instructions stored in the storage 230, and the processor 310 of the service provider 300 may execute instructions stored in the storage 320. The processors 210, 310 may include, for example, an arithmetic logic unit, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an image processor, a microcomputer, a field programmable gate array, a programmable logic unit, an application-specific integrated circuit (ASIC), a microprocessor, or combinations thereof.
[0052] The storage 230 of the client device 200 and the storage 320 of the service provider 300 may include, for example, machine-readable storage devices, which may be any electronic, magnetic, optical, or other physical storage devices that store executable instructions. For example, the storages 230, 320 may include a nonvolatile memory device, such as a Read Only Memory (ROM), Programmable Read Only Memory (PROM), Erasable Programmable Read Only Memory (EPROM), or flash memory, a USB drive, a volatile memory device such as a Random Access Memory (RAM), a hard disk, floppy disks, a Blu-ray disc, or optical media such as CD-ROM discs and DVDs, or combinations thereof. While not illustrated, the external device may also include a processor and storage.
[0053] The client device 200 may include a user interface 250 and/or display 260 to receive an input from a user to control an operation of the client device 200, and the display 260 can display information regarding the client device 200. The user interface 250 may include, for example, a keyboard, a mouse, a joystick, a button, a switch, an electronic pen or stylus, a gesture recognition sensor, an input sound device or voice recognition sensor such as a microphone, an output sound device such as a speaker, a track ball, a remote control, a touchscreen, or combinations thereof. The display 260 may include a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, active matrix organic light emitting diode (AMOLED), flexible display, 3D display, a plasma display panel (PDP), a cathode ray tube (CRT) display, and the like, for example. The display 260 may also include a touchscreen display to receive the user input. While not illustrated, the service provider 300 and external device may also include a user interface and/or display.
[0054] The client device 200 and the service provider 300 may be connected with one another in a wired and/or wireless manner, for example through communication interfaces 240, 330. The client device 200 and the service provider 300 may be connected to one another over a network such as a local area network (LAN), wireless local area network (WLAN), wide area network (WAN), personal area network (PAN), virtual private network (VPN), or the like. For example, wireless communication between elements of the examples disclosed herein may be performed via a wireless LAN, Wi-Fi, Bluetooth,
ZigBee, Wi-Fi direct (WFD), ultra wideband (UWB), infrared data association (IrDA), Bluetooth low energy (BLE), near field communication (NFC), a radio frequency (RF) signal, and the like. For example, the wired communication connection between the client device 200 and the service provider 300 may be performed via a twisted-pair cable, a coaxial cable, an optical fiber cable, an Ethernet cable, and the like. While not illustrated, the external device may also include a communication interface.
[0055] The client device 200 may include a stress test application 220 to perform stress testing to determine the robustness of a software application. The stress test application 220 may be a program executed by the processor 210, or may include a module which is part of a program, for example, part of the SLA generator application 270. For example, the stress test application 220 may be applied to the software application to simulate a certain number of users accessing the software application via the internet. For example, the stress test application 220 may increase a number of requests to the software application until an upper threshold value is reached, for example, until a specified error rate or degrading response time is reached, i.e., until the software application "crashes." The stress test application 220 may include different types of stress tests. The stress test application 220 may be executed with respect to the software application according to different deployment configurations which are, for example, defined by a user, default deployment configurations, deployment configurations offered by a service provider, or combinations thereof. The stress test application 220 may be stored locally on the client device 200. In another example, the stress test application 220 may be stored remotely and accessed from the client device 200 to execute the stress test with respect to the software application. For example, the stress test application 220 may be stored at the service provider 300 or an external device, and accessed remotely by a user of the client device 200 via communication interface 240, or accessed by a user of the service provider 300.
[0056] The client device 200 may include a service level agreement (SLA) generator application 270 to generate deployment configuration options for a software application to be implemented by a service provider in connection with the SLA. The SLA generator application 270 may be a program executed by the processor 210, or may include a module which is part of another program. The recommended deployment configurations for the SLA may be determined according to results of stress tests performed by the stress test application 220. The SLA generator application 270 may output a summarized report of the software application performance over time and recommendations of various SLAs which may be appropriate for the user. This information may be presented in the form of charts and tables generated by the SLA generator application 270 to allow the analysis of the correlation between feature evolution, hardware cost/consumption, the response time, throughput, and scalability in terms of application instances. A user may select an appropriate SLA having a corresponding deployment configuration that is appropriate for the user and software application. The selected deployment configuration from the generated SLA may then be implemented by the service provider when hosting the software application according to the terms of the SLA. The SLA generator application 270 may be stored on the client device 200, as illustrated in FIG. 2. In another example, the SLA generator application 270 may be stored remotely and accessed from the client device 200 via communication interface 240 to execute the stress test with respect to the software application. For example, the SLA generator application 270 may be stored at the service provider 300 or an external device, and accessed remotely by a user of the client device 200 via communication interface 240, or accessed by a user of the service provider 300.
[0057] Example operations will now be described with respect to FIG. 2.
[0058] For example, a user of client device 200 may provide an input to the client device 200 via the user interface 250 and/or display 260 to execute the SLA generator application 270 to generate deployment configuration options for a software application to be implemented by a service provider in connection with a service level agreement. The user may provide an input to the client device 200 via the user interface 250 and/or display 260 to load or store a software application (code) to the storage 230 of the client device 200. In another example, the software application may be loaded or stored to the storage 230 before the SLA generator application 270 is executed.
[0059] In connection with the SLA generator application 270, the user may provide an input to the client device 200 via the user interface 250 and/or display 260 to select performance and/or stress tests to be applied to the software application according to various deployment configurations. The performance and/or stress tests to be applied to the software application may be set by default or be defined by the user. The SLA generator application 270 may execute the stress tests with respect to the software application using the stress test application 220.
[0060] In connection with the SLA generator application 270, the user may provide an input to the client device 200 via the user interface 250 and/or display 260 to select or define the infrastructure, or resources which are deployed, for each of the different deployment configurations to be utilized when performing the stress tests. For example, the deployment configurations may be determined automatically by a default setting, manually set by the user, or may be selected from existing deployment configurations that are offered by various service providers. Example deployment configurations and resources to be utilized in a deployment configuration such as processing speed, storage, and cost information, were previously discussed above.
[0061] The SLA generator application 270 may deploy the software application in each of the different deployment configurations and the stress tests may be executed with respect to the software application using the stress test application 220 as the software application is deployed in each of the different deployment configurations.
[0062] The SLA generator application 270 may collect information regarding the performance of the software application, based upon the stress tests applied to the software application in each of the different deployment configurations, and store the information in the storage 230. As another example, the collected information may be stored remotely via communication interface 240, for example, in storage 320 of the service provider 300 or a storage of an external device.
[0063] The SLA generator application 270 may analyze the collected information to determine performance parameters of the software application in each of the different deployment configurations, based upon performance limits ascertained via the various stress tests. For example, the SLA generator application 270 may aggregate information regarding the performance limits of the software application and possible scale costs, and generate and output such information to the user. The output may be displayed on the display 260, for example, or may be in the form of a report which is printed or transmitted electronically, for example, by email. The SLA generator application 270 may output charts and tables with summarized data and recommendations which are readily understandable by a user. The SLA generator application 270 may output recommendations of various SLAs which may be appropriate for the user. The recommended SLAs include corresponding deployment configurations to be deployed together with the software application by the service provider. The recommended SLAs also include performance parameters of the software application to be satisfied by the service provider. For example, a performance parameter for an SLA may include load information and/or response time information. For example, when deployment configuration 3 is recommended for the SLA, the load information may be 300 requests per second, which indicates the service provider is expected to host the software application such that the software application is capable of handling 300 requests per second without crashing.
[0064] In connection with the SLA generator application 270, the user may provide an input to the client device 200 via the user interface 250 and/or display 260 to select an appropriate SLA having a corresponding deployment configuration and performance parameter that is appropriate for the user and software application. The selected deployment configuration from the generated SLA may then be implemented by the service provider when hosting the software application according to the terms of the selected SLA which include the performance parameters determined by the SLA generator application 270.
[0065] The above examples describe execution of the SLA generator application 270 at the client device 200, where various inputs may be provided by a user of the client device 200. However, the disclosure is not so limited. For example, the SLA generator application 270 may be executed automatically when the software application is loaded or stored to the client device 200, or whenever the software application is updated, for example when a version of the software application is changed. The stress tests and deployment configurations utilized in connection with the stress tests may be automatically selected, for example according to preset or default settings, according to settings which were applied previously, for example to previous versions of the software application, or according to a user profile of the user of the client device 200. The selection of a recommended SLA may also be made automatically, for example according to preset or default settings, according to cost constraints of the user, according to a user profile of the user of the client device 200, or combinations thereof.
[0066] The above examples also describe execution of the SLA generator application 270 at the client device 200, however the disclosure is not so limited. For example, the SLA generator application 270 may be executed at the service provider 300, and the above-described operations may be performed by the service provider 300 according to inputs provided by a user, or performed automatically as discussed above. Therefore, a repetitive description herein is omitted for the sake of brevity.
[0067] FIG. 3 illustrates an example apparatus 3000 for obtaining a recommendation for an SLA, according to the examples described herein.
[0068] In an example, the apparatus 3000 may be the client device 200 or the service provider 300. The apparatus 3000 includes a processor 3100 and a non-transitory computer readable storage medium 3200. The non-transitory computer readable storage medium 3200 may include instructions 3210, 3220, 3230, 3240, 3250, and 3260 that, when executed by the processor 3100, cause the processor 3100 to perform various functions.
[0069] The instructions 3210 include instructions to store a software application in a storage of the apparatus 3000, for example, the non-transitory computer readable storage medium 3200. The instructions 3220 include instructions to receive deployment configurations to deploy the software application in a simulated environment. The instructions 3230 include instructions to select stress tests to be applied with respect to the software application in each deployment configuration among the obtained deployment configurations, in the simulated environment. The instructions 3240 include instructions to execute the selected stress tests with respect to the software application deployed in each deployment configuration among the deployment configurations, in the simulated environment. The instructions 3250 determine a performance limit of the software application, for each deployment configuration among the deployment configurations, according to the results of the stress tests executed with respect to the software application deployed in each deployment configuration. For example, the instructions 3250 determine, when a performance of the application during execution of a respective stress test reaches an upper threshold value, the performance limit of the application for the respective deployment configuration among the plurality of deployment configurations. The instructions 3260 output at least one recommended service level agreement which includes deployment configurations and performance parameters of the software application which can be satisfied by a service provider in an actual or real-world environment based on the performance limits of the software application determined according to the results of the stress tests for each of the deployment configurations. For example, the instructions 3260 output a performance parameter for each deployment configuration which is to be satisfied for the application when the application is to be deployed in the corresponding deployment configuration in the actual or real-world environment, based on the performance limit of the application determined for the respective deployment configuration.
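Read together, instructions 3210 through 3260 correspond, in a non-limiting sketch, to a pipeline such as the one below; the helper functions and the DeploymentConfiguration records are the hypothetical ones sketched earlier in this description, not the claimed implementation.

```python
def recommend_sla(app_version, configurations, deploy_in_simulation, run_stress_test):
    """Sketch of instructions 3240-3260: stress-test the application in each
    deployment configuration in the simulated environment and output, per
    configuration, a performance parameter to be satisfied in the actual
    environment, together with the associated hourly rate."""
    recommendations = []
    for config in configurations:                                   # received configurations (3220)
        simulated_app = deploy_in_simulation(app_version, config)   # hypothetical helper
        limit = run_stress_test(simulated_app.send_requests)        # stops at the upper threshold (3250)
        recommendations.append({
            "deployment_configuration": config.name,
            "performance_parameter_rps": limit.requests_per_second,
            "hourly_rate_usd": config.rate_per_hour_usd,
        })
    return recommendations
```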
[0070] Additional instructions may be stored on the non-transitory computer readable storage medium 3200. For example, non-transitory computer readable storage medium 3200 may include instructions to determine the performance limit when at least one of an error rate of the application or a response time of the application reaches the upper threshold value during execution of the stress test. The performance limit may include, for example, a load level and/or a response time of the application when the error rate and/or the response time of the application reaches the upper threshold value during execution of the stress test.
[0071] In an example, non-transitory computer readable storage medium 3200 may include instructions to execute a first stress test, in the simulated environment, with respect to a first version of the application as the first version of the application is deployed in a first deployment configuration, and execute a second stress test, in the simulated environment, with respect to a second version of the application as the second version of the application is deployed in the first deployment configuration. The non-transitory computer readable storage medium 3200 may include instructions to determine, when the performance of the first version of the application during execution of the first stress test reaches the upper threshold value, a first performance limit of the first version of the application for the first deployment configuration, and determine, when the performance of the second version of the application during execution of the second stress test executed with respect to the second version of the application reaches the upper threshold value, a second performance limit of the second version of the application for the first deployment configuration.
[0072] The non-transitory computer readable storage medium 3200 may include instructions to determine a first performance parameter for the first deployment configuration which is to be satisfied for the first version of the application when the first version of the application is to be deployed in the first deployment configuration in the actual environment, based on the first performance limit, and determine a second performance parameter for the first deployment configuration which is to be satisfied for the second version of the application when the second version of the application is to be deployed in the first deployment configuration in the actual environment, based on the second performance limit.
[0073] The non-transitory computer readable storage medium 3200 may include instructions to determine the performance parameter for the application by weighting the second performance parameter more heavily than the first performance parameter.
[0074] In an example, the non-transitory computer readable storage medium 3200 may include instructions to execute a first stress test, in the simulated environment, with respect to a first feature of the application as the first feature of the application is deployed in a first deployment configuration, and to execute a second stress test, in the simulated environment, with respect to a second feature of the application as the second feature of the application is deployed in the first deployment configuration. The non-transitory computer readable storage medium 3200 may include instructions to determine, when the performance of the first feature of the application during execution of the first stress test reaches the upper threshold value, a first performance limit of the first feature of the application for the first deployment configuration, and to determine, when the performance of the second feature of the application during execution of the second stress test reaches the upper threshold value, a second performance limit of the second feature of the application for the first deployment configuration. The non-transitory computer readable storage medium 3200 may include instructions to output a first performance parameter for the first deployment configuration which is to be satisfied for the first feature of the application when the first feature of the application is to be deployed in the first deployment configuration in the actual environment, based on the first performance limit, and output a second performance parameter for the first deployment configuration which is to be satisfied for the second feature of the application when the second feature of the application is to be deployed in the first deployment configuration in the actual environment, based on the second performance limit. The first feature may be one of a home web page, a search web page, and a client profile web page, and the second feature may be another one of the home web page, the search web page, and the client profile web page.
[0075] The non-transitory computer readable storage medium 3200 may include instructions to determine a first performance parameter of the application for a first deployment configuration among the plurality of deployment configurations based on a first plurality of performance limits determined based on a first plurality of stress tests executed in the simulated environment with respect to the application as the application is deployed in the first deployment configuration, and to determine a second performance parameter of the application for a second deployment configuration among the plurality of deployment configurations based on a second plurality of performance limits determined based on a second plurality of stress tests executed in the simulated environment with respect to the application as the application is deployed in the second deployment configuration.
[0076] The non-transitory computer readable storage medium 3200 may include instructions to output a first recommendation for a first service level agreement having the first performance parameter as a level of performance to be met by a service provider providing the application to an end user by deploying the application in the first deployment configuration in the actual environment, and to output a second recommendation for a second service level agreement having the second performance parameter as another level of performance to be met by the service provider providing the application to the end user by deploying the application in the second deployment configuration in the actual environment.
[0077] FIG. 4 illustrates an example workflow 4000 for obtaining a recommendation for an SLA, according to the examples described herein.
[0078] Referring to FIG. 4, stress tests can be integrated and executed as part of an operation for an asynchronous continuous integration/continuous deployment (CI/CD) process pipeline. By incorporating the stress tests as part of the operation for the asynchronous CI/CD process pipeline, a fully automated process to track the performance of the software application under different deployment configurations can be obtained. Based on data obtained over time regarding the performance of the software application, appropriate SLAs may be recommended by the SLA generator application 270, or an assessment can be made regarding whether certain specifications in an SLA can be satisfied. For example, the data obtained over time can include performance limits of the software application, for example performance limits of the software application as a whole, or for specific features of the software application. The performance limits can be ascertained based on stress tests executed with respect to the software application deployed in a deployment configuration, for example with respect to various versions of the software application. The execution of the stress tests with respect to the software application deployed in a deployment configuration may be performed periodically when integrated into a CI/CD pipeline. The whole software application and/or each feature or endpoint of the software application may be tested separately. The stress test may be executed automatically based on an existing testing scenario. For example, the stress test may be executed on the software application by increasing a number of requests to the software application at a preset or variable rate until a specified or threshold error rate or threshold degrading response time is reached, i.e., until the software application crashes. Once the threshold is reached, the stress test is stopped and the results may be considered as upper limits or performance limits of the software application for the version of the software application deployed in the deployment configuration utilized in the stress test.
[0079] Referring to FIG. 4, at operation 4010 a developer may store or load a software application or code to storage 230 of the client device 200. As another example, at operation 4010 instead of storing an entire software application, the developer may provide additional code, update existing code, or otherwise change a master version of the software application by merging a pull request to the master version of the software application.
[0080] At operation 4020 a process for automatically generating a SLA recommendation may be triggered based on the occurrence of operation 4010 by which the code repository has been changed or updated.
[0081] At operation 4030, the SLA generator application 270 may deploy the updated or changed software application in different deployment configurations and stress tests may be executed using the stress test application 220 with respect to the software application as it is deployed in each of the different deployment configurations. As previously described, the different deployment configurations may be preset so that the workflow may be performed automatically without further user intervention. As another example, the SLA generator application 270 may provide a user an opportunity to provide an input setting the deployment configurations. Likewise, as previously described, the stress tests to be performed on the software application may be preset so that the workflow may be performed automatically without further user intervention. As another example, the SLA generator application 270 may provide a user an opportunity to provide an input selecting the stress tests to be performed on the software application.
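Purely as a hedged illustration of operation 4030, the loop over preset deployment configurations and preset stress-test scenarios could be sketched as follows; the configuration names, scenario list, and helper functions are hypothetical stand-ins rather than details from the description:

# Hypothetical deployment configurations; real configurations might specify
# CPU, memory, replica counts, instance types, and so on.
DEPLOYMENT_CONFIGS = {
    "deployment-configuration-1": {"cpu": 1, "memory_mb": 512, "replicas": 1},
    "deployment-configuration-2": {"cpu": 2, "memory_mb": 1024, "replicas": 2},
}

# Preset stress-test scenarios, e.g. one per feature/endpoint.
PRESET_SCENARIOS = ["/", "/search", "/profile"]

def deploy(app_version, config):
    # Stand-in for the real deployment step (e.g. a call to an orchestrator);
    # returns the base URL of the deployed instance in the simulated environment.
    return "http://simulated-environment.example"

def run_stress_test(base_url, scenario):
    # Stand-in for the ramp-up loop sketched earlier; returns the observed
    # performance limit for the scenario.
    return {"requests_per_second": 50, "median_response_ms": 1200}

def stress_all_configurations(app_version):
    results = []
    for config_name, config in DEPLOYMENT_CONFIGS.items():
        base_url = deploy(app_version, config)
        for scenario in PRESET_SCENARIOS:
            limit = run_stress_test(base_url, scenario)
            results.append({"version": app_version, "config": config_name,
                            "feature": scenario, "limit": limit})
    return results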
[0082] At operation 4040, the SLA generator application 270 may automatically collect information regarding the performance of the software application, based upon the stress tests applied to the software application in each of the different deployment configurations, and store the test results in the storage 230. As another example, the test results may be stored remotely via communication interface 240, for example, in storage 320 of the service provider 300.
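A possible sketch of the storage step in operation 4040, reusing the result records from the previous sketch, is shown below; the local SQLite database and table layout are illustrative assumptions, and a remote store could be used instead:

import json
import sqlite3

def store_results(results, db_path="stress_results.db"):
    # Persist the collected results locally; storage 230 or the service
    # provider's storage 320 could equally be a remote database.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS stress_results ("
                "version TEXT, config TEXT, feature TEXT, limit_json TEXT)")
    con.executemany(
        "INSERT INTO stress_results VALUES (?, ?, ?, ?)",
        [(r["version"], r["config"], r["feature"], json.dumps(r["limit"]))
         for r in results])
    con.commit()
    con.close()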
[0083] At operation 4050, the SLA generator application 270 may automatically generate a report by analyzing the stress test results to determine performance limits of the application. The SLA generator application 270 may determine performance parameters of the software application in each of the different deployment configurations, based upon the performance limits ascertained via the executed stress tests.
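One way the report of operation 4050 might be assembled, again under the assumed record layout of the earlier sketches, is to group the performance limits by deployment configuration and feature and summarize them into candidate performance parameters:

from collections import defaultdict
import statistics

def build_report(results):
    # Group performance limits by (deployment configuration, feature) and
    # summarize them into candidate performance parameters for the report.
    grouped = defaultdict(list)
    for r in results:
        if r["limit"] is not None:
            grouped[(r["config"], r["feature"])].append(
                r["limit"]["median_response_ms"])
    return {key: {"median_response_ms": statistics.median(values),
                  "samples": len(values)}
            for key, values in grouped.items()}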
[0084] At operation 4060, the SLA generator application 270 may automatically transmit the report to the developer or user. The developer or user can receive the report in any form, for example by a visual display in the form of charts and/or tables, by email, and the like. The SLA generator application 270 may output recommendations of various SLAs which may be appropriate for the user for the current version of the software application that has been stress tested. The recommended SLAs include corresponding deployment configurations to be deployed together with the current version of the software application by the service provider. The recommended SLAs also include performance parameters of the current version of the software application to be satisfied by the service provider.
[0085] In the context of workflow 4000, which represents an asynchronous CI/CD process pipeline, operations 4010 through 4060 may be continuously repeated, for example automatically, each time the software application is changed, for example each time a pull request is merged to the master version of the software application.
[0086] Referring now to FIG. 5, an example chart illustrating performance limits of different features of a software application captured by stress tests executed over different software application releases (versions), is provided.
[0087] In accordance with examples of the disclosure, the SLA generator application 270 may generate charts such as that illustrated in FIG. 5 to enable a user to evaluate different scenarios and deployment configurations of each release candidate of a software application. The charts can enable a user to evaluate performance and throughput, for example.
[0088] In FIG. 5, response times are illustrated on the y-axis considering different software application releases (versions), which are illustrated on the x-axis, for different features of the software application, which are illustrated using respective lines on the chart. The response times correspond to performance limits of the software application for a respective feature during a stress test for a certain deployment configuration. The deployment configuration may be the same, or the deployment configuration may be varied as versions of the software application change. The different features may correspond to different pages of a website for a software application to be hosted by a service provider, for example. A user evaluating a chart generated by the SLA generator application 270 is able to evaluate the evolution of the software application, with new incoming features and old features being removed, for example. As an example, in FIG. 5, for the feature POST /consents/.search (purposeSearchType=all), it can be seen that a spike in response time occurred for versions #29 to #35, exceeding 4500 ms before dropping and rising again to 4000 ms. The feature was then removed from the software application.
As another example, in FIG. 5, the feature POST /consents/.search (purposeSearchType=match, latest=false) was introduced at version #40. It can be seen that over the course of various iterations of the software application, the response time decreased to about 2100 ms. The response time for the software application overall, or the response time for a feature of the software application, with respect to a version of the software application deployed in a deployment configuration, may be considered as a performance limit of the software application.
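As a purely illustrative sketch of how a chart like FIG. 5 could be rendered (the use of matplotlib and the data layout below are assumptions, not part of the described examples):

import matplotlib.pyplot as plt

def plot_response_times(history):
    # history maps a feature name to a list of (version, response_ms) pairs,
    # e.g. {"POST /consents/.search": [(40, 4200), (41, 3100), (46, 2100)]}.
    for feature, points in history.items():
        versions, times = zip(*points)
        plt.plot(versions, times, marker="o", label=feature)
    plt.xlabel("Software application version")
    plt.ylabel("Response time (ms)")
    plt.legend()
    plt.show()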
[0089] Other performance-related metrics can be presented with the aid of the charts generated by the SLA generator application 270. For example, referring now to FIG. 6, an example chart illustrating performance limits of a specific feature of a software application, or of the software application overall, as captured by stress tests executed over different software application releases (versions), is provided.
[0090] In accordance with examples of the disclosure, the SLA generator application 270 may generate charts such as that illustrated in FIG. 6 to enable a user to evaluate different scenarios and deployment configurations of each release candidate of a software application. The charts can enable a user to evaluate performance and throughput, for example.
[0091] In FIG. 6, the number of requests per second is illustrated on the y-axis considering different software application releases (versions), which are illustrated on the x-axis. In FIG. 6, the maximum requests per second (throughput) is reported for different software application releases observed over time. As an example, in FIG. 6, for versions #41 to #46 of the software application, the maximum requests per second which can be handled by a corresponding deployment configuration without crashing is about 70. The maximum requests per second for a specific feature of the software application, with respect to a version of the software application deployed in a deployment configuration, may be considered as a performance limit of the software application. As another example, the maximum requests per second for the software application overall, with respect to a version of the software application deployed in a deployment configuration, may be considered as a performance limit of the software application.

[0092] The information collected based on the execution of the stress tests can be stored in a storage, for example storage 230, which may include a database. The information stored in the storage can be aggregated and used by the SLA generator application 270 to provide a SLA recommendation for each deployment configuration. For example, the SLA generator application 270 may provide a SLA recommendation for each deployment configuration based on a decaying average, which considers observations over time and recognizes that a recent measurement is more representative of the current performance of the software application, and thus assigns a greater weight to a recent measurement compared to an earlier measurement in determining the SLA recommendation. The SLA generator application 270 may consider a predetermined number of observations, for example the last three observations, last five observations, last ten observations, or all of the observations, in determining the decaying average.
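One hedged Python sketch of such a decaying average is shown below; the exponential weighting scheme and the decay factor are illustrative assumptions, since the description does not fix a particular formula:

def decaying_average(observations, decay=0.5):
    # observations are ordered oldest to newest; each older observation's
    # weight is reduced by the decay factor, so the newest counts most.
    # The decay factor of 0.5 is an illustrative choice.
    n = len(observations)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(o * w for o, w in zip(observations, weights)) / sum(weights)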
[0093] Referring now to FIG. 7, an example table illustrating performance parameters for different features of a software application and for different versions of the software application, is provided. The performance parameters for different features of the software application may be obtained based on performance limits of the software application which are identified or determined according to results of stress tests executed over different software application releases (versions) for a deployment configuration. The performance parameters may be used to generate a SLA recommendation and may be incorporated into a SLA.

[0094] In accordance with examples of the disclosure, the SLA generator application 270 may generate tables such as that illustrated in FIG. 7 to enable a user to evaluate different performance parameters with respect to different versions of the application and/or with respect to different deployment configurations of each release candidate of a software application. The tables can enable a user to make a decision regarding limitations of the software application. This information can be used to ascertain limits of the software application regarding its scalability and costs. Thus, the report and recommendation generated by the SLA generator application 270 may be a valuable resource when estimating resource consumption and hardware costs.

[0095] In FIG. 7, the SLA recommendation may be based on a decaying average of historic performance data. For example, for the endpoint or feature of "/users", the median response time in deployment configuration 1, version 0.0.1 of the software application is 1000 ms. This value can be considered as a first performance parameter. Likewise, the median response time in deployment configuration 1, version 0.0.2 and version 0.0.3 of the software application is 1100 ms and 1300 ms, respectively. These values can be considered as second and third performance parameters, respectively. The SLA generator application 270 determines a recommended performance parameter for the feature of /users according to a decaying average of the historic performance data. In the example, the historic data for the feature of /users includes median response times of 1000 ms, 1100 ms, and 1300 ms, where the value of 1300 ms is weighted more heavily than the previous median response times of 1000 ms and 1100 ms, and the value of 1100 ms is weighted more heavily than the previous median response time of 1000 ms. In the example of FIG. 7, based on the historical data, a recommended performance parameter for a SLA for the feature of /users as determined by the SLA generator application 270 is 1133 ms for the median response time. The recommended performance parameter is determined by the SLA generator application 270 based on the values of the first to third performance parameters, according to a decaying average. Similarly, in the example of FIG. 7, based on the historical data, a recommended performance parameter for a SLA for the feature of /users as determined by the SLA generator application 270 is 2900 ms for a response time to be achieved 95% of the time. A service provider may implement the deployment configuration associated with the SLA recommendation to achieve or satisfy the performance parameters associated with the software application as determined according to the historical data.
For example, a service provider may deploy deployment configuration 1 to satisfy performance parameters associated with the median response time of 1133 ms and the 95% response time of 2900 ms with respect to the feature of /users in an actual or real-world environment.
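Continuing the hedged sketch from above, applying decaying_average to the /users history would look like the following; because the actual weighting scheme behind the 1133 ms figure in FIG. 7 is not specified, the illustrative decay factor below yields a different recommendation:

history_ms = [1000, 1100, 1300]   # versions 0.0.1 to 0.0.3, oldest first
recommended = decaying_average(history_ms, decay=0.5)
print(round(recommended, 1))      # 1200.0 with this illustrative decay factor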
[0096] Therefore, in contrast to defining a SLA beforehand without knowledge of the performance of a software application in any particular deployment configuration, according to examples of the disclosure performance parameters to be satisfied by a service provider according to a SLA may be recommended based on performance limits identified according to the execution of stress tests with respect to different versions of the software application. A service provider or a client can know beforehand which deployment configuration is capable of achieving the performance parameters of the SLA recommendation, and cost information for deploying that deployment configuration can be ascertained with greater certainty, allowing stakeholders to make informed decisions or to scale the software application and/or deployment configuration as necessary.
[0097] According to the above-described examples, automated performance analysis mechanisms are provided to help a user scale securely, rapidly, and cost-effectively. For example, a user can review and decide on an appropriate SLA, taking into account different deployment configurations. Users can estimate resource consumption and hardware costs when increasing or decreasing the specifications of the SLA.
[0098] Based on the results output by the SLA generator application described herein, developers of the software application can scale the software application appropriately to achieve a specified SLA. The SLA generator application described herein provides information in a CI/CD process that can inform developers of a software application to fail a build when a regression is detected, and failure of the build may be performed automatically. Developers of the software application can also investigate performance degradation based upon the output of the SLA generator application, which provides historic data about the application performance.

[0099] The SLA generator application described herein can reduce costs and save time by providing a method to accurately estimate costs for scaling a software application release. Further, resources can be better managed by increasing the predictability of a software application release in terms of scalability and costs.
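A minimal, hypothetical sketch of the automated build gate mentioned in paragraph [0098] is shown below; the function name, the 10% tolerance, and the non-zero-exit convention for failing a build are assumptions for illustration only:

import sys

def check_regression(current_ms, recommended_ms, tolerance=0.10):
    # Fail the build (non-zero exit code) when the current median response
    # time exceeds the recommended performance parameter by more than the
    # tolerance; the 10% tolerance is an illustrative choice.
    allowed = recommended_ms * (1 + tolerance)
    if current_ms > allowed:
        print(f"Performance regression: {current_ms} ms exceeds {allowed:.0f} ms")
        sys.exit(1)
    print("Performance is within the recommended SLA parameter")

if __name__ == "__main__":
    check_regression(current_ms=1300, recommended_ms=1133)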
[00100] The term "module," as used herein, may refer to, but is not limited to, a software or hardware component or device, such as a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks. A module may be configured to reside on an addressable storage medium and configured to execute on one or more processors. Thus, a module may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided for in the components and modules may be combined into fewer components and modules or further separated into additional components and modules.
[00101] Executable instructions to perform processes or operations in accordance with the above-described examples may be recorded in a machine readable storage. A controller or processor may execute the executable instructions to perform the processes or operations. Examples of instructions include both machine code, such as that produced by a compiler, and files containing higher level code that may be executed by the controller using an interpreter. The instructions may be executed by a processor or a plurality of processors included in the controller. The machine readable storage may be distributed among computer systems connected through a network and computer-readable codes or instructions may be stored and executed in a decentralized manner.
[00102] Each block of the flowchart illustrations may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some examples, the functions noted in the blocks may occur out of order. For example, two blocks shown in succession may be executed substantially concurrently (simultaneously) or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
[00103] The foregoing examples are merely examples and are not to be construed as limiting the disclosure. The disclosure can be readily applied to other types of apparatuses. Various modifications may be made which are also intended to be encompassed by the disclosure. Also, the description of the examples of the disclosure is intended to be illustrative, and not to limit the scope of the claims.

Claims

WHAT IS CLAIMED IS:
1. A non-transitory machine readable storage comprising instructions that, when executed, cause a processor to: receive a plurality of deployment configurations to deploy an application in a simulated environment; execute a plurality of stress tests, in the simulated environment, with respect to the application as the application is deployed in each deployment configuration among the plurality of deployment configurations; determine, when a performance of the application during execution of a respective stress test reaches an upper threshold value, a performance limit of the application for the respective deployment configuration among the plurality of deployment configurations; and output a performance parameter for each deployment configuration which is to be satisfied for the application when the application is to be deployed in the corresponding deployment configuration in an actual environment, based on the performance limit of the application determined for the respective deployment configuration.
2. The non-transitory machine readable storage of claim 1, further comprising instructions that when executed cause the processor to: determine the performance limit when at least one of an error rate of the application or a response time of the application reaches the upper threshold value, wherein the performance limit includes at least one of a load level or a response time of the application when at least one of the error rate or the response time of the application reaches the upper threshold value.
3. The non-transitory machine readable storage of claim 1, wherein the plurality of stress tests include a first stress test and a second stress test, the plurality of deployment configurations includes a first deployment configuration, and the non-transitory machine readable storage further comprises instructions that when executed cause the processor to: execute the first stress test, in the simulated environment, with respect to a first version of the application as the first version of the application is deployed in the first deployment configuration, execute the second stress test, in the simulated environment, with respect to a second version of the application as the second version of the application is deployed in the first deployment configuration, determine, when the performance of the first version of the application during execution of the first stress test reaches the upper threshold value, a first performance limit of the first version of the application for the first deployment configuration, determine, when the performance of the second version of the application during execution of the second stress test executed with respect to the second version of the application reaches the upper threshold value, a second performance limit of the second version of the application for the first deployment configuration, and output the performance parameter for the first deployment configuration which is to be satisfied for the application when the application is to be deployed in the first deployment configuration in the actual environment, based on the first performance limit and the second performance limit.
4. The non-transitory machine readable storage of claim 3, further comprising instructions that when executed cause the processor to: determine a first performance parameter for the first deployment configuration which is to be satisfied for the first version of the application when the first version of the application is to be deployed in the first deployment configuration in the actual environment, based on the first performance limit, determine a second performance parameter for the first deployment configuration which is to be satisfied for the second version of the application when the second version of the application is to be deployed in the first deployment configuration in the actual environment, based on the second performance limit, and determine the performance parameter for the application based on the first performance parameter and the second performance parameter, by weighting the second performance parameter more heavily than the first performance parameter.
5. The non-transitory machine readable storage of claim 1, wherein the plurality of stress tests include a first stress test and a second stress test, the plurality of deployment configurations includes a first deployment configuration, the performance parameter for the first deployment configuration which is to be satisfied for the application when the application is to be deployed in the first deployment configuration in the actual environment includes a first performance parameter and a second performance parameter, and the non-transitory machine readable storage further comprises instructions that when executed cause the processor to: execute the first stress test, in the simulated environment, with respect to a first feature of the application as the first feature of the application is deployed in the first deployment configuration, execute the second stress test, in the simulated environment, with respect to a second feature of the application as the second feature of the application is deployed in the first deployment configuration, determine, when the performance of the first feature of the application during execution of the first stress test reaches the upper threshold value, a first performance limit of the first feature of the application for the first deployment configuration, determine, when the performance of the second feature of the application during execution of the second stress test reaches the upper threshold value, a second performance limit of the second feature of the application for the first deployment configuration, output the first performance parameter for the first deployment configuration which is to be satisfied for the first feature of the application when the first feature of the application is to be deployed in the first deployment configuration in the actual environment, based on the first performance limit, and output the second performance parameter for the first deployment configuration which is to be satisfied for the second feature of the application when the second feature of the application is to be deployed in the first deployment configuration in the actual environment, based on the second performance limit.
6. The non-transitory machine readable storage of claim 5, wherein the first feature is one of a home web page, a search web page, or a client profile web page, and the second feature is another one of the home web page, the search web page, and the client profile web page.
7. The non-transitory machine readable storage of claim 1, wherein the plurality of stress tests include a first plurality of stress tests and a second plurality of stress tests, the plurality of deployment configurations include a first deployment configuration and a second deployment configuration, the performance parameter includes a first performance parameter and a second performance parameter, and the non-transitory machine readable storage further comprises instructions that when executed cause the processor to: determine the first performance parameter of the application for the first deployment configuration based on a first plurality of performance limits determined based on the first plurality of stress tests executed in the simulated environment with respect to the application as the application is deployed in the first deployment configuration, determine the second performance parameter of the application for the second deployment configuration based on a second plurality of performance limits determined based on the second plurality of stress tests executed in the simulated environment with respect to the application as the application is deployed in the second deployment configuration, output a first recommendation for a first service level agreement having the first performance parameter as a level of performance to be met by a service provider providing the application to an end user by deploying the application in the first deployment configuration in the actual environment, and output a second recommendation for a second service level agreement having the second performance parameter as another level of performance to be met by the service provider providing the application to the end user by deploying the application in the second deployment configuration in the actual environment.
8. An apparatus, comprising: a user interface to receive an input setting a plurality of deployment configurations to deploy an application in a simulated environment; a memory to store instructions; and a processor to execute the instructions stored in the memory to: execute a plurality of stress tests, in the simulated environment, with respect to the application as the application is deployed in each deployment configuration among the plurality of deployment configurations, determine, when a performance of the application during execution of a respective stress test reaches an upper threshold value, a performance limit of the application for the respective deployment configuration among the plurality of deployment configurations, and determine a performance parameter for each deployment configuration which is to be satisfied for the application when the application is to be deployed in the corresponding deployment configuration in an actual environment, based on the performance limit of the application determined for the respective deployment configuration; and a display to output the performance parameter.
9. The apparatus of claim 8, wherein the processor is to execute the instructions stored in the memory to determine the performance limit when at least one of an error rate of the application or a response time of the application reaches the upper threshold value, the performance limit including at least one of a load level or a response time of the application when at least one of the error rate or the response time of the application reaches the upper threshold value.
10. The apparatus of claim 8, wherein the plurality of stress tests include a first stress test and a second stress test, the plurality of deployment configurations includes a first deployment configuration, and the processor is to execute the instructions stored in the memory to: execute the first stress test, in the simulated environment, with respect to a first version of the application as the first version of the application is deployed in the first deployment configuration, execute the second stress test, in the simulated environment, with respect to a second version of the application as the second version of the application is deployed in the first deployment configuration, determine, when the performance of the first version of the application during execution of the first stress test reaches the upper threshold value, a first performance limit of the first version of the application for the first deployment configuration, determine, when the performance of the second version of the application during execution of the second stress test reaches the upper threshold value, a second performance limit of the second version of the application for the first deployment configuration, and determine the performance parameter for the first deployment configuration which is to be satisfied for the application when the application is to be deployed in the first deployment configuration in the actual environment, based on the first performance limit and the second performance limit, and the display is to output the performance parameter for the first deployment configuration.
11. The apparatus of claim 10, wherein the processor is to execute the instructions stored in the memory to: determine a first performance parameter for the first deployment configuration which is to be satisfied for the first version of the application when the first version of the application is to be deployed in the first deployment configuration in the actual environment, determine a second performance parameter for the first deployment configuration which is to be satisfied for the second version of the application when the second version of the application is to be deployed in the first deployment configuration in the actual environment, and determine the performance parameter for the application based on the first performance parameter and the second performance parameter, by weighting the second performance parameter more heavily than the first performance parameter.
12. The apparatus of claim 8, wherein the plurality of stress tests include a first stress test and a second stress test, the plurality of deployment configurations includes a first deployment configuration, the performance parameter for the first deployment configuration which is to be satisfied for the application when the application is to be deployed in the first deployment configuration in the actual environment includes a first performance parameter and a second performance parameter, and the processor is to execute the instructions stored in the memory to: execute the first stress test, in the simulated environment, with respect to a first feature of the application as the first feature of the application is deployed in the first deployment configuration, execute the second stress test, in the simulated environment, with respect to a second feature of the application as the second feature of the application is deployed in the first deployment configuration, determine, when the performance of the first feature of the application during execution of the first stress test reaches the upper threshold value, a first performance limit of the first feature of the application for the first deployment configuration, determine, when the performance of the second feature of the application during execution of the second stress test reaches the upper threshold value, a second performance limit of the second feature of the application for the first deployment configuration, determine the first performance parameter for the first deployment configuration which is to be satisfied for the first feature of the application when the first feature of the application is to be deployed in the first deployment configuration in the actual environment, based on the first performance limit, and determine the second performance parameter for the first deployment configuration which is to be satisfied for the second feature of the application when the second feature of the application is to be deployed in the first deployment configuration in the actual environment, based on the second performance limit, and the display is to output the first performance parameter and the second performance parameter for the first deployment configuration, wherein the first feature is one of a home web page, a search web page, a client profile web page, and the second feature is another one of the home web page, the search web page, and the client profile web page.
13. The apparatus of claim 8, wherein the plurality of stress tests include a first plurality of stress tests and a second plurality of stress tests, the plurality of deployment configurations include a first deployment configuration and a second deployment configuration, the performance parameter includes a first performance parameter and a second performance parameter, and the processor is to execute the instructions stored in the memory to: determine the first performance parameter of the application for the first deployment configuration among the plurality of deployment configurations based on a first plurality of performance limits determined based on the first plurality of stress tests executed in the simulated environment with respect to the application as the application is deployed in the first deployment configuration, determine the second performance parameter of the application for the second deployment configuration among the plurality of deployment configurations based on a second plurality of performance limits determined based on the second plurality of stress tests executed in the simulated environment with respect to the application as the application is deployed in the second deployment configuration, determine a first recommendation for a first service level agreement having the first performance parameter as a level of performance to be met by a service provider providing the application to an end user by deploying the application in the first deployment configuration in the actual environment, and determine a second recommendation for a second service level agreement having the second performance parameter as another level of performance to be met by the service provider providing the application to the end user by deploying the application in the second deployment configuration in the actual environment, and the display is to: output the first recommendation for the first service level agreement, and output the second recommendation for the second service level agreement.
14. A method, comprising: executing a first stress test, in a simulated environment, with respect to a first version of an application deployed in a first deployment configuration among a plurality of deployment configurations; executing a second stress test, in a simulated environment, with respect to the first version of the application deployed in a second deployment configuration among the plurality of deployment configurations; determining a first performance parameter for the first deployment configuration satisfiable for the first version of the application when the first version of the application is deployed in the first deployment configuration in an actual environment, based on a first performance limit of the first version of the application determined when a performance of the first version of the application during execution of the first stress test reaches an upper threshold value; determining a second performance parameter for the second deployment configuration satisfiable for the first version of the application when the first version of the application is deployed in the second deployment configuration in the actual environment, based on a second performance limit of the first version of the application determined when the performance of the first version of the application during execution of the second stress test reaches the upper threshold value; and outputting the first performance parameter for the first deployment configuration and the second performance parameter for the second deployment configuration.
15. The method of claim 14, further comprising: executing a third stress test, in the simulated environment, with respect to a second version of the application deployed in the first deployment configuration; determining a third performance parameter for the first deployment configuration satisfiable for the second version of the application when the second version of the application is deployed in the first deployment configuration in the actual environment, based on a third performance limit of the second version of the application determined when a performance of the second version of the application during execution of the third stress test reaches the upper threshold value; determining a fourth performance parameter for the application based on the first performance parameter and the third performance parameter, by weighting the third performance parameter more heavily than the first performance parameter; and deploying the second version of the application in the first deployment configuration in the actual environment based on the fourth performance parameter, or deploying the first version of the application in the second deployment configuration in the actual environment based on the second performance parameter.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2020/023174 WO2021188097A1 (en) 2020-03-17 2020-03-17 Output of a performance parameter to be satisfied for an application deployed in a deployment configuration

Publications (1)

Publication Number Publication Date
WO2021188097A1 true WO2021188097A1 (en) 2021-09-23

Family

ID=77771611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/023174 WO2021188097A1 (en) 2020-03-17 2020-03-17 Output of a performance parameter to be satisfied for an application deployed in a deployment configuration

Country Status (1)

Country Link
WO (1) WO2021188097A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150006966A1 (en) * 2011-09-14 2015-01-01 Amazon Technologies, Inc. Cloud-based test execution
US20150186236A1 (en) * 2012-05-08 2015-07-02 Amazon Technologies, Inc. Scalable testing in a production system with autoscaling
US9317407B2 (en) * 2010-03-19 2016-04-19 Novell, Inc. Techniques for validating services for deployment in an intelligent workload management system


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926297

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926297

Country of ref document: EP

Kind code of ref document: A1