US20190215262A1 - System and method for dynamically testing networked target systems - Google Patents

System and method for dynamically testing networked target systems

Info

Publication number
US20190215262A1
Authority
US
United States
Prior art keywords
traffic
prnss
testing
performance
outside
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/352,425
Inventor
Alon Girmonsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CA Inc
Original Assignee
CA Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CA Inc filed Critical CA Inc
Priority to US16/352,425
Assigned to Blazemeter Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GIRMONSKY, ALON
Assigned to CA, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Blazemeter Ltd.
Publication of US20190215262A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/50: Testing arrangements
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14: Network analysis or design
    • H04L 41/142: Network analysis or design using statistical or mathematical methods
    • H04L 41/147: Network analysis or design for predicting network behaviour

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Algebra (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Pure & Applied Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A method and system for conducting hybrid-traffic performance testing of a networked target system (NTS) are disclosed. The method comprises initializing at least one private cloud source (PRCS) to generate traffic within a private network security system (PRNSS); initializing at least one public cloud source (PUCS) to generate traffic outside of the PRNSS; customizing the traffic within the PRNSS and the traffic outside of the PRNSS; gathering information respective of the performance of the at least one PUCS and the at least one PRCS; and generating a performance testing report respective of the gathered information.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 14/563,603, filed on Dec. 8, 2014, which claims the benefit of U.S. Provisional Application No. 61/914,417, filed on Dec. 11, 2013, the contents of which are hereby incorporated by reference.
  • TECHNICAL FIELD
  • The present invention relates generally to testing system performance and, more particularly, to systems and methods for testing the functionality of an enterprise's system through a plurality of private cloud sources and a plurality of public cloud sources.
  • BACKGROUND
  • The World Wide Web offers real-time access to large amounts of data, enabling enterprises to provide information via widely accessible electronic communications networks in the form of a variety of computer programs. Such enterprises commonly spend a great deal of time and money testing the computer programs they develop in order to ascertain those programs' levels of functionality and performance. As established in existing solutions, performance tests are performed to determine a program's behavior under both normal and anticipated peak load conditions, to troubleshoot performance problems, and to run daily unit tests on developed modules.
  • Due to the complexity of many enterprises' systems, which typically include secured private cloud sources as well as public cloud sources, testing has become highly difficult to perform. Some existing solutions offer distributed systems deployed in private cloud resources in order to test the performance of an enterprise's system. Such existing solutions are usually cumbersome and not cost effective. Other existing solutions offer a bypass to the secured system. Such a bypass is extremely complex and sometimes impossible depending on the system's architecture.
  • It would therefore be advantageous to provide a solution that would overcome the deficiencies of the prior art by testing the performance of an enterprise's system that includes both private and public cloud sources in a cost-effective manner.
  • SUMMARY
  • A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended neither to identify key or critical elements of all embodiments nor to delineate the scope of any or all embodiments. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term "some embodiments" may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.
  • The disclosure relates in various embodiments to a method for conducting performance testing of a networked target system (NTS). The method comprises initializing at least one private cloud source (PRCS) to generate traffic within a private network security system (PRNSS); initializing at least one public cloud source (PUCS) to generate traffic outside of the PRNSS; customizing the traffic within the PRNSS and the traffic outside of the PRNSS; gathering information respective of the performance of the at least one PUCS and the at least one PRCS; and generating a performance testing report respective of the gathered information.
  • The disclosure relates in various embodiments to a system for conducting performance testing of a networked target system (NTS). The system comprises a processing system; and a memory, the memory containing instructions that, when executed by the processing system, configure the system to: initialize at least one private cloud source (PRCS) to generate traffic within a private network security system (PRNSS); initialize at least one public cloud source (PUCS) to generate traffic outside of the PRNSS; customize the traffic within the PRNSS and the traffic outside of the PRNSS; gather information respective of the performance of the at least one PUCS and the at least one PRCS; and generate a performance testing report respective of the gathered information.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
  • FIG. 1 is a schematic block diagram showing operation of a cloud system utilized to describe various embodiments for hybrid traffic testing; and
  • FIG. 2 is a flowchart describing a method for testing the performance of a networked target system through a plurality of private cloud sources and a plurality of public cloud sources according to an embodiment.
  • FIG. 3 is a flowchart illustrating customization of traffic according to an embodiment.
  • DETAILED DESCRIPTION
  • It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various disclosed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout the several views.
  • By way of example, the disclosed embodiments include a method and system for hybrid traffic testing in a hybrid cloud environment. A hybrid cloud environment is an environment in which some data and services are hosted locally within a private network and others are hosted over a public cloud. The system is configured to test the performance of a networked target system through a plurality of private cloud sources and a plurality of public cloud sources. In an exemplary embodiment, a plurality of agents are deployed in the private and public cloud sources, and the agents are configured to generate traffic therefrom.
  • FIG. 1 shows an exemplary and non-limiting block diagram illustrating a system 100 for hybrid traffic testing according to an embodiment. A networked target system (NTS) 10 includes an enterprise's server 110. The enterprise's server 110 is connected to a network 120. The NTS 10 further includes a plurality of enterprises' private cloud sources (PRCS) 130-1 through 130-N (collectively referred to hereinafter as PRCSs 130 or individually as a PRCS 130, merely for simplicity purposes) which are also communicatively connected to the network 120.
  • The NTS 10 typically includes a processing system 11 communicatively connected to a memory 12. In one implementation, the memory 12 contains instructions that, when executed by the processing system 11, result in the performance of the methods discussed herein below. Specifically, the processing system 11 may include machine-readable media for storing software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing system 11 to perform the various functions described herein. In an embodiment, the processing system 11 may include one or more processors. The one or more processors may be implemented with any combination of general-purpose microprocessors, multi-core processors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that can perform calculations or other manipulations of information.
  • In an embodiment, each of the PRCSs 130-1 through 130-N has an agent 135-1 through 135-N installed therein (collectively referred to hereinafter as agents 135 or individually as an agent 135, merely for simplicity purposes). The agents 135 are typically software code that is installed in a memory of the PRCSs 130 and executed by processing elements of such PRCSs 130. In an embodiment, an agent 135 may be realized as a network appliance connected to a respective PRCS 130. The network 120 may be a wireless, cellular, or wired network, a local area network (LAN), a wide area network (WAN), a metro area network (MAN), the Internet, the World Wide Web (WWW), a combination thereof, and the like.
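  • To make the agents' role concrete, the following is a minimal illustrative sketch of an agent loop. The PTS address, the routes, and the payload fields are assumptions made for illustration only; the disclosure does not prescribe a particular agent protocol or implementation.

```python
# Illustrative agent sketch -- the endpoint URLs, routes, and payload
# fields are assumptions, not part of the disclosure.
import time

import requests  # third-party HTTP client

PTS_URL = "https://pts.example.com"  # hypothetical PTS address
AGENT_ID = "prcs-130-1"              # hypothetical agent identifier

def run_agent():
    while True:
        # Ask the PTS whether traffic should be generated, and how much.
        instr = requests.get(f"{PTS_URL}/instructions/{AGENT_ID}").json()
        if not instr.get("active"):
            time.sleep(5)
            continue
        samples = []
        for _ in range(instr["requests_per_cycle"]):
            start = time.monotonic()
            resp = requests.get(instr["target_url"], timeout=10)
            samples.append({
                "elapsed_sec": time.monotonic() - start,
                "status": resp.status_code,
            })
        # Report raw measurements back to the PTS for aggregation.
        requests.post(f"{PTS_URL}/results/{AGENT_ID}", json=samples)

if __name__ == "__main__":
    run_agent()
```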
  • The connections of the PRCSs 130 and the enterprise's server 110 to the network 120 are secured by a private network security system (PRNSS) 15. The PRNSS 15, which may be software-based or hardware-based, controls the incoming and outgoing traffic. The NTS 10 further comprises one or more public cloud sources (PUCS) 140-1 through 140-M (collectively referred to hereinafter as PUCSs 140 or individually as a PUCS 140, merely for simplicity purposes) that are further connected to the network 120.
  • A performance testing server (PTS) 150 is also connected to the network 120. The PTS 150 is configured to continuously receive communications from each of the agents 135. Responsive thereto, the PTS 150 is configured to manage generation of traffic conducted by each one of the agents 135 through the NTS 10. Management of traffic generation may include, but is not limited to, customization of traffic flows. Customization of traffic is described further herein below with respect to FIG. 3. The PTS 150 is further configured to generate traffic by each of the PUCSs 140 through the NTS 10. The PTS 150 is also configured to gather information regarding performance of the NTS 10 respective of the generated traffic within the PRNSS 15, as well as information which is publicly available through the PUCSs 140. Such gathered information may be provided by, e.g., the PUCSs 140.
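  • As one way to picture the PTS's coordination role, the sketch below starts every traffic source (whether a PRCS agent inside the PRNSS or a PUCS driver outside it) and then collects their measurements. The TrafficSource abstraction and its methods are illustrative assumptions, not an interface defined by the disclosure.

```python
# Illustrative PTS-side coordination sketch. The TrafficSource class
# and its methods are assumptions made for illustration.
import threading

class TrafficSource:
    """Stand-in for a PRCS agent (inside the PRNSS) or a PUCS driver
    (outside the PRNSS)."""

    def __init__(self, name, generate_fn):
        self.name = name
        self._generate = generate_fn  # callable returning a list of samples
        self._samples = []
        self._thread = None

    def start(self):
        # Generate traffic in the background so sources run concurrently.
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def _run(self):
        self._samples = self._generate()

    def collect(self):
        # Wait for generation to finish, then tag samples with their source.
        self._thread.join()
        return [dict(s, source=self.name) for s in self._samples]

def run_test(sources):
    for s in sources:
        s.start()
    samples = []
    for s in sources:
        samples.extend(s.collect())
    return samples
```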
  • The gathered information is used by the PTS 150 in order to generate a performance testing report. Generation of performance testing reports may include, but is not limited to, aggregating statistical data of the performed tests and calculating metrics respective thereof. The performance testing reports may include, but are not limited to, estimated maximum capacity, errors received during testing, throughput of the system under test, response time, and so on. A database 160 is further connected to the network 120. Such a database may be used to store information respective of the performance of the NTS 10. It should be apparent to one of ordinary skill in the art that the PTS 150 may be further configured to test the performance of a plurality of networked target systems.
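  • For illustration, the sketch below aggregates raw samples of the shape produced in the earlier sketches into metrics of the kinds named above. The field names and the simple error threshold are assumptions; the disclosure does not fix particular statistics.

```python
# Illustrative report aggregation; sample field names are assumptions.
from statistics import mean

def build_report(samples, duration_sec):
    """Summarize raw traffic samples into a performance testing report."""
    errors = [s for s in samples if s["status"] >= 400]
    response_times = [s["elapsed_sec"] for s in samples]
    return {
        "throughput_rps": len(samples) / duration_sec,
        "avg_response_sec": mean(response_times) if response_times else None,
        "error_count": len(errors),
        "error_rate": len(errors) / len(samples) if samples else 0.0,
    }
```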
  • As a non-limiting example, a request to test the performance of the NTS 10 is received. In this example, the enterprise server 110 sends a message regarding the test to the agents 135 over the network 120 as controlled by the PRNSS 15, thereby preparing the PRCSs 130 for testing with a PUCS 140 by configuring the agents 135 to communicate with the PTS 150. Responsive thereto, the PTS 150 is configured to generate traffic by each of the agents 135 and the PUCS 140. The PTS 150 further gathers information related to the testing and generates a performance testing report respective thereof. The report is saved in the database 160.
  • FIG. 2 depicts an exemplary and non-limiting flowchart 200 illustrating a method for performance testing of a networked target system in accordance with an embodiment. It should be noted that this description of the method of FIG. 2 is conducted respective of the system of FIG. 1. Other suitable systems may be used to perform the method of FIG. 2 without departing from the scope of the disclosed embodiments.
  • In S210, a request to initiate a performance test of at least one networked target system (e.g., the NTS 10) and metadata respective of the networked target system are received. Such received metadata may include, but is not limited to, testing parameters. Testing parameters may include, but are not limited to, the amount of traffic to be generated within the PRNSS 15, the amount of traffic to be generated outside of the PRNSS 15, which PRCSs 130 are to be initialized, which PUCSs 140 are to be initialized, information to be gathered during testing, and so on.
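  • For illustration, such request metadata might be represented as a structure like the one below. The field names are hypothetical, chosen only to mirror the testing parameters listed above.

```python
# Hypothetical shape of the test-request metadata described in S210.
test_request = {
    "nts": "https://shop.example.com",     # networked target system
    "traffic_inside_prnss_rps": 500,       # load generated by PRCSs 130
    "traffic_outside_prnss_rps": 1500,     # load generated by PUCSs 140
    "prcss_to_initialize": ["prcs-130-1", "prcs-130-2"],
    "pucss_to_initialize": ["pucs-140-1"],
    "gather": ["response_time", "errors", "throughput"],
}
```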
  • In S220, respective of the received metadata, each of a plurality of PRCSs 130 is initialized to generate traffic within the PRNSS 15. In S230, each of a plurality of PUCSs 140 is further initialized to generate traffic outside the PRNSS 15. In an embodiment, initialization is performed using the agents 135 installed within each PRCS. In an embodiment, the traffic generated within the PRNSS 15 and the traffic generated outside the PRNSS 15 may be customized. This customization may be performed to ensure, for example, that the combination of traffic meets user requirements. User requirements for generated traffic may include, but are not limited to, total volume of traffic, volume of traffic within the PRNSS 15, volume of traffic outside of the PRNSS 15, time to generate traffic, and so on. As a non-limiting example, user requirements in consumer electronics retail may be to simulate a traffic volume expected during a large sales event, such as Black Friday. Customization of traffic is described further herein below with respect to FIG. 3.
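  • A minimal sketch of S220 and S230, under the metadata shape shown above: the sources named in the request are looked up and initialized, splitting each side's requested load evenly. The registry, the initialize() call, and the even split are assumptions made for illustration.

```python
# Illustrative sketch of S220/S230. The registry, the initialize()
# call, and the even load split are assumptions.
def initialize_sources(test_request, registry):
    """registry maps source names to PRCS-agent or PUCS-driver objects."""
    inside = [registry[n] for n in test_request["prcss_to_initialize"]]
    outside = [registry[n] for n in test_request["pucss_to_initialize"]]
    for src in inside:   # traffic generated within the PRNSS
        src.initialize(rate=test_request["traffic_inside_prnss_rps"] // len(inside))
    for src in outside:  # traffic generated outside the PRNSS
        src.initialize(rate=test_request["traffic_outside_prnss_rps"] // len(outside))
    return inside, outside
```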
  • In S240, information about the performance of the NTS 10 is gathered. In various embodiments, such information may be gathered respective of the information required to complete a desired performance testing report. In an embodiment, the desired performance testing report may be indicated by, e.g., a server (e.g., the server 110). As a non-limiting example, if a desired performance testing report should include the estimated maximum capacity of the NTS, information respective of the performance of the NTS at various amounts of traffic may be gathered.
  • In S250, a performance testing report is generated respective of the gathered information. The performance testing report may include, but is not limited to, estimated maximum capacity, the number and/or types of errors received during testing, throughput of the system under test, response time, load times for particular webpages, transactions per second, and so on. In S260, it is checked whether additional requests have been received and, if so, execution continues with S220; otherwise, execution terminates.
  • As a non-limiting example, a request to test the performance of an NTS and metadata are received. The metadata indicates the number of PRCSs and the number of PUCSs to be initialized pursuant to testing, as well as what information is to be gathered. In this example, the desired testing report should include the estimated response times as well as the number and types of errors received during testing. Based on the received metadata, the indicated numbers of PRCSs and PUCSs are initialized. During testing, information required to generate a performance testing report is gathered. In this example, such information includes the response times at various stages of testing, as well as all errors received during testing. Based on such information, a performance testing report is generated.
  • FIG. 3 is an exemplary and non-limiting flowchart 300 illustrating customization of traffic according to an embodiment. In S310, generation of traffic within a PRNSS (e.g., the PRNSS 15) and outside of the PRNSS is detected. In an embodiment, S310 may further or alternatively include receiving a request to customize traffic flows. In S320, user requirements for generated traffic are received or retrieved. User requirements for generated traffic may include, but are not limited to, a maximum total volume of traffic, a maximum volume of traffic within the PRNSS, a maximum volume of traffic outside of the PRNSS, time to generate traffic, a number of traffic volume settings to test, and so on.
  • In S330, a desired performance testing report is identified. The desired performance testing report may include, but is not limited to, an estimated maximum capacity of an NTS, performance of the NTS at various total traffic volumes, performance of the NTS respective of constant traffic within the PRNSS and variable traffic outside of the PRNSS, and performance of the NTS respective of variable traffic within the PRNSS and constant traffic outside of the PRNSS. In S340, the detected traffic within the PRNSS and outside of the PRNSS is modified. In a non-limiting embodiment, this is performed based on the desired performance testing report and the user requirements for generated traffic. As a non-limiting example, if the user requirements include a maximum total volume of traffic and the desired performance testing report should include performance of the NTS at various total volumes, the traffic flows within and outside of the PRNSS may be modified so that their combined volume at each tested setting is less than or equal to the maximum total volume of traffic.
  • In S350, it is determined whether additional performance testing is required and, if so, execution returns to S340; otherwise, execution terminates. In an embodiment, additional performance testing may be required if, e.g., the user requirements include a number of traffic volume settings to test and the actual number of traffic volume settings tested is less than that number.
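  • One way to picture S340 and S350 together is as a sweep over a series of traffic-volume settings, each capped at the user's maximum total volume and split between flows inside and outside the PRNSS. The even-step sweep and the fixed split fraction below are illustrative assumptions; the disclosure does not fix a particular formula.

```python
# Illustrative sketch of S340/S350: produce capped traffic settings
# and split each between flows inside and outside the PRNSS. The
# even-step sweep and fixed inside_fraction are assumptions.
def customize_traffic(max_total_rps, settings_to_test, inside_fraction=0.25):
    settings = []
    for i in range(1, settings_to_test + 1):
        total = max_total_rps * i // settings_to_test  # never exceeds the cap
        inside = round(total * inside_fraction)
        settings.append({"inside_rps": inside, "outside_rps": total - inside})
    return settings

# Example: four settings under a 2,000 rps cap, 25% inside the PRNSS.
for setting in customize_traffic(2000, 4):
    print(setting)
```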
  • The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
  • All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims (19)

What is claimed is:
1. A method for conducting performance testing of a networked target system (NTS), comprising:
initializing at least one private cloud source (PRCS) to generate traffic within a private network security system (PRNSS);
initializing at least one public cloud source (PUCS) to generate traffic outside of the PRNSS;
customizing the traffic within the PRNSS and the traffic outside of the PRNSS;
gathering information respective of the performance of the at least one PUCS and the at least one PRCS; and
generating a performance testing report respective of the gathered information.
2. The method of claim 1, wherein the gathered information comprises at least statistical data respective of the performance of the at least one PUCS and the at least one PRCS.
3. The method of claim 2, further comprising:
aggregating the statistical data; and
calculating at least one performance testing metric based on the aggregated statistical data.
4. The method of claim 1, wherein the at least one performance testing metric is any one of: an estimated maximum capacity, at least one error received during testing, a throughput of a system under test, and a response time.
5. The method of claim 1, further comprising:
receiving at least one testing parameter respective of the NTS.
6. The method of claim 5, wherein the at least one testing parameter is any one of:
an amount of traffic to be generated within the PRNSS, an amount of traffic to be generated outside of the PRNSS, and information to be gathered during testing.
7. The method of claim 6, wherein customizing the traffic within the PRNSS and the traffic outside of the PRNSS is at least based on the testing parameter.
8. The method of claim 1, wherein customizing the traffic within the PRNSS and the traffic outside of the PRNSS is based on at least a user requirement.
9. The method of claim 8, wherein the user requirement is any one of: a total volume of traffic, a volume of traffic within the PRNSS, a volume of traffic outside of the PRNSS, and a time to generate traffic.
10. A non-transitory computer readable medium having stored thereon instructions for causing one or more processing units to execute the method according to claim 1.
11. A system for conducting performance testing of a networked target system (NTS), comprising:
a processing system; and
a memory, the memory containing instructions that, when executed by the processing system, configure the system to:
initialize at least one private cloud source (PRCS) to generate traffic within a private network security system (PRNSS);
initialize at least one public cloud source (PUCS) to generate traffic outside of the PRNSS;
customize the traffic within the PRNSS and the traffic outside of the PRNSS;
gather information respective of the performance of the at least one PUCS and the at least one PRCS; and
generate a performance testing report respective of the gathered information.
12. The system of claim 11, wherein the gathered information comprises at least statistical data respective of the performance of the at least one PUCS and the at least one PRCS.
13. The system of claim 12, wherein the system is further configured to:
aggregate the statistical data; and
calculate at least one performance testing metric based on the aggregated statistical data.
14. The system of claim 11, wherein the at least one performance testing metric is any of: an estimated maximum capacity, at least one error received during testing, a throughput of a system under test, and a response time.
15. The system of claim 11, wherein the system is further configured to:
receive at least one testing parameter respective of the NTS.
16. The system of claim 15, wherein the at least one testing parameter is any of: an amount of traffic to be generated within the PRNSS, an amount of traffic to be generated outside of the PRNSS, and information to be gathered during testing.
17. The system of claim 16, wherein the customization of the traffic within the PRNSS and the traffic outside of the PRNSS is based on at least the testing parameter.
18. The system of claim 11, wherein customization of traffic within the PRNSS and the traffic outside of the PRNSS is based on at least a user requirement.
19. The system of claim 18, wherein the user requirement is any one of: a total volume of traffic, a volume of traffic within the PRNSS, a volume of traffic outside of the PRNSS, and a time to generate traffic.
US16/352,425 2013-12-11 2019-03-13 System and method for dynamically testing networked target systems Abandoned US20190215262A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/352,425 US20190215262A1 (en) 2013-12-11 2019-03-13 System and method for dynamically testing networked target systems

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201361914417P 2013-12-11 2013-12-11
US14/563,603 US10237161B2 (en) 2013-12-11 2014-12-08 System and method for dynamically testing networked target systems
US16/352,425 US20190215262A1 (en) 2013-12-11 2019-03-13 System and method for dynamically testing networked target systems

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/563,603 Continuation US10237161B2 (en) 2013-12-11 2014-12-08 System and method for dynamically testing networked target systems

Publications (1)

Publication Number Publication Date
US20190215262A1 true US20190215262A1 (en) 2019-07-11

Family

ID=53272287

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/563,603 Active 2034-12-22 US10237161B2 (en) 2013-12-11 2014-12-08 System and method for dynamically testing networked target systems
US16/352,425 Abandoned US20190215262A1 (en) 2013-12-11 2019-03-13 System and method for dynamically testing networked target systems

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/563,603 Active 2034-12-22 US10237161B2 (en) 2013-12-11 2014-12-08 System and method for dynamically testing networked target systems

Country Status (1)

Country Link
US (2) US10237161B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9647919B1 (en) * 2014-12-04 2017-05-09 Amazon Technologies Automated determination of maximum service throughput
US10521612B2 (en) 2017-06-21 2019-12-31 Ca, Inc. Hybrid on-premises/software-as-service applications
US10353804B1 (en) * 2019-01-22 2019-07-16 Capital One Services, Llc Performance engineering platform and metric management
US11277198B2 (en) * 2020-05-04 2022-03-15 Hughes Network Systems, Llc Monitoring operational status and detecting faults in a high throughput satellite system

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5218605A (en) 1990-01-31 1993-06-08 Hewlett-Packard Company Software modules for testing computer hardware and software
US7301952B2 (en) 2000-04-06 2007-11-27 The Distribution Systems Research Institute Terminal-to-terminal communication connection control method using IP transfer network
US6546513B1 (en) 2000-06-02 2003-04-08 Advanced Micro Devices Data processing device test apparatus and method therefor
US7274684B2 (en) 2001-10-10 2007-09-25 Bruce Fitzgerald Young Method and system for implementing and managing a multimedia access network device
US7222269B2 (en) * 2001-12-06 2007-05-22 Ns Solutions Corporation Performance evaluation device, performance evaluation information managing device, performance evaluation method, performance evaluation information managing method, performance evaluation system
US6801940B1 (en) 2002-01-10 2004-10-05 Networks Associates Technology, Inc. Application performance monitoring expert
US7835293B2 (en) * 2005-09-13 2010-11-16 Cisco Technology, Inc. Quality of service testing of communications networks
US9094257B2 (en) * 2006-06-30 2015-07-28 Centurylink Intellectual Property Llc System and method for selecting a content delivery network
US7844036B2 (en) 2006-08-14 2010-11-30 Soasta, Inc. Visual test automation tool for message-based applications, web applications and SOA systems
US7808918B2 (en) 2006-08-22 2010-10-05 Embarq Holdings Company, Llc System and method for dynamically shaping network traffic
US8239831B2 (en) 2006-10-11 2012-08-07 Micro Focus (Ip) Limited Visual interface for automated software testing
US7840851B2 (en) 2008-02-15 2010-11-23 Red Hat, Inc. Annotating GUI test automation playback and debugging
US8341462B2 (en) 2010-07-19 2012-12-25 Soasta, Inc. System and method for provisioning and running a cross-cloud test grid
US9450834B2 (en) * 2010-07-19 2016-09-20 Soasta, Inc. Animated globe showing real-time web user performance measurements
US9495473B2 (en) 2010-07-19 2016-11-15 Soasta, Inc. Analytic dashboard with user interface for producing a single chart statistical correlation from source and target charts during a load test
US9229842B2 (en) 2010-07-19 2016-01-05 Soasta, Inc. Active waterfall charts for continuous, real-time visualization of website performance data
US9785533B2 (en) 2011-10-18 2017-10-10 Soasta, Inc. Session template packages for automated load testing
US9047410B2 (en) * 2012-07-18 2015-06-02 Infosys Limited Cloud-based application testing
US8839042B2 (en) * 2012-08-31 2014-09-16 Ca, Inc. Dynamic load calculation and predictive scaling
CN103795749B * 2012-10-30 2017-03-01 国际商业机器公司 Method and apparatus for diagnosing problems of software products operating in a cloud environment
US10104121B2 (en) * 2013-07-03 2018-10-16 Fortinet, Inc. Application layer-based single sign on

Also Published As

Publication number Publication date
US10237161B2 (en) 2019-03-19
US20150163124A1 (en) 2015-06-11

Similar Documents

Publication Publication Date Title
US20190215262A1 (en) System and method for dynamically testing networked target systems
US20210182106A1 (en) Resource configuration prediction method and device
US10521284B2 (en) System and method for management of deployed services and applications
CN107317730B (en) Method, equipment and system for monitoring state of block chain node
Han et al. Evaluating blockchains for IoT
US8825752B1 (en) Systems and methods for providing intelligent automated support capable of self rejuvenation with respect to storage systems
US9712410B1 (en) Local metrics in a service provider environment
WO2019062304A1 (en) Method, device and system for managing computing resources of block chain node
Gill et al. RADAR: Self‐configuring and self‐healing in resource management for enhancing quality of cloud services
US9996442B2 (en) Cloud computing benchmarking
US10198284B2 (en) Ensuring operational integrity and performance of deployed converged infrastructure information handling systems
US20210216303A1 (en) Deployment routing of clients by analytics
US10680902B2 (en) Virtual agents for facilitation of network based storage reporting
EP3692443B1 (en) Application regression detection in computing systems
US10735270B1 (en) Computer-based systems configured for network modelling and monitoring using programming object bindings and methods of use thereof
US8713522B2 (en) Validating the configuration of distributed systems
Guzek et al. A holistic model of the performance and the energy efficiency of hypervisors in a high‐performance computing environment
US11669374B2 (en) Using machine-learning methods to facilitate experimental evaluation of modifications to a computational environment within a distributed system
US20140096125A1 (en) Systems and methods for installing, managing, and provisioning applications
US10176067B1 (en) On-demand diagnostics in a virtual environment
US8949824B2 (en) Systems and methods for installing, managing, and provisioning applications
CN110908910B (en) Block chain-based test monitoring method and device and readable storage medium
US11777810B2 (en) Status sharing in a resilience framework
US11561848B2 (en) Policy-based logging using workload profiles
Mendonça et al. Assessing performance and energy consumption in mobile applications

Legal Events

Date Code Title Description
AS Assignment

Owner name: CA, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BLAZEMETER LTD.;REEL/FRAME:048589/0065

Effective date: 20180322

Owner name: BLAZEMETER LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GIRMONSKY, ALON;REEL/FRAME:048589/0014

Effective date: 20141207

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION