WO2020000726A1 - Method for generating performance test report, electronic device and readable storage medium - Google Patents

Method for generating performance test report, electronic device and readable storage medium

Info

Publication number
WO2020000726A1
Authority
WO
WIPO (PCT)
Prior art keywords
performance test
time
preset
data
series data
Prior art date
Application number
PCT/CN2018/107704
Other languages
English (en)
French (fr)
Inventor
余剑波
林铭森
王瑞然
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020000726A1 publication Critical patent/WO2020000726A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3065Monitoring arrangements determined by the means or processing involved in reporting the monitored data

Definitions

  • The present application relates to the field of computer technology, and in particular to a method for generating a performance test report, an electronic device, and a readable storage medium.
  • The test reports produced by existing performance testing tools take various forms, and all have shortcomings.
  • For example, the test report data generated by the performance testing tool JMeter can only be parsed and viewed by the tool itself; apart from JMeter, the results cannot be viewed at all. Sharing the result data is therefore difficult, since the tool itself must be passed along with it.
  • The test result report generated by the performance testing tool Gatling is a fixed, static HTML file that cannot be modified or extended to share its test result data. That is, the performance test reports generated by existing tools are all stored as files inside the respective tools: they are hard to share, and the test result data is hard to preserve long-term and easy to lose.
  • The purpose of the present application is to provide a method for generating a performance test report, an electronic device, and a readable storage medium, intended to generate performance test reports that are convenient to share and preserve.
  • A first aspect of the present application provides an electronic device that includes a memory and a processor, the memory storing a performance test report generation system that can run on the processor.
  • When the performance test report generation system is executed by the processor, the following steps are implemented: obtaining performance test result data of a preset performance testing tool; analyzing the performance test indicators contained in the result data and converting the data of each indicator into period-type or point-in-time-type time-series data according to preset rules, then storing the converted time-series data in a preset time-series database; and retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators to form a performance test report.
  • The second aspect of the present application further provides a method for generating a performance test report.
  • The method for generating a performance test report comprises the same steps: obtaining the performance test result data, converting it into period-type or point-in-time-type time-series data and storing it in a preset time-series database, and retrieving the corresponding time-series data according to user-defined test indicators to form a performance test report.
  • The third aspect of the present application further provides a computer-readable storage medium that stores a performance test report generation system, the performance test report generation system being executable by at least one processor to cause the at least one processor to execute the steps of the method for generating a performance test report described above.
  • The method, electronic device, and readable storage medium for generating a performance test report convert the performance test result data of a preset performance testing tool into time-series data according to preset rules and store it in a preset time-series database; the corresponding time-series data is then retrieved from that database according to user-defined test indicators to form a performance test report.
  • After the performance test result data of the testing tool is obtained, it can be parsed, converted into time-series data, and stored in a time-series database.
  • When a report is needed, the result data can be shared simply by retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators, which is very convenient.
  • Moreover, because all performance test result data is converted into time-series data and stored in the time-series database, it can be preserved there long-term without loss.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of a system 10 for generating a performance test report of the present application;
  • FIG. 2 is a schematic flowchart of an embodiment of a method for generating a performance test report of this application.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of a system 10 for generating a performance test report of the present application.
  • the performance test report generating system 10 is installed and run in the electronic device 1.
  • the electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13.
  • FIG. 1 only shows the electronic device 1 with components 11-13, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • The memory 11 is at least one type of computer-readable storage medium.
  • the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk or a memory of the electronic device 1.
  • In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device 1.
  • the memory 11 may further include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 is configured to store application software installed on the electronic device 1 and various types of data, such as program codes of the performance test report generation system 10.
  • the memory 11 may also be used to temporarily store data that has been output or will be output.
  • The processor 12 may be a central processing unit (CPU), microprocessor, or other data processing chip, configured to run the program code stored in the memory 11 or to process data, for example to execute the performance test report generation system 10.
  • the display 13 may be an LED display, a liquid crystal display, a touch-type liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like.
  • the display 13 is used to display information processed in the electronic device 1 and to display a visualized user interface, such as a finally generated performance test report.
  • the components 11-13 of the electronic device 1 communicate with each other through a system bus.
  • the performance test report generation system 10 includes at least one computer-readable instruction stored in the memory 11, and the at least one computer-readable instruction can be executed by the processor 12 to implement the embodiments of the present application.
  • Step S1: obtain performance test result data of a preset performance testing tool;
  • Step S2: analyze the performance test indicators contained in the performance test result data, and convert the result data of each indicator into period-type or point-in-time-type time-series data according to preset rules; store the converted period-type or point-in-time-type time-series data in a preset time-series database;
  • Step S3: retrieve the corresponding time-series data from the preset time-series database according to user-defined test indicators, and form a performance test report.
  • In this embodiment, the performance test report generation system receives a performance test report generation request from a user, for example a request sent from a client pre-installed on a terminal such as a mobile phone, tablet computer, or self-service kiosk, or a request sent from a browser running on such a terminal.
  • After the request is received, the performance test result data of a preset performance testing tool, such as the stress testing tool JMeter or the high-performance server testing tool Gatling, is obtained; the generated result data is parsed, transformed into time-series data according to the indicator-specific algorithms, and stored in a time-series database such as the distributed time-series database InfluxDB.
  • Users can predefine SQL statements into templates according to the key test indicators they need to view, thereby establishing key-indicator test report templates; the time-series data corresponding to the key indicators is then extracted from the time-series database (for example, InfluxDB) and filled into the established template, so that the user can view the test report for each key indicator as soon as the performance run completes.
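  • As an illustration of the parsing step (not part of the original disclosure), the following minimal sketch assumes JMeter results saved in its CSV (.jtl) format with the default timeStamp, elapsed, label, and success columns; the function name is hypothetical:

```python
import csv
from datetime import datetime, timezone

def parse_jtl(path):
    """Parse a JMeter CSV result file (.jtl) into raw samples.

    Assumes the default CSV columns: timeStamp (epoch milliseconds),
    elapsed (milliseconds), label, and success ("true"/"false").
    """
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            samples.append({
                "time": datetime.fromtimestamp(
                    int(row["timeStamp"]) / 1000, tz=timezone.utc),
                "elapsed_ms": int(row["elapsed"]),
                "label": row["label"],
                "success": row["success"] == "true",
            })
    return samples
```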
  • The performance test indicators collected from the performance test result data of a preset testing tool such as JMeter or Gatling include the following types:
    1. Number of registered users: the users already registered in the software. They are potential users of the system and may come online at any time. This indicator lets the test engineer understand the total volume of data in the system and the maximum number of users that may be online at the same time.
    2. Number of online users: the number of users logged into the system at a given moment. It counts only logged-in users; not all of them necessarily operate the system and put load on the server.
    3. Number of concurrent users: unlike the number of online users, this is the number of online users sending requests to the server at a given moment, and is an important measure of the server's concurrent capacity and its synchronization and coordination ability. In the broad sense it is the number of users sending the same or different requests at the same moment (that is, it may cover identical requests for one service as well as different requests for several services); in the narrow sense it is the number of users sending the same request for one service to the server at the same moment.
    4. Request response time: the time the user perceives the software system spending to serve them. For a web system, it is the time from when the client initiates a request to when the client receives the end of the response returned by the server.
    5. Transaction response time: a transaction is the set of operations a user performs on the client to carry out one or more services; its response time measures how long the user spends performing that set of operations. In performance testing it is generally obtained as the difference between the transaction's start and end times.
    6. Hits per second: the number of HTTP requests submitted to the web server per second; a common measure of server processing capacity.
    7. Throughput: usually the number of bytes returned by the server per unit time, or alternatively the number of requests submitted by clients per unit time. It is an important measure of a large web system's load capacity; in general, the higher the throughput, the more data is processed per unit time and the stronger the system's load capacity.
    8. Business success rate: the success rate of operations initiated by multiple users on a given service.
    9. Transactions per second (TPS): the number of transactions the server processes per second, a very important measure of system processing capacity; by measuring TPS under different user loads, the inflection point of the system's processing capacity can be estimated.
    10. Resource utilization: the usage of resources, i.e., hardware monitoring indicators such as CPU usage, memory usage, network bandwidth usage, and disk I/O volume.
  • Based on the user's needs for different test reports, a single indicator may be selected from the above, or several indicators may be selected and combined by calculation to obtain the specific indicator the user requires.
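  • As a hedged illustration of such combined calculation (the names and the one-minute bucket size are assumptions, not from the disclosure), raw samples such as those parsed above can be rolled up into derived indicators like TPS and business success rate:

```python
from collections import defaultdict

def aggregate_per_minute(samples):
    """Combine raw samples into derived per-minute indicators:
    transactions per second (TPS) and business success rate.

    Assumes each full one-minute bucket, with `samples` as produced
    by the parse_jtl sketch above."""
    buckets = defaultdict(lambda: {"total": 0, "ok": 0})
    for s in samples:
        minute = s["time"].replace(second=0, microsecond=0)
        buckets[minute]["total"] += 1
        buckets[minute]["ok"] += int(s["success"])
    return {
        minute: {
            "tps": b["total"] / 60.0,
            "success_rate": b["ok"] / b["total"],
        }
        for minute, b in sorted(buckets.items())
    }
```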
  • After the performance test result data of each indicator is obtained, it is parsed and converted into time-series data. Time-series data is a column of data recorded in chronological order for one unified indicator; all values in the same column must be measured on the same basis so that they are comparable. Time-series data may consist of period figures (values aggregated over an interval) or point-in-time figures (values observed at an instant), and in this embodiment each indicator is converted into one or the other according to its characteristics.
  • For example, for the indicator "number of online users", testing usually focuses on how the count at the same moment of each day changes, so that a later report can compare the value at that moment across several days. The test data for this indicator is therefore converted into point-in-time data: the number of online users at the same moment on each day, collected over several days, forms a time series of point-in-time figures.
  • For the indicator "number of registered users", the test data is converted into period data: the count of registered users obtained by the same statistical and calculation method each day, collected over several days, forms a time-series column of period figures.
  • Which indicators are automatically converted into point-in-time figures and which into period figures can be preset, so that the conversion of performance test result data into time-series data is fully automatic. For example, when testing the concurrent processing performance of an online ticket-booking system that is required to support 100,000 booking transactions during the half-hour morning peak from 8:00 to 8:30 with a success rate of no less than 98%, the test data for the indicator "business success rate" needs to be converted into period data: the success rate over the 8:00-8:30 period, obtained by the same statistical and calculation method each day and collected over several days, forms a time-series column of period figures.
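  • A minimal sketch of such a preset rule (the indicator names and groupings are illustrative assumptions): each indicator is mapped to either the period type or the point-in-time type before conversion:

```python
# Hypothetical preset rule: period-type indicators are aggregated over an
# interval (e.g. success rate over 8:00-8:30 each day); point-in-time-type
# indicators are observed at a fixed instant (e.g. online users at the
# same moment every day).
PERIOD_INDICATORS = {"registered_users", "business_success_rate"}
POINT_INDICATORS = {"online_users", "concurrent_users"}

def to_time_series(indicator, daily_values):
    """Convert one raw value per day into a typed (day, value) series."""
    if indicator in PERIOD_INDICATORS:
        kind = "period"
    elif indicator in POINT_INDICATORS:
        kind = "point"
    else:
        raise ValueError(f"no preset rule for indicator {indicator!r}")
    return [{"indicator": indicator, "type": kind, "day": day, "value": value}
            for day, value in sorted(daily_values.items())]
```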
  • After the conversion, the time-series data is stored in a time-series database such as InfluxDB, an open-source distributed time-series, events, and metrics database written in Go with no external dependencies.
  • InfluxDB has the following characteristics: 1. Time series: time-related functions (such as maximum, minimum, and sum) can be used. 2. Metrics: large volumes of data can be computed over in real time. 3. Events: arbitrary event data is supported.
  • The converted time-series data is stored in InfluxDB under preset tags, which may include the machine name, the measurement time, the measured indicator, and so on.
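  • Using the influxdb-python client for InfluxDB 1.x, the write with preset tags might look as follows (the host, database name, and tag values are illustrative assumptions):

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="perf_results")
client.create_database("perf_results")  # idempotent in InfluxDB 1.x

points = [{
    "measurement": "online_users",
    "tags": {                        # preset tags: machine name and indicator
        "machine": "web-01",
        "metric": "online_users",
    },
    "time": "2018-06-29T08:00:00Z",  # measurement time
    "fields": {"value": 1523},
}]
client.write_points(points)
```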
  • InfluxDB supports an HTTP API, so users can read from and write to it over HTTP. It has built-in common statistical functions such as max, min, and mean, which users can call to compute maxima, minima, sums, and other statistics over the time-series data stored in InfluxDB to form the final test report.
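  • For example, with the same client the built-in functions can be invoked through an InfluxQL query (the measurement name and time range are assumptions):

```python
# MAX, MIN and MEAN are built-in InfluxQL aggregate functions.
result = client.query(
    'SELECT MAX("value"), MIN("value"), MEAN("value") '
    'FROM "online_users" WHERE time > now() - 7d'
)
for point in result.get_points():
    print(point)  # e.g. {'time': ..., 'max': ..., 'min': ..., 'mean': ...}
```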
  • InfluxDB also works with Grafana, a pure HTML/JS application, so there are no cross-domain restrictions when Grafana accesses InfluxDB.
  • Once InfluxDB is configured as the data source, charts can be configured.
  • In this embodiment, Grafana can be used to configure charts over the computed statistical time-series data in InfluxDB, presenting the user with a test report that contains visual charts.
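  • A sketch of creating such a chart programmatically through Grafana's HTTP dashboard API (the URL, API key, data-source name, and the simplified panel JSON are all assumptions for illustration):

```python
import requests

GRAFANA_URL = "http://localhost:3000"
HEADERS = {"Authorization": "Bearer <grafana-api-key>"}  # replace with a real key

dashboard = {
    "dashboard": {
        "title": "Performance test report",
        "panels": [{
            "type": "graph",
            "title": "Online users (last 7 days)",
            "datasource": "influxdb-perf",  # an InfluxDB data source configured in Grafana
            "targets": [{
                "rawQuery": True,
                "query": 'SELECT "value" FROM "online_users" WHERE $timeFilter',
            }],
        }],
    },
    "overwrite": True,
}
resp = requests.post(f"{GRAFANA_URL}/api/dashboards/db",
                     json=dashboard, headers=HEADERS)
resp.raise_for_status()
```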
  • This embodiment converts the performance test result data of a preset performance testing tool into time-series data according to preset rules and stores it in a preset time-series database; the corresponding time-series data is then retrieved from that database according to user-defined test indicators to form a performance test report.
  • Once the performance test result data of the testing tool is obtained, it can be parsed, converted into time-series data, and stored in a time-series database.
  • When a report is needed, the result data can be shared simply by retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators, which is very convenient.
  • Moreover, because all performance test result data is converted into time-series data and stored in the time-series database, it can be preserved there long-term without loss.
  • Further, in an optional embodiment, the user can also define a test report template in advance, for example by predefining SQL statements into a template, specifying the performance test indicators and the various trend charts they need. After the indicator data has been converted into time-series data and stored in a time-series database such as InfluxDB, the time-series data corresponding to the key test indicators in the user-defined template is extracted from the database and filled into the established key-indicator test report template, so that the user can view the report for each key indicator after the performance run completes.
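  • A minimal sketch of such a predefined template and the filling step (the indicator names and queries are illustrative assumptions):

```python
# Hypothetical report template: key indicator -> predefined InfluxQL statement.
REPORT_TEMPLATE = {
    "peak_online_users":
        'SELECT MAX("value") FROM "online_users" WHERE time > now() - 7d',
    "avg_success_rate":
        'SELECT MEAN("value") FROM "business_success_rate" WHERE time > now() - 7d',
}

def fill_report(client):
    """Run each predefined query and fill its result into the report."""
    report = {}
    for indicator, query in REPORT_TEMPLATE.items():
        points = list(client.query(query).get_points())
        report[indicator] = points[0] if points else None
    return report
```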
  • The final test report obtained from the template in this embodiment is richer in content than the simple reports generated natively by the testing tools JMeter and Gatling; it makes it easier to analyze performance problems and bottlenecks, is convenient for users to view and analyze, and is flexible and easy to extend.
  • Moreover, because the test result data is stored in a time-series database such as InfluxDB rather than as files inside the individual testing tools, it can be preserved permanently without loss, which facilitates data access and display.
  • FIG. 2 is a schematic flowchart of an embodiment of a method for generating a performance test report of this application.
  • The method for generating a performance test report includes the following steps:
  • Step S10: obtain performance test result data of a preset performance testing tool;
  • Step S20: analyze the performance test indicators contained in the performance test result data, and convert the result data of each indicator into period-type or point-in-time-type time-series data according to preset rules; store the converted period-type or point-in-time-type time-series data in a preset time-series database;
  • Step S30: retrieve the corresponding time-series data from the preset time-series database according to user-defined test indicators, and form a performance test report.
  • In addition, the present application also provides a computer-readable storage medium storing a performance test report generation system that can be executed by at least one processor, so as to cause the at least one processor to execute the steps of the method for generating a performance test report in the foregoing embodiment.
  • The specific implementations of steps S10, S20, and S30 of the method are the same as those of steps S1, S2, and S3 described above and are not repeated here.
  • The technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied as a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present application relates to a method for generating a performance test report, an electronic device, and a readable storage medium. The method includes: obtaining performance test result data of a preset performance testing tool; analyzing the performance test indicators contained in the performance test result data, and converting the result data of each indicator into period-type or point-in-time-type time-series data according to preset rules; storing the converted period-type or point-in-time-type time-series data in a preset time-series database; and retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators to form a performance test report. In the present application, because all performance test result data is converted into time-series data and stored in a time-series database, the performance test result data can be preserved in the time-series database long-term without loss.

Description

Method for generating performance test report, electronic device and readable storage medium
Priority claim
This application claims priority under the Paris Convention to Chinese patent application No. CN 201810694756.9, filed on June 29, 2018 and entitled "Method for generating performance test report, electronic device and readable storage medium", the entire content of which is incorporated into this application by reference.
Technical field
The present application relates to the field of computer technology, and in particular to a method for generating a performance test report, an electronic device, and a readable storage medium.
Background
At present, the test reports produced by existing performance testing tools take various forms, and all have shortcomings. For example, the test report data generated by the performance testing tool JMeter can only be parsed and viewed by the tool itself; apart from JMeter, the results cannot be viewed, and sharing the result data is difficult because the tool itself must be passed along with it. The test result report generated by the performance testing tool Gatling is a fixed, static HTML file that cannot be modified or extended to share its test result data. That is, the performance test reports produced by existing performance testing tools are all stored as files inside the respective tools; they are hard to share, and the test result data is hard to preserve long-term and easy to lose.
Summary
The purpose of the present application is to provide a method for generating a performance test report, an electronic device, and a readable storage medium, intended to generate performance test reports that are convenient to share and preserve.
To achieve the above purpose, a first aspect of the present application provides an electronic device that includes a memory and a processor, the memory storing a performance test report generation system that can run on the processor. When the performance test report generation system is executed by the processor, the following steps are implemented:
obtaining performance test result data of a preset performance testing tool;
analyzing the performance test indicators contained in the performance test result data, and converting the result data of each indicator into period-type or point-in-time-type time-series data according to preset rules; storing the converted period-type or point-in-time-type time-series data in a preset time-series database;
retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators, and forming a performance test report.
In addition, to achieve the above purpose, a second aspect of the present application further provides a method for generating a performance test report, the method including:
obtaining performance test result data of a preset performance testing tool;
analyzing the performance test indicators contained in the performance test result data, and converting the result data of each indicator into period-type or point-in-time-type time-series data according to preset rules; storing the converted period-type or point-in-time-type time-series data in a preset time-series database;
retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators, and forming a performance test report.
Further, to achieve the above purpose, a third aspect of the present application further provides a computer-readable storage medium storing a performance test report generation system that can be executed by at least one processor, so as to cause the at least one processor to execute the steps of the method for generating a performance test report described above.
The method, electronic device, and readable storage medium for generating a performance test report proposed by the present application convert the performance test result data of a preset performance testing tool into time-series data according to preset rules and store it in a preset time-series database, and then retrieve the corresponding time-series data from the preset time-series database according to user-defined test indicators to form a performance test report. Because the performance test result data can be parsed, converted into time-series data, and stored in a time-series database once it is obtained from the testing tool, generating a report only requires retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators, so the result data can be shared very conveniently. Moreover, because all performance test result data is converted into time-series data and stored in the time-series database, it can be preserved there long-term without loss.
Brief description of the drawings
FIG. 1 is a schematic diagram of the operating environment of a preferred embodiment of the performance test report generation system 10 of the present application;
FIG. 2 is a schematic flowchart of an embodiment of the method for generating a performance test report of the present application.
Detailed description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present application and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this application without creative work fall within the scope of protection of this application.
It should be noted that descriptions involving "first", "second", and the like in this application are for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments can be combined with one another, but only on the basis that a person of ordinary skill in the art can realize the combination; when a combination of technical solutions is contradictory or cannot be realized, the combination should be considered not to exist and to fall outside the scope of protection claimed by this application.
The present application provides a performance test report generation system. Referring to FIG. 1, FIG. 1 is a schematic diagram of the operating environment of a preferred embodiment of the performance test report generation system 10 of the present application.
In this embodiment, the performance test report generation system 10 is installed and runs in the electronic device 1. The electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13. FIG. 1 shows only the electronic device 1 with components 11-13, but it should be understood that implementing all of the illustrated components is not required; more or fewer components may be implemented instead.
The memory 11 is at least one type of computer-readable storage medium. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk or internal memory of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 is used to store application software installed on the electronic device 1 and various types of data, such as the program code of the performance test report generation system 10. The memory 11 may also be used to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), microprocessor, or other data processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the performance test report generation system 10.
In some embodiments, the display 13 may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display 13 is used to display information processed in the electronic device 1 and to display a visual user interface, such as the finally generated performance test report. The components 11-13 of the electronic device 1 communicate with one another over a system bus.
The performance test report generation system 10 includes at least one computer-readable instruction stored in the memory 11, and the at least one computer-readable instruction can be executed by the processor 12 to implement the embodiments of the present application.
When the above performance test report generation system 10 is executed by the processor 12, the following steps are implemented:
Step S1: obtain performance test result data of a preset performance testing tool;
Step S2: analyze the performance test indicators contained in the performance test result data, and convert the result data of each indicator into period-type or point-in-time-type time-series data according to preset rules; store the converted period-type or point-in-time-type time-series data in a preset time-series database;
Step S3: retrieve the corresponding time-series data from the preset time-series database according to user-defined test indicators, and form a performance test report.
In this embodiment, the performance test report generation system receives a performance test report generation request from a user, for example a request sent from a client pre-installed on a terminal such as a mobile phone, tablet computer, or self-service kiosk, or a request sent from a browser running on such a terminal.
In this embodiment, after the request is received, the performance test result data of a preset performance testing tool such as the stress testing tool JMeter or the high-performance server testing tool Gatling is obtained; the generated result data is parsed, transformed into time-series data according to the performance-indicator algorithms, and stored in a time-series database such as the distributed time-series database InfluxDB. Users can predefine SQL statements into templates according to the key test indicators they need to view, thereby establishing key-indicator test report templates; the time-series data corresponding to the key indicators is then extracted from the time-series database (for example, InfluxDB) and filled into the established template, so that the user can view the test report for each key indicator after the performance run completes.
Specifically, the performance test indicators collected in this embodiment from the performance test result data of a preset performance testing tool, such as the stress testing tool JMeter or the high-performance server testing tool Gatling, include the following: 1. Number of registered users, i.e. the users already registered in the software; they are potential users of the system and may come online at any time, and this indicator lets the test engineer understand the total volume of data in the system and the maximum number of users that may be online at the same time. 2. Number of online users, i.e. the number of users logged into the system at a given moment; it counts only logged-in users, not all of whom necessarily operate the system and put load on the server. 3. Number of concurrent users, which, unlike the number of online users, is the number of online users sending requests to the server at a given moment and is an important measure of the server's concurrent capacity and its synchronization and coordination ability; in the broad sense it is the number of users sending the same or different requests at the same moment (covering identical requests for one service as well as different requests for several services), and in the narrow sense it is the number of users sending the same request for one service to the server at the same moment. 4. Request response time, i.e. the time the user perceives the software system spending to serve them; for a web system it is the time from when the client initiates a request to when the client receives the end of the response returned by the server. 5. Transaction response time: a transaction is the set of operations a user performs on the client to carry out one or more services, and its response time measures how long the user spends performing that set of operations; in performance testing it is generally obtained as the difference between the transaction's start and end times. 6. Hits per second, i.e. the number of HTTP requests submitted to the web server per second, a common measure of server processing capacity. 7. Throughput, usually the number of bytes returned by the server per unit time, or alternatively the number of requests submitted by clients per unit time; it is an important measure of a large web system's load capacity, and in general the higher the throughput, the more data is processed per unit time and the stronger the system's load capacity. 8. Business success rate, i.e. the success rate of operations initiated by multiple users on a given service. 9. Transactions per second (TPS), i.e. the number of transactions the server processes per second, a very important measure of system processing capacity; by measuring TPS under different user loads, the inflection point of the system's processing capacity can be estimated. 10. Resource utilization, i.e. the usage of resources, covering hardware monitoring indicators such as CPU usage, memory usage, network bandwidth usage, and disk I/O volume. In the present application, a single indicator may be selected from the above according to the user's needs for different test reports, or several indicators may be selected and combined by calculation to obtain the specific indicator the user requires.
After the performance test result data of each indicator is obtained, it is parsed and converted into time-series data. Time-series data is a column of data recorded in chronological order for one unified indicator; all values in the same column must be measured on the same basis so that they are comparable. Time-series data may consist of period figures or point-in-time figures, and in this embodiment each indicator is converted into period or point-in-time records according to its characteristics. For example, for the indicator "number of online users", testing usually focuses on how the count at the same moment of each day changes, so that a later report can compare the value at that moment across several days; the test data for this indicator is therefore converted into point-in-time data, e.g. the number of online users at the same moment on each day, and the values collected over several days form a time series of point-in-time figures. For the indicator "number of registered users", the test data is converted into period data, e.g. the count of registered users obtained by the same statistical and calculation method each day; the counts collected over several days form a time-series column of period figures. Which indicators are automatically converted into point-in-time figures and which into period figures can be preset in this embodiment, so that the conversion of performance test result data into time-series data is fully automatic. For example, when testing the concurrent processing performance of an online ticket-booking system that is required to support 100,000 booking transactions during the half-hour morning peak from 8:00 to 8:30 with a success rate of no less than 98%, the test data for the indicator "business success rate" needs to be converted into period data: the success rate over the 8:00-8:30 period is obtained by the same statistical and calculation method each day, and the values collected over several days form a time-series column of period figures.
After the performance test result data has been converted into time-series data, the converted data is stored in a time-series database such as the distributed time-series database InfluxDB. InfluxDB is an open-source distributed time-series, events, and metrics database, written in Go with no external dependencies. It has the following characteristics: 1. Time series: time-related functions (such as maximum, minimum, and sum) can be used. 2. Metrics: large volumes of data can be computed over in real time. 3. Events: arbitrary event data is supported. In the present application the converted time-series data is stored in InfluxDB under preset tags, which may include the machine name, the measurement time, the measured indicator, and so on. InfluxDB supports an HTTP API, so users can read from and write to it over HTTP. It has built-in common statistical functions such as max, min, and mean, which users can call to compute maxima, minima, sums, and other statistics over the stored time-series data to form the final test report. InfluxDB also works with Grafana, a pure HTML/JS application, so there are no cross-domain restrictions when Grafana accesses InfluxDB; once InfluxDB is configured as the data source, charts can be configured. In this embodiment, Grafana can be used to configure charts over the computed statistical time-series data in InfluxDB, presenting the user with a test report that contains visual charts.
This embodiment converts the performance test result data of a preset performance testing tool into time-series data according to preset rules and stores it in a preset time-series database, and then retrieves the corresponding time-series data from the database according to user-defined test indicators to form a performance test report. Because the result data can be parsed, converted into time-series data, and stored in a time-series database once it is obtained from the testing tool, generating a report only requires retrieving the corresponding time-series data according to user-defined test indicators, so the result data can be shared very conveniently. Moreover, because all performance test result data is converted into time-series data and stored in the time-series database, it can be preserved there long-term without loss.
Further, in an optional embodiment, the user can also define a test report template in advance, for example by predefining SQL statements into a template; the user can specify the performance test indicators and the various trend charts they need. After the indicator data has been converted into time-series data and stored in a time-series database such as InfluxDB, the time-series data corresponding to the key test indicators in the user-defined template is extracted from the database and filled into the established key-indicator test report template, so that the user can view the report for each key indicator after the performance run completes. The final test report obtained from the template in this embodiment is richer in content than the simple reports generated natively by the testing tools JMeter and Gatling; it makes it easier to analyze performance problems and bottlenecks, is convenient for users to view and analyze, and is flexible and easy to extend. Moreover, because the test result data is stored in a time-series database such as InfluxDB rather than as files inside the individual testing tools, it can be preserved permanently without loss, which facilitates data access and display.
As shown in FIG. 2, FIG. 2 is a schematic flowchart of an embodiment of the method for generating a performance test report of the present application. The method includes the following steps:
Step S10: obtain performance test result data of a preset performance testing tool;
Step S20: analyze the performance test indicators contained in the performance test result data, and convert the result data of each indicator into period-type or point-in-time-type time-series data according to preset rules; store the converted period-type or point-in-time-type time-series data in a preset time-series database;
Step S30: retrieve the corresponding time-series data from the preset time-series database according to user-defined test indicators, and form a performance test report.
In addition, the present application also provides a computer-readable storage medium storing a performance test report generation system that can be executed by at least one processor, so as to cause the at least one processor to execute the steps of the method for generating a performance test report in the above embodiment. The specific implementations of steps S10, S20, and S30 of the method are the same as those of steps S1, S2, and S3 described above and are not repeated here.
It should be noted that, as used herein, the terms "comprise", "include", or any variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element qualified by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or apparatus that includes that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied as a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the embodiments of the present application.
The preferred embodiments of the present application have been described above with reference to the drawings, which does not thereby limit the scope of rights of the present application. The serial numbers of the above embodiments are for description only and do not represent the merits of the embodiments. In addition, although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that shown here.
Those skilled in the art can implement the present application in many variants without departing from its scope and essence; for example, a feature of one embodiment can be used in another embodiment to obtain yet another embodiment. Any modification, equivalent replacement, or improvement made within the technical concept of the present application shall fall within the scope of rights of the present application.

Claims (20)

  1. An electronic device, wherein the electronic device comprises a memory and a processor, the memory storing a performance test report generation system that can run on the processor, and when the performance test report generation system is executed by the processor, the following steps are implemented:
    obtaining performance test result data of a preset performance testing tool;
    analyzing the performance test indicators contained in the performance test result data, and converting the performance test result data of each performance test indicator into period-type or point-in-time-type time-series data according to preset rules; storing the converted period-type or point-in-time-type time-series data in a preset time-series database;
    retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators, and forming a performance test report.
  2. The electronic device according to claim 1, wherein the step of converting the performance test result data into time-series data according to preset rules and storing it in a preset time-series database comprises:
    determining whether each performance test indicator corresponding to the performance test result data belongs to a preset period-type range or a preset point-in-time-type range;
    if a performance test indicator belongs to the preset period-type range, converting the performance test result data corresponding to that indicator into period-type time-series data;
    if a performance test indicator belongs to the preset point-in-time-type range, converting the performance test result data corresponding to that indicator into point-in-time-type time-series data;
    storing the converted period-type time-series data and point-in-time-type time-series data in the preset time-series database.
  3. The electronic device according to claim 2, wherein the preset time-series database is the distributed time-series database InfluxDB.
  4. The electronic device according to claim 1, wherein the step of retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators and forming a performance test report comprises:
    retrieving the corresponding time-series data from the preset time-series database according to the user-defined test indicators in a preset test report template, filling the retrieved time-series data into the positions in the preset test report template corresponding to the user-defined test indicators, computing over the time-series data corresponding to the user-defined test indicators with preset statistical functions, and generating preset statistical charts from the computed results to form the performance test report, the preset statistical functions comprising maximum, minimum, and/or sum statistical functions.
  5. The electronic device according to claim 2, wherein the step of retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators and forming a performance test report comprises:
    retrieving the corresponding time-series data from the preset time-series database according to the user-defined test indicators in a preset test report template, filling the retrieved time-series data into the positions in the preset test report template corresponding to the user-defined test indicators, computing over the time-series data corresponding to the user-defined test indicators with preset statistical functions, and generating preset statistical charts from the computed results to form the performance test report, the preset statistical functions comprising maximum, minimum, and/or sum statistical functions.
  6. The electronic device according to claim 3, wherein the step of retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators and forming a performance test report comprises:
    retrieving the corresponding time-series data from the preset time-series database according to the user-defined test indicators in a preset test report template, filling the retrieved time-series data into the positions in the preset test report template corresponding to the user-defined test indicators, computing over the time-series data corresponding to the user-defined test indicators with preset statistical functions, and generating preset statistical charts from the computed results to form the performance test report, the preset statistical functions comprising maximum, minimum, and/or sum statistical functions.
  7. The electronic device according to claim 1, wherein the performance test indicators corresponding to the performance test result data comprise:
    at least one of the number of online users, the number of concurrent users, request response time, transaction response time, transactions per second (TPS), throughput, business success rate, and resource utilization.
  8. The electronic device according to claim 2, wherein the performance test indicators corresponding to the performance test result data comprise:
    at least one of the number of online users, the number of concurrent users, request response time, transaction response time, transactions per second (TPS), throughput, business success rate, and resource utilization.
  9. The electronic device according to claim 3, wherein the performance test indicators corresponding to the performance test result data comprise:
    at least one of the number of online users, the number of concurrent users, request response time, transaction response time, transactions per second (TPS), throughput, business success rate, and resource utilization.
  10. A method for generating a performance test report, wherein the method for generating a performance test report comprises:
    obtaining performance test result data of a preset performance testing tool;
    analyzing the performance test indicators contained in the performance test result data, and converting the performance test result data of each performance test indicator into period-type or point-in-time-type time-series data according to preset rules; storing the converted period-type or point-in-time-type time-series data in a preset time-series database;
    retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators, and forming a performance test report.
  11. The method for generating a performance test report according to claim 10, wherein the step of converting the performance test result data into time-series data according to preset rules and storing it in a preset time-series database comprises:
    determining whether each performance test indicator corresponding to the performance test result data belongs to a preset period-type range or a preset point-in-time-type range;
    if a performance test indicator belongs to the preset period-type range, converting the performance test result data corresponding to that indicator into period-type time-series data;
    if a performance test indicator belongs to the preset point-in-time-type range, converting the performance test result data corresponding to that indicator into point-in-time-type time-series data;
    storing the converted period-type time-series data and point-in-time-type time-series data in the preset time-series database.
  12. The method for generating a performance test report according to claim 11, wherein the preset time-series database is the distributed time-series database InfluxDB.
  13. The method for generating a performance test report according to claim 10, wherein the step of retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators and forming a performance test report comprises:
    retrieving the corresponding time-series data from the preset time-series database according to the user-defined test indicators in a preset test report template, filling the retrieved time-series data into the positions in the preset test report template corresponding to the user-defined test indicators, computing over the time-series data corresponding to the user-defined test indicators with preset statistical functions, and generating preset statistical charts from the computed results to form the performance test report, the preset statistical functions comprising maximum, minimum, and/or sum statistical functions.
  14. The method for generating a performance test report according to claim 11, wherein the step of retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators and forming a performance test report comprises:
    retrieving the corresponding time-series data from the preset time-series database according to the user-defined test indicators in a preset test report template, filling the retrieved time-series data into the positions in the preset test report template corresponding to the user-defined test indicators, computing over the time-series data corresponding to the user-defined test indicators with preset statistical functions, and generating preset statistical charts from the computed results to form the performance test report, the preset statistical functions comprising maximum, minimum, and/or sum statistical functions.
  15. The method for generating a performance test report according to claim 12, wherein the step of retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators and forming a performance test report comprises:
    retrieving the corresponding time-series data from the preset time-series database according to the user-defined test indicators in a preset test report template, filling the retrieved time-series data into the positions in the preset test report template corresponding to the user-defined test indicators, computing over the time-series data corresponding to the user-defined test indicators with preset statistical functions, and generating preset statistical charts from the computed results to form the performance test report, the preset statistical functions comprising maximum, minimum, and/or sum statistical functions.
  16. The method for generating a performance test report according to claim 10, wherein the performance test indicators corresponding to the performance test result data comprise:
    at least one of the number of online users, the number of concurrent users, request response time, transaction response time, transactions per second (TPS), throughput, business success rate, and resource utilization.
  17. The method for generating a performance test report according to claim 11, wherein the performance test indicators corresponding to the performance test result data comprise:
    at least one of the number of online users, the number of concurrent users, request response time, transaction response time, transactions per second (TPS), throughput, business success rate, and resource utilization.
  18. The method for generating a performance test report according to claim 12, wherein the performance test indicators corresponding to the performance test result data comprise:
    at least one of the number of online users, the number of concurrent users, request response time, transaction response time, transactions per second (TPS), throughput, business success rate, and resource utilization.
  19. A computer-readable storage medium, wherein the computer-readable storage medium stores a performance test report generation system, and when the performance test report generation system is executed by a processor, the following steps are implemented:
    obtaining performance test result data of a preset performance testing tool;
    analyzing the performance test indicators contained in the performance test result data, and converting the performance test result data of each performance test indicator into period-type or point-in-time-type time-series data according to preset rules; storing the converted period-type or point-in-time-type time-series data in a preset time-series database;
    retrieving the corresponding time-series data from the preset time-series database according to user-defined test indicators, and forming a performance test report.
  20. The computer-readable storage medium according to claim 19, wherein the step of converting the performance test result data into time-series data according to preset rules and storing it in a preset time-series database comprises:
    determining whether each performance test indicator corresponding to the performance test result data belongs to a preset period-type range or a preset point-in-time-type range;
    if a performance test indicator belongs to the preset period-type range, converting the performance test result data corresponding to that indicator into period-type time-series data;
    if a performance test indicator belongs to the preset point-in-time-type range, converting the performance test result data corresponding to that indicator into point-in-time-type time-series data;
    storing the converted period-type time-series data and point-in-time-type time-series data in the preset time-series database.
PCT/CN2018/107704 2018-06-29 2018-09-26 Method for generating performance test report, electronic device and readable storage medium WO2020000726A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810694756.9 2018-06-29
CN201810694756.9A CN108845914A (zh) 2018-06-29 2018-06-29 Method for generating performance test report, electronic device and readable storage medium

Publications (1)

Publication Number Publication Date
WO2020000726A1 true WO2020000726A1 (zh) 2020-01-02

Family

ID=64201775

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/107704 WO2020000726A1 (zh) 2018-06-29 2018-09-26 Method for generating performance test report, electronic device and readable storage medium

Country Status (2)

Country Link
CN (1) CN108845914A (zh)
WO (1) WO2020000726A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245051B (zh) * 2019-06-14 2023-07-04 上海中通吉网络技术有限公司 Data tracking method, device, equipment and storage medium
CN111143198B (zh) * 2019-12-10 2023-04-28 湖北大学 Test data processing method and apparatus
CN111209285A (zh) * 2020-04-23 2020-05-29 成都四方伟业软件股份有限公司 Statistical indicator storage method and apparatus based on time-series data
CN111611746A (zh) * 2020-05-20 2020-09-01 中国公路工程咨询集团有限公司 Database management system for intelligent connected vehicle testing
CN113608981B (zh) * 2021-07-27 2024-01-05 远景智能国际私人投资有限公司 Time-series database testing method, apparatus, computer device and storage medium
CN113609008A (zh) * 2021-07-27 2021-11-05 北京淇瑀信息科技有限公司 Test result analysis method and apparatus, and electronic device
CN113742226B (zh) * 2021-09-01 2024-04-30 上海浦东发展银行股份有限公司 Software performance testing method, apparatus, medium and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105045710A * 2015-06-30 2015-11-11 吉林大学 Automated test data generation method in a cloud computing environment
US20160183110A1 (en) * 2014-12-22 2016-06-23 International Business Machines Corporation Network performance testing in non-homogeneous networks
CN106529145A * 2016-10-27 2017-03-22 浙江工业大学 Bridge monitoring data prediction method based on an ARIMA-BP neural network
CN107832226A * 2017-11-23 2018-03-23 中国平安人寿保险股份有限公司 Report generation method, apparatus, device and computer medium based on performance testing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331477B * 2014-11-04 2017-08-25 哈尔滨工业大学 Cloud platform concurrency performance testing method based on federated retrieval
CN105279065B * 2015-09-30 2018-01-16 北京奇虎科技有限公司 Method and apparatus for aggregating test results in a cloud testing platform
CN107544897A * 2017-08-25 2018-01-05 重庆扬讯软件技术股份有限公司 Performance testing method and system based on integrated real-time monitoring

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160183110A1 (en) * 2014-12-22 2016-06-23 International Business Machines Corporation Network performance testing in non-homogeneous networks
CN105045710A * 2015-06-30 2015-11-11 吉林大学 Automated test data generation method in a cloud computing environment
CN106529145A * 2016-10-27 2017-03-22 浙江工业大学 Bridge monitoring data prediction method based on an ARIMA-BP neural network
CN107832226A * 2017-11-23 2018-03-23 中国平安人寿保险股份有限公司 Report generation method, apparatus, device and computer medium based on performance testing

Also Published As

Publication number Publication date
CN108845914A (zh) 2018-11-20

Similar Documents

Publication Publication Date Title
WO2020000726A1 (zh) Method for generating performance test report, electronic device and readable storage medium
CN109271411B (zh) Report generation method, apparatus, computer device and storage medium
US9037555B2 (en) Asynchronous collection and correlation of trace and communications event data
CN110851465B (zh) Data query method and system
US20150170070A1 (en) Method, apparatus, and system for monitoring website
US8683489B2 (en) Message queue transaction tracking using application activity trace data
CN111143286B (zh) Cloud platform log management method and system
US8533743B2 (en) System and method of analyzing business process events
WO2019085307A1 (zh) Data sampling method, terminal, device and computer-readable storage medium
US9971563B2 (en) Systems and methods for low interference logging and diagnostics
US11954133B2 (en) Method and apparatus for managing and controlling resource, device and storage medium
CN110147470B (zh) Cross-datacenter data comparison system and method
CN110784377A (zh) Method for unified management of cloud monitoring data in a multi-cloud environment
CN110266555B (zh) Method for analyzing website service requests
CN113010542B (zh) Business data processing method, apparatus, computer device and storage medium
CN110851317A (zh) Method, apparatus, device and storage medium for predicting IOPS performance data of a storage device
CN112631879A (zh) Data collection method and apparatus, computer-readable medium and electronic device
EP4209933A1 (en) Data processing method and apparatus, and electronic device and storage medium
JP2010097285A (ja) System analysis support program, system analysis support device, and system analysis support method
CN110020166A (zh) Data analysis method and related device
CN115242799B (zh) Data reporting method, apparatus, device, storage medium and program product
CN114328214B (zh) Method, apparatus and computer device for improving efficiency of interface test cases in report software
CN115629889A (zh) Multi-source data normalization processing method, apparatus, system and storage medium
CN114301893A (zh) Log management method, system and readable storage medium
CN114579390A (zh) Data processing method based on cloud computing management platform project and related apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18924911

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/03/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18924911

Country of ref document: EP

Kind code of ref document: A1