CN108848149B - Method and device for adaptively positioning maximum processing capacity of HTTP (hyper text transport protocol) service - Google Patents


Info

Publication number: CN108848149B
Application number: CN201810569068.XA
Authority: CN (China)
Prior art keywords: data, http request, http, request, processing capacity
Other languages: Chinese (zh)
Other versions: CN108848149A
Inventors: Li Chengcheng (李诚诚), Zhu Hui (朱慧)
Original and current assignee: Wacai Network Technology Co., Ltd.
Application filed by Wacai Network Technology Co., Ltd.
Priority to CN201810569068.XA
Publication of application CN108848149A, then grant publication CN108848149B
Legal status: Active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 — Protocols
    • H04L 67/02 — Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/50 — Network services
    • H04L 67/60 — Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention relates to a method and a device for adaptively positioning the maximum processing capacity of an HTTP service. The method comprises: obtaining http requests from the nginx log; replicating the environment from the production environment to a docker container; an http request interactive tool acquiring data from redis and sending it to the docker container; and determining whether to continue testing according to the monitoring information of docker. The device comprises an initial data acquisition unit, an initial environment arrangement unit, an http request interactive tool and a test termination judgment unit. By analyzing the front-end proxy server log, the invention replays the online http requests in docker and deploys the corresponding online applications, completing the initialization of the stress test environment; it then runs the test, obtains container monitoring information by monitoring docker, and automatically expands the http request data until the test yields the maximum processing capacity.

Description

Method and device for adaptively positioning maximum processing capacity of HTTP (hyper text transport protocol) service
Technical Field
The invention relates to the field of performance testing, and in particular to a method and a device for adaptively positioning the maximum processing capacity of an HTTP service.
Background
In the traditional method for measuring the maximum processing capacity of an http service, many operation and maintenance personnel are required to deploy a test environment for the application system behind the http service, and differences between the test environment and the production environment in network topology, hardware configuration and number of devices introduce errors between the final test result and the actual result. Professional, experienced performance test engineers are required to write test scripts and execute the performance tests; the test cycle is long, the cost is high, and test results cannot be delivered in time.
In the prior art, for example the CPC performance test system based on online requests, gor intercepts the front-end request traffic, serializes the captured data to redis, and then sends requests to the stress test environment at a predetermined tps through ngrinder. gor is a network traffic recording tool: traffic must pass through gor before it can be recorded. Using gor therefore intrudes on the online system and may cause problems in the online environment. Moreover, because gor records traffic in real time, the traffic of the peak period is not easy to capture. When the CPC system is used, the user still has to deploy the stress test environment manually, which remains a heavy operation and maintenance burden. In addition, the CPC system can only issue requests at the predetermined tps; to find the maximum processing capacity of the system under test, the test has to be adjusted and repeated many times.
Therefore, the prior art has the following problems: 1) traditional performance testing requires a large amount of manpower to deploy the test environment; 2) the traditional approach of intercepting front-end network traffic intrudes on the production environment and increases the system maintenance burden; 3) testers have to write test scripts for the http requests under test, which consumes a large amount of research, development and testing time; 4) performance tests run in a test environment cannot reflect the processing capacity of the production system in time.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a method and an apparatus for adaptively locating the maximum processing capacity of an HTTP service: the front-end proxy server log is analyzed and the corresponding http requests are stored to redis, and according to the analyzed front-end requests, the online applications are deployed to a docker test environment. These two steps are performed automatically by analyzing the nginx log, without manual involvement. The online http requests are replayed in docker against the deployed online applications, completing the initialization of the stress test environment; the test is then run, container monitoring information is obtained by monitoring docker, and the http request data is automatically expanded until the test yields the maximum processing capacity.
In order to solve the above technical problem, the application embodiment is implemented as follows:
the embodiment of the application provides a method for adaptively positioning the maximum processing capacity of an HTTP service, which comprises the following steps:
obtaining an http request from the nginx log;
replicating the environment from the production environment to a docker container;
the http request interactive tool acquires data from the redis and sends the data to the docker container;
and determining whether to continue testing or not according to the monitoring information of the docker.
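Taken together, these four steps form a feedback loop: replay captured traffic against the container, watch the monitoring indexes, and grow the traffic until one index reaches its safety threshold. A minimal Python sketch of that control loop, assuming the square-root headroom factor described in the embodiments; all names here (`max_capacity`, `replay`, `fake_replay`) are invented for illustration, not taken from the patent:

```python
import math

def max_capacity(replay, thresholds):
    """Grow replayed traffic until a monitored index reaches its safety
    threshold, then report the processing capacity measured at that point.

    replay(multiple) returns (observed index dict, processing capacity);
    thresholds maps each index name to its preset safety threshold.
    """
    multiple = 1.0
    while True:
        observed, capacity = replay(multiple)
        if any(observed[k] >= thresholds[k] for k in thresholds):
            return capacity  # an index hit its safety threshold: stop
        # Per-index growth factor; keep only factors > 1, take the minimum.
        factors = [math.sqrt(thresholds[k] / observed[k]) for k in thresholds]
        growth = [f for f in factors if f > 1]
        if not growth:       # no headroom left to grow safely
            return capacity
        multiple *= min(growth)

def fake_replay(m):
    """Toy stand-in for a docker replay: cpu saturates past 2x traffic."""
    cpu = 20 if m < 2 else 80
    return {"cpu": cpu}, 100.0 * m

print(max_capacity(fake_replay, {"cpu": 70}))  # close to 350.0
```

In a real deployment `replay` would drive the http request sending tool and read the docker container monitor; the toy `fake_replay` only makes the stop condition visible.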
As a preferred embodiment, the obtaining an http request from the nginx log includes:
scanning the log file by using a data stream scanning tool and storing the log file into a redis;
key values are generated by extracting the requested key data from the log file and stored to the redis.
In a preferred embodiment, the data stream scanning tool is the java.util.Scanner class that ships with the Java development kit.
As a preferred embodiment, the key value includes a time stamp of the log file.
As a preferred embodiment, said replicating an environment from a production environment to a docker container comprises:
obtaining the list of applications to be deployed by matching the address of the http request against the addresses of the http requests supported by the applications of the production environment;
and downloading the corresponding docker image file according to the application list, and deploying and starting the docker container.
As a preferred embodiment, the http request interactive tool comprises an http request initiating tool and an http request sending tool; the http request initiating tool acquires data from the redis, and the http request sending tool sends the data to the docker container; the http request initiating tool obtains data from redis, and the http request initiating tool comprises:
using a redis client tool to pull request traffic data from the redis;
determining the time of the request according to the maximum time stamp and the minimum time stamp in the data;
and in the determined request time period, the http request sending tool sends the http request data traffic to the docker container.
As a preferred embodiment, the determining whether to continue the test according to the monitoring information of docker includes:
acquiring monitoring information through a docker container monitoring tool, wherein the monitoring information comprises the current processing capacity and multiple specific indexes of the docker container;
comparing each specific index of the docker container with a safety threshold, resetting request flow data when each specific index data is smaller than the safety threshold, and sending the request flow data to the docker container through an http request sending tool; and when one specific index data reaches a safety threshold value, the current processing capacity is the maximum processing capacity of the http service.
As a preferred embodiment, the resetting the request traffic data includes:
obtaining, through a corresponding algorithm, the multiple m by which the request data traffic is expected to be increased;
and the http request sending tool automatically expanding the request data traffic to m times the original traffic while keeping the request time unchanged.
In a preferred embodiment, the algorithm for calculating the flow multiple includes:
calculating, for each specific index, a flow multiple n according to the formula

n = sqrt(threshold / index)

wherein index is the specific index obtained by the docker container monitoring tool and threshold is the preset safety threshold of that index;
and keeping the results n greater than 1, and selecting the minimum value m among the remaining results n as the multiple m by which the request data traffic is expected to be increased.
The embodiment of the application provides a device for adaptively positioning the maximum processing capacity of an HTTP service, which comprises:
the initial data acquisition unit is used for acquiring an http request from the nginx log;
an initial environment arrangement unit for copying an environment from a production environment to a docker container;
the http request interactive tool is used for acquiring data from the redis and sending the data to the docker container;
and the test termination judging unit determines whether to continue testing according to the monitoring information of the docker.
The method and the device greatly shorten the time needed to obtain the maximum processing capacity of an http service. In practice, the maximum processing capacity of a single system can be determined within 3 calendar days, whereas a full manual performance test of one system takes at least 20 working days, so testing efficiency is greatly improved. Moreover, the maximum processing capacity of the online system can be reflected in time, the monitoring system can adjust its monitoring expectations in time, and capacity expansion can be completed before the system can no longer bear the load.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a flow chart of step 2 of the method of the present invention.
FIG. 3 is a flow chart of step 3 of the method of the present invention.
FIG. 4 is a flow chart of step 4 of the method of the present invention.
FIG. 5 is a flow chart of step 5 of the method of the present invention.
FIG. 6 is a diagram of the technical architecture of the method of the present invention.
Fig. 7 is a block diagram of functional units of the apparatus of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more of the examples in this specification. Rather, they are merely examples of apparatus and methods consistent with aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that in other embodiments the steps of the respective methods need not be performed in the order illustrated and described herein, and in some other embodiments the methods may include more or fewer steps than those described here. Moreover, individual steps described in this description may, in other embodiments, be combined into a single step.
As shown in fig. 1-6, the method comprises the steps of:
step 1, deploying a reverse proxy server nginx, wherein an http request passes through the nginx before being sent to an online production environment server cluster.
And 2, acquiring an http request from the nginx log. The method can comprise the following steps:
step 201, using a data stream scanning tool to scan a log file and store the log file to a redis;
step 202, extracting the requested key data from the log file to generate a key value and storing the key value to the redis.
The data stream scanning tool is the java.util.Scanner class that ships with the Java development kit, and the key value includes the timestamp of the log file. The file content is scanned with java.util.Scanner and, instead of storing the raw file, each record is converted to json and stored to redis in real time. During scanning, the request traffic of a specified time period is extracted according to the timestamps in the log: the url, request type and request parameters of each request are read from the log file and stored to redis. The stored key value is the application system name + timestamp + a 4-character hash value. For example, if a request to the Wacai (挖财, "treasure digging") system is recorded at 11:22:00 on 11 November 2017, the stored key value is: wacai1510370520abcd.
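The key/value construction can be sketched as follows. The log line layout and the md5-based 4-character hash are assumptions for illustration; the patent only fixes the shape "application name + timestamp + 4-character hash" and the json payload of url, request type and parameters:

```python
import hashlib
import json

def log_line_to_record(app_name, line):
    """Turn one assumed nginx access-log line into a redis key plus a json
    payload holding the url, request type and request parameters."""
    # Assumed log layout: "<unix-timestamp> <method> <url-with-query>"
    ts, method, url = line.split(maxsplit=2)
    path, _, params = url.partition("?")
    digest = hashlib.md5(line.encode("utf-8")).hexdigest()[:4]
    key = f"{app_name}{ts}{digest}"  # app name + timestamp + 4-char hash
    value = json.dumps({"url": path, "type": method, "params": params})
    return key, value

key, value = log_line_to_record("wacai", "1510370520 GET /finence/web/list?page=1")
```

In the real flow the (key, value) pair would then be written to redis in real time; only the transformation itself is shown here.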
And 3, copying the environment from the production environment to a docker container. The method can comprise the following steps:
step 301, obtaining an application list to be deployed according to the matching between the address of the http request and the address of the http request supported by the application of the production environment.
Take the website of Wacai Network Technology Co., Ltd. as an example. A user sends a request to the network address http://8.wacai.com/finence/web/list. After this address is obtained, the mapping files of the nginx service are scanned to find the configuration corresponding to http://8.wacai.com/finence/web/list. That configuration file records the application service port, server 127.1.2.3:8080, and the corresponding domain name of the system application. Examining the application deployed at 127.1.2.3 shows that the corresponding system is the Wacai (treasure digging) system.
And 302, downloading a corresponding docker image file according to the application list, and deploying and starting the docker container.
Take the website of Wacai Network Technology Co., Ltd. as an example. The image version of the Wacai system deployed online is looked up in the company image repository; the corresponding docker image file is downloaded according to the application list, and the docker container is deployed and started, completing the initialization of the stress test environment. The process involves: searching for the docker image, pulling it from the company image repository, deploying the image, and then testing. All of these steps can be completed directly with the corresponding docker commands.
And 4, the http request interactive tool acquires data from the redis and sends the data to the docker container. The http request interactive tool comprises an http request initiating tool and an http request sending tool; and the http request initiating tool acquires data from the redis, and the http request sending tool sends the data to the docker container.
The http request initiating tool obtains data from redis, and the http request initiating tool comprises:
step 401, using a redis client tool to pull the requested traffic data from the redis.
The redis client tool may be jedis, an open-source Java toolkit for obtaining data from redis. The key values meeting the condition are obtained with the corresponding jedis commands, and the traffic data is pulled accordingly.
Step 402, determining the time of the request according to the maximum time stamp and the minimum time stamp in the data.
For example, the key value includes a timestamp; suppose the maximum timestamp of the pulled data is 1510370520 (corresponding to 11:22:00 on 11 November 2017) and the minimum timestamp is 1510369860 (11:11:00 on 11 November 2017). 1510370520 minus 1510369860 is 660 seconds, so the request time is 660 seconds.
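Under the key layout of the earlier example (application name, then a 10-digit timestamp, then a 4-character hash; an assumed but consistent shape), steps 401 and 402 reduce to a small computation:

```python
def request_window(keys, app_name="wacai"):
    """Pull the 10-digit timestamp out of each key value and return
    (min timestamp, max timestamp, request time in seconds)."""
    stamps = [int(k[len(app_name):len(app_name) + 10]) for k in keys]
    return min(stamps), max(stamps), max(stamps) - min(stamps)

lo, hi, duration = request_window(["wacai1510369860aaaa", "wacai1510370520abcd"])
# duration is 660 seconds, matching the 11:11:00 to 11:22:00 window above
```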
And step 403, in the above determined request time period, the http request sending tool sends the http request data traffic to the docker container.
The http request sending tool may use AsyncHttpClient, an asynchronous http client that improves the sending efficiency of http requests. Continuing the example above, the http request data traffic is sent to the docker container with AsyncHttpClient within the 660-second window.
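The replay must preserve the relative timing of the captured requests inside that window. A pacing sketch with the actual send action injected (the patent uses AsyncHttpClient in Java; `send` here is a hypothetical placeholder so the scheduling logic is visible on its own):

```python
def replay_schedule(records, send):
    """Replay records [(timestamp, payload), ...] preserving each request's
    offset from the start of the capture window; send(delay, payload)
    performs the actual http call (a real replayer would sleep/schedule)."""
    start = min(ts for ts, _ in records)
    for ts, payload in sorted(records):
        send(ts - start, payload)  # seconds after window start

sent = []
replay_schedule([(1510370520, "req-b"), (1510369860, "req-a")],
                lambda delay, payload: sent.append((delay, payload)))
# req-a goes out at offset 0, req-b at offset 660
```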
And step 5, determining whether to continue testing or not according to the monitoring information of the docker. The method can comprise the following steps:
step 501, obtaining monitoring information through a docker container monitoring tool, wherein the monitoring information comprises the current processing capacity and multiple specific indexes of the docker container.
The request data traffic and the request time are extracted from the monitoring information, and the current processing capacity is calculated as request data traffic / request time.
The specific indexes can include cpu usage, memory usage, network traffic and disk read-write capacity of the docker container.
Step 502, comparing each specific index of the docker container with a safety threshold, resetting request flow data when each specific index data is smaller than the safety threshold, and sending the request flow data to the docker container through an http request sending tool; and when one specific index data reaches a safety threshold value, the current processing capacity is the maximum processing capacity of the http service.
The request traffic data may be reset as follows: the multiple m by which the request data traffic is expected to be increased is obtained through a corresponding algorithm, and the http request sending tool automatically expands the request data traffic to m times the original traffic while keeping the request time unchanged.
The algorithm for calculating the flow multiple is as follows. For each specific index, a flow multiple n is calculated according to the formula

n = sqrt(threshold / index)

where index is the specific index obtained by the docker container monitoring tool and threshold is the preset safety threshold of that index. The results n greater than 1 are kept, and the minimum value m among the remaining results n is selected as the multiple m by which the request data traffic is expected to be increased.
Take cpu and memory as examples. Suppose the current cpu utilization is 20% with a cpu safety threshold of 70%, and the memory usage is 2000 Mb with a memory safety threshold of 4096 Mb. According to the cpu condition, the test traffic could be increased by a factor of sqrt(70% / 20%) ≈ 1.87; according to the memory usage, it could be increased by a factor of sqrt(4096 / 2000) ≈ 1.43. The smaller of the two, 1.43, is taken. Accordingly, with the request time unchanged, the system automatically expands the request volume to 1.43 times the original and recalculates the docker monitoring information. This repeats until one of the cpu usage, memory usage, network traffic or disk read-write volume reaches its threshold, whereupon the method outputs the current processing capacity and the test is finished.
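The worked example can be checked numerically. Note that the square-root form of the multiplier is reconstructed from the 1.43 figure above, since the original equation image is not reproduced in this text:

```python
import math

def traffic_multiple(indexes, thresholds):
    """n = sqrt(threshold / index) for each specific index; keep the
    results greater than 1 and return the smallest as the multiple m.
    Returns None when no index leaves room to grow."""
    factors = [math.sqrt(thresholds[k] / indexes[k]) for k in thresholds]
    growth = [n for n in factors if n > 1]
    return min(growth) if growth else None

m = traffic_multiple({"cpu": 20, "mem": 2000}, {"cpu": 70, "mem": 4096})
# cpu headroom gives sqrt(70/20) ≈ 1.87, memory gives sqrt(4096/2000) ≈ 1.43
```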
As shown in fig. 7, in a software implementation, the apparatus for adaptively locating the maximum processing capacity of the HTTP service may include:
an initial data obtaining unit 71, configured to obtain an http request from the nginx log;
an initial environment arrangement unit 72 for copying the environment from the production environment to the docker container;
the http request interactive tool 73 acquires data from the redis and sends the data to the docker container;
the test termination judging unit 74 determines whether to continue the test based on the monitoring information of docker.

Claims (9)

1. The method for adaptively positioning the maximum processing capacity of the HTTP service is characterized by comprising the following steps:
obtaining an http request from the nginx log;
replicating the environment from the production environment to a docker container, specifically: obtaining an application list to be deployed according to the fact that the address of the http request is matched with the address of the http request supported by the application of the production environment; downloading a corresponding docker image file according to the application list, and deploying and starting a docker container;
the http request interactive tool acquires data from the redis and sends the data to the docker container;
and determining whether to continue testing or not according to the monitoring information of the docker.
2. The method for adaptively positioning maximum processing capacity of an HTTP service according to claim 1, wherein the obtaining an HTTP request from a nginx log comprises:
scanning the log file by using a data stream scanning tool and storing the log file into a redis;
key values are generated by extracting the requested key data from the log file and stored to the redis.
3. The method for adaptively locating maximum processing capacity of an HTTP service as recited in claim 2, wherein the data stream scanning tool is the java.util.Scanner class that ships with the Java development kit.
4. The method of adaptively locating maximum processing capacity for an HTTP service as recited in claim 2, wherein the key value comprises a timestamp of a log file.
5. The method for adaptively positioning maximum processing capacity of an HTTP service according to claim 1, wherein the HTTP request interacting means comprises an HTTP request initiating means and an HTTP request sending means; the http request initiating tool acquires data from the redis, and the http request sending tool sends the data to the docker container;
the http request initiating tool obtains data from redis, and the http request initiating tool comprises:
using a redis client tool to pull request traffic data from the redis;
determining the time of the request according to the maximum time stamp and the minimum time stamp in the data;
and in the determined request time period, the http request sending tool sends the http request data traffic to the docker container.
6. The method for adaptively positioning maximum processing capacity of HTTP service according to claim 1, wherein the determining whether to continue the test according to monitoring information of docker comprises:
acquiring monitoring information through a docker container monitoring tool, wherein the monitoring information comprises the current processing capacity and multiple specific indexes of the docker container;
comparing each specific index of the docker container with a safety threshold, resetting request flow data when each specific index data is smaller than the safety threshold, and sending the request flow data to the docker container through an http request sending tool;
and when one specific index data reaches a safety threshold value, the current processing capacity is the maximum processing capacity of the http service.
7. The method of adaptively positioning maximum processing capacity for HTTP services as recited in claim 6, wherein the resetting the request traffic data comprises:
obtaining a request data flow multiple m expected to be improved through a corresponding algorithm;
and the http request sending tool automatically expanding the request data traffic to m times the original traffic while keeping the request time unchanged.
8. The method for adaptively locating maximum processing capacity of an HTTP service as recited in claim 7, wherein the algorithm for calculating the traffic multiplier comprises:
calculating, for each specific index, a flow multiple n according to the formula

n = sqrt(threshold / index)

wherein index is the specific index obtained by the docker container monitoring tool and threshold is the preset safety threshold of that index;
and keeping the results n greater than 1, and selecting the minimum value m among the remaining results n as the multiple m by which the request data traffic is expected to be increased.
9. An apparatus for adaptively locating maximum processing capacity of an HTTP service, comprising:
the initial data acquisition unit is used for acquiring an http request from the nginx log;
an initial environment arrangement unit for copying an environment from a production environment to a docker container;
the http request interactive tool is used for acquiring data from the redis and sending the data to the docker container;
and the test termination judging unit determines whether to continue testing according to the monitoring information of the docker.
CN201810569068.XA 2018-06-05 2018-06-05 Method and device for adaptively positioning maximum processing capacity of HTTP (hyper text transport protocol) service Active CN108848149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810569068.XA CN108848149B (en) 2018-06-05 2018-06-05 Method and device for adaptively positioning maximum processing capacity of HTTP (hyper text transport protocol) service


Publications (2)

Publication Number Publication Date
CN108848149A CN108848149A (en) 2018-11-20
CN108848149B true CN108848149B (en) 2021-01-19

Family

ID=64211302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810569068.XA Active CN108848149B (en) 2018-06-05 2018-06-05 Method and device for adaptively positioning maximum processing capacity of HTTP (hyper text transport protocol) service

Country Status (1)

Country Link
CN (1) CN108848149B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188027A (en) * 2019-05-31 2019-08-30 深圳前海微众银行股份有限公司 Performance estimating method, device, equipment and the storage medium of production environment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015035816A1 * 2013-09-12 2015-03-19 ZTE Corporation (中兴通讯股份有限公司) Nginx server configuration maintenance method and system
CN106888254A * 2017-01-20 2017-06-23 South China University of Technology (华南理工大学) An interaction method between a Kubernetes-based container cloud architecture and its modules
CN107566493A * 2017-09-06 2018-01-09 Institute of Information Engineering, Chinese Academy of Sciences (中国科学院信息工程研究所) A proxy node creation method, proxy service device and system for complex user requests




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant