CN106557353A - Server performance index evaluation method for container-hosted service applications - Google Patents

Server performance index evaluation method for container-hosted service applications Download PDF

Info

Publication number
CN106557353A
CN106557353A CN201610962625.5A
Authority
CN
China
Prior art keywords
server
evaluation
container
scoring
cpu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610962625.5A
Other languages
Chinese (zh)
Inventor
姚嵩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TIANJIN LIGHT INDUSTRY VOCATIONAL TECHNICAL COLLEGE
Original Assignee
TIANJIN LIGHT INDUSTRY VOCATIONAL TECHNICAL COLLEGE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TIANJIN LIGHT INDUSTRY VOCATIONAL TECHNICAL COLLEGE filed Critical TIANJIN LIGHT INDUSTRY VOCATIONAL TECHNICAL COLLEGE
Priority to CN201610962625.5A priority Critical patent/CN106557353A/en
Publication of CN106557353A publication Critical patent/CN106557353A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0805Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
    • H04L43/0817Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a server performance index evaluation method for container-hosted service applications. Its technical feature is that it comprises the following steps: Step 1, establish a server performance index evaluation system for container-hosted service applications; Step 2, obtain the evaluation index data for server CPU, memory size, and hard disk I/O in real time from the pseudo file system; Step 3, calculate the server health score; Step 4, when any one of the three evaluation index scores or the server health score stays above 80 points for an extended period, prompt the system administrator to pay attention to the server's running status; Step 5, display the statistics of the three host-system evaluation indexes, the server health score, and alarm information; Step 6, the system administrator identifies containers that consume large amounts of system resources according to the score and the container running score and decides whether to eliminate them; Step 7, repeat the above process. The present invention enables real-time status monitoring of process quotas and resource overhead for container-hosted service applications.

Description

Server performance index evaluation method for container-hosted service applications
Technical field
The invention belongs to the technical field of server virtualization, and in particular relates to a server performance index evaluation method for container-hosted service applications.
Background technology
At present, server virtualization has evolved from a niche concept adopted only by the largest and boldest enterprises into an indispensable part of day-to-day data center operation.
However, existing server virtualization technology still suffers from the following defects:
1. Guest system performance declines because the virtualization layer coordinates resources.
Performance is one of the main concerns when adopting virtualization technology. Because virtualization adds an abstraction layer between the guest and the host, it increases the latency of guest tasks.
For example, in hardware virtualization, when simulating a bare machine on which a complete system can be installed, the performance loss is attributable to the overhead of the following activities:
maintaining the state of the virtual processors; supporting privileged instructions (trapping and emulating them); supporting virtual machine paging; and console functions.
In addition, when hardware virtualization is implemented by a program installed in or executed by the host operating system, the main cause of reduced performance is that the virtual machine manager executes and is scheduled alongside other applications and therefore shares the host's resources.
Higher-level virtualization technologies have similar problems, for example programming-language-level virtualization (Java, .NET, etc.). Binary translation and interpretation also reduce application execution speed. Moreover, because execution is filtered through the runtime environment, access to memory and other physical resources likewise reduces performance.
2. The abstraction layer of the virtualization management software prevents the host from being used optimally, so host efficiency is low or quality of service is reduced.
Virtualization sometimes leads to inefficient use of the host, particularly when certain host-specific functions cannot be represented by the abstraction layer and therefore become inaccessible. In a hardware virtualization environment this can happen with device drivers: the virtual machine sometimes provides only a default graphics card that maps only part of the host's capabilities. In programming-level virtual machine environments, some underlying operating system features become inaccessible unless specific libraries are used. For example, in the first versions of Java, graphics programming support was very limited and application interfaces and user experience were poor.
3. Implicit security holes and threats, mostly caused by simulating a different execution environment.
Virtualization has bred new and hard-to-anticipate malicious phishing attacks that can simulate the architectural environment in a completely transparent way, so that malicious programs can extract sensitive information from the guest.
In a hardware virtualization environment, a malicious program can be installed before the operating system and act as a tiny virtual machine manager. It then controls and manipulates the operating system and can extract sensitive information and pass it to third parties. Malware of this type includes BluePill and SubVirt. BluePill targets the AMD processor family and moves execution of the installed operating system entirely into a virtual machine. Microsoft developed the early SubVirt prototype in cooperation with the University of Michigan. SubVirt infects the guest operating system, and when the virtual machine restarts it gains control of the host. Malware of this type can spread because the original hardware and CPU products did not take virtualization into account; the existing instruction set cannot be updated with simple changes to meet the needs of virtualization.
Programming-level virtual machines have the same problem: changes to the runtime environment can expose sensitive information or monitor the memory locations used by guest applications. The initial state of the runtime environment may thus be altered and replaced, and if the virtual machine manager contains malware or exploitable security holes in the host operating system are used, security problems will occur frequently.
Summary of the invention
The purpose of the present invention is to overcome the deficiencies of the prior art and to provide a server performance index evaluation method for container-hosted service applications that is reasonably designed, achieves high resource multiplexing and utilization efficiency, and is based on a dynamic threshold mode.
The present invention solves its technical problem by adopting the following technical scheme:
A server performance index evaluation method for container-hosted service applications comprises the following steps:
Step 1: select the evaluation indexes that affect server health and determine the weight of each evaluation criterion, establishing a server performance index evaluation system for container-hosted service applications.
The evaluation indexes that affect server health include: server CPU, memory size, and hard disk.
Step 2: obtain the evaluation index data for server CPU, memory size, and hard disk I/O in real time from the pseudo file system.
Step 3: calculate the server health score according to the three evaluation indexes that affect server health and the evaluation criteria for those indexes.
Step 4: when any one of the three evaluation index scores or the server health score remains above 80 points for an extended period, prompt the system administrator to pay attention to the server's running status.
Step 5: display the statistics of the three host-system evaluation indexes, the server health score, and alarm information in the system monitoring interface.
Step 6: the system administrator identifies containers that consume large amounts of system resources according to the score and the container running score and decides whether to eliminate them.
Step 7: repeat the above process.
Advantages and positive effects of the present invention:
1. The present invention takes the Namespace and CGroups mechanisms underlying container technology as its main research object, uses Linux kernel features to achieve real-time status monitoring of process quotas and resource overhead for container-hosted service applications, performs comparative performance analysis against conventional physical bare machines and mainstream virtualization management systems, and establishes a service-application health performance index evaluation system based on a dynamic threshold mode.
2. The present invention further improves resource multiplexing and utilization efficiency.
3. Using Linux kernel features, the present invention achieves real-time status monitoring of process quotas and resource overhead for container-hosted service applications.
Specific embodiments
The embodiments of the present invention are described in further detail below:
A server performance index evaluation method for container-hosted service applications comprises the following steps:
Step 1: select the evaluation indexes that affect server health and determine the weight of each evaluation criterion, establishing a server performance index evaluation system for container-hosted service applications.
The evaluation indexes that affect server health include: server CPU, memory size, and hard disk.
To achieve reliable and stable operation of an application system, a high-quality running environment comprising hardware and system software is essential. Factors such as CPU, memory size, and hard disk I/O bandwidth have a decisive influence on the running environment, so these decisive factors, together with network I/O bandwidth as an influencing factor, are selected for comprehensive analysis to assess the server's running status.
The basic principle of Docker performance monitoring differs from conventional virtual machine monitoring in that the containers themselves must also be monitored. Traditional virtual machine monitoring schemes obtain metrics from each server and the applications running on it; those servers and applications are usually static and have very long run times. Containers, by contrast, appear on the host as processes with their own environment, virtual networks, and separate storage management; they share the underlying host's resources, and processes of different priorities, both short-lived and long-lived, may be scheduled on the same host. Docker performance monitoring therefore comprises two aspects: host-based performance index monitoring and container-based performance index monitoring.
1. Host-based evaluation indexes
Relative to containers, Docker hosts are long-running, so the host's running status should be monitored so that all kinds of resources and processing capacity can be managed and optimized.
CPU load and memory load: monitoring these two indexes makes it possible to track the host's load in real time and make full use of the host's capacity. Changes in these two indexes also reveal the running status of containers indirectly: a sudden drop in CPU load means that the service in some container has been interrupted, and a CPU load exceeding a certain maximum (for example 90%) for a long time means that the host's load capacity has reached its limit. In either situation the administrator should be prompted.
Host disk space: the host operating system, containers, and images all consume disk space, so disk space usage becomes one of the indexes limiting the number of containers a host can support. Periodically cleaning the disk and removing unused containers and images to free disk space is good operations practice and allows more containers to be hosted. Because disk read/write frequency has a significant effect on processes and even on the overall operation of the host, it must be monitored to avoid deploying multiple applications with high disk read/write frequency on the same host and causing the host to stall.
Total number of containers running on the host: in static usage scenarios, knowing the current and historical number of containers helps ensure during upgrades that all functions have the same running status and deployment as before.
2. Container-based evaluation indexes
Resource sharing requires reasonable quotas for containers, which in turn requires visibility into how containers use resources. The behavior of containers therefore needs to be monitored and tuned accordingly.
Container CPU: the container's total CPU time, together with the quota metric, provides the basic information needed to set CPU shares correctly in Docker. A spike in this index means that the CPU processing capacity required by one or more containers exceeds what the host can provide.
Container memory usage, container swap usage, and container memory failure counters: a rise in these indexes means that the container needs more memory than has been allocated to it. Monitoring and limiting the upper bound of a container's memory usage ensures that an application does not use too much memory and affect other containers on the same host.
Container disk read/write frequency: multiple containers can use the same host resources concurrently. Monitoring container disk read/write frequency helps allocate higher throughput to critical applications, such as data stores or web servers, while batch jobs can have their disk I/O diverted.
Container network traffic: for interrelated containers, such as a container's load balancer, monitoring the virtual network is very important. Dropped packets need to be tracked, and network traffic reflects how clients are using the application; a very high traffic peak may indicate a denial-of-service attack, a load test, or a failure in the client application.
Step 2: monitor the host system resources and Docker container resource usage in real time, obtaining the evaluation index data for server CPU, memory size, and hard disk I/O in real time from the pseudo file system.
In this embodiment, Docker containers run mainly on Linux systems, so this embodiment is based on performance monitoring of a 64-bit Linux system.
The monitoring methods for host system resources and for Docker container resource usage are described separately below:
1. Host system resource usage monitoring method
Through the /proc file system, the Linux kernel provides a mechanism for accessing internal kernel data structures and changing kernel settings at runtime. Unlike ordinary file systems, /proc is a pseudo file system that stores a series of special files reflecting the current running state of the kernel. Users can view information about the system hardware and the currently running processes through these files, and can even change the kernel's running state by modifying some of them.
Because of this special nature of the /proc file system, its files are often called virtual files and have some unique characteristics. For example, although some of them return large amounts of information when read, the size of the file itself is shown as 0 bytes. In addition, the time and date attributes of most of these special files are usually the current system time and date, because they are refreshed continuously.
For convenience of inspection and use, these files are usually classified by relevance and stored in different directories and even subdirectories. For example, /proc/scsi stores information about all SCSI devices on the current system, and /proc/N stores information about a currently running process, where N is the process ID; when the process terminates, its directory disappears. Global host system resource status is represented by files directly under /proc; the files relevant to host performance monitoring are as follows:
(1) /proc/cpuinfo: information about the host CPU.
In this file, each processor of the host system corresponds to one group of data, and the groups are separated by blank lines. Each group is divided into data items by line, and each data item is organized as key: value. From this file the processor model, clock frequency, cache size, and other information can be obtained.
(2) /proc/diskstats: a list of disk I/O statistics for every disk device; Linux versions with kernel 2.5.69 or later support this function.
The data in this file are organized as one device record per line. A device record consists of the major device number, the minor device number, the device name, and 11 data fields, whose meanings are as follows (a minimal parsing sketch follows the list):
1) Field 1: number of reads completed successfully.
2) Field 2: number of reads merged.
3) Field 3: number of sectors read successfully.
4) Field 4: number of milliseconds spent reading.
5) Field 5: number of writes completed successfully.
6) Field 6: number of writes merged.
7) Field 7: number of sectors written successfully.
8) Field 8: number of milliseconds spent writing.
9) Field 9: number of I/Os currently in progress; the value increases when a request is dispatched to the appropriate request queue and decreases when the request completes.
10) Field 10: number of milliseconds spent doing I/O.
11) Field 11: weighted number of milliseconds spent doing I/O; this field is incremented at each I/O start, I/O completion, and I/O merge, and provides an easy measure of both I/O completion time and the backlog that may be accumulating.
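A minimal Python sketch of reading these per-device counters, assuming the /proc/diskstats layout described above (major number, minor number, device name, then the 11 fields); the function name read_diskstats is illustrative only:

```python
# Sketch: read per-device I/O counters from /proc/diskstats.
# Field layout after major, minor, device name: reads completed, reads merged,
# sectors read, ms reading, writes completed, writes merged, sectors written,
# ms writing, I/Os in progress, ms doing I/O, weighted ms doing I/O.
def read_diskstats(path="/proc/diskstats"):
    stats = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 14:
                continue
            dev = parts[2]
            fields = list(map(int, parts[3:14]))
            stats[dev] = {
                "reads_completed": fields[0],
                "reads_merged": fields[1],
                "sectors_read": fields[2],
                "ms_reading": fields[3],
                "writes_completed": fields[4],
                "writes_merged": fields[5],
                "sectors_written": fields[6],
                "ms_writing": fields[7],
                "ios_in_progress": fields[8],
                "ms_doing_io": fields[9],
                "weighted_ms_doing_io": fields[10],
            }
    return stats
```

Sampling this file twice and differencing the counters yields the disk read/write frequency over the interval.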
(3) /proc/meminfo: information about the system's current physical memory utilization. This file is divided into data items by line, each organized as key: value; by reading and parsing the desired data items directly, each memory usage sub-item of the host system can be obtained (a parsing sketch is given below). Correspondingly, the /proc/swaps file provides the host's swap space usage, including the device on which the swap space resides, the swap space size, and the amount consumed.
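A minimal Python sketch of parsing such key: value files, shown here for /proc/meminfo (the same pattern applies to /proc/cpuinfo). The derived memory_used_percent helper is an illustrative assumption rather than part of the patented method, and uses MemAvailable where present (kernels 3.14 and later):

```python
# Sketch: parse "key: value" files such as /proc/meminfo; values are in kB.
def read_meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = line.split(":", 1)
            info[key.strip()] = value.strip()
    return info

# Illustrative helper: memory utilization from MemTotal and MemAvailable;
# older kernels would need MemFree + Buffers + Cached instead.
def memory_used_percent():
    m = read_meminfo()
    total = int(m["MemTotal"].split()[0])
    avail = int(m.get("MemAvailable", m["MemFree"]).split()[0])
    return 100.0 * (total - avail) / total
```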
(4) /proc/net/dev: host system network traffic statistics. The file is organized by interface name, and the data items for each interface include (a parsing sketch follows the list):
1) bytes: the number of bytes received/transmitted;
2) packets: the number of packets received/transmitted correctly;
3) errs: the number of receive/transmit errors;
4) drop: the number of packets dropped on receive/transmit;
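A minimal Python sketch of reading these per-interface counters, assuming the usual /proc/net/dev layout of two header lines followed by one line per interface with eight receive fields and eight transmit fields:

```python
# Sketch: per-interface traffic counters from /proc/net/dev.
# Each data line is "<iface>: <8 receive fields> <8 transmit fields>";
# fields 1-4 of each half are bytes, packets, errs, drop.
def read_net_dev(path="/proc/net/dev"):
    traffic = {}
    with open(path) as f:
        for line in list(f)[2:]:            # skip the two header lines
            iface, data = line.split(":", 1)
            fields = list(map(int, data.split()))
            traffic[iface.strip()] = {
                "rx_bytes": fields[0], "rx_packets": fields[1],
                "rx_errs": fields[2], "rx_drop": fields[3],
                "tx_bytes": fields[8], "tx_packets": fields[9],
                "tx_errs": fields[10], "tx_drop": fields[11],
            }
    return traffic
```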
(5) /proc/stat: system running statistics accumulated since the system last started, including:
1) the eight values after the "cpu" line represent, in units of 1/100 second, the time the system has spent in user mode, low-priority (nice) user mode, system mode, idle, I/O wait, and so on;
2) the "intr" line gives interrupt information: the first value is the total number of interrupts that have occurred since system startup, and each subsequent value is the number of times a specific interrupt has occurred since startup;
3) "ctxt" gives the number of CPU context switches since system startup;
4) "btime" gives the time at which the system booted, in seconds since the Unix epoch;
5) "processes" gives the number of tasks created since system startup;
6) "procs_running": the number of tasks currently in the run queue;
7) "procs_blocked": the number of tasks currently blocked;
Because the CPU state given in /proc/stat is cumulative data since system startup, CPU usage cannot be read from it directly; it must be calculated from CPU snapshots taken over a time interval. The CPU usage calculation method is as follows (a code sketch follows the steps):
Step (1): take two CPU snapshots separated by a sufficiently short interval, denoted t1 and t2, where each of t1 and t2 is the 9-tuple (user, nice, system, idle, iowait, irq, softirq, steal, guest);
Step (2): calculate the total CPU time slice totalCpuTime;
a) sum all CPU usage values of the first snapshot to obtain s1;
b) sum all CPU usage values of the second snapshot to obtain s2;
c) s2 - s1 gives all time slices in this interval, i.e. total = s2 - s1;
Step (3): calculate the idle time;
a) idle corresponds to the idle column; the first reading is denoted idle1 and the second idle2;
b) idle = idle2 - idle1;
Step (4): calculate the CPU usage;
a) pcpu = 100 * (total - idle) / total
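A minimal Python sketch of this calculation, sampling the aggregate "cpu" line of /proc/stat twice over a short interval (interval length and function names are illustrative):

```python
import time

# Sketch: CPU usage from two snapshots of the aggregate "cpu" line in /proc/stat:
# usage = 100 * (total - idle) / total over the sampling interval.
def cpu_snapshot(path="/proc/stat"):
    with open(path) as f:
        fields = f.readline().split()[1:]   # aggregate "cpu" line, drop the label
    values = list(map(int, fields))
    total = sum(values)
    idle = values[3]                        # 4th field is idle time
    return total, idle

def cpu_usage_percent(interval=1.0):
    total1, idle1 = cpu_snapshot()
    time.sleep(interval)
    total2, idle2 = cpu_snapshot()
    total = total2 - total1
    idle = idle2 - idle1
    return 100.0 * (total - idle) / total if total else 0.0
```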
By reading the data in the above files in real time, the usage of the main host resources such as CPU, memory, network, and disk I/O can be monitored.
2. Monitoring method for Docker container resource usage
Docker containers rely on cgroups, through which the metrics of the containers' processes, CPU, memory, block I/O, and network usage can be tracked. The data are exposed through a pseudo file system similar to /proc: cgroups is located under the operating system's /sys/fs/cgroup directory, which contains multiple subdirectories representing the cgroup hierarchies for different device resources. For each running Docker container there is a corresponding file directory under each cgroup hierarchy, named after the container's long ID (a 64-character hexadecimal value). These subdirectories are created when the container starts and disappear when the container is destroyed.
All memory metrics for running Docker containers can be obtained under the /sys/fs/cgroup/memory/docker/<long_id> directory, CPU metrics under /sys/fs/cgroup/cpuacct/docker/<long_id>, and disk I/O metrics under /sys/fs/cgroup/blkio/docker/<long_id>, where <long_id> is the container's long ID.
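A minimal Python sketch of locating these per-container directories, assuming the cgroup v1 layout with Docker's cgroupfs driver as described above; the helper names are illustrative:

```python
import os

# Sketch: per-container cgroup directories, cgroup v1 layout
# /sys/fs/cgroup/<controller>/docker/<long_id> (long_id = 64-char hex container ID).
CGROUP_ROOT = "/sys/fs/cgroup"

def container_cgroup_paths(long_id):
    return {
        "memory": os.path.join(CGROUP_ROOT, "memory", "docker", long_id),
        "cpuacct": os.path.join(CGROUP_ROOT, "cpuacct", "docker", long_id),
        "blkio": os.path.join(CGROUP_ROOT, "blkio", "docker", long_id),
    }

def list_running_container_ids():
    # Running containers appear as 64-character directories under the docker cgroup.
    docker_dir = os.path.join(CGROUP_ROOT, "memory", "docker")
    return [d for d in os.listdir(docker_dir)
            if len(d) == 64 and os.path.isdir(os.path.join(docker_dir, d))]
```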
(1) Container memory usage:
The memory.stat information is read from the cgroup in which the container resides; each data item is organized as key: value, and the key data items are as follows (a parsing sketch follows the list):
1) cache: page cache, including tmpfs (shmem), in bytes
2) rss: anonymous and swap cache, not including tmpfs (shmem), in bytes
3) mapped_file: size of memory-mapped files, including tmpfs (shmem), in bytes
4) pgpgin: number of pages paged into memory
5) pgpgout: number of pages paged out of memory
6) swap: swap usage, in bytes
7) active_anon: anonymous and swap cache on the active least-recently-used (LRU) list, including tmpfs (shmem), in bytes
8) inactive_anon: anonymous and swap cache on the inactive LRU list, including tmpfs (shmem), in bytes
9) active_file: file-backed memory on the active LRU list, in bytes
10) inactive_file: file-backed memory on the inactive LRU list, in bytes
11) unevictable: memory that cannot be reclaimed, in bytes
12) hierarchical_memory_limit: the memory limit of the hierarchy containing the memory cgroup, in bytes;
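A minimal Python sketch of reading memory.stat from a container's memory cgroup directory (the read_memory_stat name is illustrative):

```python
import os

# Sketch: read the container's memory.stat; each line is "<key> <value>",
# with byte-valued keys such as cache, rss, swap, mapped_file and the
# active/inactive anon/file counters listed above.
def read_memory_stat(cgroup_dir):
    stats = {}
    with open(os.path.join(cgroup_dir, "memory.stat")) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats
```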
(2) Container disk I/O usage:
Disk I/O information is obtained from the blkio directory; under this directory each data item is organized as a separate file, and the data items contained in the files are as follows (a sketch for reading one of these files follows the list):
1) blkio.time: the time for which the cgroup had I/O access to each device;
2) blkio.io_serviced: the number of I/O operations (read, write, sync, and async) performed by the cgroup on specific devices;
3) blkio.sectors: the number of device sectors accessed by the cgroup;
4) blkio.io_service_bytes: the amount of data transferred in I/O operations (read, write, sync, and async) by the cgroup on specific devices;
5) blkio.io_queued: the number of requests queued by the cgroup for I/O operations (read, write, sync, and async);
6) blkio.io_service_time: the time spent by the cgroup on I/O operations (read, write, sync, and async) on specific devices, in nanoseconds;
7) blkio.io_merged: the number of BIOS requests merged by the cgroup into I/O operation requests (read, write, sync, and async);
8) blkio.io_wait_time: the time spent by the cgroup's I/O operations (read, write, sync, and async) waiting in the queues of each device, in nanoseconds;
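A minimal Python sketch of reading one of these blkio statistics files, assuming the usual one-line-per-device, per-operation format with a trailing Total line:

```python
import os

# Sketch: read a blkio statistics file (e.g. blkio.io_serviced or
# blkio.io_service_bytes). Lines are "<major:minor> <op> <value>", where op is
# Read, Write, Sync, Async or Total; here the values are summed per operation.
def read_blkio_file(cgroup_dir, filename="blkio.io_serviced"):
    totals = {}
    with open(os.path.join(cgroup_dir, filename)) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3:             # skip the bare trailing "Total <value>" line
                _, op, value = parts
                totals[op] = totals.get(op, 0) + int(value)
    return totals
```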
(3) Container network bandwidth usage:
The /sys/fs/cgroup/net_cls/docker/<long_id> directory stores the network control group data automatically generated by cgroups for the container.
(4) Container CPU usage:
The /sys/fs/cgroup/cpuacct/docker/<long_id> directory stores the report on the container's CPU resource usage automatically generated by cgroups. The cpuacct.stat file in it records the CPU time consumed in user mode and kernel mode by the processes in the control group, in units of USER_HZ, i.e. jiffies (CPU ticks). The number of ticks per second can be obtained with getconf CLK_TCK and is typically 100. By reading the data in these files, the main resource usage of the corresponding container can be obtained, thereby monitoring Docker container resource usage (a reading sketch follows).
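A minimal Python sketch of reading cpuacct.stat for a container (helper names are illustrative); user and system ticks are converted to seconds using the system's CLK_TCK:

```python
import os

# Sketch: read the container's user/system CPU time from cpuacct.stat.
# Values are in USER_HZ ticks (typically 100 per second, cf. getconf CLK_TCK).
def read_cpuacct_stat(cgroup_dir):
    ticks = {}
    with open(os.path.join(cgroup_dir, "cpuacct.stat")) as f:
        for line in f:
            key, value = line.split()
            ticks[key] = int(value)
    return ticks                            # {"user": ..., "system": ...}

def container_cpu_seconds(cgroup_dir):
    user_hz = os.sysconf("SC_CLK_TCK")      # ticks per second, typically 100
    t = read_cpuacct_stat(cgroup_dir)
    return (t["user"] + t["system"]) / user_hz
```

Sampling this value twice over an interval gives the share of CPU time the container consumed during that interval.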
Step 3: calculate the server health score according to the three evaluation indexes that affect server health and the evaluation criteria for those indexes.
The host system evaluation criteria use a weighted scale assessment; the evaluation criteria scale is as follows:
In the table, virtual memory swaps = virtual memory pages swapped in + virtual memory pages swapped out.
In this embodiment, considering that the different application systems carried by the server have different needs for the above three evaluation indexes, the server's comprehensive health score is calculated as a weighted combination of the three scores; the lower the server health score, the better the server's running status.
Server health score = ((a1 * CPU score) + (a2 * memory score) + (a3 * hard disk score)) / (a1 + a2 + a3)
CPU score: the percentage of CPU capacity occupied by busy time slices.
Memory score: calculated as a percentage, with 10 pages/second per CPU core counting as 100 points.
Hard disk score: calculated as a percentage, with an I/O wait of 50% counting as 100 points.
a1, a2, and a3 are the weight coefficients of the three key factors and can be preset by the system administrator (a computation sketch follows).
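A minimal Python sketch of the weighted health score above; the scores and weights passed in are illustrative values chosen by the administrator:

```python
# Sketch: weighted server health score. cpu_score, mem_score and disk_score are
# the three index scores (0-100, higher = more loaded); a1, a2, a3 are the
# administrator-configured weights. Lower composite scores mean a healthier server.
def server_health_score(cpu_score, mem_score, disk_score, a1=1.0, a2=1.0, a3=1.0):
    return (a1 * cpu_score + a2 * mem_score + a3 * disk_score) / (a1 + a2 + a3)

# Illustrative example: CPU 40% busy (40 points), paging at 5 pages/s per core
# (50 points on the 10 pages/s = 100 scale), 20% I/O wait (40 points on the
# 50% = 100 scale), with CPU weighted twice as heavily.
score = server_health_score(40.0, 50.0, 40.0, a1=2, a2=1, a3=1)  # -> 42.5
```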
The system administrator can adjust the deployment of Docker containers according to the server health score and the differences among the three evaluation index scores, avoiding deploying applications that consume the same class of resource on the same Docker host, so as to make full use of system resources. A relatively low server health score combined with a relatively high score for one evaluation index indicates that the resource measured by that index is being consumed excessively; the corresponding application system container should be undeployed and replaced with an application system whose scores on the other evaluation indexes are relatively low.
Step 4: when any one of the three evaluation index scores or the server health score remains above 80 points for an extended period, prompt the system administrator to pay attention to the server's running status (a sketch of this alerting logic follows).
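A minimal Python sketch of one possible way to implement this sustained-threshold alert; the sample_fn callback, the 300-second sustain window, and the print-based notification are assumptions for illustration, not specified by the patent:

```python
import time

# Sketch (assumed alerting logic): notify the administrator only when a score
# stays above the 80-point threshold for a sustained period, not on a single spike.
def watch_score(sample_fn, threshold=80.0, sustain_seconds=300, interval=10):
    above_since = None
    while True:
        score = sample_fn()                 # e.g. server_health_score(...) wrapped in a callback
        if score > threshold:
            above_since = above_since or time.time()
            if time.time() - above_since >= sustain_seconds:
                print("ALERT: score %.1f above %.0f for %ds; check server status"
                      % (score, threshold, sustain_seconds))
                above_since = None          # re-arm after alerting
        else:
            above_since = None
        time.sleep(interval)
```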
Step 5: display the statistics of the three host-system evaluation indexes, the server health score, alarm information, and related data in the system monitoring interface.
Step 6: the system administrator identifies containers that consume large amounts of system resources according to the score and the container running score and decides whether to eliminate them.
In this embodiment, the application system containers to be eliminated are determined by assessing the health status of the software containers. The assessment is a single-factor ranking evaluation: the containers are ranked by their consumption of one of the resources such as CPU, memory size, or hard disk I/O bandwidth, in order to find the container with the largest consumption of the system resource corresponding to that key factor as the candidate for elimination. Elimination is performed manually by the system administrator, so that the administrator has the final decision on whether to eliminate a container (a ranking sketch follows).
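A minimal Python sketch of the single-factor ranking; metric_fn is an assumed callback that returns the chosen resource metric for a container (for example, rss from memory.stat via the sketches above):

```python
# Sketch: rank running containers by a single resource metric and present the
# top consumers to the administrator, who decides whether to eliminate them.
def top_consumers(container_ids, metric_fn, n=5):
    return sorted(container_ids, key=metric_fn, reverse=True)[:n]

# Illustrative usage with the assumed helpers sketched earlier:
# ids = list_running_container_ids()
# worst = top_consumers(ids, lambda cid: read_memory_stat(
#     container_cgroup_paths(cid)["memory"]).get("rss", 0))
```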
Step 7: repeat the above process.
It should be emphasized that the embodiments described here are illustrative rather than restrictive; the present invention is therefore not limited to the embodiments described in this detailed description, and other embodiments derived by those skilled in the art from the technical scheme of the present invention also fall within the protection scope of the present invention.

Claims (1)

1. A server performance index evaluation method for container-hosted service applications, characterized by comprising the following steps:
Step 1: select the evaluation indexes that affect server health and determine the weight of each evaluation criterion, establishing a server performance index evaluation system for container-hosted service applications;
the evaluation indexes that affect server health include: server CPU, memory size, and hard disk;
Step 2: obtain the evaluation index data for server CPU, memory size, and hard disk I/O in real time from the pseudo file system;
Step 3: calculate the server health score according to the three evaluation indexes that affect server health and the evaluation criteria for those indexes;
Step 4: when any one of the three evaluation index scores or the server health score remains above 80 points for an extended period, prompt the system administrator to pay attention to the server's running status;
Step 5: display the statistics of the three host-system evaluation indexes, the server health score, and alarm information in the system monitoring interface;
Step 6: the system administrator identifies containers that consume large amounts of system resources according to the score and the container running score and decides whether to eliminate them;
Step 7: repeat the above process.
CN201610962625.5A 2016-11-04 2016-11-04 Server performance index evaluation method for container-hosted service applications Pending CN106557353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610962625.5A CN106557353A (en) 2016-11-04 2016-11-04 Server performance index evaluation method for container-hosted service applications

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610962625.5A CN106557353A (en) 2016-11-04 2016-11-04 Server performance index evaluation method for container-hosted service applications

Publications (1)

Publication Number Publication Date
CN106557353A true CN106557353A (en) 2017-04-05

Family

ID=58444167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610962625.5A Pending CN106557353A (en) 2016-11-04 2016-11-04 Server performance index evaluation method for container-hosted service applications

Country Status (1)

Country Link
CN (1) CN106557353A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101533366A (en) * 2009-03-09 2009-09-16 浪潮电子信息产业股份有限公司 Method for acquiring and analyzing performance data of server
CN102929667A (en) * 2012-10-24 2013-02-13 曙光信息产业(北京)有限公司 Method for optimizing hadoop cluster performance
CN104239193A (en) * 2014-09-04 2014-12-24 浪潮电子信息产业股份有限公司 Linux-based CPU (Central Processing Unit) and memory usage rate collection method
CN104298339A (en) * 2014-10-11 2015-01-21 东北大学 Server integration method oriented to minimum energy consumption
CN105357296A (en) * 2015-10-30 2016-02-24 河海大学 Elastic caching system based on Docker cloud platform

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107562536A (en) * 2017-08-07 2018-01-09 华迪计算机集团有限公司 A kind of system and method for determining electronic document system performance
CN107463410A (en) * 2017-08-11 2017-12-12 四川长虹电器股份有限公司 A kind of method disposed with monitoring online platform application
CN107943677A (en) * 2017-10-13 2018-04-20 东软集团股份有限公司 Application performance monitoring method, device, readable storage medium storing program for executing and electronic equipment
WO2019134292A1 (en) * 2018-01-08 2019-07-11 武汉斗鱼网络科技有限公司 Container allocation method and apparatus, server and medium
CN109241096A (en) * 2018-08-01 2019-01-18 北京京东金融科技控股有限公司 Data processing method, device and system
CN109117351B (en) * 2018-09-27 2020-06-02 四川虹微技术有限公司 Front-end display implementation method for Docker container cloud host and Dashboard
CN109117351A (en) * 2018-09-27 2019-01-01 四川长虹电器股份有限公司 A kind of front end displaying implementation method of Docker container cloud host and Dashboard
CN110096339A (en) * 2019-05-10 2019-08-06 重庆八戒电子商务有限公司 A kind of scalable appearance configuration recommendation system and method realized based on system load
CN112069017A (en) * 2019-06-11 2020-12-11 顺丰科技有限公司 Business system monitoring method and device
CN110928676A (en) * 2019-07-18 2020-03-27 国网浙江省电力有限公司衢州供电公司 Power CPS load distribution method based on performance evaluation
CN110928676B (en) * 2019-07-18 2022-03-11 国网浙江省电力有限公司衢州供电公司 Power CPS load distribution method based on performance evaluation
CN112559142A (en) * 2019-09-26 2021-03-26 贵州白山云科技股份有限公司 Container control method, device, edge calculation system, medium and equipment
CN112559142B (en) * 2019-09-26 2023-12-19 贵州白山云科技股份有限公司 Container control method, device, edge computing system, medium and equipment
CN111158856A (en) * 2019-12-20 2020-05-15 天津大学 Container visualization system based on Docker
CN111290858A (en) * 2020-05-11 2020-06-16 腾讯科技(深圳)有限公司 Input/output resource management method, device, computer equipment and storage medium
CN111708671A (en) * 2020-06-11 2020-09-25 杭州尚尚签网络科技有限公司 Method and system for evaluating utilization efficiency of server resources and allocating resources
CN112256653A (en) * 2020-11-06 2021-01-22 网易(杭州)网络有限公司 Data sampling method and device
CN112256653B (en) * 2020-11-06 2024-02-02 网易(杭州)网络有限公司 Data sampling method and device
CN112565388A (en) * 2020-12-01 2021-03-26 中盈优创资讯科技有限公司 Distributed acquisition service scheduling system and method based on scoring system
CN112559129A (en) * 2020-12-16 2021-03-26 西安电子科技大学 Device and method for testing load balancing function and performance of virtualization platform
CN112559129B (en) * 2020-12-16 2023-03-10 西安电子科技大学 Device and method for testing load balancing function and performance of virtualization platform

Similar Documents

Publication Publication Date Title
CN106557353A (en) Server performance index evaluation method for container-hosted service applications
Cortez et al. Resource central: Understanding and predicting workloads for improved resource management in large cloud platforms
US10761957B2 (en) Optimization of operating system and virtual machine monitor memory management
Kwon et al. Coordinated and efficient huge page management with ingens
Yang et al. Split-level I/O scheduling
CN100451995C (en) System and method to preserve a cache of a virtual machine
US7185155B2 (en) Methods and mechanisms for proactive memory management
Koller et al. Centaur: Host-side ssd caching for storage performance control
Li et al. SparkBench: a spark benchmarking suite characterizing large-scale in-memory data analytics
Ahn et al. Improving I/O Resource Sharing of Linux Cgroup for NVMe SSDs on Multi-core Systems
US20110066802A1 (en) Dynamic page reallocation storage system management
EP2404231A1 (en) Method, system and computer program product for managing the placement of storage data in a multi tier virtualized storage infrastructure
Li et al. A new disk I/O model of virtualized cloud environment
CN103631537A (en) Method and device for managing virtual disk
US20190173770A1 (en) Method and system for placement of virtual machines using a working set computation
Zhang et al. “Anti-Caching”-based elastic memory management for Big Data
Wang et al. Dynamic memory balancing for virtualization
Han et al. Secure and dynamic core and cache partitioning for safe and efficient server consolidation
CN114840148B (en) Method for realizing disk acceleration based on linux kernel bcache technology in Kubernets
Zeng et al. Argus: A Multi-tenancy NoSQL store with workload-aware resource reservation
Vetter et al. Power systems memory deduplication
Liu et al. SDFS: A software‐defined file system for multitenant cloud storage
Kishani et al. Padsa: Priority-aware block data storage architecture for edge cloud serving autonomous vehicles
Venkatesan et al. Sizing cleancache allocation for virtual machines’ transcendent memory
No et al. Multi-layered I/O virtualization cache on KVM/QEMU

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20170405