CN115794353B - Cloud network service quality optimization processing method, device, equipment and storage medium - Google Patents
- Publication number: CN115794353B (application CN202211712800.7A)
- Authority: CN (China)
- Prior art keywords: data, disk, network, cold, cold data
- Prior art date
- Legal status: Active (assumed status; Google has not performed a legal analysis)
Landscapes
- Computer And Data Communications (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The application provides a cloud network service quality optimization processing method, device, equipment and storage medium, relating to the technical field of communications. The method acquires running state data of the cloud host where a user service system is located, the running state data including the number of bytes flowing into the disk and the number of bytes flowing out of the disk; determines single-day peak running state data from the running state data according to the number of bytes flowing into the disk; performs disk cold data screening according to a preset disk cold data screening rule and the numbers of bytes flowing into and out of the disk in the single-day peak running state data, obtaining at least one piece of disk cold data and a priority corresponding to the disk cold data; and determines a first resource scheduling scheme according to the at least one piece of disk cold data and the corresponding priority, so that the resource scheduling system can migrate the service system according to the first resource scheduling scheme, achieving efficient service quality optimization for all cloud network service systems.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a cloud network service quality optimization processing method, device, equipment, and storage medium.
Background
A cloud network service is a service platform provided by the cloud on demand. Built on an integrated architecture, it uses virtualized resources and delivers large-scale services over an Internet Protocol (IP) network.
At present, when the load in a cloud resource pool is too high, the water level is too high, storage is nearly full, or the peak utilization of the central processing unit (CPU) exceeds a threshold, most cloud service operators need to adjust the water level, that is, migrate some business applications to other resource pools so as to release the CPU, memory, or storage resources they occupy, reduce the load of the resource pool, and ensure its normal operation. Before migration, cloud service operators today mostly export user information from the cloud management platform manually, communicate with users one by one, investigate the status of each user's service applications, and screen out servers that can be migrated based on statistical analysis of the survey results.
However, the existing service quality optimization method is time-consuming and labor-consuming, has low optimization efficiency, and cannot perform efficient service quality optimization for all systems.
Disclosure of Invention
The application provides a cloud network service quality optimization processing method, device, equipment and storage medium, which are used for solving the technical problems that the existing service quality optimization mode is time-consuming and labor-consuming, the optimization efficiency is low, and high-efficiency service quality optimization cannot be performed for all systems.
In a first aspect, the present application provides a cloud network service quality optimization processing method, including:
acquiring running state data of a cloud host where a user service system is located, wherein the running state data comprises the number of bytes flowing into a disk and the number of bytes flowing out of the disk;
determining single-day peak value running state data from the running state data according to the byte number flowing into the disk;
performing disk cold data screening according to a preset disk cold data screening rule and the numbers of bytes flowing into and out of the disk in the single-day peak running state data, to obtain at least one piece of disk cold data and a priority corresponding to the disk cold data;
and determining a first resource scheduling scheme according to the at least one disk cold data and the priority corresponding to the disk cold data, so that the resource scheduling system performs service system migration according to the first resource scheduling scheme.
The method comprises the steps of determining single-day peak operation state data with the largest byte number flowing into a disk according to operation state data of a cloud host where a user service system is located, automatically analyzing and screening the single-day peak operation state data according to the byte number flowing into the disk and the byte number flowing out of the disk of the service, obtaining disk cold data with less access times, and then transferring the disk cold data with less access times to other cloud hosts according to the disk cold data and priority thereof.
Optionally, the determining a first resource scheduling scheme according to the at least one disk cold data and the priority corresponding to the disk cold data includes: calculating the resource quantity occupied by the disk cold data; and determining a first resource scheduling scheme according to the target resource release amount, the resource amount occupied by the disk cold data and the priority corresponding to the disk cold data.
Here, the method and the device can prioritize the disk cold data, reasonably plan a resource scheduling scheme according to the target resource release amount, reasonably and efficiently schedule the resources on the basis of guaranteeing the service quality, reduce the workload in the resource scheduling as much as possible, and further improve the optimization efficiency.
Optionally, performing the disk cold data screening according to the preset disk cold data screening rule and the numbers of bytes flowing into and out of the disk in the single-day peak running state data, to obtain at least one piece of disk cold data and the priority corresponding to the disk cold data, includes: screening out, from the single-day peak running state data, N pieces of delayed running state data with the smallest numbers of bytes flowing into the disk; screening out, from the N pieces of delayed running state data, the disk lazy data with the smallest number of bytes flowing out of the disk; judging whether the disk lazy data meets the preset disk cold data screening rule, and if so, determining the disk lazy data as disk cold data with a first priority, where N is a positive integer; deleting the disk lazy data from the single-day peak running state data, screening out N new pieces of delayed running state data with the smallest numbers of bytes flowing into the disk from the remaining single-day peak running state data, screening out new disk lazy data with the smallest number of bytes flowing out of the disk from the N new pieces of delayed running state data, judging whether the new disk lazy data meets the preset disk cold data screening rule, and if so, determining the new disk lazy data as disk cold data with a second priority; and iterating in this way, each time deleting the most recently obtained disk lazy data from the single-day peak running state data, screening out N new pieces of delayed running state data with the smallest numbers of bytes flowing into the disk, screening out the new disk lazy data with the smallest number of bytes flowing out of the disk, and judging whether it meets the preset disk cold data screening rule, until the newly obtained disk lazy data no longer meets the preset disk cold data screening rule, at which point the iteration stops and all the disk cold data and their corresponding priorities have been obtained.
Optionally, after determining the first resource scheduling scheme according to the target resource release amount, the resource amount occupied by the disk cold data, and the priority corresponding to the disk cold data, the method further includes: and if the first resource scheduling scheme can meet the target water level adjustment requirement standard, the first resource scheduling scheme is sent to a resource scheduling system, so that the resource scheduling system performs service system migration according to the first resource scheduling scheme.
After the resource scheduling scheme is determined, the resource scheduling scheme can be directly sent to the resource scheduling system, manual operation is not needed, efficient migration of the service system can be automatically achieved, and cloud network service quality optimization processing efficiency is further improved.
Optionally, the running state data further includes the number of data packets flowing in by the network card and the number of data packets flowing out by the network card; correspondingly, after determining the first resource scheduling scheme according to the target resource release amount, the resource amount occupied by each piece of cold data and the priority corresponding to each piece of cold data, the method further comprises: deleting the at least one disk cold data from the running state data to obtain network state adjustment data; according to a preset network cold data screening rule, the number of data packets flowing in by a network card of the network state adjustment data and the number of data packets flowing out by the network card, carrying out network cold data screening processing to obtain at least one network cold data and a priority corresponding to the network cold data; and determining a second resource scheduling scheme according to the at least one network cold data and the priority corresponding to the network cold data.
The service capacity can be analyzed according to the disk data, and further, the service capacity can be analyzed according to the network data, so that efficient and accurate service migration is realized, and the cloud network integrated service quality is improved.
Optionally, performing the network cold data screening according to the preset network cold data screening rule and the numbers of data packets flowing in and out through the network card in the network state adjustment data, to obtain at least one piece of network cold data and the priority corresponding to the network cold data, includes: screening out, from the network state adjustment data, M pieces of network delayed running state data with the smallest numbers of data packets flowing in through the network card; screening out, from the M pieces of network delayed running state data, the network lazy data with the smallest number of data packets flowing out through the network card; judging whether the network lazy data meets the preset network cold data screening rule, and if so, determining the network lazy data as network cold data with a first priority, where M is a positive integer; deleting the network lazy data from the network state adjustment data, screening out M new pieces of network delayed running state data with the smallest numbers of data packets flowing in through the network card from the remaining network state adjustment data, screening out new network lazy data with the smallest number of data packets flowing out through the network card from the M new pieces of network delayed running state data, judging whether the new network lazy data meets the preset network cold data screening rule, and if so, determining the new network lazy data as network cold data with a second priority; and iterating in this way, each time deleting the most recently obtained network lazy data from the network state adjustment data, screening out M new pieces of network delayed running state data with the smallest numbers of data packets flowing in through the network card, screening out the new network lazy data with the smallest number of data packets flowing out through the network card, and judging whether it meets the preset network cold data screening rule, until the newly obtained network lazy data no longer meets the preset network cold data screening rule, at which point the iteration stops and all the network cold data and their corresponding priorities have been obtained.
Optionally, after determining the second resource scheduling scheme according to the at least one network cold data and the priority corresponding to the network cold data, the method further includes: and sending the first resource scheduling scheme and the second resource scheduling scheme to a resource scheduling system so that the resource scheduling system performs service system migration according to the first resource scheduling scheme and the second resource scheduling scheme.
Here, the method and the device can perform automatic and intelligent scheduling of the service on the disk capacity and the network level, and further improve the overall cloud resource utilization rate of the cloud service operator and the cloud network integrated service quality.
In a second aspect, the present application provides a cloud network service quality optimization processing device, including:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring running state data of a cloud host where a user service system is located, and the running state data comprises the number of bytes flowing into a disk and the number of bytes flowing out of the disk;
the peak value determining module is used for determining single-day peak value running state data from the running state data according to the byte number flowing into the magnetic disk;
the disk cold data determining module is used for performing disk cold data screening processing according to preset disk cold data screening rules, the byte number of the single-day peak operating state data flowing into the disk and the byte number of the single-day peak operating state data flowing out of the disk to obtain at least one disk cold data and a priority corresponding to the disk cold data;
And the first scheduling scheme determining module is used for determining a first resource scheduling scheme according to the at least one disk cold data and the priority corresponding to the disk cold data so that the resource scheduling system can perform service system migration according to the first resource scheduling scheme.
Optionally, the first scheduling scheme determining module is specifically configured to: calculating the resource quantity occupied by the disk cold data; and determining a first resource scheduling scheme according to the target resource release amount, the resource amount occupied by the disk cold data and the priority corresponding to the disk cold data.
Optionally, the disk cold data determining module is specifically configured to: screening N pieces of delayed running state data with the minimum byte number flowing into a disk from single-day peak running state data, screening out disk lazy data with the minimum byte number flowing out of the disk from the N pieces of delayed running state data, judging whether the disk lazy data meets a preset disk cold data screening rule, if so, determining that the disk lazy data is the disk cold data, and determining that the priority is a first priority, wherein N is a positive integer; deleting the disk lazy data from the single-day peak running state data, screening N pieces of new delay running state data with the minimum byte number flowing into the disk from the deleted single-day peak running state data, screening out new disk lazy data with the minimum byte number flowing out of the disk from the N pieces of new delay running state data, judging whether the new disk lazy data meets a preset disk cold data screening rule, if so, determining the new disk lazy data as disk cold data, and determining the priority as a second priority; repeating the steps of performing iteration to delete the new disk lazy data obtained last time in the single-day peak operation state data, screening N new delay operation state data with the minimum byte number flowing into the disk from the deleted single-day peak operation state data, screening out the new disk lazy data with the minimum byte number flowing out of the disk from the N new delay operation state data, judging whether the new disk lazy data meets the preset disk cold data screening rule or not, and stopping iteration until the new disk lazy data is obtained and cannot meet the preset disk cold data screening rule, so as to obtain all the disk cold data and the priority corresponding to the disk cold data.
Optionally, after the first scheduling scheme determining module determines the first resource scheduling scheme according to the at least one disk cold data and the priority corresponding to the disk cold data, the apparatus further includes: and the first sending module is used for sending the first resource scheduling scheme to a resource scheduling system if the first resource scheduling scheme can meet the target water level adjustment requirement standard, so that the resource scheduling system carries out service system migration according to the first resource scheduling scheme. Optionally, the running state data further includes the number of data packets flowing in by the network card and the number of data packets flowing out by the network card; correspondingly, after the first scheduling scheme determining module determines the first resource scheduling scheme according to the at least one disk cold data and the priority corresponding to the disk cold data, the apparatus further includes: the deleting module is used for deleting the at least one disk cold data from the running state data to obtain network state adjustment data; the network cold data determining module is used for carrying out network cold data screening processing according to preset network cold data screening rules, the number of data packets flowing in by the network card of the network state adjustment data and the number of data packets flowing out by the network card, so as to obtain at least one network cold data and a priority corresponding to the network cold data; and the second scheduling scheme determining module is used for determining a second resource scheduling scheme according to the at least one network cold data and the priority corresponding to the network cold data.
Optionally, the network cold data determining module is specifically configured to: screening M pieces of network delay running state data with the minimum number of data packets flowing in by a network card from the network state adjustment data, screening out network lazy data with the minimum number of data packets flowing out by the network card from the M pieces of network delay running state data, judging whether the network lazy data meets a preset network cold data screening rule, if so, determining that the network lazy data is network cold data, and determining that the priority is a first priority, wherein M is a positive integer; deleting the network lazy data from the network state adjustment data, screening M new network delay running state data with the minimum data packet number flowing in by a network card from the deleted network state adjustment data, screening out new network lazy data with the minimum data packet number flowing out by the network card from the M new network delay running state data, judging whether the new network lazy data meets a preset network cold data screening rule, if so, determining the new network lazy data as network cold data, and determining the priority as a second priority; repeating the steps of executing the deletion of the new network lazy data obtained last time in the network state adjustment data, screening N new delay operation state data with the minimum data packet number flowing in by the network card from the deleted network state adjustment data, screening the new network lazy data with the minimum data packet number flowing out by the network card from the N new delay operation state data, judging whether the new network lazy data meets the preset network cold data screening rule or not, and stopping iteration until the new network lazy data is obtained and cannot meet the preset network cold data screening rule, so as to obtain all the network cold data and the priority corresponding to the network cold data.
Optionally, after the second scheduling scheme determining module determines the second resource scheduling scheme according to the at least one network cold data and the priority corresponding to the network cold data, the apparatus further includes: and the second sending module is used for sending the first resource scheduling scheme and the second resource scheduling scheme to a resource scheduling system so that the resource scheduling system can perform service system migration according to the first resource scheduling scheme and the second resource scheduling scheme.
In a third aspect, the present application provides a cloud network service quality optimization processing device, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the cloud network service quality optimization processing method according to the first aspect and the various possible designs of the first aspect.
In a fourth aspect, the present application provides a cloud network service quality optimization processing device, including: at least one processor and memory;
the memory stores computer-executable instructions;
The at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the cloud network service quality optimization processing method according to the second aspect and the various possible designs of the second aspect.
In a fifth aspect, the present application provides a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the cloud network service quality optimization processing method according to the first aspect and its various possible designs.
According to the cloud network service quality optimization processing method, apparatus, device, and storage medium provided by the present application, the method first determines, from the running state data of the cloud host where the user service system is located, the single-day peak running state data with the largest number of bytes flowing into the disk; automatically analyzes and screens the single-day peak running state data by the numbers of bytes flowing into and out of the disk to obtain disk cold data that is accessed infrequently; and then, according to the disk cold data and its priority, migrates the infrequently accessed disk cold data to other cloud hosts, thereby adjusting the resources of the cloud host where the user service system is located, improving the overall cloud resource utilization of the cloud host, and achieving efficient service quality optimization for all cloud network service systems.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the embodiments or the description of the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic system architecture diagram of a cloud network service quality optimization processing method provided in an embodiment of the present application;
fig. 2 is a flow chart of a cloud network service quality optimization processing method provided in an embodiment of the present application;
fig. 3 is a flow chart of another cloud network service quality optimization processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a cloud network service quality optimization processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a cloud network service quality optimization processing device provided in an embodiment of the present application.
Specific embodiments of the present disclosure have been shown by way of the above drawings and will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the disclosed concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the technical solution of the present application, the collection, storage, use, processing, transmission, provision, and disclosure of information such as user data all comply with relevant laws and regulations and do not violate public order and good customs.
The cloud network service architecture rests mainly on two cornerstones: first, the operator's original data center or service center, which becomes a new service and data center; and second, the established IP next generation network (Next Generation Network, NGN). Both run on the IP network and are now well integrated, forming a powerful and flexible unified service system. The convenient cloud computing services and network services it provides to all kinds of users are collectively called the "cloud network integrated" service, also known as the cloud network service. To ensure the stability of the cloud network integrated service and a good user experience, the cloud network integrated service needs to be optimized. Most services have been on the cloud for only a few years, most cloud service optimization is still done manually, and there is no complete cloud network integrated service optimization method. The existing service quality optimization approach is time-consuming and labor-intensive, has low optimization efficiency, and cannot efficiently optimize service quality for all systems.
In order to solve the above technical problems, embodiments of the present application provide a cloud network service quality optimization processing method, apparatus, device, and storage medium. Based on the running state data of the cloud host where the user service system is located, the single-day peak running state data with the largest number of bytes flowing into the disk is first determined; the single-day peak running state data is then automatically analyzed and screened by the numbers of bytes flowing into and out of the disk to obtain disk cold data that is accessed infrequently; the disk cold data can then be migrated to other cloud hosts according to the disk cold data and its priority, thereby achieving efficient service quality optimization for all cloud network service systems.
Optionally, fig. 1 is a schematic diagram of a cloud network service quality optimization processing system architecture provided in an embodiment of the present application. In fig. 1, the above architecture includes at least one of a data acquisition device 101, a processing device 102, and a display device 103.
It can be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the architecture of the cloud network service quality optimization processing system. In other possible embodiments of the present application, the architecture may include more or fewer components than those illustrated, or some components may be combined, some components may be separated, or different component arrangements may be specifically determined according to the actual application scenario, and the present application is not limited herein. The components shown in fig. 1 may be implemented in hardware, software, or a combination of software and hardware.
In a specific implementation process, the data acquisition device 101 may include an input/output interface or a communication interface, through which it can be connected to a cloud service operator or a server to collect the running state data of the cloud host where the user service system is located, and so on.
The processing device 102 may analyze the cold and hot degree of the service application data according to the running state data of the service application system, and classify the cold and hot degree of the service application data to generate a resource scheduling scheme.
The display device 103 may be used to display the above results or the like, or may interact with the user through a display apparatus. The display device 103 may be a user terminal, a user terminal of a worker, or a network management device, a server, or the like.
The display device 103 may also be a touch display screen for receiving user instructions while displaying the above content to enable interaction with a user.
It will be appreciated that the processing device described above may be implemented by a processor reading instructions in a memory and executing the instructions, or by a chip circuit.
It can be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the system architecture of the cloud network service quality optimization processing method. In other possible embodiments of the present application, the architecture may include more or fewer components than those illustrated, or some components may be combined, some components may be separated, or different component arrangements may be specifically determined according to the actual application scenario, and the present application is not limited herein. The components shown in fig. 1 may be implemented in hardware, software, or a combination of software and hardware.
In addition, the network architecture and the service scenario described in the embodiments of the present application are for more clearly describing the technical solution of the embodiments of the present application, and do not constitute a limitation on the technical solution provided in the embodiments of the present application, and as a person of ordinary skill in the art can know, with evolution of the network architecture and appearance of a new service scenario, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The following description of the technical solutions of the present application will take several embodiments as examples, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a flow chart of a cloud network service quality optimization processing method provided by an embodiment of the present application, and the embodiment of the present application may be applied to the processing device 102 in fig. 1, where the processing device 102 may be a server of a cloud service operator or a processor in the server, and a specific execution body may be determined according to an actual application scenario. As shown in fig. 2, the method comprises the steps of:
s201: and acquiring running state data of a cloud host where the user service system is located.
Wherein the operational status data includes a number of bytes flowing into the disk and a number of bytes flowing out of the disk.
Optionally, data is collected through the corresponding interfaces of the openstack cloud computing management platform and interfaces provided by other management systems; mainly, the disk input/output (io) values over time (diskio) of the cloud host where the client service system is located are collected.
Optionally, a network io value (netio) is also collected.
Optionally, the following values are specifically included:
diskinby: the number of bytes streamed to disk.
Diskoutby: byte count of the outgoing disk.
Netinpac: the number of packets flowing in by the network card.
Netoutpac: number of packets flowing out of the network card.
Optionally, after the data is collected, it is associated with the IP address (ipaddress) of the cloud host system and put into a database table by time, cloud host serial number (computeruuid), and metric.
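As a minimal sketch of how such samples might be persisted (the openstack query itself is omitted; the field names follow the metrics listed above, while the SQLite backing store, table name, and function name are assumptions made for illustration):

```python
import sqlite3
from datetime import datetime

# Table layout: one row per cloud host per sampling time, keyed by the host
# serial number (computeruuid), the host IP address, and the sample time.
SCHEMA = """
CREATE TABLE IF NOT EXISTS host_io (
    computeruuid TEXT,
    ipaddress    TEXT,
    time         TEXT,
    diskinby     INTEGER,  -- bytes flowing into the disk
    diskoutby    INTEGER,  -- bytes flowing out of the disk
    netinpac     INTEGER,  -- packets flowing in through the network card
    netoutpac    INTEGER   -- packets flowing out through the network card
)
"""

def store_sample(conn, computeruuid, ipaddress,
                 diskinby, diskoutby, netinpac, netoutpac, time=None):
    """Persist one diskio/netio sample collected from the management platform."""
    conn.execute(
        "INSERT INTO host_io VALUES (?, ?, ?, ?, ?, ?, ?)",
        (computeruuid, ipaddress, time or datetime.now().isoformat(),
         diskinby, diskoutby, netinpac, netoutpac),
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("cloud_host_io.db")
    conn.execute(SCHEMA)
    # Placeholder values; in practice these come from the monitoring interfaces.
    store_sample(conn, "uuid-0001", "10.0.0.12", 5242880, 1048576, 1200, 950)
```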
S202: and determining single-day peak value running state data from the running state data according to the byte number flowing into the disk.
Specifically, a set of data corresponding to the maximum number of bytes flowing into the disk per day is determined as single-day peak operation state data of the cloud host.
The single-day peak running state data can include the byte number of the incoming disk and the byte number of the outgoing disk, and can also include the data packet number of the network card incoming and the data packet number of the network card outgoing.
According to the embodiment of the present application, the single-day peak running state data is first determined from the running state data by the number of bytes flowing into the disk, which effectively screens out the data occupying the most resources; the cold data is then determined from the single-day peak running state data, facilitating efficient resource migration and efficient optimization of system service quality.
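A minimal sketch of this per-host, per-day peak selection, assuming the in-memory record layout from the collection sketch above (the helper name is illustrative):

```python
def single_day_peaks(records):
    """records: iterable of dicts with keys computeruuid, time (ISO string),
    diskinby, diskoutby, netinpac, netoutpac.
    Returns, for each (host, day), the full record with the largest diskinby --
    that cloud host's single-day peak running state data."""
    peaks = {}
    for rec in records:
        key = (rec["computeruuid"], rec["time"][:10])  # day = YYYY-MM-DD prefix
        if key not in peaks or rec["diskinby"] > peaks[key]["diskinby"]:
            peaks[key] = rec
    return list(peaks.values())
```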
S203: and carrying out disc cold data screening processing according to a preset disc cold data screening rule, the byte number of the single-day peak operating state data flowing into the disc and the byte number of the single-day peak operating state data flowing out of the disc, and obtaining at least one disc cold data and the priority corresponding to the disc cold data.
Optionally, performing disc cold data screening processing according to a preset disc cold data screening rule, the byte number of the single-day peak running state data flowing into the disc and the byte number of the single-day peak running state data flowing out of the disc to obtain at least one disc cold data and a priority corresponding to the disc cold data, including:
screening N pieces of delayed running state data with the minimum byte number flowing into a disk from single-day peak running state data, screening out disk lazy data with the minimum byte number flowing out of the disk from the N pieces of delayed running state data, judging whether the disk lazy data meets a preset disk cold data screening rule, if so, determining that the disk lazy data is the disk cold data, and determining that the priority is a first priority, wherein N is a positive integer;
Deleting the disk lazy data from the single-day peak running state data, screening N pieces of new delay running state data with the minimum byte number flowing into the disk from the deleted single-day peak running state data, screening out the new disk lazy data with the minimum byte number flowing out of the disk from the N pieces of new delay running state data, judging whether the new disk lazy data meets the preset disk cold data screening rule, if so, determining the new disk lazy data as disk cold data, and determining the priority as a second priority;
repeating the steps of performing iteration to delete the new disk lazy data obtained last time in the single-day peak operation state data, screening N new delay operation state data with the minimum byte number flowing into the disk from the deleted single-day peak operation state data, screening out the new disk lazy data with the minimum byte number flowing out of the disk from the N new delay operation state data, judging whether the new disk lazy data meets the preset disk cold data screening rule or not, and stopping iteration until the new disk lazy data is obtained and cannot meet the preset disk cold data screening rule, so as to obtain all the disk cold data and the priority corresponding to the disk cold data.
Wherein N may be determined according to practical situations, which is not specifically limited in the embodiment of the present application.
The preset disk cold data screening rule can be determined according to the actual situation, which is not specifically limited in the embodiments of the present application.
Optionally, the preset disk cold data screening rule may constrain the number of bytes flowing out of the disk corresponding to the disk lazy data. For example, the rule may determine that the disk lazy data qualifies as disk cold data when the quotient of the maximum number of bytes flowing out of the disk among all running state data and the number of bytes flowing out of the disk corresponding to the disk lazy data is greater than a first preset limit threshold. The first preset limit threshold may be determined according to the actual situation, which is not specifically limited in the embodiments of the present application.
In one possible implementation, the disk cold data screening process is implemented as follows:
according to the characteristics of the resource pool, a priority is set for each piece of data. For a resource pool whose water level needs to be reduced, the 10 records with the smallest numbers of bytes flowing into the disk are first found in the single-day peak running state data (N may be determined according to the actual situation; 10 is used here as an example); the smallest number of bytes flowing into the disk may be denoted minum(maxdiskinby), and the records corresponding to these 10 minum(maxdiskinby) values are the delayed running state data. Among these 10 pieces of delayed running state data, the record with the smallest number of bytes flowing out of the disk is determined as the disk lazy data; its cloud host serial number may be denoted lazycomputeruuid, and its outgoing byte count is minum(diskoutby), that is, lazycomputeruuid corresponds to minum(diskoutby). It is then checked whether the maximum number of bytes flowing out of the disk among all running state data, max(diskoutby), is more than 10 times minum(diskoutby) (the first preset limit threshold may be determined according to the actual situation; 10 is used here as an example). If so, lazycomputeruuid meets the disk cold data requirement, the record is tagged as disk cold data, and its priority may be set to 1.
The row of lazycomputeruuid is then deleted from the single-day peak running state data, the above steps are repeated, the new minum(diskoutby) record is tagged as cold data by its computeruuid, and its priority is set to 2. This continues until no record is found for which max(diskoutby) is more than 10 times minum(diskoutby).
maxdiskinby represents the number of bytes flowing into the disk in the single-day peak running state data.
Here, cold data refers to data that is accessed infrequently.
In step S203, the embodiment of the present application first finds the several records with the smallest numbers of bytes flowing into the disk, and then determines, among them, the record with the smallest number of bytes flowing out of the disk, thereby identifying disk lazy data that is accessed infrequently; that is, migratable services are determined through the disk lazy data.
Here, the preset disk cold data screening rule compares the largest number of bytes flowing out of the disk among all data with the number of bytes flowing out of the disk of the disk lazy data, which reduces errors and the risk of mistaken deletion, further improving optimization efficiency and quality.
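A minimal sketch of the iterative screening described above, assuming the record layout of the earlier sketches; the threshold handling follows the "10 times" example, and the helper name and ratio form of the comparison are illustrative:

```python
def screen_disk_cold_data(peaks, n=10, limit=10):
    """Iteratively tag disk cold data in the single-day peak records.

    peaks : list of dicts with keys computeruuid, diskinby, diskoutby, ...
    n     : number of smallest-diskinby candidates examined per round (N)
    limit : first preset limit threshold; a candidate qualifies when
            max(diskoutby) over all records exceeds `limit` times its diskoutby
    Returns a list of (record, priority) pairs, priority starting at 1."""
    max_diskoutby = max(r["diskoutby"] for r in peaks)  # over all running state data
    remaining = list(peaks)
    cold, priority = [], 1
    while remaining:
        # N delayed running state records: smallest bytes flowing into the disk.
        delayed = sorted(remaining, key=lambda r: r["diskinby"])[:n]
        # Disk lazy data: among those, smallest bytes flowing out of the disk.
        lazy = min(delayed, key=lambda r: r["diskoutby"])
        # Preset disk cold data screening rule (ratio form, as in the example).
        if max_diskoutby > limit * lazy["diskoutby"]:
            cold.append((lazy, priority))
            priority += 1
            remaining.remove(lazy)  # delete the lazy row, then iterate again
        else:
            break                   # rule no longer met: stop the iteration
    return cold
```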
S204: and determining a first resource scheduling scheme according to at least one disk cold data and the priority corresponding to the disk cold data.
Optionally, determining the first resource scheduling scheme according to at least one disk cold data and the priority corresponding to the disk cold data includes:
calculating the resource amount occupied by the disk cold data; and determining the first resource scheduling scheme according to the target resource release amount, the resource amount occupied by the disk cold data, and the priority corresponding to the disk cold data.
The target resource release amount may be determined as the situation requires; that is, the user or staff may input the target resource release amount in advance, or the current resource amount and the target resource amount may be obtained in advance and their difference taken as the target resource release amount.
Optionally, the determining the first resource scheduling scheme according to the target resource release amount, the resource amount occupied by the disk cold data and the priority corresponding to the disk cold data may be:
and sequencing according to the priorities, firstly determining whether the resource amount of the disk cold data of the first priority is larger than or equal to the target resource release amount, and if the resource amount of the disk cold data of the first priority is larger than or equal to the target resource release amount, determining that the first resource scheduling scheme is to migrate the service corresponding to the disk cold data of the first priority.
If the resource amount of the disk cold data of the first priority is smaller than the target resource release amount, calculating whether the sum of the resource amount of the disk cold data of the first priority and the resource amount of the disk cold data of the second priority is larger than or equal to the target resource release amount, and if the sum is larger than or equal to the target resource release amount, determining that the first resource scheduling scheme is to migrate the service corresponding to the disk cold data of the first priority and the service corresponding to the disk cold data of the second priority.
And the priority order is analogized until the target resource release amount is met, so that the efficient scheduling and optimization of the service can be realized under the condition of migrating the service as little as possible, and the quality of cloud network integrated service is improved.
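A sketch of this priority-ordered accumulation; the resource_amount field and the function name are assumptions for illustration, and how the occupied resource amount is measured (CPU, memory, storage) is left open:

```python
def build_first_scheduling_scheme(cold_data, target_release):
    """cold_data: list of (record, priority) pairs from the disk screening step,
    where each record carries a resource_amount field for the resources it occupies.
    Returns the records to migrate, taken in priority order until their
    accumulated resource amount reaches the target release amount, plus a flag
    indicating whether the target was actually met."""
    plan, released = [], 0
    for record, _priority in sorted(cold_data, key=lambda item: item[1]):
        plan.append(record)
        released += record["resource_amount"]
        if released >= target_release:
            break
    return plan, released >= target_release
```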
The embodiment of the present application provides a quality optimization method for all cloud network integrated services. Based on the running state data of the cloud host where the user service system is located, the single-day peak running state data with the largest number of bytes flowing into the disk is first determined; the single-day peak running state data is automatically analyzed and screened by the numbers of bytes flowing into and out of the disk to obtain disk cold data that is accessed infrequently; the infrequently accessed disk cold data can then be migrated to other cloud hosts according to the disk cold data and its priority, achieving efficient service quality optimization for all cloud network service systems.
Optionally, after determining the first resource scheduling scheme according to the target resource release amount, the resource amount occupied by the disk cold data and the priority corresponding to the disk cold data, the method further includes: and if the first resource scheduling scheme can meet the target water level adjustment requirement standard, the first resource scheduling scheme is sent to a resource scheduling system so that the resource scheduling system can perform service system migration according to the first resource scheduling scheme.
Here, after the resource scheduling scheme is determined, the embodiment of the application can directly send the resource scheduling scheme to the resource scheduling system without manual operation, can automatically realize the efficient migration of the service system, and further improves the cloud network service quality optimization processing efficiency.
Optionally, the embodiment of the present application may further implement additional service scheduling and optimization through network data. Correspondingly, fig. 3 is a flow chart of another cloud network service quality optimization processing method provided by an embodiment of the present application. As shown in fig. 3, the method includes:
s301: and acquiring running state data of a cloud host where the user service system is located.
The running state data comprise the number of bytes flowing into the disk, the number of bytes flowing out of the disk, the number of data packets flowing in by the network card and the number of data packets flowing out of the network card.
S302: and determining single-day peak value running state data from the running state data according to the byte number flowing into the disk.
Each single-day peak value running state data comprises the byte number of the inflow magnetic disk, the byte number of the outflow magnetic disk, the data packet number of the network card inflow and the data packet number of the network card outflow corresponding to the single-day peak value running state data.
S303: and carrying out disc cold data screening processing according to a preset disc cold data screening rule, the byte number of the single-day peak operating state data flowing into the disc and the byte number of the single-day peak operating state data flowing out of the disc, and obtaining at least one disc cold data and the priority corresponding to the disc cold data.
S304: and determining a first resource scheduling scheme according to at least one disk cold data and the priority corresponding to the disk cold data.
The implementation of steps S301 to S304 is similar to that of steps S201 to S204, and the embodiments of the present application are not described herein.
S305: and deleting at least one disk cold data in the running state data to obtain network state adjustment data.
Here, in the embodiment of the present application, the filtering of the network cold data is performed in the network state adjustment data obtained by deleting the disk cold data.
S306: and carrying out network cold data screening processing according to preset network cold data screening rules, the number of data packets flowing in by a network card of the network state adjustment data and the number of data packets flowing out by the network card, and obtaining at least one network cold data and a priority corresponding to the network cold data.
Optionally, performing the network cold data screening according to the preset network cold data screening rule and the numbers of data packets flowing in and out through the network card in the network state adjustment data, to obtain at least one piece of network cold data and the priority corresponding to the network cold data, includes:
screening out, from the network state adjustment data, M pieces of network delayed running state data with the smallest numbers of data packets flowing in through the network card; screening out, from the M pieces of network delayed running state data, the network lazy data with the smallest number of data packets flowing out through the network card; and judging whether the network lazy data meets the preset network cold data screening rule, and if so, determining the network lazy data as network cold data with a first priority, where M is any positive integer.
Deleting the network lazy data determined in the previous step from the network state adjustment data; screening out, from the remaining network state adjustment data, M new pieces of network delayed running state data with the smallest numbers of data packets flowing in through the network card; screening out, from the M new pieces of network delayed running state data, new network lazy data with the smallest number of data packets flowing out through the network card; and judging whether the new network lazy data meets the preset network cold data screening rule, and if so, determining the new network lazy data as network cold data with a second priority;
repeating the above step iteratively: each time deleting the most recently obtained network lazy data from the network state adjustment data, screening out M new pieces of network delayed running state data with the smallest numbers of data packets flowing in through the network card, screening out the new network lazy data with the smallest number of data packets flowing out through the network card, and judging whether it meets the preset network cold data screening rule, until the newly obtained network lazy data no longer meets the preset network cold data screening rule; the iteration then stops, yielding all the network cold data and the priorities corresponding to the network cold data.
Wherein, M may be determined according to practical situations, which is not particularly limited in the embodiment of the present application.
The preset network cold data screening rule can be determined according to actual conditions, and the embodiment of the application does not specifically limit the method.
Optionally, the preset network cold data screening rule may constrain the number of data packets flowing out through the network card corresponding to the network lazy data. For example, the rule may determine that the network lazy data qualifies as network cold data when the quotient of the maximum number of data packets flowing out through the network card among all running state data and the number of data packets flowing out through the network card corresponding to the network lazy data is greater than a second preset limit threshold. The second preset limit threshold may be determined according to the actual situation, which is not specifically limited in the embodiments of the present application.
In one possible implementation, the network cold data screening process is implemented as follows:
and if the water level adjustment requirement is met after the release of the resources (CPU\memory\storage) occupied by the cloud host corresponding to the service of the disk cold data, operating the service system migration, if the water level adjustment requirement is still not met after the release of the resources occupied by the cloud host corresponding to the service of the disk cold data, starting the next step, firstly finding out the minimum 10 (M can be determined according to practical situations, 10 are taken as examples here) data packets flowing in from the network state adjustment data, the minimum data packets flowing in from the network card can be represented as minum (netinpac), the data corresponding to the 10 minum (netinpac) data are network delay running state data, in the 10 network delay running state data, finding out the data of minum (netoutpac) with the minimum data packets flowing out from the network card is determined as the network lazy data, checking whether the difference between the minum (netoutpac) and the maximum data packets max (netoutpac) flowing out from all running state data is 10 (the second preset limit threshold can be determined according to practical situations, if the lazy is more than 10 times, the lazy is satisfied, the network data can be regarded as cold network data with the network label 1 as the network data with the network delay running state data.
The row of this network lazy data is then deleted from the network state adjustment data, the above steps are repeated, a new piece of network cold data is labeled, and its priority is set to 2. This continues until no record can be found for which max(netoutpac) is more than 10 times minum(netoutpac).
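A minimal sketch of the iterative screening just described, assuming the network state adjustment data is a list of dictionaries carrying the computeruuid, netinpac and netoutpac fields of Table 1, with M = 10 and the 10x threshold of the example; all names are illustrative.

```python
def screen_network_cold_data(adjust_data, m=10, threshold=10.0):
    """Iteratively tag network cold data and assign priorities 1, 2, 3, ...

    adjust_data: list of dicts with 'computeruuid', 'netinpac' (packets in)
                 and 'netoutpac' (packets out).
    Returns a list of (computeruuid, priority) pairs.
    """
    if not adjust_data:
        return []
    remaining = list(adjust_data)
    # The comparison baseline is the largest outflow over ALL running state data.
    max_netoutpac = max(row["netoutpac"] for row in adjust_data)
    cold, priority = [], 1
    while remaining:
        # The m records with the fewest packets flowing in through the network card.
        delayed = sorted(remaining, key=lambda r: r["netinpac"])[:m]
        # Among those, the record with the fewest packets flowing out.
        lazy = min(delayed, key=lambda r: r["netoutpac"])
        # Preset network cold data screening rule in its quotient form.
        if lazy["netoutpac"] > 0 and max_netoutpac / lazy["netoutpac"] <= threshold:
            break  # rule no longer satisfied, stop iterating
        cold.append((lazy["computeruuid"], priority))
        priority += 1
        # Delete the tagged row and repeat on the reduced data set.
        remaining = [r for r in remaining if r is not lazy]
    return cold
```

The loop stops as soon as the newly selected network lazy data fails the rule, which matches the stop condition of the iteration described above.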
In step S306, the embodiment of the present application first finds the several records with the smallest number of data packets flowing in through the network card, and then determines, from the screened records, the record with the smallest number of data packets flowing out through the network card, thereby determining network lazy data with fewer accesses, that is, determining a migratable service through the network lazy data.
Here, by presetting the network cold data screening rule, the maximum number of data packets flowing out through the network card among all data is compared with the number of data packets flowing out through the network card for the network lazy data, so that errors can be reduced, the risk of mistaken deletion is lowered, and the optimization efficiency and optimization quality are further improved.
S307: and determining a second resource scheduling scheme according to at least one network cold data and the priority corresponding to the network cold data.
The embodiment of the application can analyze the service capacity according to the disk data and can further analyze it according to the network data, thereby realizing efficient and accurate service migration and further improving the cloud-network integrated service quality.
Optionally, after determining the second resource scheduling scheme according to the at least one network cold data and the priority corresponding to the network cold data, the method further includes:
and sending the first resource scheduling scheme and the second resource scheduling scheme to a resource scheduling system so that the resource scheduling system carries out service system migration according to the first resource scheduling scheme and the second resource scheduling scheme.
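The interface of the resource scheduling system is not specified in the application, so the following is only a hypothetical sketch of handing both schemes over as a JSON payload; the endpoint URL and field names are invented for illustration.

```python
import json
import urllib.request

def send_schemes(first_scheme, second_scheme,
                 scheduler_url="http://scheduler.example/api/migrations"):  # hypothetical endpoint
    """Send the first and second resource scheduling schemes to the resource
    scheduling system so it can carry out the service system migration."""
    payload = json.dumps({"disk_cold_scheme": first_scheme,
                          "network_cold_scheme": second_scheme}).encode("utf-8")
    req = urllib.request.Request(scheduler_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:  # assumes the scheduler accepts JSON over HTTP
        return resp.status
```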
Here, the embodiment of the application can automatically and intelligently schedule the service at the disk capacity and the network level, and further improves the overall cloud resource utilization rate of the cloud service operator and the cloud network integrated service quality.
Exemplarily, an embodiment of the present application provides a cloud network service quality optimization processing method, which proceeds as follows:
The first step: data are collected through the corresponding openstack interfaces and the interfaces provided by other management systems. The collection mainly covers the disk io values (diskio) and network io values (netio) of the cloud host where the client service system is located, together with the ip address (ipaddress) of the cloud host system, and the data are written into a database table by time (time), cloud host id and index to form Table 1, whose table headers are computeruuid, time, diskinby, diskoutby, netinpac and netoutpac respectively; the specific form of Table 1 is not exemplified here.
The second step: the peak disk write-in byte number, disk write-out byte number, and network inbound and outbound data packet numbers of each cloud host on each day are found in the database table to form Table 2, whose table headers are computeruuid, maxdiskinby, diskoutby, netinpac and netoutpac respectively; the specific form of Table 2 is not exemplified here.
The third step: according to the characteristics of the resource pool, a priority is set for each record. For a resource pool whose water level needs to be reduced, the 10 records with minum(maxdiskinby) are found in Table 2; among these 10, the record lazycomputeruuid with minum(diskoutby) is found, and it is checked whether max(diskoutby) is more than 10 times minum(diskoutby). If so, lazycomputeruuid meets the cold data requirement; a cold data tag is attached to this lazycomputeruuid and it is put into the cold data Table 3 with priority 1, where the table headers of Table 3 are computeruuid, disk cold data tag (diskcold) and priority (Prior) respectively; the specific form of Table 3 is not exemplified here.
The fourth step: the row of lazycomputeruuid is deleted from Table 2, the third step is repeated, a new lazycomputeruuid is found, and the priority of the new record is set to 2. This continues until no record can be found for which max(diskoutby) is more than 10 times minum(diskoutby) (a consolidated sketch of these table operations, covering Table 2 through Table 6, is given after the ninth step).
The fifth step: the amount of resources (CPU, memory and storage) occupied by the cloud hosts in Table 3 is calculated; if the water level adjustment requirement is met after these resources are released, Table 3 is exported to the cloud service provider's resource scheduling system and the service system migration is performed; if the water level adjustment requirement still cannot be met after the resources occupied by the cloud hosts in Table 3 are released, the next step is started.
The sixth step: Table 4 is formed by deleting the cloud host information of Table 3 from Table 1.
The seventh step: the 10 records with minum(netinpac) are found for netinpac in Table 4, forming Table 5.
The eighth step: for the netoutpac of each record in Table 5, if max(netoutpac) is more than 10 times greater than that netoutpac, the record is added to the network cold data Table 6 (covered in the network-side part of the sketch after the ninth step).
The ninth step: Table 6 is exported to the cloud service provider's resource scheduling system, and the service system migration is performed.
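For concreteness, the table operations of the second through ninth steps might be sketched with pandas as follows. The sample rows, the resource figures and the 10x thresholds are illustrative assumptions, and the openstack data collection of the first step is replaced here by a hand-written sample Table 1.

```python
import pandas as pd

# Table 1: raw running state data (sample rows standing in for the data that
# the first step collects through the OpenStack and management-system interfaces).
table1 = pd.DataFrame({
    "computeruuid": ["vm-a", "vm-a", "vm-b", "vm-b", "vm-c", "vm-c"],
    "time": pd.to_datetime(["2022-12-01 01:00", "2022-12-01 13:00",
                            "2022-12-01 02:00", "2022-12-01 14:00",
                            "2022-12-01 03:00", "2022-12-01 15:00"]),
    "diskinby":  [120, 900, 30, 40, 5000, 8000],
    "diskoutby": [100, 800, 10, 20, 4000, 9000],
    "netinpac":  [50, 60, 5, 8, 700, 900],
    "netoutpac": [40, 70, 2, 3, 600, 950],
})

# Second step, Table 2: the single-day peak row (by diskinby) per cloud host.
table1["day"] = table1["time"].dt.date
table2 = (table1.sort_values("diskinby")
                .groupby(["computeruuid", "day"], as_index=False).last()
                .rename(columns={"diskinby": "maxdiskinby"}))

# Third and fourth steps: iterative disk cold data screening into Table 3.
def screen_disk_cold(peaks, n=10, threshold=10.0):
    remaining = peaks.copy()
    max_out = peaks["diskoutby"].max()
    rows, priority = [], 1
    while not remaining.empty:
        delayed = remaining.nsmallest(n, "maxdiskinby")    # n smallest write-in peaks
        lazy = delayed.loc[delayed["diskoutby"].idxmin()]  # smallest write-out among them
        if lazy["diskoutby"] > 0 and max_out / lazy["diskoutby"] <= threshold:
            break                                          # screening rule no longer met
        rows.append({"computeruuid": lazy["computeruuid"],
                     "diskcold": 1, "Prior": priority})
        priority += 1
        remaining = remaining.drop(lazy.name)
    return pd.DataFrame(rows, columns=["computeruuid", "diskcold", "Prior"])

table3 = screen_disk_cold(table2)

# Fifth step: would releasing the Table 3 hosts meet the water-level target?
occupied_cpu = {"vm-a": 4, "vm-b": 2, "vm-c": 8}           # illustrative vCPU inventory
released_cpu = sum(occupied_cpu[u] for u in set(table3["computeruuid"]))
target_cpu_release = 16                                    # illustrative target
if released_cpu < target_cpu_release:
    # Sixth step, Table 4: drop the hosts already tagged in Table 3.
    table4 = table1[~table1["computeruuid"].isin(table3["computeruuid"])]
    # Seventh step, Table 5: the 10 records with the fewest inbound packets.
    table5 = table4.nsmallest(10, "netinpac")
    # Eighth step, Table 6: outbound packet count more than 10 times below the maximum.
    max_netoutpac = table4["netoutpac"].max()
    table6 = table5[max_netoutpac > 10 * table5["netoutpac"]]
    # Ninth step: hand Table 6 to the resource scheduling system (printed here).
    print(table6[["computeruuid", "netinpac", "netoutpac"]])
```

With this sample data, vm-b and vm-a end up in Table 3 with priorities 1 and 2, while the remaining host is too busy on the network side to qualify for Table 6, which is the behaviour the screening rules aim for.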
It is to be understood that the embodiments of the present application do not specifically limit the specific forms of the above tables.
By the above method, the embodiment of the application analyzes the hot and cold degree of the service application data according to the running data of the service application system and classifies the service application data into cold and hot categories, which makes it convenient for the cloud service provider to adjust the water level of the cloud resource pool and improves the overall cloud resource utilization rate of the cloud service operator.
Fig. 4 is a schematic structural diagram of a cloud network service quality optimization processing apparatus provided in an embodiment of the present application. As shown in fig. 4, the apparatus in the embodiment of the present application includes: an acquisition module 401, a peak determination module 402, a disk cold data determination module 403, and a first scheduling scheme determination module 404. The cloud network service quality optimization processing apparatus may be a server or a terminal device, or a chip or integrated circuit that implements the functions of the server or the terminal device. It should be noted that the division into the acquisition module 401, the peak determination module 402, the disk cold data determination module 403, and the first scheduling scheme determination module 404 is only a division of logical functions; physically, the modules may be integrated or independent.
The acquisition module is used for acquiring the running state data of a cloud host where a user service system is located, wherein the running state data comprises the number of bytes flowing into a disk and the number of bytes flowing out of the disk;
the peak value determining module is used for determining single-day peak value running state data from the running state data according to the byte number flowing into the magnetic disk;
the disk cold data determining module is used for performing disk cold data screening processing according to preset disk cold data screening rules, the byte number of the single-day peak operating state data flowing into the disk and the byte number of the single-day peak operating state data flowing out of the disk to obtain at least one disk cold data and a priority corresponding to the disk cold data;
And the first scheduling scheme determining module is used for determining a first resource scheduling scheme according to at least one disk cold data and the priority corresponding to the disk cold data so that the resource scheduling system can perform service system migration according to the first resource scheduling scheme.
Optionally, the first scheduling scheme determining module is specifically configured to:
calculating the resource quantity occupied by the cold data of the magnetic disk;
and determining a first resource scheduling scheme according to the target resource release amount, the resource amount occupied by the disk cold data and the priority corresponding to the disk cold data.
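A minimal sketch of how this module might assemble the first resource scheduling scheme, assuming per-host occupied-resource figures are available and that disk cold data hosts are taken in ascending priority order until the target resource release amount is reached; all names are hypothetical.

```python
def build_first_scheduling_scheme(disk_cold, occupied, target_release):
    """Pick disk cold data hosts in priority order until the released
    resources reach the target release amount.

    disk_cold:      list of (computeruuid, priority) pairs
    occupied:       dict computeruuid -> {'cpu': ..., 'memory': ..., 'storage': ...}
    target_release: dict with the same keys, the amount that must be freed
    """
    released = {k: 0 for k in target_release}
    scheme = []
    for uuid, _priority in sorted(disk_cold, key=lambda item: item[1]):
        scheme.append(uuid)
        for k in released:
            released[k] += occupied.get(uuid, {}).get(k, 0)
        if all(released[k] >= target_release[k] for k in target_release):
            break  # the target water-level adjustment is met
    return scheme, released
```

Taking hosts in ascending priority order mirrors the first-priority and second-priority labels assigned during screening, so the least active services are migrated first.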
Optionally, the disk cold data determining module is specifically configured to:
screening N pieces of delayed running state data with the minimum byte number flowing into a disk from single-day peak running state data, screening out disk lazy data with the minimum byte number flowing out of the disk from the N pieces of delayed running state data, judging whether the disk lazy data meets a preset disk cold data screening rule, if so, determining that the disk lazy data is the disk cold data, and determining that the priority is a first priority, wherein N is a positive integer;
deleting the disk lazy data from the single-day peak running state data, screening N pieces of new delay running state data with the minimum byte number flowing into the disk from the deleted single-day peak running state data, screening out the new disk lazy data with the minimum byte number flowing out of the disk from the N pieces of new delay running state data, judging whether the new disk lazy data meets the preset disk cold data screening rule, if so, determining the new disk lazy data as disk cold data, and determining the priority as a second priority;
Repeating the steps of performing iteration to delete the new disk lazy data obtained last time in the single-day peak operation state data, screening N new delay operation state data with the minimum byte number flowing into the disk from the deleted single-day peak operation state data, screening out the new disk lazy data with the minimum byte number flowing out of the disk from the N new delay operation state data, judging whether the new disk lazy data meets the preset disk cold data screening rule or not, and stopping iteration until the new disk lazy data is obtained and cannot meet the preset disk cold data screening rule, so as to obtain all the disk cold data and the priority corresponding to the disk cold data.
Optionally, after the first scheduling scheme determining module determines the first resource scheduling scheme according to at least one disk cold data and the priority corresponding to the disk cold data, the apparatus further includes:
and the first sending module is used for sending the first resource scheduling scheme to the resource scheduling system if the first resource scheduling scheme can meet the target water level adjustment requirement standard, so that the resource scheduling system carries out service system migration according to the first resource scheduling scheme.
Optionally, the running state data further includes the number of data packets flowing in by the network card and the number of data packets flowing out by the network card;
Correspondingly, after the first scheduling scheme determining module determines the first resource scheduling scheme according to at least one disk cold data and the priority corresponding to the disk cold data, the apparatus further includes:
the deleting module is used for deleting at least one disk cold data in the running state data to obtain network state adjustment data;
the network cold data determining module is used for carrying out network cold data screening processing according to preset network cold data screening rules, the number of data packets flowing in by a network card of network state adjustment data and the number of data packets flowing out by the network card, and obtaining at least one network cold data and a priority corresponding to the network cold data;
and the second scheduling scheme determining module is used for determining a second resource scheduling scheme according to at least one network cold data and the priority corresponding to the network cold data.
Optionally, the network cold data determining module is specifically configured to: screen M pieces of network delay running state data with the minimum number of data packets flowing in by a network card from the network state adjustment data, screen out network lazy data with the minimum number of data packets flowing out by the network card from the M pieces of network delay running state data, judge whether the network lazy data meets a preset network cold data screening rule, and if so, determine that the network lazy data is network cold data and determine that the priority is a first priority, wherein M is a positive integer;
delete the network lazy data from the network state adjustment data, screen M pieces of new network delay running state data with the minimum number of data packets flowing in by the network card from the deleted network state adjustment data, screen out new network lazy data with the minimum number of data packets flowing out by the network card from the M pieces of new network delay running state data, judge whether the new network lazy data meets the preset network cold data screening rule, and if so, determine the new network lazy data as network cold data and determine the priority as a second priority;
repeat the above steps of deleting the new network lazy data obtained last time from the network state adjustment data, screening M pieces of new network delay running state data with the minimum number of data packets flowing in by the network card from the deleted network state adjustment data, screening out the new network lazy data with the minimum number of data packets flowing out by the network card from the M pieces of new network delay running state data, and judging whether the new network lazy data meets the preset network cold data screening rule, and stop the iteration when the newly obtained network lazy data cannot meet the preset network cold data screening rule, so as to obtain all the network cold data and the priorities corresponding to the network cold data.
Optionally, after the second scheduling scheme determining module determines the second resource scheduling scheme according to at least one network cold data and the priority corresponding to the network cold data, the apparatus further includes:
and the second sending module is used for sending the first resource scheduling scheme and the second resource scheduling scheme to the resource scheduling system so that the resource scheduling system can carry out service system migration according to the first resource scheduling scheme and the second resource scheduling scheme.
The embodiment of the application also provides cloud network service quality optimization processing equipment.
Referring to fig. 5, a schematic structural diagram of a cloud network service quality optimization processing device 500 suitable for implementing an embodiment of the disclosure is shown, where the cloud network service quality optimization processing device 500 may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The cloud network service quality optimization processing device shown in fig. 5 is only an example, and should not impose any limitation on the functions and application scope of the embodiments of the present disclosure.
As shown in fig. 5, the cloud network service quality optimization processing device 500 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes according to a program stored in a read-only memory (Read Only Memory, ROM for short) 502 or a program loaded from a storage device 508 into a random access memory (Random Access Memory, RAM for short) 503. In the RAM 503, various programs and data required for the operation of the cloud network service quality optimization processing device 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a liquid crystal display (Liquid Crystal Display, abbreviated as LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the cloud network service quality optimization processing device 500 to communicate wirelessly or by wire with other devices to exchange data. While fig. 5 shows a cloud network service quality optimization processing device 500 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be included in the cloud network service quality optimization processing device; or may exist alone without being assembled into the cloud network service quality optimization processing device.
The computer readable medium carries one or more programs which, when executed by the cloud network service quality optimization processing device, cause the cloud network service quality optimization processing device to execute the method shown in the above embodiment.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN for short) or a wide area network (Wide Area Network, WAN for short), or it may be connected to an external computer (e.g., connected via the internet using an internet service provider).
The embodiment of the application also provides a computer readable storage medium, wherein computer execution instructions are stored in the computer readable storage medium, and the computer execution instructions are used for realizing the cloud network service quality optimization processing method of any one of the above when being executed by a processor.
The embodiment of the application also provides a computer program product, which comprises a computer program, wherein the computer program is used for realizing the cloud network service quality optimization processing method of any one of the above when being executed by a processor.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (9)
1. The cloud network service quality optimization processing method is characterized by comprising the following steps of:
Acquiring running state data of a cloud host where a user service system is located, wherein the running state data comprises the number of bytes flowing into a disk and the number of bytes flowing out of the disk;
determining single-day peak value running state data from the running state data according to the byte number flowing into the disk;
performing disc cold data screening processing according to a preset disc cold data screening rule, the byte number of the single-day peak operating state data flowing into a disc and the byte number of the single-day peak operating state data flowing out of the disc, and obtaining at least one disc cold data and a priority corresponding to the disc cold data;
determining a first resource scheduling scheme according to the at least one disk cold data and the priority corresponding to the disk cold data, so that a resource scheduling system performs service system migration according to the first resource scheduling scheme;
and performing disc cold data screening processing according to a preset disc cold data screening rule, the byte number of the single-day peak operating state data flowing into the disc and the byte number of the single-day peak operating state data flowing out of the disc to obtain at least one disc cold data and a priority corresponding to the disc cold data, wherein the method comprises the following steps:
screening N pieces of delayed running state data with the minimum byte number flowing into a disk from single-day peak running state data, screening out disk lazy data with the minimum byte number flowing out of the disk from the N pieces of delayed running state data, judging whether the disk lazy data meets a preset disk cold data screening rule, if so, determining that the disk lazy data is the disk cold data, and determining that the priority is a first priority, wherein N is a positive integer;
Deleting the disk lazy data from the single-day peak running state data, screening N pieces of new delay running state data with the minimum byte number flowing into the disk from the deleted single-day peak running state data, screening out new disk lazy data with the minimum byte number flowing out of the disk from the N pieces of new delay running state data, judging whether the new disk lazy data meets a preset disk cold data screening rule, if so, determining the new disk lazy data as disk cold data, and determining the priority as a second priority;
repeating the steps of performing iteration to delete the new disk lazy data obtained last time in the single-day peak operation state data, screening N new delay operation state data with the minimum byte number flowing into the disk from the deleted single-day peak operation state data, screening out the new disk lazy data with the minimum byte number flowing out of the disk from the N new delay operation state data, judging whether the new disk lazy data meets the preset disk cold data screening rule or not, and stopping iteration until the new disk lazy data is obtained and cannot meet the preset disk cold data screening rule, so as to obtain all the disk cold data and the priority corresponding to the disk cold data.
2. The method of claim 1, wherein the determining the first resource scheduling scheme according to the at least one disk cold data and the priority corresponding to the disk cold data comprises:
calculating the resource quantity occupied by the disk cold data;
and determining a first resource scheduling scheme according to the target resource release amount, the resource amount occupied by the disk cold data and the priority corresponding to the disk cold data.
3. The method according to claim 1 or 2, further comprising, after said determining a first resource scheduling scheme according to said at least one disk cold data and a priority corresponding to said disk cold data:
and if the first resource scheduling scheme can meet the target water level adjustment requirement standard, the first resource scheduling scheme is sent to a resource scheduling system, so that the resource scheduling system performs service system migration according to the first resource scheduling scheme.
4. The method according to claim 1 or 2, wherein the operation status data further includes a number of packets flowing in by the network card and a number of packets flowing out by the network card;
correspondingly, after determining the first resource scheduling scheme according to the at least one disk cold data and the priority corresponding to the disk cold data, the method further includes:
Deleting the at least one disk cold data from the running state data to obtain network state adjustment data;
according to a preset network cold data screening rule, the number of data packets flowing in by a network card of the network state adjustment data and the number of data packets flowing out by the network card, carrying out network cold data screening processing to obtain at least one network cold data and a priority corresponding to the network cold data;
and determining a second resource scheduling scheme according to the at least one network cold data and the priority corresponding to the network cold data.
5. The method of claim 4, wherein the performing network cold data screening according to the preset network cold data screening rule, the number of data packets flowing in by the network card of the network state adjustment data and the number of data packets flowing out by the network card to obtain at least one network cold data and the priority corresponding to the network cold data comprises:
screening M pieces of network delay running state data with the minimum number of data packets flowing in by a network card from the network state adjustment data, screening out network lazy data with the minimum number of data packets flowing out by the network card from the M pieces of network delay running state data, judging whether the network lazy data meets a preset network cold data screening rule, if so, determining that the network lazy data is network cold data, and determining that the priority is a first priority, wherein M is a positive integer;
Deleting the network lazy data from the network state adjustment data, screening M new network delay running state data with the minimum data packet number flowing in by a network card from the deleted network state adjustment data, screening out new network lazy data with the minimum data packet number flowing out by the network card from the M new network delay running state data, judging whether the new network lazy data meets a preset network cold data screening rule, if so, determining the new network lazy data as network cold data, and determining the priority as a second priority;
repeating the steps of executing the deletion of the new network lazy data obtained last time in the network state adjustment data, screening M new network delay running state data with the minimum data packet number flowing in by the network card from the deleted network state adjustment data, screening the new network lazy data with the minimum data packet number flowing out by the network card from the M new network delay running state data, judging whether the new network lazy data meets the preset network cold data screening rule or not, and stopping iteration until the new network lazy data is obtained and cannot meet the preset network cold data screening rule, so as to obtain all the network cold data and the priority corresponding to the network cold data.
6. The method of claim 4, further comprising, after said determining a second resource scheduling scheme based on said at least one network cold data and a priority corresponding to said network cold data:
and sending the first resource scheduling scheme and the second resource scheduling scheme to a resource scheduling system so that the resource scheduling system performs service system migration according to the first resource scheduling scheme and the second resource scheduling scheme.
7. The cloud network service quality optimizing processing device is characterized by comprising:
an acquisition module, used for acquiring running state data of a cloud host where a user service system is located, wherein the running state data comprises the number of bytes flowing into a disk and the number of bytes flowing out of the disk;
the peak value determining module is used for determining single-day peak value running state data from the running state data according to the byte number flowing into the magnetic disk;
the disk cold data determining module is used for performing disk cold data screening processing according to preset disk cold data screening rules, the byte number of the single-day peak operating state data flowing into the disk and the byte number of the single-day peak operating state data flowing out of the disk to obtain at least one disk cold data and a priority corresponding to the disk cold data;
The first scheduling scheme determining module is used for determining a first resource scheduling scheme according to the at least one disk cold data and the priority corresponding to the disk cold data;
the disk cold data determining module is specifically configured to:
screening N pieces of delayed running state data with the minimum byte number flowing into a disk from single-day peak running state data, screening out disk lazy data with the minimum byte number flowing out of the disk from the N pieces of delayed running state data, judging whether the disk lazy data meets a preset disk cold data screening rule, if so, determining that the disk lazy data is the disk cold data, and determining that the priority is a first priority, wherein N is a positive integer;
deleting the disk lazy data from the single-day peak running state data, screening N pieces of new delay running state data with the minimum byte number flowing into the disk from the deleted single-day peak running state data, screening out new disk lazy data with the minimum byte number flowing out of the disk from the N pieces of new delay running state data, judging whether the new disk lazy data meets a preset disk cold data screening rule, if so, determining the new disk lazy data as disk cold data, and determining the priority as a second priority;
Repeating the steps of performing iteration to delete the new disk lazy data obtained last time in the single-day peak operation state data, screening N new delay operation state data with the minimum byte number flowing into the disk from the deleted single-day peak operation state data, screening out the new disk lazy data with the minimum byte number flowing out of the disk from the N new delay operation state data, judging whether the new disk lazy data meets the preset disk cold data screening rule or not, and stopping iteration until the new disk lazy data is obtained and cannot meet the preset disk cold data screening rule, so as to obtain all the disk cold data and the priority corresponding to the disk cold data.
8. The cloud network service quality optimizing processing device is characterized by comprising the following components:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 6.
9. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of claims 1 to 6.