CN113094115A - Deployment strategy determining method, system and storage medium - Google Patents

Deployment strategy determining method, system and storage medium

Info

Publication number
CN113094115A
Authority
CN
China
Prior art keywords
environment
parameter
data
processor
data processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110336678.7A
Other languages
Chinese (zh)
Other versions
CN113094115B (en)
Inventor
袁立平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202110336678.7A priority Critical patent/CN113094115B/en
Publication of CN113094115A publication Critical patent/CN113094115A/en
Application granted granted Critical
Publication of CN113094115B publication Critical patent/CN113094115B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • G06F9/4451User profiles; Roaming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/103Workflow collaboration or project management
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Operations Research (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a deployment strategy determination method, system and storage medium, wherein the method comprises the following steps: acquiring a first parameter, the first parameter representing the running state of service data in a first environment; detecting the running state of a second environment based on the first parameter to obtain a second parameter, the second environment comprising a running environment in which the service data is not deployed, and the second parameter representing the data processing capability of the second environment; and determining a deployment strategy of the service data based on the first parameter and the second parameter.

Description

Deployment strategy determining method, system and storage medium
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a deployment policy determining method, a deployment policy determining system, and a computer-readable storage medium.
Background
When deploying service data, a user can only select a target platform according to the configuration parameters that the new platform itself provides. After the user selects a target platform and migrates the service data, the running state of the service data on the target platform often turns out to be lower than expected. This is mainly because the user does not objectively evaluate the configuration parameters, operating state, and the like of the target platform before migrating the service data.
Disclosure of Invention
Based on the above problems, embodiments of the present application provide a deployment policy determination method, a deployment policy determination system, and a computer-readable storage medium. By the deployment strategy determining method provided by the embodiments of the application, the data processing capability of any cloud environment in which the service data has not yet been deployed can be fully and objectively evaluated, and the deployment strategy of the service data can be determined according to the evaluation result, so that service data deployment becomes more targeted, more efficient, and more effective.
The technical scheme provided by the embodiment of the application is as follows:
the embodiment of the application provides a deployment strategy determination method, which comprises the following steps:
Acquiring a first parameter; the first parameter represents the running state of the service data in the first environment;
detecting the running state of a second environment based on the first parameter to obtain a second parameter; the second environment comprises a running environment where the business data is not deployed; the second parameter represents data processing capability of the second environment;
and determining a deployment strategy of the business data based on the first parameter and the second parameter.
In some embodiments, the determining a deployment policy of the business data based on the first parameter and the second parameter includes:
determining a target parameter based on the first parameter; the target parameters comprise at least one type of target operation state of the service data;
and determining the deployment strategy based on the matching degree of the target parameter and the second parameter.
In some embodiments, the first parameter comprises a data processing capability parameter of a processor of the first environment; the detecting an operating state of a second environment based on the first parameter includes:
if the processor architecture of the second environment is different from the processor architecture of the first environment, acquiring a data processing capacity parameter of a processor of the second environment;
determining first data based on a data processing capability parameter of a processor of the first environment and a data processing capability parameter of a processor of the second environment; wherein the first data is the ratio of the data processing capability parameter of the processor of the first environment to the data processing capability parameter of the processor of the second environment;
detecting an operational state of the second environment based on the first data.
In some embodiments, the determining first data based on the data processing capability parameter of the processor of the first environment and the data processing capability parameter of the processor of the second environment comprises:
if the number of the processors in the first environment is multiple and the number of the processors in the first environment is equal to the number of the processors in the second environment, acquiring a data processing capability parameter of each processor in the processors in the first environment and a data processing capability parameter of each processor in the processors in the second environment;
determining the first data based on data processing capability parameters of respective ones of the processors of the first environment and data processing capability parameters of respective ones of the processors of the second environment.
In some embodiments, the detecting the operating state of the second environment based on the first data includes:
if the number of the processors in the first environment is multiple and the number of the processors in the first environment is greater than the number of the processors in the second environment, load balancing is performed on each processor in the processors of the second environment based on the data processing capability parameter of each processor in the processors of the first environment, the data processing capability parameter of each processor in the processors of the second environment and the first data, to obtain second data; wherein the second data is a result of load balancing of each processor among the processors of the second environment;
detecting an operational state of the second environment based on the second data.
In some embodiments, the method further comprises:
if the number of the processors of the first environment is one, acquiring a third environment; wherein the configuration information of the processor of the third environment matches the configuration information of the processor of the first environment;
and detecting the running state of the third environment, and acquiring the data processing capacity parameter of the processor of the first environment.
In some embodiments, the first parameter further comprises a memory read-write speed parameter of the first environment; the detecting an operating state of a second environment based on the first parameter includes:
and detecting the data read-write delay state of the memory in the second environment through a first pressure test tool based on the memory read-write speed parameter in the first environment.
In some embodiments, the first parameter further comprises a network data read-write parameter of the first environment; the detecting an operating state of a second environment based on the first parameter includes:
and detecting the packet loss rate of the network data of the second environment through a second pressure testing tool based on the network read-write parameters of the first environment.
An embodiment of the present application further provides a deployment policy determining system, where the system includes: a processor, a memory, and a communication bus; wherein:
the communication bus is used for realizing data transmission between the processor and the memory;
the memory has stored therein a computer program; the computer program, when executed by the processor, is capable of implementing a deployment policy determination method as described in any of the foregoing embodiments.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the deployment policy determination method according to any one of the foregoing embodiments can be implemented.
As can be seen from the above, the deployment policy determining method provided in the embodiment of the present application detects, based on the first parameter indicating the operating state of the service data in the first environment, the data processing capability of the second environment where the service data is not deployed, and after obtaining the second parameter, can determine the deployment policy of the service data according to the first parameter and the second parameter. The first parameter can represent the current operation state of the business data, and the second parameter can objectively represent the data processing capacity of the second environment, so that the deployment strategy of the business data is determined according to the first parameter and the second parameter, the actual requirement of the business data deployment can be reflected, the actual data processing capacity of the second environment can be fully reflected, the pertinence of the business data deployment is enhanced, and the business data deployment efficiency is improved.
Drawings
Fig. 1 is a schematic flowchart of a first deployment policy determining method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a second deployment policy determination method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a deployment policy determination system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The present application relates to the field of information technology, and in particular, to a deployment policy determining method, a deployment policy determining system, and a computer-readable storage medium.
As the cloud era comes, more and more service data are deployed in the cloud platform, and as the service types implemented by the service data are changed or expanded, more and more service data may be respectively deployed in different cloud platforms, so that the robustness and stability of the cloud platform will directly affect the operating state of the service data.
In practical applications, after a customer deploys local service data to a cloud platform, that is, Physical to Virtual (P2V), or migrates service data already deployed on a cloud platform to a new cloud platform (V2V), there is a high possibility that the operating state of the service data in the cloud platform or the new cloud platform is far lower than what the customer expects or what the actual configuration of the cloud platform should support.
On the one hand, this is because the software and hardware configuration of the cloud platform or the new cloud platform is not consistent with the actual operation requirements of the service data; on the other hand, and more importantly, when selecting the target cloud platform the customer has only the configuration parameters provided by the candidate cloud platform as a basis, and therefore cannot objectively evaluate the actual data processing capability of the cloud platform or the new cloud platform.
Based on the above problems, embodiments of the present application provide a deployment policy determining method, by which a user can, when needing to deploy service data, objectively detect an operating state of a first environment where the service data is currently located to obtain a first parameter, then detect, according to the first parameter, a data processing capability of any cloud platform, i.e., a second environment, where the service data is to be deployed in a targeted manner to obtain a second parameter, and then determine how to deploy the service data according to the first parameter and the second parameter, thereby providing a comprehensive and objective reference basis for the deployment of the service data, reducing a probability that the operating state of the service data in a new cloud platform is lower than an expected state, enhancing a pertinence of the deployment of the service data, and improving the efficiency of the deployment of the service data.
It should be noted that the deployment policy determining method provided in the embodiment of the present Application may be implemented by a Processor of a deployment policy determining system, where the Processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor.
Fig. 1 is a schematic flowchart of a first deployment policy determining method according to an embodiment of the present application. As shown in fig. 1, the deployment policy determination method may include steps 101 to 103:
step 101, obtaining a first parameter.
The first parameter represents the operation state of the service data in the first environment.
In one embodiment, the first environment may include a local environment and/or a network environment.
In one embodiment, the local environment may include a locally configured computer hardware environment, or a software environment configured in a local computer hardware environment, such as a virtual machine environment, an operating system environment, or the like.
In one embodiment, the network environment may include a remotely located server. The network environment may include, for example, a distributed hardware environment or a software environment.
In one embodiment, the network environment may include a cloud platform.
In one embodiment, a first portion of the functionality of the traffic data may be implemented in a local environment and a second portion of the functionality may be implemented in a network environment.
In one embodiment, the service data may include a series of data related to service trigger, service execution process, service execution result, service execution status, and the like.
Accordingly, the operation state of the service data in the first environment may include a state and/or a result of the at least one data in the operation process of being queried, read, copied, deleted, added, and the like in the first environment.
In one embodiment, the business data may include software programs needed to perform business operations. Illustratively, the software programs, which may include user applications, may also include system applications; illustratively, System applications may include an Operating System (OS) on which execution of services depends; the OS may illustratively include an OS of a hardware environment, and may also include an OS of a virtual machine created in the hardware environment.
Accordingly, the operation state of the service data in the first environment may include an operation state of the software program in the first environment, and exemplarily, the operation state may include a normal operation state and an abnormal operation state.
Illustratively, the operation state may include whether the service data stalls during operation, how frequently stalls occur, and the like; the operation state may further include the type and quantity of hardware resources and the type and quantity of software resources occupied by the service data when it operates in the first environment. Illustratively, the hardware resources may include a CPU, a disk, a sound card, a video card, a communication interface, and the like; software resources may include processes, threads, and the like.
In one embodiment, the first parameter may be a result of performing statistical averaging on an operation state of the service data in the first environment; for example, the first parameter may be obtained by statistically averaging the operation states of the traffic data for a first time period.
In one embodiment, the first parameter may be obtained by statistically averaging abnormal operation states of the service data in the first environment. Illustratively, the first parameter may be obtained by counting the anomalies occurring in the service data operation process within the second time period.
In one embodiment, the first parameter may further include a duration of the service data in the first environment.
And 102, detecting the running state of the second environment based on the first parameter to obtain a second parameter.
The second environment comprises a running environment in which the service data is not deployed; the second parameter represents the data processing capability of the second environment.
In one embodiment, the second environment may be a network environment or a local environment.
In one embodiment, the second environment may be a cloud platform.
In one embodiment, the second environment may be a virtual machine created in a cloud platform environment or a local environment.
In one embodiment, the number of the second environments may be plural, and for example, the hardware configuration and/or the software configuration of each second environment may be different.
In one embodiment, the second environment may be an environment where no data is deployed.
In one embodiment, the second environment may be an environment where the service data running in the first environment is not deployed but other service data is deployed.
In one embodiment, the detecting the operating state of the second environment based on the first parameter may be implemented by any one of the following methods:
and carrying out omnibearing detection on the running state of the second environment to obtain a first detection result, and then obtaining a second parameter according to the matching degree of the first parameter and the detection type in the first detection result.
And determining a detection strategy for detecting the running state of the second environment based on the first parameter, and detecting the running state of the second environment according to the detection strategy to obtain a second parameter.
And constructing an environment for detecting the running state of the second environment based on the first parameter, and detecting the running state of the second environment in the environment to obtain a second parameter. The environment for detecting the operating state of the second environment may include a software test environment, a hardware test environment, and the like. The software testing environment can comprise preparation of a software testing tool, construction of a pressure testing condition and the like; the hardware test environment may include an environment for monitoring states of resources such as a communication link, a processor, and a hard disk.
In one embodiment, the second parameter may include a CPU data processing capability of the second environment. Illustratively, the CPU data processing capability may include at least one of CPU main frequency, number of cores, CPU architecture, and the like.
In one embodiment, the second parameter may be a peak data processing capability of the second environment.
In one embodiment, the second parameter may be an average data processing capacity of the second environment for a third length of time.
In one embodiment, when the number of the second environments is plural, the detection of the operation state of the second environment may be performed in parallel or in series.
In an implementation manner, in a case that the number of the second environments is multiple and the configuration parameters of each second environment are different, the method provided by the embodiment of the present application may implement detection of the data processing capability of any second environment.
And 103, determining a deployment strategy of the service data based on the first parameter and the second parameter.
In an embodiment, the deployment policy of the service data may be a policy of whether to deploy the service data to the second environment.
In one embodiment, the deployment policy of the business data may include a policy of how to deploy the business data. For example, all the service data is deployed to the second environment; or deploying the first part of the business data to the second environment, and still deploying the second part of the business data to the first environment.
In one embodiment, in the case that the number of the second environments is multiple, the deployment policy of the service data may include a policy of how to deploy all or part of the service data in the multiple second environments.
In an embodiment, determining the deployment policy of the service data based on the first parameter and the second parameter may be implemented by any one of the following manners:
In the case that the second parameter matches the first parameter and the difference between the value corresponding to the second parameter and the value corresponding to the first parameter is greater than a first threshold, it may be determined that most or all of the service data can be migrated to the second environment. The first threshold may be determined according to an expected operation state of the service data.
And under the condition that the second parameter is not matched with the first parameter, determining that the deployment strategy of the service data is as follows: it is not recommended to migrate the service data into the second environment.
Under the condition that the first parameter represents the abnormal operation state of the first environment, and the second parameter indicates that the configuration state of the second environment can overcome the problem corresponding to the abnormal operation state, the deployment strategy of the service data can be determined as follows: it is proposed to migrate all or part of the traffic data into the second environment.
Under the condition that the number of the second environments is multiple, sequencing the multiple second parameters according to at least one specified detection dimension to obtain a sequencing result; then dividing the service data proportionally according to the sequencing result, and deploying the divided portions in the second environments corresponding to the sequencing result (a sketch of this proportional division is given after this list).
Under the condition that the number of the second environments is multiple and the dominant data processing capacity dimensionality of each second environment is different, the running state of the business data can be analyzed according to the dominant data processing capacity dimensionality, and the business data part corresponding to the dominant data processing capacity dimensionality is migrated to the corresponding second environment.
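Illustratively, the proportional division according to the sequencing result may be sketched as follows (a minimal, non-limiting Python sketch; the environment names, the "score" field and the helper split_by_ranking are assumptions of this illustration):

```python
# Minimal sketch: rank candidate second environments by a detected capability
# score (the second parameter projected onto one detection dimension) and
# divide the service data across them in proportion to that score.

from typing import Dict, List


def split_by_ranking(second_envs: List[Dict], total_units: int) -> Dict[str, int]:
    """Return how many units of service data to deploy to each second environment."""
    ranked = sorted(second_envs, key=lambda env: env["score"], reverse=True)
    total_score = sum(env["score"] for env in ranked) or 1.0
    plan, assigned = {}, 0
    for env in ranked[:-1]:
        units = int(total_units * env["score"] / total_score)
        plan[env["name"]] = units
        assigned += units
    plan[ranked[-1]["name"]] = total_units - assigned  # remainder to the last one
    return plan


if __name__ == "__main__":
    envs = [{"name": "cloud-A", "score": 3.0}, {"name": "cloud-B", "score": 1.0}]
    print(split_by_ranking(envs, 100))  # {'cloud-A': 75, 'cloud-B': 25}
```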
In an implementation manner, in a case that the number of the second environments is multiple and the configuration parameters of each second environment are different, the method provided by the embodiment of the present application may determine a deployment policy for any second environment, thereby providing conditions for flexible deployment of the service data.
As can be seen from the above, the deployment policy determining method provided in the embodiment of the present application detects, based on the first parameter indicating the operating state of the service data in the first environment, the operating environment where the service data is not deployed, and after obtaining the second parameter, can determine the deployment policy of the service data according to the first parameter and the second parameter. The first parameter can represent the current operation state of the business data, and the second parameter can objectively represent the data processing capacity of the second environment, so that the deployment strategy of the business data is determined according to the first parameter and the second parameter, the actual requirement of the business data deployment can be reflected, the actual data processing capacity of the second environment can be fully reflected, the pertinence of the business data deployment is improved, and the business data deployment efficiency is improved.
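Illustratively, the overall flow of steps 101 to 103 may be sketched as follows (a minimal Python sketch; the Parameter structure, the placeholder values and the helper names are assumptions of this illustration rather than a definitive implementation):

```python
# Minimal sketch of the three steps: acquire the first parameter from the
# environment currently running the service data, detect the candidate second
# environment based on it, and derive a deployment policy from both parameters.

from dataclasses import dataclass


@dataclass
class Parameter:
    cpu_score: float        # data processing capability of the processor(s)
    disk_latency_ms: float  # memory/disk read-write delay
    packet_loss_pct: float  # network data packet loss rate


def acquire_first_parameter() -> Parameter:
    # Step 101 placeholder: in practice, monitor the first environment.
    return Parameter(cpu_score=100.0, disk_latency_ms=2.0, packet_loss_pct=0.1)


def detect_second_environment(first: Parameter) -> Parameter:
    # Step 102 placeholder: in practice, drive stress tools (fio, iperf,
    # sysbench, ...) parameterized by `first` against the second environment.
    return Parameter(cpu_score=120.0, disk_latency_ms=1.5, packet_loss_pct=0.05)


def determine_policy(first: Parameter, second: Parameter) -> str:
    # Step 103: a naive comparison of the two parameters.
    better = (second.cpu_score >= first.cpu_score
              and second.disk_latency_ms <= first.disk_latency_ms
              and second.packet_loss_pct <= first.packet_loss_pct)
    return ("migrate all or part of the service data" if better
            else "do not migrate the service data")


if __name__ == "__main__":
    first = acquire_first_parameter()
    print(determine_policy(first, detect_second_environment(first)))
```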
Based on the foregoing embodiments, the present application embodiment further provides a second deployment policy determining method, and fig. 2 is a schematic flow chart of the second deployment policy determining method provided in the present application embodiment. As shown in fig. 2, the method may include steps 201 to 204:
step 201, obtaining a first parameter.
The first parameter represents the operation state of the service data in the first environment.
Step 202, detecting the running state of the second environment based on the first parameter to obtain a second parameter.
The second environment comprises a running environment in which the service data is not deployed; the second parameter represents the data processing capability of the second environment.
In the embodiment of the present application, the first parameter includes a memory read-write speed parameter of the first environment.
In one embodiment, the first parameter may further include a size of a memory of the first environment.
In one embodiment, the Memory of the first environment may include at least one of a Read Only Memory (ROM), a Random Access Memory (RAM), a buffer Memory (Cache), and the like.
In one embodiment, the storage of the first environment may be a memory.
In one embodiment, the storage of the first environment may be a Hard Disk Drive (HDD) or a magnetic Disk.
In one embodiment, the memory read/write speed parameter of the first environment may include at least one of a read/write speed class of a memory card, the number of input/output operations per second (IOPS) of a disk, and the like.
In one embodiment, the memory read-write speed parameter of the first environment may further include a read-write delay parameter of the memory of the first environment.
In an embodiment, the memory read-write speed parameter of the first environment may be obtained by monitoring the disk utilization rate with a tool such as iostat in the first environment and, after monitoring for a period of time, for example ten seconds, calculating the IOPS, the average Input/Output (IO) block size, and the average IO read-write delay of the disk during that period.
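Illustratively, such monitoring may be sketched as follows (a minimal Python sketch for a Linux host; instead of parsing iostat output it samples /proc/diskstats, which holds the same counters iostat reports, and the device name and sampling window are illustrative):

```python
# Minimal sketch: sample the block-device counters in /proc/diskstats twice
# and derive the IOPS, the average IO size, and the average IO latency of a
# disk over the sampling window.

import time


def read_counters(device: str):
    with open("/proc/diskstats") as stats:
        for line in stats:
            fields = line.split()
            if fields[2] == device:
                reads, sectors_r, ms_r = int(fields[3]), int(fields[5]), int(fields[6])
                writes, sectors_w, ms_w = int(fields[7]), int(fields[9]), int(fields[10])
                return reads + writes, (sectors_r + sectors_w) * 512, ms_r + ms_w
    raise ValueError(f"device {device} not found in /proc/diskstats")


def sample_disk(device: str = "sda", seconds: int = 10) -> dict:
    ops0, bytes0, ms0 = read_counters(device)
    time.sleep(seconds)
    ops1, bytes1, ms1 = read_counters(device)
    delta_ops = max(ops1 - ops0, 1)
    return {
        "iops": (ops1 - ops0) / seconds,
        "avg_io_bytes": (bytes1 - bytes0) / delta_ops,
        "avg_latency_ms": (ms1 - ms0) / delta_ops,
    }


if __name__ == "__main__":
    print(sample_disk("sda", 10))
```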
In this embodiment of the present application, in the case that the first parameter includes a memory read-write speed parameter of the first environment, step 202 may be implemented by:
and detecting the data read-write delay state of the memory in the second environment through the first pressure test tool based on the read-write speed parameter of the memory in the first environment.
In one embodiment, the first stress testing tool may be a software stress testing tool capable of implementing automated testing. Illustratively, the first pressure test tool may be a FIO.
In one embodiment, in addition to monitoring the data read/write delay state of the memory in the second environment, the data read/write speed of the memory in the second environment may also be monitored.
In one embodiment, the data read/write delay status may include a specific time length of the data read/write delay.
In an embodiment, the data read/write delay status may include a section description of the data read/write delay, such as the data read/write delay is significant, the data read/write delay is not significant, and the like.
In one embodiment, the data read/write delay status may be a result of statistically averaging the data read/write delay of the second environment for a third time period.
In one embodiment, the detection of the data read/write delay state of the memory in the second environment by the first stress test tool based on the memory read/write speed parameter in the first environment may be implemented by any one of the following manners:
determining a first delay threshold value based on the memory read-write speed parameter of the first environment; then establishing data to be read and written for the memory of the second environment, and detecting the read-write delay of the memory of the second environment through the first pressure test tool and the data to be read and written, to obtain a first delay. Correspondingly, if the first delay is greater than or equal to the first delay threshold, it indicates that the data delay state of the memory of the second environment cannot meet the operation requirement of the service data; on the contrary, if the first delay is smaller than the first delay threshold, it indicates that the data delay state of the memory of the second environment can meet the operation requirement of the service data.
Determining a read-write threshold of the first environment for at least one type of data based on the memory read-write speed parameter of the first environment, then constructing data to be read and written of the corresponding type in the second environment, and detecting the memory delay of the second environment through the first pressure test tool and the data to be read and written of the corresponding type, to obtain the delay corresponding to each type of data, recorded as a second delay. Correspondingly, if the second delay corresponding to any type of data is greater than or equal to the read-write threshold of the corresponding type, it indicates that the second environment cannot meet the operation requirement of the service data; on the contrary, if the second delay corresponding to each type of data is smaller than the read-write threshold of the corresponding type, it indicates that the data delay state of the memory of the second environment can meet the operation requirement of the service data.
In one embodiment, under the condition that the second environment is a newly-built virtual machine, in the idle state of the second environment, a tool such as an FIO is used to simulate the read-write operation of the corresponding type of data in the first environment, and the read-write process is detected, so that the data read-write delay state of the memory in the second environment can be obtained. For example, the detection of the read-write delay state of the memory in the second environment may be performed in a long time, for example, one month, so that the detection operation may cover the read-write delay data corresponding to the second environment in various scenarios, thereby providing an accurate objective basis for determining the read-write delay state of the memory for the deployment policy.
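Illustratively, the FIO-based detection may be sketched as follows (a minimal Python sketch assuming fio is installed in the second environment; the job options, the test file path, and the JSON field names, which can differ between fio versions, are assumptions of this illustration):

```python
# Minimal sketch: run a mixed random read/write fio job against a test file,
# take the mean completion latency from the JSON report, and compare it with
# a threshold derived from the first environment's read-write speed parameter.

import json
import subprocess


def mean_latency_ms(test_file: str = "/tmp/fio-probe.dat", runtime_s: int = 60) -> float:
    cmd = ["fio", "--name=probe", f"--filename={test_file}",
           "--rw=randrw", "--bs=4k", "--size=1G", "--direct=1",
           "--time_based", f"--runtime={runtime_s}", "--output-format=json"]
    report = json.loads(subprocess.run(cmd, capture_output=True,
                                       check=True, text=True).stdout)
    job = report["jobs"][0]
    # lat_ns holds total latency statistics in recent fio releases.
    read_ns = job["read"]["lat_ns"]["mean"]
    write_ns = job["write"]["lat_ns"]["mean"]
    return (read_ns + write_ns) / 2 / 1e6


def meets_requirement(first_delay_threshold_ms: float) -> bool:
    # The measured delay must stay below the threshold derived from the first
    # environment for the second environment to satisfy the service data.
    return mean_latency_ms() < first_delay_threshold_ms


if __name__ == "__main__":
    print("second environment acceptable:", meets_requirement(5.0))
```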
As can be seen from the above, in the embodiment of the present application, after the read-write speed parameter of the memory in the first environment is obtained, the read-write delay state of the memory in the second environment can be detected based on the read-write speed parameter of the memory in the first environment and the first pressure testing tool, so that under the long-time multidimensional pressure test of the first pressure testing tool, the read-write delay state of the memory in the second environment, which is obtained by the detection in the embodiment of the present application, can sufficiently and objectively reflect the data read-write state of the memory in the second environment, thereby providing an objective basis for the formulation of the service data deployment policy.
In this embodiment, the first parameter includes a network read-write parameter of the first environment.
Accordingly, step 202 may be implemented by:
and detecting the packet loss rate of the network data in the second environment through a second pressure testing tool based on the network read-write parameters in the first environment.
In one embodiment, the network read-write parameters of the first environment may include at least one of a network data read-write speed parameter and a network data request response speed parameter of the first environment.
In one embodiment, the network read-write parameters of the first environment may include transmission speed parameters of the first environment for a specified type of network data.
In an embodiment, the network read-write parameter of the first environment may include a packet loss rate of network data of the first environment.
In one embodiment, the network read-write parameters of the first environment may be obtained by:
in the service data operation process of the first environment, the utilization rate of network resources of the first environment is detected by utilizing tcpstat, the detection operation is controlled to be continued for a period of time, for example, 10 minutes, then at least one parameter, such as Packet Per Second (PPS), the average size of a network data Packet, the type of a network protocol, the number of connections corresponding to each type of the network protocol, and the like, in the period of time is calculated, and then the first parameter is obtained through the parameter. The network Protocol type may include a Transmission Control Protocol (TCP) and a User Datagram Protocol (UDP).
In one embodiment, the second pressure testing tool may be different from the first pressure testing tool.
In one embodiment, the second pressure testing tool may be an iperf.
In one embodiment, under the condition that the second environment is a newly created virtual machine, the virtual machine is switched to an idle state, then a second pressure test tool is used to simulate the TCP and UDP messages of the first environment with the same PPS and message data volume configured, and the pressure test process is started; during the pressure test, if packet loss is detected, the packet loss time and cause can be recorded and the number of lost packets counted; after the test process is finished, the packet loss rate can be obtained from the number of lost packets.
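Illustratively, such a packet loss test may be sketched as follows (a minimal Python sketch assuming an iperf3 server is already listening in the second environment; the server address, bandwidth and duration stand in for the PPS and message data volume observed in the first environment):

```python
# Minimal sketch: send a UDP load to an iperf3 server in the second
# environment and read the loss percentage from the JSON report.

import json
import subprocess


def udp_loss_percent(server: str, bandwidth: str = "100M", duration_s: int = 60) -> float:
    cmd = ["iperf3", "-c", server, "-u", "-b", bandwidth, "-t", str(duration_s), "--json"]
    report = json.loads(subprocess.run(cmd, capture_output=True,
                                       check=True, text=True).stdout)
    return report["end"]["sum"]["lost_percent"]


if __name__ == "__main__":
    print(f"packet loss: {udp_loss_percent('second-env.example.com'):.2f}%")
```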
As can be seen from the above, in the embodiment of the present application, the network data read-write parameter of the service data in the first environment can be obtained, and then the packet loss rate of the network data in the second environment is detected by the second pressure testing tool based on the read-write parameter. Therefore, the deployment strategy determining method provided by the embodiment of the application can determine the data processing state of the network dimension in the environment to be deployed, namely the second environment, according to the data processing state of the network dimension of the service data in the first environment, so that the matching degree of the data processing state of the network dimension in the second environment and the network requirement of the service data is higher, and an objective foundation is laid for the service deployment of the network dimension of the service data.
In an embodiment of the application, the first parameter comprises a data processing capability parameter of a processor of said first environment.
In one embodiment, the first parameter may include a number of processors of the first environment.
In one embodiment, the first parameter may include an architecture of a processor of the first environment.
In one embodiment, the first parameter may include a master frequency, a run-time duration, a cache speed, a multimedia instruction set, etc. of a processor of the first environment.
In one embodiment, if the processors of the first environment are multi-core processors, the first parameter may include a processing capability parameter of each single-core processor; illustratively, a load peak parameter for each single-core processor may also be included.
In one embodiment, when monitoring the processing capability parameters of the first single-core processor, the first single-core processor needs to be isolated from other single-core processors by using a relevant tool, and then the first CPU score of the first single-core processor under a single thread is tested by using a third stress testing tool. Illustratively, the first CPU score may be obtained by averaging a plurality of tests; illustratively, the third pressure testing tool may be sysbench; the associated tool may be a cset tool.
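Illustratively, the single-core score measurement may be sketched as follows (a minimal Python sketch that pins sysbench to one core with taskset rather than the cset shielding mentioned above; the CPU index, run time and repetition count are assumptions of this illustration):

```python
# Minimal sketch: pin a single-threaded sysbench CPU benchmark to one core
# with taskset, repeat it a few times, and average the "events per second"
# score reported by sysbench as the single-core data processing capability.

import re
import statistics
import subprocess


def single_core_score(cpu: int = 3, seconds: int = 10, repeats: int = 3) -> float:
    scores = []
    for _ in range(repeats):
        out = subprocess.run(
            ["taskset", "-c", str(cpu),
             "sysbench", "cpu", "--threads=1", f"--time={seconds}", "run"],
            capture_output=True, check=True, text=True).stdout
        match = re.search(r"events per second:\s*([\d.]+)", out)
        if match is None:
            raise RuntimeError("could not parse sysbench output")
        scores.append(float(match.group(1)))
    return statistics.mean(scores)


if __name__ == "__main__":
    print("first CPU score:", single_core_score())
```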
In the embodiment of the present application, if the processor of the first environment is a single-core processor, the data processing capability parameter of the processor of the first environment may be acquired through steps a1 to a 2:
step a1, if the number of processors of the first context is one, acquiring a third context.
And the configuration information of the processor of the third environment is matched with the configuration information of the processor of the first environment.
In practical applications, before completely migrating the service data to other environments, the continuous operation state of the service data still needs to be maintained, so that, when the number of processors in the first environment is one, the pressure test cannot be directly performed in the first environment to obtain the data processing capability parameter of the processor in the first environment; the data processing capability parameter of the processor of the first environment may be determined by monitoring the operational state of the processor of the third environment, which is configured exactly as the first environment.
In one embodiment, the configuration information of the processor of the third environment may include an architecture, a main frequency, a number, an instruction set, a continuous operation time, a cache speed, and the like of the processor of the third environment, which are completely matched with the configuration information of the processor of the first environment.
And step A2, detecting the running state of the third environment, and acquiring the data processing capacity parameter of the processor of the first environment.
In one embodiment, the detecting the operation state of the third environment may be that the data processing capability parameter of the processor of the third environment is obtained by the third pressure testing tool under the condition that the third environment is unloaded, and the data processing capability parameter of the processor of the third environment is set as the data processing capability parameter of the processor of the first environment.
Through the above manner, under the condition that the first environment is the single-core processor, the data processing capability parameters of the processor in the first environment can still be accurately acquired under the condition that the service data running state is not influenced through the steps provided by the embodiment of the application.
Accordingly, in case of the first parameter, including the data processing capability parameter of the processor of the first environment, step 202 may be implemented by steps B1 to B3:
step B1, if the processor architecture of the second environment is different from the processor architecture of the first environment, acquiring the data processing capability parameter of the processor of the second environment.
For example, if the processor architecture of the second environment is the same as the processor architecture of the first environment, the data processing capability parameter of the processor of the second environment may not need to be measured separately; it may instead be estimated from the continuous operation time of the processor of the second environment in combination with the processor architecture of the second environment.
In practical applications, since the processors with different architectures may have different indexes in terms of main frequency, instruction acquisition, data processing, maximum number of processors in the system, cache types, power consumption, and the like, under the condition that the processor architecture of the first environment is different from the processor architecture of the second environment, the data processing capability parameter of the processor of the second environment needs to be acquired in a pressure test manner.
In an embodiment, the data processing capability parameter of the processor in the second environment may be the same as the data processing capability parameter of the processor in the first environment, and is not described herein again.
Step B2, determining the first data based on the data processing capability parameter of the processor of the first environment and the data processing capability parameter of the processor of the second environment.
The first data is data of the ratio of the data processing capacity parameter of the processor in the first environment to the data processing capacity parameter of the processor in the second environment.
In one embodiment, in the case that the first data is greater than 1, it may be determined that the data processing capability of the single-core processor of the second environment is weaker than that of the single-core processor of the first environment. However, when the number of processor cores of the second environment is greater than that of the first environment, the policy may still be determined as recommending that part of the service data be migrated and deployed to the second environment; otherwise, it is recommended not to migrate the service data to the second environment.
In one embodiment, in the case where the first data is less than 1, it indicates that the data processing capability of the processor of the second environment is better than the data processing capability of the single-core processor of the first environment.
In the embodiment of the present application, step B2 may be implemented by step C:
and step C, if the number of the processors of the first environment is multiple and the number of the processors of the first environment is equal to that of the processors of the second environment, determining the first data based on the data processing capacity parameters of the processors of the first environment and the data processing capacity parameters of the processors of the second environment.
In one embodiment, in the case that the number of the processors in the first environment is multiple, the data processing capability parameters of each processor in the first environment and the processing capability parameters of each processor in the second environment may be divided from the other processors by the method described in the foregoing embodiment, and the sysbench is used to test the CPU score of the single processor.
In one embodiment, the first data may be determined by a ratio of a parameter value corresponding to a data processing capability parameter of a single processor in the first environment to a parameter value corresponding to a data processing capability parameter of a single processor in the second environment.
In one embodiment, when a plurality of first values corresponding to the data processing capability parameters of the processors of the first environment are different, the maximum value among the first values may represent the processing capability parameter of the processor of the first environment; in the case that the plurality of second values corresponding to the data processing capability parameters of the processors of the second environment are different, the minimum value of the second values may be obtained to represent the processing capability parameters of the processors of the second environment, and at this time, the first data may be determined by the maximum value of the first values and the minimum value of the second values.
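Illustratively, the determination of the first data from the two groups of scores may be sketched as follows (the score lists are illustrative inputs):

```python
# Minimal sketch: the first data is the ratio of the highest per-processor
# score of the first environment to the lowest per-processor score of the
# second environment, when the scores within each environment differ.

from typing import List


def first_data(first_env_scores: List[float], second_env_scores: List[float]) -> float:
    return max(first_env_scores) / min(second_env_scores)


if __name__ == "__main__":
    # 100 is the strongest core of the first environment, 120 the weakest of
    # the second; 100/120 = 5/6 means the second environment is stronger.
    print(first_data([95.0, 100.0], [120.0, 130.0]))
```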
In the embodiment of the present application, if the number of the processors in the second environment is multiple and the number of the processors in the first environment is not equal to the number of the processors in the second environment, for example in the case that the number of the processors in the first environment is greater than the number of the processors in the second environment, the data processing capability parameters of the first environment and the data processing capability parameters of the second environment may be obtained by the methods provided in the foregoing embodiments.
In this embodiment of the application, when the number of processors in the first environment is multiple, and when the number of processors in the first environment is smaller than the number of processors in the second environment, if the number of processors in the first environment is much smaller than the number of processors in the second environment, the deployment policy may be determined as: it is proposed to migrate the service data to the second context.
Accordingly, if the number of processors in the first environment is not much smaller than the number of processors in the second environment, the data processing capability parameters of the processors in the first environment and the data processing capability parameters of the processors in the second environment may be obtained respectively by the method described in the foregoing embodiment, and the deployment policy is determined according to the data processing capability parameters in the two environments.
Step B3, detecting an operating state of the second environment based on the first data.
In one embodiment, the detecting the operating state of the second environment based on the first data may be implemented by:
when the first data is not 1, the data processing capability to be provided by the processor of the second environment is determined based on the first data and the data processing capability of the processor of the first environment. After the above operation is finished, the actual data processing capability of the processor in the second environment may be detected, and the difference between the data processing capability that the processor in the second environment should have and the actual data processing capability may be compared to obtain the first detection result. Correspondingly, if the first detection result shows that the actual data processing capacity of the processor in the second environment is higher than the data processing capacity which the processor should have, the service data can be recommended to be migrated and deployed to the second environment; otherwise, it may be recommended that the service data is not migrated and deployed to the second environment.
Under the condition that the first data is larger than 1, acquiring the sum of the data processing capacities of all processors in the first environment, and recording the sum as a first processing capacity; and then, detecting the data processing capacity of each processor in the second environment to obtain the sum of the data processing capacities of all the processors in the second environment, and recording the sum as the second processing capacity. Illustratively, after the operation is finished, the second detection result may also be obtained according to a difference between the first data processing capability and the second data processing capability. Correspondingly, if the first processing capacity is better than the second processing capacity, the business data can be recommended not to be migrated and deployed to the second environment; otherwise, the service data migration deployment to the second environment may be suggested.
In case the first data is smaller than 1, indicating that the data processing capacity of the processor of the first environment is smaller than the data processing capacity of the processor of the second environment, the data processing capacity actually available to the processor of the second environment may be monitored, denoted as a third processing capacity. Accordingly, if the third processing capability is better than the first processing capability, it may be suggested that part of the service data is migrated to the second environment, or that the service data is not migrated to the second environment.
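Illustratively, the above case analysis may be sketched as the following decision logic (a minimal Python sketch; the aggregate capability values and the wording of the returned suggestions are assumptions of this illustration):

```python
# Minimal sketch of the case analysis: the first data is the per-processor
# capability ratio (first environment / second environment), and the totals
# are aggregate capabilities of all processors on each side.

def suggest_policy(first_data: float, first_total: float, second_total: float) -> str:
    if first_data > 1.0:
        # Each core of the second environment is weaker: compare the summed
        # capability of all processors on both sides.
        return ("migrate the service data to the second environment"
                if second_total > first_total
                else "do not migrate the service data")
    if first_data < 1.0:
        # Each core of the second environment is stronger: check whether the
        # capability actually available there exceeds the first environment's.
        return ("migrate all or part of the service data"
                if second_total > first_total
                else "migrate only part of the service data, or none")
    # Equal per-core capability: the required capability derived from the
    # first parameter is compared with what the second environment delivers.
    return ("migrate the service data"
            if second_total >= first_total
            else "do not migrate the service data")


if __name__ == "__main__":
    print(suggest_policy(5 / 6, 200.0, 260.0))  # -> migrate all or part
```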
Through the steps, the deployment strategy determining method provided by the embodiment of the application can realize flexible detection of the data processing capacity of the processor in the second environment according to the data processing capacity parameter of the processor in the first environment, so that objective evaluation of the data processing capacity of the processor in the second environment is realized.
In the embodiment of the present application, step B3 may be implemented by steps D1 to D2:
this can be achieved by steps D1 through D2:
step D1, if the number of the processors in the first environment is multiple and the number of the processors in the first environment is greater than the number of the processors in the second environment, load balancing is performed on each of the processors in the second environment based on the data processing capability parameters of each of the processors in the first environment, the data processing capability parameters of each of the processors in the second environment, and the first data, so as to obtain second data.
The second data is the result of load balancing of each processor among the processors of the second environment.
In one embodiment, the load balancing of the processors in the second environment may be implemented by any one of the following methods:
and when the first data is less than 1, load balancing is carried out on each processor in the processors of the second environment based on the data processing capacity parameters of each processor in the processors of the first environment, the data processing capacity parameters of each processor in the processors of the second environment and the first data. Correspondingly, when the first data is greater than 1, load balancing may not be performed, and at this time, the second environment may not meet the actual operation requirement of the service data, and the corresponding deployment policy may be: it is proposed not to migrate the deployment service data to the second environment.
And determining the data processing capability that each processor of the second environment needs to satisfy according to the data processing capability of each processor of the first environment and the first data, recording it as the target data processing capability, and balancing the load of each processor of the second environment according to the target data processing capability. Illustratively, if the first data is 100/120, i.e., 5/6, then in the case where the load of any processor in the first environment is A, the load of any processor in the second environment may be set to 6A/5. Illustratively, the occupancy rate of each processor may be simulated by a third pressure testing tool in cooperation with cgroup.
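Illustratively, the target load computation in the 6A/5 example may be sketched as follows:

```python
# Minimal sketch: when the first data is 5/6 (= 100/120), a processor
# carrying load A in the first environment should be simulated with load
# A / (5/6) = 6A/5 on a processor of the second environment.

def target_load(first_env_load: float, first_data: float) -> float:
    return first_env_load / first_data


if __name__ == "__main__":
    print(target_load(50.0, 100 / 120))  # -> 60.0, i.e. 6A/5 with A = 50
```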
Step D2, detecting an operating state of the second environment based on the second data.
In this embodiment of the application, the detecting the operating state of the second environment based on the second data may be implemented by:
determining a result of load balancing of the processors in the second environment based on the second data, and if the load balancing is successful, detecting the running state of the second environment by the method of the foregoing embodiment on the basis of the current load balancing state of each processor; otherwise, if the load balancing fails, the load balancing operation may be executed again, or the load balancing may be abandoned, and the current deployment policy determination process may be stopped.
Through the manner, the deployment strategy determining method provided by the embodiment of the application can perform flexible, all-dimensional and multi-dimensional objective evaluation on the data processing capacity of the processor in any environment on the basis of the data processing capacity of the processor in the first environment, so that objective evaluation results can be provided for the processing capacity of the processor during service data deployment.
Step 203, determining a target parameter based on the first parameter.
The target parameters comprise at least one type of target operation state of the service data.
In one embodiment, the type of the target parameter may be the same as that of all or part of the first parameter. For example, in the case that the first parameter includes a parameter characterizing abnormal operation of the first environment, the target parameter may include a parameter indicating that the service data operates normally. Illustratively, the first parameter may be a parameter indicating that the CPU of the first environment cannot allocate resources to at least one process or thread; the target parameter may then be a parameter indicating that the CPU of the second environment needs to allocate sufficient resources to the corresponding process or thread.
In one embodiment, the target parameter may include at least one specific value of a parameter corresponding to an improved operation state of the service data; for example, the read/write speed of the memory reaches a first speed, and the packet loss rate of the network data is less than a second threshold.
In one embodiment, determining the target parameter based on the first parameter may be performed by any one of the following methods:
Evaluating the first parameter together with the service data to determine the target parameter. Illustratively, the target parameter may include parameters that are not included in the first parameter but are required for the operation of the service data.
Alternatively, evaluating the first parameter together with the long-term running state of the service data to determine the target parameter. Illustratively, the target parameter may include a parameter that is not included in the first parameter but whose requirement grows as the service data runs over a long period. A sketch of assembling the target parameter follows this list.
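Purely as an illustration of how a target parameter could be assembled from the first parameter and an evaluation of the service data, the sketch below maps an abnormal-state entry to a required-state entry and carries over threshold-type values; every key name and field in it is a hypothetical example rather than the actual parameter format of the method.

```python
def derive_target_parameters(first_parameter, required_by_service):
    """first_parameter: dict of observed first-environment parameters.
    required_by_service: parameters the service data needs but the first
    parameter does not cover (e.g. obtained by evaluating the service data)."""
    target = {}

    # Abnormal operation in the first environment becomes a "must run normally"
    # requirement on the second environment.
    if first_parameter.get("cpu_cannot_allocate_resources"):
        target["cpu_allocates_sufficient_resources"] = True

    # Specific values corresponding to an improved operation state.
    if "memory_rw_speed" in first_parameter:
        target["memory_rw_speed_at_least"] = first_parameter["memory_rw_speed"]            # the "first speed"
    if "network_packet_loss_rate" in first_parameter:
        target["network_packet_loss_below"] = first_parameter["network_packet_loss_rate"]  # the "second threshold"

    # Parameters required by the service data but absent from the first parameter.
    for key, value in required_by_service.items():
        target.setdefault(key, value)
    return target
```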
Step 204, determining a deployment policy based on the matching degree of the target parameter and the second parameter.
In one embodiment, determining the deployment policy based on the matching degree of the target parameter and the second parameter may be implemented by any one of the following:
If the target parameter completely matches the second parameter, the deployment policy may be determined as: it is proposed to migrate all or part of the service data to the second environment.
If the target parameter does not match the second parameter at all, the deployment policy may be determined as: it is proposed not to migrate the service data to the second environment.
If the target parameter partially matches the second parameter, the deployment policy may be determined as: it is proposed to migrate part of the service data to the second environment.
Alternatively, a target type is determined according to the matching result of the target parameter and the second parameter; the service data is then analyzed to determine the part of the service data corresponding to the target type; at this point, the deployment policy may be determined as: migrating the part of the service data corresponding to the target type to the second environment. A sketch of this matching-based decision follows this list.
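The following sketch illustrates the matching logic of step 204 under the simplifying assumption that each target entry either is or is not satisfied by an equal entry in the second parameter; the policy strings and the per-key equality test are illustrative choices, not part of the claimed method.

```python
def determine_deployment_policy(target_parameter, second_parameter):
    """Compare the target parameter against the second parameter key by key and
    map the matching degree to a (policy, matched keys) pair."""
    matched = {key for key, value in target_parameter.items()
               if key in second_parameter and second_parameter[key] == value}
    if len(matched) == len(target_parameter):
        return "migrate all or part of the service data to the second environment", matched
    if not matched:
        return "do not migrate the service data to the second environment", matched
    # Partial match: only the service data whose target types were matched is migrated.
    return "migrate the part of the service data corresponding to the matched target types", matched
```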
In one implementation, the deployment policy determining method provided in the embodiment of the present application can determine the deployment policy based on the first parameter, that is, the actual operation state of the service data, and the second parameter, that is, the detection result of the data processing capability of the second environment; after the deployment policy is determined, the service data can be deployed to the second environment, and a final deployment decision can be made according to the trial operation state of the service data in the second environment.
In one embodiment, the trial operation of the service data in the second environment may be performed for a specified period of time. For example, the service data may be run continuously throughout the specified time period, or may be run at specified time intervals for a specified duration of each day or month.
In one embodiment, the specified time period may be three months, and the specified time interval may be determined according to the actual operation condition of the service data; for example, if the service data is in a peak operation state at 16:00-22:00 each day, the determined time interval may be one day, and the trial operation window may be 16:00-22:00 of each day.
In one embodiment, the specified time period may be determined by statistical analysis of the actual operation state of the service data; for example, the specified time period may be the several days corresponding to holidays.
In one embodiment, if the operation state of the service data in the second environment is better than its operation state in the first environment within the specified time period, it may be suggested to deploy part or all of the service data to the second environment; otherwise, deploying part or all of the service data to the second environment is not recommended.
In one embodiment, if, within the specified time period, the operating state of the service data in the second environment is consistent with or better than what the previously determined deployment policy expected, it may be recommended to deploy part or all of the service data to the second environment; otherwise, deploying part or all of the service data to the second environment is not recommended. A sketch of such a trial-run comparison is given below.
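As a hedged example of the trial-run comparison, the sketch below restricts samples to a daily 16:00-22:00 window and compares average metrics between the two environments; the sample format and the larger-is-better convention are assumptions introduced for illustration only.

```python
from datetime import datetime, time

def in_trial_window(ts: datetime, start=time(16, 0), end=time(22, 0)) -> bool:
    """Keep only samples taken during the daily peak window used for the trial run."""
    return start <= ts.time() <= end

def second_environment_is_better(samples_first, samples_second):
    """samples_*: lists of (timestamp, metric) pairs, where a larger metric means a
    better operation state (an assumption; invert the test for latency-like metrics)."""
    first = [m for ts, m in samples_first if in_trial_window(ts)]
    second = [m for ts, m in samples_second if in_trial_window(ts)]
    if not first or not second:
        return False  # not enough trial data to recommend migration
    return sum(second) / len(second) >= sum(first) / len(first)
```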
As can be seen from the above, the deployment policy determining method provided in the embodiment of the present application, after obtaining the first parameter indicating the operation state of the service data in the first environment, can detect the operation state of the second environment based on the first parameter to obtain the second parameter indicating the data processing capability of the second environment, then determine the target parameter based on the first parameter, and determine the final deployment policy based on the matching degree of the target parameter and the second parameter. That is, when detecting any second environment, the deployment policy determining method provided in the embodiment of the present application uses the actual running state of the service data in the first environment as a condition and the data processing capability of the second environment as a basis, so that the detection of the second environment is more targeted and objective, which in turn provides a guarantee for efficient deployment of the service data.
Based on the foregoing embodiments, the embodiment of the present application further provides a deployment policy determining system 3, and fig. 3 is a schematic structural diagram of the deployment policy determining system 3 provided in the embodiment of the present application. As shown in fig. 3, the deployment policy determination system 3 may include: a processor 301, a memory 302, and a communication bus; wherein:
a communication bus for implementing data transmission between the processor 301 and the memory 302.
The memory 302 has stored therein a computer program; the computer program, when executed by the processor 301, is capable of implementing a deployment policy determination method as in any of the previous embodiments.
The processor 301 may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor. It is to be understood that the electronic device for implementing the above-mentioned processor function may be other electronic devices, and the embodiments of the present invention are not particularly limited.
The memory 302 may be a volatile memory, such as a RAM; or a non-volatile memory, such as a ROM, a flash memory, a Hard Disk Drive (HDD) or a Solid-State Drive (SSD); or a combination of such memories, and provides instructions and data to the processor 301.
As can be seen from the above, the deployment policy determining system 3 provided in the embodiment of the present application detects, based on the first parameter indicating the operating state of the service data in the first environment, the operating environment where the service data is not deployed, and after obtaining the second parameter, can determine the deployment policy of the service data according to the first parameter and the second parameter. The first parameter can represent the current operation state of the business data, and the second parameter can objectively represent the data processing capacity of the second environment, so that the deployment strategy of the business data is determined according to the first parameter and the second parameter, the actual requirement of the business data deployment can be reflected, the actual data processing capacity of the second environment can be fully reflected, the pertinence of the business data deployment is enhanced, and the business data deployment efficiency is improved.
Based on the foregoing embodiments, the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the deployment policy determination method according to any of the foregoing embodiments can be implemented.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
The methods disclosed in the method embodiments provided by the present application can be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in various product embodiments provided by the application can be combined arbitrarily to obtain new product embodiments without conflict.
The features disclosed in the various method or apparatus embodiments provided herein may be combined in any combination to arrive at new method or apparatus embodiments without conflict.
The computer-readable storage medium may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a magnetic Random Access Memory (FRAM), a Flash Memory (Flash Memory), a magnetic surface Memory, an optical Disc, or a Compact Disc Read-Only Memory (CD-ROM); and may be various electronic devices such as mobile phones, computers, tablet devices, personal digital assistants, etc., including one or any combination of the above-mentioned memories.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present application.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A deployment policy determination method, the method comprising:
acquiring a first parameter; the first parameter represents the running state of the service data in the first environment;
detecting the running state of a second environment based on the first parameter to obtain a second parameter; the second environment comprises a running environment where the business data is not deployed; the second parameter represents data processing capability of the second environment;
and determining a deployment strategy of the business data based on the first parameter and the second parameter.
2. The method of claim 1, wherein determining the deployment policy for the traffic data based on the first parameter and the second parameter comprises:
determining a target parameter based on the first parameter; the target parameters comprise at least one type of target operation state of the service data;
and determining the deployment strategy based on the matching degree of the target parameter and the second parameter.
3. The method of claim 1, wherein the first parameter comprises a data processing capability parameter of a processor of the first environment; the detecting an operating state of a second environment based on the first parameter includes:
if the processor architecture of the second environment is different from the processor architecture of the first environment, acquiring a data processing capacity parameter of a processor of the second environment;
determining first data based on a data processing capability parameter of a processor of the first environment and a data processing capability parameter of a processor of the second environment; wherein the first data is ratio data of the data processing capability parameter of the processor of the first environment to the data processing capability parameter of the processor of the second environment;
detecting an operational state of the second environment based on the first data.
4. The method of claim 3, wherein determining the first data based on the data processing capability parameter of the processor of the first environment and the data processing capability parameter of the processor of the second environment comprises:
determining the first data based on the data processing capability parameters of the respective processors of the first environment and the data processing capability parameters of the respective processors of the second environment if the number of processors of the first environment is plural and the number of processors of the first environment is equal to the number of processors of the second environment.
5. The method of claim 3, wherein detecting the operational state of the second environment based on the first data comprises:
if the number of the processors in the first environment is multiple and the number of the processors in the first environment is greater than the number of the processors in the second environment, load balancing is performed on each processor in the processors in the second environment based on the data processing capability parameter of each processor in the processors in the first environment, the data processing capability parameter of each processor in the processors in the second environment and the first data to obtain second data; wherein the second data is a result of load balancing of each processor in the second environment;
detecting an operational state of the second environment based on the second data.
6. The method of claim 3, further comprising:
if the number of the processors of the first environment is one, acquiring a third environment; wherein the configuration information of the processor of the third environment matches the configuration information of the processor of the first environment;
and detecting the running state of the third environment, and acquiring the data processing capability parameter of the processor of the first environment.
7. The method of claim 1, wherein the first parameter further comprises a memory read-write speed parameter of the first environment; the detecting an operating state of a second environment based on the first parameter includes:
and detecting the data read-write delay state of the memory in the second environment through a first pressure test tool based on the memory read-write speed parameter in the first environment.
8. The method of claim 1, wherein the first parameter further comprises a network data read-write parameter of the first environment; the detecting an operating state of a second environment based on the first parameter includes:
and detecting the packet loss rate of the network data of the second environment through a second pressure testing tool based on the network data read-write parameter of the first environment.
9. A deployment policy determination system, the system comprising: a processor, a memory, and a communication bus; wherein:
the communication bus is used for realizing data transmission between the processor and the memory;
the memory has stored therein a computer program; the computer program, when executed by the processor, is capable of implementing a deployment policy determination method as claimed in any one of claims 1 to 8.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, is capable of implementing the deployment policy determination method according to any one of claims 1 to 8.
CN202110336678.7A 2021-03-29 2021-03-29 Deployment strategy determining method, system and storage medium Active CN113094115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110336678.7A CN113094115B (en) 2021-03-29 2021-03-29 Deployment strategy determining method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110336678.7A CN113094115B (en) 2021-03-29 2021-03-29 Deployment strategy determining method, system and storage medium

Publications (2)

Publication Number Publication Date
CN113094115A true CN113094115A (en) 2021-07-09
CN113094115B CN113094115B (en) 2023-05-02

Family

ID=76671084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110336678.7A Active CN113094115B (en) 2021-03-29 2021-03-29 Deployment strategy determining method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113094115B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662757A (en) * 2012-03-09 2012-09-12 浪潮通信信息系统有限公司 Resource demand pre-estimate method for cloud computing program smooth transition
CN107995029A (en) * 2017-11-28 2018-05-04 紫光华山信息技术有限公司 Elect control method and device, electoral machinery and device
CN108011817A (en) * 2017-11-09 2018-05-08 中国电力科学研究院有限公司 A kind of method and system disposed again to power communication private network business route
CN108762768A (en) * 2018-05-17 2018-11-06 烽火通信科技股份有限公司 Network Intelligent Service dispositions method and system
JP2019106031A (en) * 2017-12-13 2019-06-27 株式会社日立製作所 Data processing system and data analysis/processing method
CN111782232A (en) * 2020-07-31 2020-10-16 平安银行股份有限公司 Cluster deployment method and device, terminal equipment and storage medium
CN112383936A (en) * 2020-11-27 2021-02-19 中国联合网络通信集团有限公司 Method and device for evaluating number of accessible users

Also Published As

Publication number Publication date
CN113094115B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN108205424B (en) Data migration method and device based on disk and electronic equipment
CN102254120B (en) Method, system and relevant device for detecting malicious codes
JP5719930B2 (en) System test equipment
TW201301137A (en) Virtual machine image analysis
CN110765026A (en) Automatic testing method and device, storage medium and equipment
CN108763089B (en) Test method, device and system
CN109739527B (en) Method, device, server and storage medium for client gray scale release
US11231854B2 (en) Methods and apparatus for estimating the wear of a non-volatile memory
CN111124911A (en) Automatic testing method, device, equipment and readable storage medium
CN114902192A (en) Verification and prediction of cloud readiness
CN113220660A (en) Data migration method, device and equipment and readable storage medium
CN108595323B (en) System testing method and related device
CN115757066A (en) Hard disk performance test method, device, equipment, storage medium and program product
CN111984452A (en) Program failure detection method, program failure detection device, electronic device, and storage medium
CN109002348B (en) Load balancing method and device in virtualization system
CN109634524B (en) Data partition configuration method, device and equipment of data processing daemon
JPWO2018150619A1 (en) Granting apparatus, granting method and granting program
CN113094115B (en) Deployment strategy determining method, system and storage medium
CN110569157B (en) Storage testing method, device, server and storage medium
CN111078418A (en) Operation synchronization method and device, electronic equipment and computer readable storage medium
CN115617668A (en) Compatibility testing method, device and equipment
CN115391110A (en) Test method of storage device, terminal device and computer readable storage medium
CN114356571A (en) Processing method and device
CN113360389A (en) Performance test method, device, equipment and storage medium
CN114218011A (en) Test simulation method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant