CN113806097A - Data processing method and device, electronic equipment and storage medium - Google Patents

Data processing method and device, electronic equipment and storage medium

Info

Publication number
CN113806097A
Authority
CN
China
Prior art keywords
cluster
target
servers
data
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111149589.8A
Other languages
Chinese (zh)
Inventor
王永亮
朱一飞
刘源
裴中率
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Netease Cloud Music Technology Co Ltd
Original Assignee
Hangzhou Netease Cloud Music Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Netease Cloud Music Technology Co Ltd filed Critical Hangzhou Netease Cloud Music Technology Co Ltd
Priority to CN202111149589.8A priority Critical patent/CN113806097A/en
Publication of CN113806097A publication Critical patent/CN113806097A/en
Pending legal-status Critical Current

Classifications

    • G06F9/5016 — Allocation of resources, e.g. of the central processing unit [CPU], to service a request; the resources being hardware resources other than CPUs, servers and terminals; the resource being the memory
    • G06F9/5022 — Mechanisms to release resources
    • G06F9/5027 — Allocation of resources, e.g. of the central processing unit [CPU], to service a request; the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/547 — Remote procedure calls [RPC]; Web services
    • G06F2209/5011 — Indexing scheme relating to G06F9/50; Pool
    • G06F2209/544 — Indexing scheme relating to G06F9/54; Remote

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application relates to the technical field of computers, and discloses a data processing method, a data processing apparatus, an electronic device, and a storage medium. The method includes: receiving a data request of a target data service API; determining a target resource group corresponding to the target data service API according to a pre-established first corresponding relation between data service APIs and resource groups; determining a target cluster corresponding to the target resource group according to a second corresponding relation between resource groups and clusters; and performing data processing on the data request through the target cluster. In this method, the association between data service APIs and clusters is realized through resource groups, the data service APIs corresponding to one resource group are all deployed under the same cluster, and the clusters are independent of one another, so that the data corresponding to different resource groups is isolated and the problem of data pollution is avoided.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
The data service, as the top layer of a unified data platform, provides data warehouse data to data consumers as services and interfaces, shields many details of underlying data storage and computation, and simplifies and strengthens the use of data; at the same time, it avoids siloed ("chimney-style") construction, improves the development and delivery efficiency of data APIs, and increases data utilization.
In the prior art, Kubernetes (an open-source container orchestration system) plus Docker (an open-source application container engine) is adopted, hosts are built on cloud hosts for containerized deployment, and data services are provided to users.
However, in this scheme everything is deployed as one whole after containerization, no data isolation is performed inside the container, and the problem of data pollution exists.
Disclosure of Invention
The application provides a data processing method, a data processing device, an electronic device and a storage medium, which are used for avoiding the problem of data pollution.
In a first aspect, an embodiment of the present application provides a data processing method, where the method includes:
receiving a data request of a target data service Application Programming Interface (API);
determining a target resource group corresponding to the target data service API according to a pre-established first corresponding relation between data service APIs and resource groups;
determining a target cluster corresponding to the target resource group according to a second corresponding relation between resource groups and clusters;
and performing data processing on the data request through the target cluster.
In some optional embodiments, clusters of the same scenario are placed in the same environment pool.
In some optional embodiments, before receiving the data request of the target data service API, the method further includes:
and responding to the target data service API establishing instruction, and establishing a first corresponding relation between the established target data service API and the target resource group.
In some optional embodiments, if the target resource group is not an existing resource group, before establishing the first correspondence between the created target data service API and the target resource group, the method further includes:
and creating the target resource group.
In some optional embodiments, after creating the target resource group, the method further includes:
determining an initial number of servers corresponding to the target cluster;
constructing the target cluster according to the initial number of servers;
and establishing a second corresponding relation between the created target resource group and the target cluster.
In some optional embodiments, constructing the target cluster according to the initial number of servers includes:
if the initial number of unoccupied servers exist in the resource pool corresponding to the target cluster, mirroring the initial number of servers from the corresponding resource pool, and deploying the mirrored servers to the target cluster;
if the resource pool corresponding to the target cluster does not have the initial number of unoccupied servers, generating the initial number of servers in the corresponding resource pool, mirroring the generated servers from the corresponding resource pool, and deploying the mirrored servers into the target cluster.
In some optional embodiments, after performing data processing on the data request by the target cluster, the method further includes:
and determining a processing result of a server in a target cluster for processing the data request, and returning the processing result through the target data service API.
In some optional embodiments, the method further comprises:
determining adjustment data corresponding to each cluster based on the monitored operation information of each cluster; the adjustment data comprises scaling information and adjustment quantity;
and adjusting the number of the servers in the corresponding clusters to be adjusted according to the adjustment data corresponding to each cluster to be adjusted.
In some optional embodiments, adjusting the number of servers in the cluster to be adjusted according to the adjustment data corresponding to each cluster to be adjusted includes:
for any cluster to be adjusted, if the scaling information indicates capacity reduction (scaling in), releasing the adjustment number of servers from the cluster to be adjusted back into the corresponding resource pool;
if the scaling information indicates capacity expansion (scaling out) and the corresponding resource pool has the adjustment number of unoccupied servers, mirroring the adjustment number of servers from the corresponding resource pool and deploying the mirrored servers into the cluster to be adjusted; or, if the scaling information indicates capacity expansion and the corresponding resource pool does not have the adjustment number of unoccupied servers, generating the adjustment number of servers in the corresponding resource pool, mirroring the generated servers from the corresponding resource pool, and deploying the mirrored servers into the cluster to be adjusted.
In some optional embodiments, determining, based on the monitored operation information of each cluster, adjustment data corresponding to each cluster includes:
for any cluster, determining the CPU state according to the index of a Central Processing Unit (CPU) in the running information;
if the CPU state is busy, determining that the scaling information corresponding to the cluster indicates capacity expansion (scaling out) and that the adjustment number corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the CPU state is normal, determining the scaling information and adjustment number corresponding to the cluster according to a Java Virtual Machine (JVM) index in the running information;
if the CPU state is idle, determining an adjustment value according to a preset condition; if the adjustment value is positive, determining that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is the adjustment value; if the adjustment value is negative, determining that the scaling information corresponding to the cluster indicates capacity reduction (scaling in) and that the adjustment number corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster indicates no scaling and that the adjustment number corresponding to the cluster is 0.
In some optional embodiments, if the CPU indicator includes a CPU utilization, determining a CPU state according to the CPU indicator in the operation information includes:
if the CPU utilization rate is greater than the first utilization rate, determining that the CPU state is busy;
if the CPU utilization rate is less than the second utilization rate, determining that the CPU state is idle;
if the CPU utilization rate is within the range between the first utilization rate and the second utilization rate, determining that the CPU state is normal;
wherein the first utilization is greater than the second utilization.
In some optional embodiments, if the CPU index includes CPU waiting information, determining a CPU state according to the CPU index in the running information includes:
if the CPU waiting information is larger than the first waiting value, determining that the CPU state is busy;
if the CPU waiting information is smaller than a second waiting value, determining that the CPU state is idle;
if the CPU waiting information is in the range between the first waiting value and the second waiting value, determining that the CPU state is normal;
wherein the first wait value is greater than the second wait value.
In some optional embodiments, if the JVM index includes a memory-recovery (garbage-collection) duration and a memory-recovery count within a first preset duration, determining the scaling information and adjustment number corresponding to the cluster according to the JVM index in the running information includes:
if the memory-recovery duration within the first preset duration is greater than a first duration and the memory-recovery count is greater than a first count, determining that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the memory-recovery duration within the first preset duration is less than a second duration and the memory-recovery count is less than a second count, determining an adjustment value according to the preset condition; if the adjustment value is positive, determining that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is the adjustment value; if the adjustment value is negative, determining that the scaling information corresponding to the cluster indicates capacity reduction and that the adjustment number corresponding to the cluster is the absolute value of the adjustment value; if the adjustment value is 0, determining that the scaling information corresponding to the cluster indicates no scaling and that the adjustment number corresponding to the cluster is 0;
if the memory-recovery duration within the first preset duration is between the first duration and the second duration, and the memory-recovery count is between the first count and the second count, determining the scaling information and adjustment number corresponding to the cluster according to a system throughput (TPS) index in the running information;
wherein the first duration is greater than the second duration, and the first count is greater than the second count.
In some optional embodiments, if the JVM index includes a peak running-thread ratio and a blocked-thread count within a second preset duration, determining the scaling information and adjustment number corresponding to the cluster according to the JVM index in the running information includes:
if the peak running-thread ratio within the second preset duration is greater than a first ratio, or the blocked-thread count is greater than a first number, determining that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the peak running-thread ratio within the second preset duration is less than a second ratio, or the blocked-thread count is less than a second number, determining an adjustment value according to the preset condition; if the adjustment value is positive, determining that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is the adjustment value; if the adjustment value is negative, determining that the scaling information corresponding to the cluster indicates capacity reduction and that the adjustment number corresponding to the cluster is the absolute value of the adjustment value; if the adjustment value is 0, determining that the scaling information corresponding to the cluster indicates no scaling and that the adjustment number corresponding to the cluster is 0;
if the peak running-thread ratio within the second preset duration is between the first ratio and the second ratio, or the blocked-thread count is between the first number and the second number, determining the scaling information and adjustment number corresponding to the cluster according to the TPS index in the running information;
wherein the first ratio is greater than the second ratio, and the first number is greater than the second number.
In some optional embodiments, if the TPS index includes a TPS peak value and a request-response average duration within a third preset duration, determining the scaling information and adjustment number corresponding to the cluster according to the TPS index in the running information includes:
if the TPS peak value within the third preset duration is greater than a preset throughput and the request-response average duration is greater than a third duration, determining that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the TPS peak value within the third preset duration is less than or equal to the preset throughput and the request-response average duration is less than or equal to the third duration, determining an adjustment value according to the preset condition; if the adjustment value is positive, determining that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is the adjustment value; if the adjustment value is negative, determining that the scaling information corresponding to the cluster indicates capacity reduction and that the adjustment number corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster indicates no scaling and that the adjustment number corresponding to the cluster is 0.
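For readability, the cascading judgment above (CPU index first, then the JVM index, then the TPS index) can be summarized by the following illustrative Java sketch. The sketch is not part of the original disclosure: the class name, the Metrics and Adjustment records, the concrete threshold values, and the simplified presetCondition placeholder are assumptions, and the CPU-wait and thread-based variants of the checks are omitted for brevity.

```java
// Hypothetical sketch of the cascading scaling decision described above.
// Threshold names and values and the Metrics/Adjustment records are assumptions.
public class ScalingDecider {

    // Monitored operation information of one cluster (simplified).
    record Metrics(double cpuUtilization,
                   double gcSeconds, int gcCount,        // memory-recovery duration / count in window 1
                   double tpsPeak, double avgResponseMs, // TPS peak / request-response average in window 3
                   int currentServers) {}

    // Adjustment data: positive delta = scale out, negative = scale in, 0 = no change.
    record Adjustment(int delta) {}

    // Assumed thresholds; the application only requires first > second.
    static final double FIRST_UTILIZATION = 0.80, SECOND_UTILIZATION = 0.30;
    static final double FIRST_GC_SECONDS = 5.0, SECOND_GC_SECONDS = 1.0;
    static final int    FIRST_GC_COUNT = 10, SECOND_GC_COUNT = 2;
    static final double PRESET_TPS = 1000.0, THIRD_DURATION_MS = 500.0;
    static final double PRESET_MULTIPLE = 0.5;   // scale out by half of the current server count

    static Adjustment decide(Metrics m) {
        if (m.cpuUtilization() > FIRST_UTILIZATION) {            // CPU busy -> scale out
            return new Adjustment((int) Math.ceil(m.currentServers() * PRESET_MULTIPLE));
        }
        if (m.cpuUtilization() < SECOND_UTILIZATION) {           // CPU idle -> preset condition
            return new Adjustment(presetCondition(m));
        }
        // CPU normal -> look at the JVM (memory-recovery) indicators.
        if (m.gcSeconds() > FIRST_GC_SECONDS && m.gcCount() > FIRST_GC_COUNT) {
            return new Adjustment((int) Math.ceil(m.currentServers() * PRESET_MULTIPLE));
        }
        if (m.gcSeconds() < SECOND_GC_SECONDS && m.gcCount() < SECOND_GC_COUNT) {
            return new Adjustment(presetCondition(m));
        }
        // JVM indicators inconclusive -> look at the TPS indicators.
        if (m.tpsPeak() > PRESET_TPS && m.avgResponseMs() > THIRD_DURATION_MS) {
            return new Adjustment((int) Math.ceil(m.currentServers() * PRESET_MULTIPLE));
        }
        return new Adjustment(presetCondition(m));
    }

    // Placeholder for the "preset condition"; the application leaves its details open.
    static int presetCondition(Metrics m) {
        return 0; // e.g. keep the cluster unchanged by default
    }

    public static void main(String[] args) {
        Metrics busy = new Metrics(0.95, 0.5, 1, 200, 80, 6);
        System.out.println("busy cluster delta = " + decide(busy).delta());   // scales out
        Metrics quiet = new Metrics(0.10, 0.2, 0, 50, 40, 6);
        System.out.println("quiet cluster delta = " + decide(quiet).delta()); // preset condition
    }
}
```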
In some optional embodiments, adjusting the number of servers in the cluster to be adjusted according to the adjustment data corresponding to each cluster to be adjusted includes:
sending the adjustment data corresponding to each cluster to be adjusted to a Git Repo (a code-hosting repository), and hosting the adjustment data corresponding to each cluster to be adjusted through the Git Repo;
processing, through Tekton CD (a cloud-native open-source CI/CD framework), the adjustment data corresponding to each cluster to be adjusted that is obtained from the Git Repo, so as to generate a message in a target format corresponding to each cluster to be adjusted;
parsing, through a cloud-native k8s (Kubernetes) system, the target-format messages corresponding to each cluster to be adjusted that are obtained from the Tekton CD, orchestrating the parsed data, and adjusting the number of servers in the corresponding cluster to be adjusted through k8s components in the cloud-native k8s system according to the orchestration.
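As a rough, assumption-based illustration of the hand-off in this pipeline, the following Java sketch serializes per-cluster adjustment data into a small declarative message that could be hosted in a Git repository and later translated into a replica change. The field names and the YAML-like layout are hypothetical and are not the target format actually used by the Git Repo, Tekton CD, or k8s components described above.

```java
import java.util.List;

// Assumed illustration of the "target format" hand-off: adjustment data is written
// as a small declarative document. The format and field names are hypothetical.
public class AdjustmentMessageBuilder {

    record ClusterAdjustment(String clusterId, String direction, int amount, int currentServers) {}

    // Render one adjustment as a YAML-like message for a CD pipeline to pick up.
    static String toTargetFormat(ClusterAdjustment a) {
        int desired = a.direction().equals("scale-out")
                ? a.currentServers() + a.amount()
                : a.currentServers() - a.amount();
        return String.join("\n",
                "cluster: " + a.clusterId(),
                "direction: " + a.direction(),
                "adjustment: " + a.amount(),
                "desiredReplicas: " + desired,
                "");
    }

    public static void main(String[] args) {
        List<ClusterAdjustment> adjustments = List.of(
                new ClusterAdjustment("cluster-1", "scale-out", 3, 5),
                new ClusterAdjustment("cluster-2", "scale-in", 1, 4));
        // In the described pipeline these documents would be committed to the Git Repo,
        // consumed by Tekton CD, and applied by k8s components; here we just print them.
        adjustments.forEach(a -> System.out.println(toTargetFormat(a)));
    }
}
```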
In a second aspect, an embodiment of the present application provides a data processing apparatus, including:
the receiving module is used for receiving a data request of a target data service API;
the determining module is used for determining a target resource group corresponding to the target data service API according to a pre-established first corresponding relation between data service APIs and resource groups;
the determining module is further used for determining a target cluster corresponding to the target resource group according to a second corresponding relation between resource groups and clusters;
and the processing module is used for processing the data of the data request through the target cluster.
In some optional embodiments, clusters of the same scenario are placed in the same environment pool.
In some optional embodiments, the apparatus further includes a creating module, configured to: before the receiving module receives the data request of the target data service API, in response to a target data service API creation instruction, establish a first corresponding relation between the created target data service API and the target resource group.
In some optional embodiments, if the target resource group is not an existing resource group, before the creating module creates the first correspondence between the created target data service API and the target resource group, the creating module is further configured to:
and creating the target resource group.
In some optional embodiments, after the creating the set of target resources, the creating module is further configured to:
determining an initial number of servers corresponding to the target cluster;
constructing the target cluster according to the initial number of servers;
and establishing a second corresponding relation between the created target resource group and the target cluster.
In some optional embodiments, the creating module is specifically configured to:
if the initial number of unoccupied servers exist in the resource pool corresponding to the target cluster, mirroring the initial number of servers from the corresponding resource pool, and deploying the mirrored servers to the target cluster;
if the resource pool corresponding to the target cluster does not have the initial number of unoccupied servers, generating the initial number of servers in the corresponding resource pool, mirroring the generated servers from the corresponding resource pool, and deploying the mirrored servers into the target cluster.
In some optional embodiments, after the processing module performs data processing on the data request by the target cluster, the processing module is further configured to:
and determining a processing result of a server in a target cluster for processing the data request, and returning the processing result through the target data service API.
In some optional embodiments, the apparatus further comprises an adjusting module, configured to:
determining adjustment data corresponding to each cluster based on the monitored operation information of each cluster; the adjustment data comprises scaling information and adjustment quantity;
and adjusting the number of the servers in the corresponding clusters to be adjusted according to the adjustment data corresponding to each cluster to be adjusted.
In some optional embodiments, the adjusting module is specifically configured to:
for any cluster to be adjusted, if the scaling information indicates capacity reduction (scaling in), releasing the adjustment number of servers from the cluster to be adjusted back into the corresponding resource pool;
if the scaling information indicates capacity expansion (scaling out) and the corresponding resource pool has the adjustment number of unoccupied servers, mirroring the adjustment number of servers from the corresponding resource pool and deploying the mirrored servers into the cluster to be adjusted; or, if the scaling information indicates capacity expansion and the corresponding resource pool does not have the adjustment number of unoccupied servers, generating the adjustment number of servers in the corresponding resource pool, mirroring the generated servers from the corresponding resource pool, and deploying the mirrored servers into the cluster to be adjusted.
In some optional embodiments, the adjusting module is specifically configured to:
for any cluster, determining the state of a CPU according to the CPU index in the running information;
if the CPU state is busy, determining that the scaling information corresponding to the cluster indicates capacity expansion (scaling out) and that the adjustment number corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the CPU state is normal, determining the scaling information and adjustment number corresponding to the cluster according to the JVM index in the running information;
if the CPU state is idle, determining an adjustment value according to a preset condition; if the adjustment value is positive, determining that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is the adjustment value; if the adjustment value is negative, determining that the scaling information corresponding to the cluster indicates capacity reduction (scaling in) and that the adjustment number corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster indicates no scaling and that the adjustment number corresponding to the cluster is 0.
In some optional embodiments, if the CPU indicator includes a CPU utilization, the adjusting module is specifically configured to:
if the CPU utilization rate is greater than the first utilization rate, determining that the CPU state is busy;
if the CPU utilization rate is less than the second utilization rate, determining that the CPU state is idle;
if the CPU utilization rate is within the range between the first utilization rate and the second utilization rate, determining that the CPU state is normal;
wherein the first utilization is greater than the second utilization.
In some optional embodiments, if the CPU indicator includes CPU waiting information, the adjusting module is specifically configured to:
if the CPU waiting information is larger than the first waiting value, determining that the CPU state is busy;
if the CPU waiting information is smaller than a second waiting value, determining that the CPU state is idle;
if the CPU waiting information is in the range between the first waiting value and the second waiting value, determining that the CPU state is normal;
wherein the first wait value is greater than the second wait value.
In some optional embodiments, if the JVM index includes a memory-recovery duration and a memory-recovery count within a first preset duration, the adjusting module is specifically configured to:
if the memory-recovery duration within the first preset duration is greater than a first duration and the memory-recovery count is greater than a first count, determine that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the memory-recovery duration within the first preset duration is less than a second duration and the memory-recovery count is less than a second count, determine an adjustment value according to the preset condition; if the adjustment value is positive, determine that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is the adjustment value; if the adjustment value is negative, determine that the scaling information corresponding to the cluster indicates capacity reduction and that the adjustment number corresponding to the cluster is the absolute value of the adjustment value; if the adjustment value is 0, determine that the scaling information corresponding to the cluster indicates no scaling and that the adjustment number corresponding to the cluster is 0;
if the memory-recovery duration within the first preset duration is between the first duration and the second duration, and the memory-recovery count is between the first count and the second count, determine the scaling information and adjustment number corresponding to the cluster according to the TPS index in the running information;
wherein the first duration is greater than the second duration, and the first count is greater than the second count.
In some optional embodiments, if the JVM index includes a peak running-thread ratio and a blocked-thread count within a second preset duration, the adjusting module is specifically configured to:
if the peak running-thread ratio within the second preset duration is greater than a first ratio, or the blocked-thread count is greater than a first number, determine that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the peak running-thread ratio within the second preset duration is less than a second ratio, or the blocked-thread count is less than a second number, determine an adjustment value according to the preset condition; if the adjustment value is positive, determine that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is the adjustment value; if the adjustment value is negative, determine that the scaling information corresponding to the cluster indicates capacity reduction and that the adjustment number corresponding to the cluster is the absolute value of the adjustment value; if the adjustment value is 0, determine that the scaling information corresponding to the cluster indicates no scaling and that the adjustment number corresponding to the cluster is 0;
if the peak running-thread ratio within the second preset duration is between the first ratio and the second ratio, or the blocked-thread count is between the first number and the second number, determine the scaling information and adjustment number corresponding to the cluster according to the TPS index in the running information;
wherein the first ratio is greater than the second ratio, and the first number is greater than the second number.
In some optional embodiments, if the TPS index includes a TPS peak value and a request-response average duration within a third preset duration, the adjusting module is specifically configured to:
if the TPS peak value within the third preset duration is greater than a preset throughput and the request-response average duration is greater than a third duration, determine that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the TPS peak value within the third preset duration is less than or equal to the preset throughput and the request-response average duration is less than or equal to the third duration, determine an adjustment value according to the preset condition; if the adjustment value is positive, determine that the scaling information corresponding to the cluster indicates capacity expansion and that the adjustment number corresponding to the cluster is the adjustment value; if the adjustment value is negative, determine that the scaling information corresponding to the cluster indicates capacity reduction and that the adjustment number corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determine that the scaling information corresponding to the cluster indicates no scaling and that the adjustment number corresponding to the cluster is 0.
In some optional embodiments, the adjusting module is specifically configured to:
sending the adjustment data corresponding to each cluster to be adjusted to a Git Repo, and hosting the adjustment data corresponding to each cluster to be adjusted through the Git Repo;
processing, through the Tekton CD, the adjustment data corresponding to each cluster to be adjusted that is obtained from the Git Repo, so as to generate a message in a target format corresponding to each cluster to be adjusted;
parsing, through the cloud-native k8s system, the target-format messages corresponding to each cluster to be adjusted that are obtained from the Tekton CD, orchestrating the parsed data, and adjusting the number of servers in the corresponding cluster to be adjusted through the k8s components in the cloud-native k8s system according to the orchestration.
In a third aspect, an embodiment of the present application provides an electronic device, including at least one processor and at least one memory, where the memory stores a computer program, and when the program is executed by the processor, the processor is caused to execute the data processing method according to any one of the above first aspects.
In a fourth aspect, an embodiment of the present application provides a storage medium storing a computer program executable by an electronic device, and when the program runs on the electronic device, the program causes the electronic device to execute the data processing method according to any one of the first aspect.
The data processing method, the data processing device, the electronic equipment and the storage medium provided by the embodiment of the application have the following beneficial effects:
according to the embodiment of the application, a first corresponding relation between the data service API and the resource groups and a second corresponding relation between the resource groups and the clusters are established, namely, the association between the data service API and the clusters is realized through the resource groups, the data service APIs corresponding to the resource groups are all deployed under the same cluster, namely, the data requests of the target data service API corresponding to the same resource group are processed by the same cluster, the data requests of the target data service API corresponding to different resource groups are respectively processed by different clusters, and the clusters are mutually independent, so that the isolation between the data corresponding to different resource groups is realized, and the problem of data pollution is avoided.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a correspondence relationship among a first data service API, a resource group, and a cluster provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a first data processing method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a container pool and an environmental pool provided by an embodiment of the present application;
fig. 5 is a schematic flowchart of a second data processing method provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a correspondence relationship among a second data service API, a resource group, and a cluster provided in the embodiment of the present application;
fig. 7 is a schematic diagram of a correspondence relationship between a third data service API, a resource group, and a cluster provided in the embodiment of the present application;
fig. 8 is a flowchart illustrating a method for adjusting the number of servers in a cluster according to an embodiment of the present disclosure;
FIG. 9 is a diagram of a cloud-native system architecture provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a data processing apparatus provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 12 is a schematic diagram of a program product provided in an embodiment of the present application.
Detailed Description
The principles and spirit of the present application will be described with reference to a number of exemplary embodiments. It should be understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present application, and are not intended to limit the scope of the present application in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present application may be embodied as a system, apparatus, device, method, or computer program product. Thus, the present application may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
In this document, it is to be understood that any number of elements in the figures are provided by way of illustration and not limitation, and any nomenclature is used for differentiation only and not in any limiting sense.
The principles and spirit of the present application are explained in detail below with reference to several representative embodiments of the present application.
Summary of The Invention
The data service, as the top layer of a unified data platform, provides data warehouse data to data consumers as services and interfaces, shields many details of underlying data storage and computation, and simplifies and strengthens the use of data; at the same time, it avoids siloed ("chimney-style") construction, improves the development and delivery efficiency of data APIs, and increases data utilization. In the prior art, Kubernetes plus Docker is adopted, hosts are built on cloud hosts for containerized deployment, and data services are provided to users.
In practice, data isolation is often required to avoid contamination between data. However, in that scheme everything is deployed as one whole after containerization, no data isolation is performed inside the container, and the problem of data pollution exists.
In view of this, the embodiments of the present application provide a data processing method and apparatus, an electronic device, and a storage medium. In these embodiments, a first corresponding relation between data service APIs and resource groups and a second corresponding relation between resource groups and clusters are established; that is, the association between data service APIs and clusters is realized through resource groups, and the data service APIs corresponding to one resource group are all deployed under the same cluster. In other words, the data requests of target data service APIs corresponding to the same resource group are all processed by the same cluster, and the data requests of target data service APIs corresponding to different resource groups are processed by different clusters. Because the clusters are independent of one another, the data corresponding to different resource groups is isolated, and the problem of data pollution is avoided.
Having described the basic principles of the present application, various non-limiting embodiments of the present application are described in detail below.
Application scene overview
The data processing method provided by the embodiment of the application can be applied to the scenario shown in fig. 1. Referring to fig. 1, the data service serves as the top layer of the unified data platform and provides data warehouse data to business systems as services and interfaces.
After receiving a data request of a target data service API, the cloud native system determines a target resource group corresponding to the target data service API according to the established data service API and a first corresponding relation between the resource groups; then, determining a target cluster corresponding to the target resource group according to a second corresponding relation between the resource group and the cluster; and then, the data request is subjected to data processing through the target cluster.
The cloud-native system is a distributed cloud based on distributed deployment and unified operation and management, and is a cloud technology product system built on technologies such as containers, microservices, and DevOps (a combination of development and operations).
Referring to fig. 2, the first corresponding relationship is that at least one data service API corresponds to one resource group, and the second corresponding relationship is that one resource group corresponds to one cluster. The data service API101, the data service API102, and the data service API103 corresponding to the resource group 1 in fig. 2 are deployed under the cluster 1 corresponding to the resource group 1; the data service API201 and the data service API202 corresponding to the resource group 2 are deployed under the cluster 2 corresponding to the resource group 2.
The data requests of 3 data service APIs corresponding to the resource group 1 are processed by the cluster 1, the data requests of 2 data service APIs corresponding to the resource group 2 are processed by the cluster 2, and the cluster 1 and the cluster 2 are independent, so that the data of the resource group 1 and the data of the resource group 2 are isolated from each other.
It is understood that fig. 2 is only an example of one possible implementation manner of the first corresponding relationship and the second corresponding relationship, and the present application is not limited thereto in particular.
Exemplary method
FIG. 3 is a flowchart illustrating a first data processing method according to an exemplary embodiment, the method comprising the steps of:
step S301: a data request of a target data service API is received.
Referring to fig. 1, various types of data requests, such as a simple query service, a complex query service, or a hybrid query service, may be transmitted through the data service API.
Step S302: And determining a target resource group corresponding to the target data service API according to the pre-established first corresponding relation between data service APIs and resource groups.
In this embodiment, a first corresponding relationship between the data service API and the resource groups is established, so that the target resource group corresponding to the target data service API can be determined according to the first corresponding relationship. Also taking the above fig. 2 as an example:
if the target data service API is the data service API101, the data service API102 or the data service API103, the target resource group is the resource group 1; if the target data service API is the data service API201 or the data service API202, the target resource group is the resource group 2.
The above example is only to illustrate how to determine the target resource group by taking fig. 2 as an example, and the present application is not particularly limited to this.
Step S303: and determining a target cluster corresponding to the target resource group according to the second corresponding relation between the resource group and the cluster.
In this embodiment, a second correspondence between resource groups and clusters is also established, so that a target cluster corresponding to the target resource group may be determined according to the second correspondence. Also taking the above fig. 2 as an example:
if the target resource group is the resource group 1, the target cluster is the cluster 1; if the target resource group is resource group 2, the target cluster is cluster 2 described above.
The above example is only to illustrate how to determine the target cluster by taking fig. 2 as an example, and the present application is not limited to this specifically.
Step S304: and performing data processing on the data request through the target cluster.
According to this scheme, a first corresponding relation between data service APIs and resource groups and a second corresponding relation between resource groups and clusters are established; that is, the association between data service APIs and clusters is realized through resource groups, and the data service APIs corresponding to one resource group are all deployed under the same cluster. The data requests of target data service APIs corresponding to the same resource group are therefore all processed by the same cluster, while the data requests of target data service APIs corresponding to different resource groups are processed by different clusters. Because the clusters are independent of one another, the data corresponding to different resource groups is isolated, and the problem of data pollution is avoided.
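To make steps S301 to S304 concrete, the following is a minimal Java sketch of the two-level lookup; the class and method names (DataRequestRouter, registerApi, handle) are illustrative assumptions and not part of the disclosed system.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of routing a data request: API -> resource group -> cluster.
// All names here are illustrative assumptions, not part of the original disclosure.
public class DataRequestRouter {

    // First correspondence: at least one data service API maps to one resource group.
    private final Map<String, String> apiToResourceGroup = new HashMap<>();
    // Second correspondence: one resource group maps to one cluster.
    private final Map<String, Cluster> resourceGroupToCluster = new HashMap<>();

    public void registerApi(String apiId, String resourceGroup, Cluster cluster) {
        apiToResourceGroup.put(apiId, resourceGroup);          // first correspondence
        resourceGroupToCluster.put(resourceGroup, cluster);    // second correspondence
    }

    // Steps S301-S304: receive the request, resolve the resource group,
    // resolve the cluster, and let that cluster process the request.
    public String handle(String apiId, String requestPayload) {
        String targetResourceGroup = apiToResourceGroup.get(apiId);              // S302
        Cluster targetCluster = resourceGroupToCluster.get(targetResourceGroup); // S303
        return targetCluster.process(requestPayload);                            // S304
    }

    // Placeholder for a cluster of servers; real processing is out of scope here.
    public interface Cluster {
        String process(String requestPayload);
    }

    public static void main(String[] args) {
        DataRequestRouter router = new DataRequestRouter();
        Cluster cluster1 = payload -> "cluster1 handled: " + payload;
        Cluster cluster2 = payload -> "cluster2 handled: " + payload;
        // Mirrors fig. 2: API101-103 -> resource group 1 -> cluster 1; API201-202 -> group 2 -> cluster 2.
        router.registerApi("API101", "group1", cluster1);
        router.registerApi("API102", "group1", cluster1);
        router.registerApi("API201", "group2", cluster2);
        System.out.println(router.handle("API101", "simple query"));
        System.out.println(router.handle("API201", "complex query"));
    }
}
```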
In some optional embodiments, after the data request is processed by the target cluster, it is further required to determine a processing result of a server in the target cluster that processes the data request, and return the processing result (such as data queried by the user) through the target data service API, so as to respond to the data request of the user.
In implementation, data isolation is sometimes required between different scenarios. Based on this, in some optional embodiments, clusters of the same scenario are placed in the same environment pool. Specifically, a plurality of environment pools are arranged in the container pool, and each environment pool corresponds to one scenario.
Referring to fig. 4, two environment pools are provided in the container pool: a development environment pool and a test environment pool. Cluster A, cluster B, and cluster C all belong to the development environment and are placed in the development environment pool; cluster D, cluster E, and cluster F belong to the test environment and are placed in the test environment pool.
Fig. 4 only takes two environment pools as an example; more environment pools may be provided in practical applications.
According to this scheme, clusters of the same scenario are placed in the same environment pool, so that not only is data isolation performed, but environment isolation is also performed for different scenarios.
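The relationship between the container pool, the environment pools, and the clusters in fig. 4 can be pictured with the following simplified, assumption-based Java model; it only shows the grouping, not an actual pool implementation.

```java
import java.util.List;
import java.util.Map;

// Assumed, simplified model of fig. 4: the container pool holds one environment pool
// per scenario, and each environment pool holds the clusters of that scenario only.
public class ContainerPoolModel {
    public static void main(String[] args) {
        Map<String, List<String>> containerPool = Map.of(
                "development", List.of("clusterA", "clusterB", "clusterC"),
                "test",        List.of("clusterD", "clusterE", "clusterF"));
        // Clusters of the same scenario share an environment pool; different
        // scenarios are isolated from each other at the environment-pool level.
        containerPool.forEach((scenario, clusters) ->
                System.out.println(scenario + " environment pool -> " + clusters));
    }
}
```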
For any data service API, before receiving the data request of the data service API, a resource group and a corresponding relation between the resource group and the cluster need to be established. Based on this, fig. 5 is a flow chart illustrating a second data processing method according to an exemplary embodiment, the method comprising the steps of:
step S501: and responding to the target data service API establishing instruction, and establishing a first corresponding relation between the established target data service API and the target resource group.
In some embodiments, the target resource group may be an existing resource group, and at this time, a first corresponding relationship between the target data service API and the existing target resource group may be directly established;
in other embodiments, the target resource group may not be an existing resource group, and the target resource group needs to be created first before the first mapping relationship between the target data service API and the created target resource group is established.
If the target resource group is not an existing resource group, the corresponding relation between the target resource group and the cluster is not established. Based on this, in some optional embodiments, after the target resource group is created, the following steps are further included:
determining an initial number of servers corresponding to the target cluster;
constructing the target cluster according to the initial number of servers;
and establishing a second corresponding relation between the created target resource group and the target cluster.
The above-mentioned building of the target cluster according to the initial number of servers can be implemented by, but not limited to, the following ways:
1) if the initial number of unoccupied servers exist in the resource pool corresponding to the target cluster, mirroring the initial number of servers from the corresponding resource pool, and deploying the mirrored servers to the target cluster;
the resource pool includes the environment pool and/or the container pool shown in fig. 4.
Illustratively, a parallel priority mode is adopted, and a certain number of servers are generated in a resource pool in advance, for example: a third number of servers may be generated in the pool of containers; or respectively generating a fourth number of servers in each environment pool; or a third number of servers may be generated in the container pool, and a fourth number of servers may be generated in each environment pool. The servers in the environment pool may be occupied by clusters of the corresponding environments, and the servers in the container pool may be occupied by clusters of each environment. In the following, taking the target cluster corresponding to 5 servers (i.e. 5 servers need to be deployed in the target cluster), and taking the corresponding test environment as an example, different situations are respectively described:
first case
And (3) directly mirroring 5 servers from the testing environment pool, and deploying the mirrored servers to the target cluster.
Second case
2 unoccupied servers exist in the test environment pool, 20 unoccupied servers exist in the container pool, 2 servers are mirrored from the test environment pool, 3 servers are mirrored from the container pool, and the mirrored servers are deployed in the target cluster; or 5 servers are directly mirrored from the container pool, and then the mirrored servers are deployed into the target cluster.
Third case
And (3) no unoccupied server exists in the test environment pool, 20 unoccupied servers exist in the container pool, 5 servers are directly mirrored from the container pool, and the mirrored servers are deployed in the target cluster.
The present embodiment is only illustrative of the above three cases, and other cases may occur in practical applications, which are not illustrated here.
2) If the resource pool corresponding to the target cluster does not have the initial number of unoccupied servers, generating the initial number of servers in the corresponding resource pool, mirroring the generated servers from the corresponding resource pool, and deploying the mirrored servers into the target cluster.
For example, in the response priority mode, when a cluster needs a server, a required number of servers are generated in the resource pool, so that the generated servers are mirrored and deployed to the corresponding cluster.
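A hedged sketch of the two construction strategies described above is given below: reuse unoccupied servers from the resource pool when enough exist, otherwise generate servers in the pool first and then mirror them. The ResourcePool and ClusterBuilder names and the string-based "mirroring" are assumptions used only for illustration, and occupancy bookkeeping is omitted.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: "mirroring" a server is modelled as cloning a template
// entry from the pool into the cluster; real image handling is out of scope.
public class ClusterBuilder {

    static class ResourcePool {
        final List<String> unoccupied = new ArrayList<>();

        // Generate 'count' new servers in the pool (response-priority style).
        void generate(int count) {
            for (int i = 0; i < count; i++) {
                unoccupied.add("server-" + (unoccupied.size() + 1));
            }
        }

        // Take 'count' unoccupied servers out of the pool so they can be mirrored.
        List<String> take(int count) {
            List<String> taken = new ArrayList<>(unoccupied.subList(0, count));
            unoccupied.subList(0, count).clear();
            return taken;
        }
    }

    // Build a target cluster that initially needs 'initialCount' servers.
    static List<String> buildCluster(ResourcePool pool, int initialCount) {
        if (pool.unoccupied.size() < initialCount) {
            // Not enough pre-generated, unoccupied servers: generate the required number first.
            pool.generate(initialCount - pool.unoccupied.size());
        }
        // "Mirror" each template server and deploy the mirrored copies into the cluster.
        List<String> cluster = new ArrayList<>();
        for (String template : pool.take(initialCount)) {
            cluster.add("mirror-of-" + template);
        }
        return cluster;
    }

    public static void main(String[] args) {
        ResourcePool testEnvironmentPool = new ResourcePool();
        testEnvironmentPool.generate(5);                              // pool prepared in advance
        System.out.println(buildCluster(testEnvironmentPool, 5));     // first case: mirror directly
        System.out.println(buildCluster(new ResourcePool(), 5));      // empty pool: generate, then mirror
    }
}
```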
Also taking the above fig. 2 as an example:
if the newly created data service API104 and the corresponding target resource group are the existing resource group 1, then the first corresponding relationship between the data service API104 and the resource group 1 is established without creating the resource group 1.
In addition, because the resource group 1 is an existing resource group, and the corresponding relationship between the resource group 1 and the cluster is already established, the cluster 1 does not need to be created, and the second corresponding relationship between the resource group 1 and the cluster 1 does not need to be established.
At this time, the data service API, the resource group, and the corresponding relationship among the clusters can be referred to as fig. 6.
If the newly created data service API301 and the corresponding target resource group are resource group 3 (not an existing resource group), first create resource group 3, and then establish the first correspondence between the data service API301 and resource group 3.
In addition, since the resource group 3 is not an existing resource group, and the corresponding relationship between the resource group 3 and the cluster is not established, it is necessary to create the cluster 3 first (the process of creating the cluster can refer to the above embodiment), and then establish the second corresponding relationship between the resource group 3 and the cluster 3.
At this time, the data service API, the resource group, and the corresponding relationship between the clusters can be referred to as fig. 7.
The two examples are only for clearly explaining how to establish the first corresponding relationship and the second corresponding relationship, and the embodiment is not limited thereto.
In implementation, after a new cluster is created, an identifier of the cluster may be returned. The cluster identifier may be set according to the actual application scenario, for example, by using the application name, machine-room information, and environment information as the cluster identifier. Servers in the cluster may be dynamically registered as an upstream of Nginx (a web server), and a domain name is used to access Nginx externally, thereby implementing data access.
Step S502: a data request of a target data service API is received.
Step S503: And determining a target resource group corresponding to the target data service API according to the pre-established first corresponding relation between data service APIs and resource groups.
Step S504: and determining a target cluster corresponding to the target resource group according to the second corresponding relation between the resource group and the cluster.
Step S505: and performing data processing on the data request through the target cluster.
The specific implementation manner of steps S502 to S505 can refer to the above embodiments, and will not be described herein.
As described above, data requests need to be processed by the servers in each cluster. Since the number and type of the data requests differ across data service APIs, the service conditions of the servers in each cluster also differ: some clusters have many servers but few data requests to process, which wastes server resources, while other clusters have few servers but many data requests to process, which results in poor server performance.
If the number of servers in a cluster is set manually, the number of servers in each cluster may not be adjusted in a timely and reasonable manner. Based on the above embodiment, fig. 8 therefore shows a flow chart of a method for adjusting the number of servers in a cluster, which includes the following steps:
Step S801: determining adjustment data corresponding to each cluster based on the monitored operation information of each cluster; the adjustment data includes scaling information and an adjustment quantity.
In this embodiment, the operation information of each cluster represents the service condition of its servers, that is, how the servers in each cluster are handling data requests. Based on this, how to adjust the number of servers in each cluster can be determined, for example: whether a cluster needs capacity expansion (adding servers) and, if so, how many servers to add; and whether a cluster needs capacity reduction (removing servers) and, if so, how many servers to remove.
The present embodiment does not specifically limit the operation information, which may include, for example, at least one of a CPU index, a JVM index, and a TPS index.
The specific manner of obtaining the monitored operation information of each cluster is also not limited; for example, the operation information of each cluster monitored by the monitoring device may be obtained through message middleware.
Step S802: and adjusting the number of the servers in the corresponding clusters to be adjusted according to the adjustment data corresponding to each cluster to be adjusted.
Among the monitored clusters, the scaling information of some clusters may represent capacity expansion, while the scaling information of other clusters may represent capacity reduction; both types are clusters to be adjusted, and the number of servers in each cluster to be adjusted needs to be adjusted according to its corresponding adjustment data.
According to the scheme, the scaling information and the adjustment quantity corresponding to each cluster are determined in real time based on the monitored operation information of each cluster; the number of servers in each cluster to be adjusted is then adjusted reasonably and in time according to its scaling information and adjustment quantity. When the servers in a cluster cannot keep up with the data requests, timely capacity expansion reduces the occurrence of poor server performance; when a cluster has many servers but few data requests to process, timely capacity reduction reduces the waste of server resources. This improves the stability and availability of the service and achieves reasonable utilization of resources.
In some alternative embodiments, the step S801 may be implemented by, but not limited to, the following steps:
for any cluster, determining the state of a CPU according to the CPU index in the running information;
if the CPU state is busy, determining that the scaling information corresponding to the cluster represents capacity expansion, and the adjustment quantity corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the CPU state is normal, determining scaling information and adjustment quantity corresponding to the cluster according to JVM indexes in the running information;
if the CPU state is idle, determining an adjustment value according to a preset condition; if the adjustment value is a positive number, determining that the scaling information corresponding to the cluster represents capacity expansion and the adjustment quantity corresponding to the cluster is the adjustment value; if the adjustment value is a negative number, determining that the scaling information corresponding to the cluster represents capacity reduction and the adjustment quantity corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster represents no scaling and the adjustment quantity corresponding to the cluster is 0.
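A minimal Java sketch of this three-way dispatch is given below. CpuState, Adjustment, the preset multiple of 0.5 and the two stubbed helper methods are illustrative assumptions only; the JVM branch and the preset condition are detailed later in this embodiment.

```java
// Sketch of the CPU-state dispatch described above; thresholds and the preset
// condition are covered by the later parts of this embodiment.
public class ScalingDecision {

    enum CpuState { BUSY, NORMAL, IDLE }

    static final class Adjustment {
        final int scalingInfo; // +1 capacity expansion, -1 capacity reduction, 0 no scaling
        final int quantity;
        Adjustment(int scalingInfo, int quantity) { this.scalingInfo = scalingInfo; this.quantity = quantity; }
    }

    static final double PRESET_MULTIPLE = 0.5; // assumption: expand by half of the current size

    static Adjustment decide(CpuState cpuState, int currentServers) {
        switch (cpuState) {
            case BUSY:
                // Busy CPU: expand by a preset multiple of the current number of servers.
                return new Adjustment(+1, (int) Math.ceil(currentServers * PRESET_MULTIPLE));
            case NORMAL:
                // Normal CPU: defer the decision to the JVM indexes.
                return decideFromJvm(currentServers);
            case IDLE:
            default:
                // Idle CPU: compute the adjustment value from the preset condition.
                int value = adjustmentFromPresetCondition(currentServers);
                if (value > 0) return new Adjustment(+1, value);
                if (value < 0) return new Adjustment(-1, Math.abs(value));
                return new Adjustment(0, 0);
        }
    }

    // Placeholder: the JVM-index branch is sketched separately below.
    static Adjustment decideFromJvm(int currentServers) { return new Adjustment(0, 0); }

    // Placeholder: C = round((M * K / T) + L - N); see the preset condition below.
    static int adjustmentFromPresetCondition(int currentServers) { return 0; }
}
```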
The CPU index may include a CPU utilization rate, and correspondingly, the CPU state may be determined in the following manner:
if the CPU utilization rate is greater than the first utilization rate, determining that the CPU state is busy;
if the CPU utilization rate is less than the second utilization rate, determining that the CPU state is idle;
if the CPU utilization rate is within the range between the first utilization rate and the second utilization rate, determining that the CPU state is normal;
wherein the first utilization is greater than the second utilization.
If the CPU utilization rate in the cluster is greater than the first utilization rate, the CPU utilization rate in the cluster is too high, and the CPU state is determined to be busy; if the CPU utilization rate in the cluster is less than the second utilization rate, the CPU utilization rate in the cluster is too low, and the CPU state is determined to be idle; and if the CPU utilization rate is within the range between the first utilization rate and the second utilization rate, the current CPU utilization rate is appropriate, and the CPU state is determined to be normal.
Specifically, the CPU utilization rate may be taken as a 95th-percentile value of the measured utilization.
The first utilization rate and the second utilization rate may be set according to an actual application scenario, for example, the first utilization rate is 0.5, and the second utilization rate is 0.2, which is not specifically limited in this application.
The CPU index may also include CPU wait information, and correspondingly, the CPU state may be determined by:
if the CPU waiting information is larger than the first waiting value, determining that the CPU state is busy;
if the CPU waiting information is smaller than a second waiting value, determining that the CPU state is idle;
if the CPU waiting information is in the range between the first waiting value and the second waiting value, determining that the CPU state is normal;
wherein the first wait value is greater than the second wait value.
If the CPU waiting information in the cluster is greater than the first waiting value, the CPU waiting in the cluster is too high, and the CPU state is determined to be busy; if the CPU waiting information in the cluster is less than the second waiting value, the CPU waiting in the cluster is too low, and the CPU state is determined to be idle; and if the CPU waiting information is within the range between the first waiting value and the second waiting value, the current CPU waiting is appropriate, and the CPU state is determined to be normal.
Specifically, the CPU waiting information may be taken as a 95th-percentile value of the CPU wait metric.
The first waiting value and the second waiting value may be set according to an actual application scenario, for example, the first waiting value is 0.3, and the second waiting value is 0.1, which is not specifically limited in this application.
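The following sketch illustrates the two classifications above, using the example thresholds given in this embodiment (0.5/0.2 for utilization, 0.3/0.1 for waiting); the method and type names are illustrative assumptions only.

```java
// Sketch of the CPU-state classification described above; thresholds are configurable in practice.
public class CpuStateClassifier {

    enum CpuState { BUSY, NORMAL, IDLE }

    static CpuState fromUtilization(double cpuUtilization,
                                    double firstUtilization,    // e.g. 0.5
                                    double secondUtilization) { // e.g. 0.2
        if (cpuUtilization > firstUtilization)  return CpuState.BUSY;   // utilization too high
        if (cpuUtilization < secondUtilization) return CpuState.IDLE;   // utilization too low
        return CpuState.NORMAL;                                         // within the acceptable range
    }

    static CpuState fromWaitInfo(double cpuWait,
                                 double firstWaitValue,    // e.g. 0.3
                                 double secondWaitValue) { // e.g. 0.1
        if (cpuWait > firstWaitValue)  return CpuState.BUSY;
        if (cpuWait < secondWaitValue) return CpuState.IDLE;
        return CpuState.NORMAL;
    }

    public static void main(String[] args) {
        System.out.println(fromUtilization(0.65, 0.5, 0.2)); // BUSY
        System.out.println(fromWaitInfo(0.05, 0.3, 0.1));    // IDLE
    }
}
```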
The JVM index may include a memory recovery duration and a memory recovery frequency within a first preset duration, and correspondingly, the scaling information and the adjustment quantity corresponding to the cluster may be determined according to the JVM index in the following manner:
if, within the first preset duration, the memory recovery duration is greater than the first duration and the number of memory recoveries is greater than the first number, determining that the scaling information corresponding to the cluster represents capacity expansion, and the adjustment quantity corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if, within the first preset duration, the memory recovery duration is less than the second duration and the number of memory recoveries is less than the second number, determining an adjustment value according to the preset condition; if the adjustment value is a positive number, determining that the scaling information corresponding to the cluster represents capacity expansion and the adjustment quantity corresponding to the cluster is the adjustment value; if the adjustment value is a negative number, determining that the scaling information corresponding to the cluster represents capacity reduction and the adjustment quantity corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster represents no scaling and the adjustment quantity corresponding to the cluster is 0;
if, within the first preset duration, the memory recovery duration is within the range between the first duration and the second duration and the number of memory recoveries is within the range between the first number and the second number, determining the scaling information and the adjustment quantity corresponding to the cluster according to the TPS index in the operation information;
wherein the first duration is greater than the second duration, and the first number is greater than the second number.
Because the first duration is greater than the second duration and the first number is greater than the second number: if, within the first preset duration, the memory recovery duration is greater than the first duration and the number of memory recoveries is greater than the first number, memory recovery in the cluster is too frequent and the number of servers in the cluster needs to be increased; the scaling information corresponding to the cluster is therefore determined to represent capacity expansion, and the adjustment quantity is a preset multiple of the current number of servers in the cluster, so as to reduce the duration and frequency of memory recovery in the cluster. If the memory recovery duration within the first preset duration is less than the second duration and the number of memory recoveries is less than the second number, the cluster rarely performs memory recovery, and the adjustment value needs to be determined according to the preset condition. If neither condition is met within the first preset duration, memory recovery in the cluster is normal, and the adjustment data needs to be further determined according to the TPS index.
The first duration, the second duration, the first number and the second number may be set according to an actual application scenario, for example, the first duration is 20ms, the second duration is 10ms, the first number is 10 times, and the second number is 2 times, which is not specifically limited in this application.
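A compact sketch of this memory-recovery branch is given below, using the example thresholds above (20 ms/10 ms and 10/2 recoveries within the first preset window); the Decision type, the preset multiple of 0.5 and the stubbed preset-condition helper are assumptions, not part of this disclosure.

```java
// Sketch of the memory-recovery (GC) branch described above.
public class JvmGcBranch {

    static final class Decision {
        final String scalingInfo; // "expand", "reduce", "none", or "defer-to-TPS"
        final int quantity;
        Decision(String scalingInfo, int quantity) { this.scalingInfo = scalingInfo; this.quantity = quantity; }
    }

    static Decision decide(long gcDurationMs, int gcCount, int currentServers) {
        final long firstDurationMs = 20, secondDurationMs = 10;
        final int firstCount = 10, secondCount = 2;
        final double presetMultiple = 0.5; // assumption, not fixed by the text

        if (gcDurationMs > firstDurationMs && gcCount > firstCount) {
            // GC takes too long and runs too often: capacity expansion by a preset multiple.
            return new Decision("expand", (int) Math.ceil(currentServers * presetMultiple));
        }
        if (gcDurationMs < secondDurationMs && gcCount < secondCount) {
            // GC almost never happens: fall back to the preset condition.
            int value = adjustmentFromPresetCondition(currentServers);
            if (value > 0) return new Decision("expand", value);
            if (value < 0) return new Decision("reduce", -value);
            return new Decision("none", 0);
        }
        // GC behaviour is in the normal band: defer the decision to the TPS index.
        return new Decision("defer-to-TPS", 0);
    }

    // Placeholder for C = round((M * K / T) + L - N); see the preset condition below.
    static int adjustmentFromPresetCondition(int currentServers) { return 0; }
}
```

The running-thread and blocked-thread branch described next follows the same three-way pattern with its own thresholds, so a separate sketch is omitted.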
The JVM index may also include a peak ratio of running threads and a number of blocked threads within a second preset time period, and correspondingly, the scaling information and the adjustment number corresponding to the cluster may be determined according to the JVM index in the following manner:
if, within the second preset duration, the running-thread proportion peak value is greater than a first proportion, or the number of blocked threads is greater than a first number, determining that the scaling information corresponding to the cluster represents capacity expansion, and the adjustment quantity corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if, within the second preset duration, the running-thread proportion peak value is less than a second proportion, or the number of blocked threads is less than a second number, determining an adjustment value according to the preset condition; if the adjustment value is a positive number, determining that the scaling information corresponding to the cluster represents capacity expansion and the adjustment quantity corresponding to the cluster is the adjustment value; if the adjustment value is a negative number, determining that the scaling information corresponding to the cluster represents capacity reduction and the adjustment quantity corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster represents no scaling and the adjustment quantity corresponding to the cluster is 0;
if, within the second preset duration, the running-thread proportion peak value is within the range between the first proportion and the second proportion, or the number of blocked threads is within the range between the first number and the second number, determining the scaling information and the adjustment quantity corresponding to the cluster according to the TPS index in the operation information;
wherein the first proportion is greater than the second proportion, and the first number is greater than the second number.
Because the first proportion is greater than the second proportion and the first number is greater than the second number: if the running-thread proportion peak value within the second preset duration is greater than the first proportion, there are too many running threads in the cluster, and if the number of blocked threads is greater than the first number, a large number of blocked threads have appeared in the cluster; in either case the number of servers in the cluster needs to be increased to reduce the running and blocked threads. If the running-thread proportion peak value within the second preset duration is less than the second proportion, the cluster has too few running threads, and if the number of blocked threads is less than the second number, the cluster has essentially no blocked threads; in that case the adjustment value needs to be determined according to the preset condition. If neither condition is met within the second preset duration, the threads in the cluster are normal, and the adjustment data needs to be further determined according to the TPS index.
The first proportion, the second proportion, the first number and the second number may be set according to an actual application scenario, for example, the first proportion is 50%, the second proportion is 30%, the first number is 10, and the second number is 0, which is not specifically limited in this application.
In some optional embodiments, the TPS indicator includes a TPS peak value within a third preset time period and an average time period of a request response, and the scaling information and the adjustment amount corresponding to the cluster may be determined according to the TPS indicator by, but not limited to, the following:
if, within the third preset duration, the TPS peak value is greater than the preset throughput and the average request-response duration is greater than the third duration, determining that the scaling information corresponding to the cluster represents capacity expansion, and the adjustment quantity corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if, within the third preset duration, the TPS peak value is less than or equal to the preset throughput and the average request-response duration is less than or equal to the third duration, determining an adjustment value according to the preset condition; if the adjustment value is a positive number, determining that the scaling information corresponding to the cluster represents capacity expansion and the adjustment quantity corresponding to the cluster is the adjustment value; if the adjustment value is a negative number, determining that the scaling information corresponding to the cluster represents capacity reduction and the adjustment quantity corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster represents no scaling and the adjustment quantity corresponding to the cluster is 0.
If the TPS peak value within the third preset duration is greater than the preset throughput and the average request-response duration is greater than the third duration, the TPS load in the cluster is too high and request responses are slow, so the number of servers in the cluster needs to be increased to reduce the TPS load and speed up request responses; if the TPS peak value is less than or equal to the preset throughput and the average request-response duration is less than or equal to the third duration, the TPS load and the request-response speed in the cluster are normal, and the adjustment value needs to be determined according to the preset condition.
The preset throughput and the third time period may be set according to an actual application scenario, for example, the preset throughput is 1000/s, and the third time period is 1000ms, which is not specifically limited in this application.
The first preset time length, the second preset time length and the third preset time length may be the same or different, and this is not specifically limited in the present application.
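The TPS branch can be sketched in the same style, using the example values above (preset throughput 1000/s, third duration 1000 ms); the Decision type, the preset multiple and the stubbed preset-condition helper are again assumptions only.

```java
// Sketch of the TPS branch described above.
public class TpsBranch {

    static final class Decision {
        final String scalingInfo; // "expand", "reduce", or "none"
        final int quantity;
        Decision(String scalingInfo, int quantity) { this.scalingInfo = scalingInfo; this.quantity = quantity; }
    }

    static Decision decide(double tpsPeak, double avgResponseMs, int currentServers) {
        final double presetThroughput = 1000.0; // requests per second
        final double thirdDurationMs = 1000.0;
        final double presetMultiple = 0.5;      // assumption

        if (tpsPeak > presetThroughput && avgResponseMs > thirdDurationMs) {
            // Throughput too high and responses too slow: capacity expansion by a preset multiple.
            return new Decision("expand", (int) Math.ceil(currentServers * presetMultiple));
        }
        // Otherwise throughput and latency are acceptable: use the preset condition.
        int value = adjustmentFromPresetCondition(currentServers);
        if (value > 0) return new Decision("expand", value);
        if (value < 0) return new Decision("reduce", -value);
        return new Decision("none", 0);
    }

    // Placeholder for C = round((M * K / T) + L - N); see the preset condition below.
    static int adjustmentFromPresetCondition(int currentServers) { return 0; }
}
```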
In some exemplary embodiments, the preset condition may be the following formula:
the adjusting value C { (M × K/T) + L-N }, where M is a TPS peak value within a fourth preset duration, K is an amplification factor corresponding to a service type of the cluster, T is an average TPS value of servers in the cluster, L is an adjusting coefficient corresponding to the cluster, N is the current number of servers in the cluster, { } is a rounding operation, and T { } is M/N.
The adjustment coefficient L corresponding to the cluster is influenced by the CPU index and the JVM index, and illustratively, when other parameters are the same, the larger the CPU utilization rate is, the larger L is; when other parameters are the same, the larger the CPU waiting information is, the larger L is; when other parameters are the same, the memory recovery time length is longer, the memory recovery frequency is larger, and the L is larger; and when other parameters are the same, the larger the operating thread proportion peak value and the number of blocked threads are, the larger L is.
The fourth preset time period may be set according to an actual application scenario, for example, the fourth preset time period is 7 days, which is not specifically limited in this application.
The amplification factor K corresponding to the service type of the cluster is K = F + 1, where F is a traffic ratio affected by the service type of the cluster, for example: when a cluster processes Hypertext Transfer Protocol (HTTP) data, F is 3/7; when the cluster processes Remote Procedure Call (RPC) data, F is 4/6.
The following is a specific example for illustration:
the fourth preset time is 7 days, M is 1200/s, the number of the servers in the cluster is currently 3, the cluster processes HTTP data, the corresponding amplification factor is 1.43, and the adjustment coefficient corresponding to the cluster is 1.5;
(M * K / T) + L - N = (1200 * 1.43 / 400) + 1.5 - 3 ≈ 2.79, where T = M / N = 1200 / 3 = 400.
since the number of servers is an integer, it is necessary to perform rounding operation on the calculation result, and finally determine that the adjustment value is 3. The adjustment value is a positive number, so the scaling information corresponding to the cluster represents the capacity expansion, and the adjustment number corresponding to the cluster is 3.
The above example is only one possible implementation manner of determining the adjustment value according to the preset condition, and the application is not limited thereto.
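The worked example can be reproduced as follows; note that the text only specifies "a rounding operation", so round-to-nearest is assumed here, and the class and method names are illustrative.

```java
// Worked example of the preset condition C = round((M * K / T) + L - N) with T = M / N,
// using the numbers from the text: M = 1200/s, N = 3, HTTP traffic (F = 3/7, K = F + 1), L = 1.5.
public class PresetCondition {

    static long adjustmentValue(double tpsPeakM, double amplificationK,
                                double coefficientL, int currentServersN) {
        double averageTpsT = tpsPeakM / currentServersN; // T = M / N
        return Math.round((tpsPeakM * amplificationK / averageTpsT) + coefficientL - currentServersN);
    }

    public static void main(String[] args) {
        double k = 1.0 + 3.0 / 7.0; // HTTP: F = 3/7, so K ≈ 1.43
        long c = adjustmentValue(1200, k, 1.5, 3);
        // (1200 * 1.43 / 400) + 1.5 - 3 ≈ 2.79, rounded to 3 -> capacity expansion by 3 servers.
        System.out.println(c); // 3
    }
}
```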
In some alternative embodiments, step S802 may be implemented by, but not limited to, the following:
for any cluster to be adjusted, if the scaling information represents capacity reduction, releasing the adjusted number of servers from the cluster to be adjusted to the corresponding resource pool;
if the scaling information represents capacity expansion and the corresponding resource pool has the adjusted number of unoccupied servers, mirroring the adjusted number of servers from the corresponding resource pool and deploying the mirrored servers to the cluster to be adjusted; or, if the scaling information represents capacity expansion and the corresponding resource pool does not have the adjusted number of unoccupied servers, generating the adjusted number of servers in the corresponding resource pool, mirroring the generated servers from the corresponding resource pool, and deploying the mirrored servers to the cluster to be adjusted.
The above capacity expansion process is similar to the above process of constructing the target cluster according to the initial number of servers, and is not described here again.
As described above, the resource pool includes an environment pool and/or a container pool.
If the scaling information represents capacity reduction, the adjusted number of servers is released from the cluster to be adjusted to the environment pool where the cluster to be adjusted is located, or to the container pool.
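For illustration only, the following sketch ties the capacity-reduction and capacity-expansion cases of step S802 together, reusing the same hypothetical Server/ResourcePool/Cluster placeholder types as in the earlier sketch; none of these names are defined by this disclosure.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of step S802 as described above: capacity reduction releases the adjusted number of
// servers back to the corresponding resource pool; capacity expansion mirrors them from the
// pool, generating servers in the pool first when not enough unoccupied ones exist.
public class ClusterAdjuster {

    static class Server { }

    static class ResourcePool {
        final List<Server> unoccupied = new ArrayList<>();
    }

    static class Cluster {
        final List<Server> servers = new ArrayList<>();
    }

    // scalingInfo: +1 capacity expansion, -1 capacity reduction, 0 no scaling.
    static void adjust(Cluster cluster, ResourcePool pool, int scalingInfo, int quantity) {
        if (scalingInfo < 0) {
            // Capacity reduction: release the adjusted number of servers back to the pool.
            int n = Math.min(quantity, cluster.servers.size());
            List<Server> released = new ArrayList<>(cluster.servers.subList(0, n));
            cluster.servers.removeAll(released);
            pool.unoccupied.addAll(released);
        } else if (scalingInfo > 0) {
            // Capacity expansion: generate servers in the pool first if too few are unoccupied.
            while (pool.unoccupied.size() < quantity) {
                pool.unoccupied.add(new Server());
            }
            // Mirror the adjusted number of servers from the pool and deploy them to the cluster.
            List<Server> mirrored = new ArrayList<>(pool.unoccupied.subList(0, quantity));
            pool.unoccupied.removeAll(mirrored);
            cluster.servers.addAll(mirrored);
        }
        // scalingInfo == 0: no scaling, nothing to do.
    }
}
```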
When applied to the cloud native system shown in fig. 9, the adjustment of the number of servers in the corresponding cluster to be adjusted may be implemented as follows:
sending the adjustment data corresponding to each cluster to be adjusted to a Git Repo, and hosting the adjustment data corresponding to each cluster to be adjusted through the Git Repo;
processing, through a Tekton CD, the adjustment data corresponding to each cluster to be adjusted that is obtained from the Git Repo, to generate a message in a target format corresponding to each cluster to be adjusted;
parsing, through the cloud native k8s system, the messages in the target format corresponding to each cluster to be adjusted that are acquired from the Tekton CD, orchestrating the parsed data, and adjusting the number of servers in the corresponding cluster to be adjusted through a k8s component in the cloud native k8s system according to the orchestration.
Illustratively, a Pipeline is composed of multiple Tasks; when performing capacity expansion or constructing a target cluster, a PipelineRun (an execution of a Pipeline) is generated through scheduled execution, and the TaskRuns (executions of Tasks) controlled by the PipelineRun create servers according to the container initialization mode.
In some optional embodiments, information representing on or off may be further configured, so that the above mechanism for dynamically adjusting the number of servers in a cluster is applied according to actual application requirements. For example: by configuring the information representing on, the mechanism for dynamically adjusting the number of servers in the cluster is applied; by configuring the information representing off, the mechanism is not applied.
Exemplary device
Based on the same inventive concept, the embodiment of the present application further provides a data processing apparatus, and the data processing apparatus embodiment may inherit the content described in the foregoing method embodiment. Based on the foregoing embodiment, as shown in fig. 10, a schematic structural diagram of a data processing apparatus provided in an embodiment of the present application is shown, where the data processing apparatus 1000 specifically includes:
a receiving module 1001, configured to receive a data request of a target data service API;
a determining module 1002, configured to determine a target resource group corresponding to the target data service API according to the established first corresponding relationship between data service APIs and resource groups;
the determining module 1002 is further configured to determine, according to a second correspondence between resource groups and clusters, a target cluster corresponding to the target resource group;
a processing module 1003, configured to perform data processing on the data request through the target cluster.
In some alternative embodiments, clusters of the same scene are placed in the same environment pool.
In some optional embodiments, the method further includes a creating module 1004, configured to, before the receiving module receives the data request of the target data service API, establish a first corresponding relationship between the created target data service API and the target resource group in response to the target data service API creating instruction.
In some optional embodiments, if the target resource group is not an existing resource group, the creating module 1004 is further configured to, before establishing the first corresponding relationship between the created target data service API and the target resource group:
and creating the target resource group.
In some optional embodiments, the creating module 1004, after creating the target resource group, is further configured to:
determining an initial number of servers corresponding to the target cluster;
constructing the target cluster according to the initial number of servers;
and establishing a second corresponding relation between the created target resource group and the target cluster.
In some optional embodiments, the creating module 1004 is specifically configured to:
if the initial number of unoccupied servers exist in the resource pool corresponding to the target cluster, mirroring the initial number of servers from the corresponding resource pool, and deploying the mirrored servers to the target cluster;
if the resource pool corresponding to the target cluster does not have the initial number of unoccupied servers, generating the initial number of servers in the corresponding resource pool, mirroring the generated servers from the corresponding resource pool, and deploying the mirrored servers into the target cluster.
In some optional embodiments, after the processing module 1003 performs data processing on the data request through the target cluster, the processing module is further configured to:
and determining a processing result of a server in a target cluster for processing the data request, and returning the processing result through the target data service API.
In some optional embodiments, the apparatus further comprises an adjusting module 1005 configured to:
determining adjustment data corresponding to each cluster based on the monitored operation information of each cluster; the adjustment data comprises scaling information and adjustment quantity;
and adjusting the number of the servers in the corresponding clusters to be adjusted according to the adjustment data corresponding to each cluster to be adjusted.
In some optional embodiments, the adjusting module 1005 is specifically configured to:
for any cluster to be adjusted, if the scaling information represents capacity reduction, releasing the adjusted number of servers from the cluster to be adjusted to the corresponding resource pool;
if the scaling information represents capacity expansion and the corresponding resource pool has the adjusted number of unoccupied servers, mirroring the adjusted number of servers from the corresponding resource pool and deploying the mirrored servers to the cluster to be adjusted; or, if the scaling information represents capacity expansion and the corresponding resource pool does not have the adjusted number of unoccupied servers, generating the adjusted number of servers in the corresponding resource pool, mirroring the generated servers from the corresponding resource pool, and deploying the mirrored servers to the cluster to be adjusted.
In some optional embodiments, the adjusting module 1005 is specifically configured to:
for any cluster, determining the state of a CPU according to the CPU index in the running information;
if the CPU state is busy, determining that the scaling information corresponding to the cluster represents capacity expansion, and the adjustment quantity corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if the CPU state is normal, determining scaling information and adjustment quantity corresponding to the cluster according to JVM indexes in the running information;
if the CPU state is idle, determining an adjustment value according to a preset condition; if the adjustment value is a positive number, determining that the scaling information corresponding to the cluster represents capacity expansion and the adjustment quantity corresponding to the cluster is the adjustment value; if the adjustment value is a negative number, determining that the scaling information corresponding to the cluster represents capacity reduction and the adjustment quantity corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster represents no scaling and the adjustment quantity corresponding to the cluster is 0.
In some optional embodiments, if the CPU index includes a CPU utilization, the adjusting module 1005 is specifically configured to:
if the CPU utilization rate is greater than the first utilization rate, determining that the CPU state is busy;
if the CPU utilization rate is less than the second utilization rate, determining that the CPU state is idle;
if the CPU utilization rate is within the range between the first utilization rate and the second utilization rate, determining that the CPU state is normal;
wherein the first utilization is greater than the second utilization.
In some optional embodiments, if the CPU index includes CPU waiting information, the adjusting module 1005 is specifically configured to:
if the CPU waiting information is larger than the first waiting value, determining that the CPU state is busy;
if the CPU waiting information is smaller than a second waiting value, determining that the CPU state is idle;
if the CPU waiting information is in the range between the first waiting value and the second waiting value, determining that the CPU state is normal;
wherein the first wait value is greater than the second wait value.
In some optional embodiments, if the JVM indicator includes a memory recycling duration and a number of times of memory recycling within a first preset duration, the adjusting module 1005 is specifically configured to:
if, within the first preset duration, the memory recovery duration is greater than the first duration and the number of memory recoveries is greater than the first number, determining that the scaling information corresponding to the cluster represents capacity expansion, and the adjustment quantity corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if, within the first preset duration, the memory recovery duration is less than the second duration and the number of memory recoveries is less than the second number, determining an adjustment value according to the preset condition; if the adjustment value is a positive number, determining that the scaling information corresponding to the cluster represents capacity expansion and the adjustment quantity corresponding to the cluster is the adjustment value; if the adjustment value is a negative number, determining that the scaling information corresponding to the cluster represents capacity reduction and the adjustment quantity corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster represents no scaling and the adjustment quantity corresponding to the cluster is 0;
if, within the first preset duration, the memory recovery duration is within the range between the first duration and the second duration and the number of memory recoveries is within the range between the first number and the second number, determining the scaling information and the adjustment quantity corresponding to the cluster according to the TPS index in the operation information;
wherein the first duration is greater than the second duration, and the first number is greater than the second number.
In some optional embodiments, if the JVM index includes a running thread proportion peak value and a blocked thread number within a second preset time period, the adjusting module 1005 is specifically configured to:
if, within the second preset duration, the running-thread proportion peak value is greater than a first proportion, or the number of blocked threads is greater than a first number, determining that the scaling information corresponding to the cluster represents capacity expansion, and the adjustment quantity corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if, within the second preset duration, the running-thread proportion peak value is less than a second proportion, or the number of blocked threads is less than a second number, determining an adjustment value according to the preset condition; if the adjustment value is a positive number, determining that the scaling information corresponding to the cluster represents capacity expansion and the adjustment quantity corresponding to the cluster is the adjustment value; if the adjustment value is a negative number, determining that the scaling information corresponding to the cluster represents capacity reduction and the adjustment quantity corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster represents no scaling and the adjustment quantity corresponding to the cluster is 0;
if, within the second preset duration, the running-thread proportion peak value is within the range between the first proportion and the second proportion, or the number of blocked threads is within the range between the first number and the second number, determining the scaling information and the adjustment quantity corresponding to the cluster according to the TPS index in the operation information;
wherein the first proportion is greater than the second proportion, and the first number is greater than the second number.
In some optional embodiments, if the TPS indicator includes a TPS peak value within a third preset time period and a request response average time period, the adjusting module 1005 is specifically configured to:
if, within the third preset duration, the TPS peak value is greater than the preset throughput and the average request-response duration is greater than the third duration, determining that the scaling information corresponding to the cluster represents capacity expansion, and the adjustment quantity corresponding to the cluster is a preset multiple of the current number of servers in the cluster;
if, within the third preset duration, the TPS peak value is less than or equal to the preset throughput and the average request-response duration is less than or equal to the third duration, determining an adjustment value according to the preset condition; if the adjustment value is a positive number, determining that the scaling information corresponding to the cluster represents capacity expansion and the adjustment quantity corresponding to the cluster is the adjustment value; if the adjustment value is a negative number, determining that the scaling information corresponding to the cluster represents capacity reduction and the adjustment quantity corresponding to the cluster is the absolute value of the adjustment value; and if the adjustment value is 0, determining that the scaling information corresponding to the cluster represents no scaling and the adjustment quantity corresponding to the cluster is 0.
In some optional embodiments, the adjusting module 1005 is specifically configured to:
sending the adjustment data corresponding to each cluster to be adjusted to a Git Repo, and hosting the adjustment data corresponding to each cluster to be adjusted through the Git Repo;
processing, through a Tekton CD, the adjustment data corresponding to each cluster to be adjusted that is obtained from the Git Repo, to generate a message in a target format corresponding to each cluster to be adjusted;
parsing, through the cloud native k8s system, the messages in the target format corresponding to each cluster to be adjusted that are acquired from the Tekton CD, orchestrating the parsed data, and adjusting the number of servers in the corresponding cluster to be adjusted through a k8s component in the cloud native k8s system according to the orchestration.
Since the data processing apparatus is the data processing apparatus in the method in the embodiment of the present application, and the principle of the data processing apparatus for solving the problem is similar to that of the method, the implementation of the data processing apparatus may refer to the implementation of the method, and repeated details are not repeated.
An electronic device 1100 according to this embodiment of the present application is described below with reference to fig. 11. The electronic device shown in fig. 11 is only an example, and does not set any limit to the functions and the range of use of the embodiments of the present application.
As shown in fig. 11, the electronic device 1100 is embodied in the form of a general purpose computing apparatus. The components of the electronic device 1100 may include, but are not limited to: at least one processor 1101, at least one memory 1102, and a bus 1103 connecting the various system components (including the memory 1102 and the processor 1101).
Bus 1103 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
The memory 1102 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)11021 and/or cache memory 11022, and may further include Read Only Memory (ROM) 11023.
Memory 1102 may also include a program/utility 11025 having a set (at least one) of program modules 11024, such program modules 11024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 1100 may also communicate with one or more external devices 1104 (e.g., keyboard, pointing device, etc.), one or more devices that enable a user to interact with the electronic device 1100, and/or any devices (e.g., router, modem, etc.) that enable the electronic device 1100 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 1105. Also, the electronic device 1100 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 1106. As shown, the network adapter 1106 communicates with other modules for the electronic device 1100 over the bus 1103. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 1100, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In the present embodiment, the memory 1102 stores a computer program that, when executed by the processor 1101, causes the processor 1101 to perform the method of any of the embodiments described above.
Since the electronic device is the electronic device in the method in the embodiment of the present application, and the principle of the electronic device for solving the problem is similar to that of the method, reference may be made to implementation of the method for the electronic device, and repeated details are not described again.
Exemplary program product
In some possible embodiments, various aspects of the present application may also be implemented in the form of a program product including program code for causing a processor of an electronic device to perform the steps of any of the data processing methods described above when the program product is run on the electronic device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 12, a program product 1200 according to an embodiment of the application is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the electronic device, partly on the electronic device, as a stand-alone software package, partly on the electronic device and partly on a remote device or entirely on the remote device. In the case of a remote device, the remote device may be connected to the electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although several modules or sub-modules of the system are mentioned in the above detailed description, such partitioning is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the modules described above may be embodied in one module according to embodiments of the application. Conversely, the features and functions of one module described above may be further divided into embodiments by a plurality of modules.
Moreover, although the operations of the modules of the subject systems are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain operations may be omitted, operations combined into one operation execution, and/or operations broken down into multiple operation executions.
While the spirit and principles of the application have been described with reference to several particular embodiments, it is to be understood that the application is not limited to the disclosed embodiments, nor is the division of aspects, which is for convenience only as the features in such aspects may not be combined to benefit from the description. The application is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A method of data processing, the method comprising:
receiving a data request of a target data service Application Programming Interface (API);
determining a target resource group corresponding to the target data service API according to an established first corresponding relation between data service APIs and resource groups;
determining a target cluster corresponding to the target resource group according to a second corresponding relation between the resource group and the cluster;
and performing data processing on the data request through the target cluster.
2. The method of claim 1, wherein clusters of the same scene are placed in the same environment pool.
3. The method of claim 1, wherein prior to receiving the data request of the target data service API, further comprising:
and responding to the target data service API establishing instruction, and establishing a first corresponding relation between the established target data service API and the target resource group.
4. The method of claim 3, wherein if the target resource group is not an existing resource group, before establishing the first corresponding relation between the created target data service API and the target resource group, the method further comprises:
and creating the target resource group.
5. The method of claim 4, wherein after creating the target resource group, the method further comprises:
determining an initial number of servers corresponding to the target cluster;
constructing the target cluster according to the initial number of servers;
and establishing a second corresponding relation between the created target resource group and the target cluster.
6. The method of claim 5, wherein constructing the target cluster based on the initial number of servers comprises:
if the initial number of unoccupied servers exist in the resource pool corresponding to the target cluster, mirroring the initial number of servers from the corresponding resource pool, and deploying the mirrored servers to the target cluster;
if the resource pool corresponding to the target cluster does not have the initial number of unoccupied servers, generating the initial number of servers in the corresponding resource pool, mirroring the generated servers from the corresponding resource pool, and deploying the mirrored servers into the target cluster.
7. The method of claim 1, wherein after the data processing of the data request by the target cluster, further comprising:
and determining a processing result of a server in a target cluster for processing the data request, and returning the processing result through the target data service API.
8. A data processing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving a data request of a target data service API;
the determining module is used for determining a target resource group corresponding to the target data service API according to the established first corresponding relation between data service APIs and resource groups;
the determining module is further configured to determine a target cluster corresponding to the target resource group according to a second correspondence between the resource groups and the clusters;
and the processing module is used for processing the data of the data request through the target cluster.
9. An electronic device comprising at least one processor and at least one memory, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the method of any of claims 1 to 7.
10. A storage medium storing a computer program executable by an electronic device, the program, when run on the electronic device, causing the electronic device to perform the method of any one of claims 1 to 7.
CN202111149589.8A 2021-09-29 2021-09-29 Data processing method and device, electronic equipment and storage medium Pending CN113806097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111149589.8A CN113806097A (en) 2021-09-29 2021-09-29 Data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111149589.8A CN113806097A (en) 2021-09-29 2021-09-29 Data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113806097A true CN113806097A (en) 2021-12-17

Family

ID=78938937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111149589.8A Pending CN113806097A (en) 2021-09-29 2021-09-29 Data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113806097A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080243866A1 (en) * 2007-03-29 2008-10-02 Manu Pandey System and method for improving cluster performance
CN101694632A (en) * 2009-10-19 2010-04-14 浪潮电子信息产业股份有限公司 Method for executing resource plans on demands and being applied to data base cluster system
CN106557366A (en) * 2015-09-28 2017-04-05 阿里巴巴集团控股有限公司 Task distribution method, apparatus and system
CN107423085A (en) * 2017-04-24 2017-12-01 北京百度网讯科技有限公司 Method and apparatus for application deployment
CN109298897A (en) * 2018-06-29 2019-02-01 杭州数澜科技有限公司 A kind of system and method that the task using resource group is distributed
CN111158909A (en) * 2019-12-27 2020-05-15 中国联合网络通信集团有限公司 Cluster resource allocation processing method, device, equipment and storage medium
CN113112025A (en) * 2020-01-13 2021-07-13 顺丰科技有限公司 Model building system, method, device and storage medium
CN111858257A (en) * 2020-07-28 2020-10-30 浪潮云信息技术股份公司 System and method for acquiring container cluster resource use data
CN112162817A (en) * 2020-09-09 2021-01-01 新浪网技术(中国)有限公司 Processing method and device for deploying service resources of container cluster and storage medium
CN112925647A (en) * 2021-03-24 2021-06-08 北京金山云网络技术有限公司 Cloud edge coordination system, and control method and device of cluster resources

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Thomas Ryan et al., "Multi-Tier Resource Allocation for Data-Intensive Computing", Big Data Research, vol. 2, no. 3 *
台慧敏, "Design and Implementation of a Two-Level Resource Scheduler Based on Kubernetes-on-EGO", China Masters' Theses Full-text Database, Information Science and Technology, no. 2018 *
朱铮铮, "Application of Oracle RAC in a Social Security Data Center", China Masters' Theses Full-text Database, Information Science and Technology, no. 2016 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114036031A (en) * 2022-01-05 2022-02-11 阿里云计算有限公司 Scheduling system and method for resource service application in enterprise digital middleboxes
CN114844911A (en) * 2022-04-20 2022-08-02 网易(杭州)网络有限公司 Data storage method and device, electronic equipment and computer readable storage medium
WO2024051148A1 (en) * 2022-09-09 2024-03-14 网易(杭州)网络有限公司 Cloud game control method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
US11755452B2 (en) Log data collection method based on log data generated by container in application container environment, log data collection device, storage medium, and log data collection system
US11625281B2 (en) Serverless platform request routing
US9363195B2 (en) Configuring cloud resources
US8843621B2 (en) Event prediction and preemptive action identification in a networked computing environment
WO2024016596A1 (en) Container cluster scheduling method and apparatus, device, and storage medium
US20200236195A1 (en) Dynamically transitioning the file system role of compute nodes for provisioning a storlet
CN113806097A (en) Data processing method and device, electronic equipment and storage medium
CN113641413B (en) Target model loading updating method and device, readable medium and electronic equipment
WO2020134364A1 (en) Virtual machine migration method, cloud computing management platform, and storage medium
CN112395736B (en) Parallel simulation job scheduling method of distributed interactive simulation system
CN112463290A (en) Method, system, apparatus and storage medium for dynamically adjusting the number of computing containers
JP2021121921A (en) Method and apparatus for management of artificial intelligence development platform, and medium
CN112313627A (en) Mapping mechanism of events to serverless function workflow instances
JP7431490B2 (en) Data migration in hierarchical storage management systems
US10169076B2 (en) Distributed batch job promotion within enterprise computing environments
WO2022199206A1 (en) Memory sharing method and device for virtual machines
US11178216B2 (en) Generating client applications from service model descriptions
US20220019457A1 (en) Hardware placement and maintenance scheduling in high availability systems
CN117093335A (en) Task scheduling method and device for distributed storage system
CN114756301A (en) Log processing method, device and system
CN106484536B (en) IO scheduling method, device and equipment
CN111008074B (en) File processing method, device, equipment and medium
Lopes et al. MAG: A mobile agent based computational grid platform
CN116661686A (en) Data storage method, device, equipment and storage medium
Miao et al. LiveCom Instant Messaging Service Platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination