CN111124682A - Elastic resource allocation method and device, electronic equipment and storage medium - Google Patents

Elastic resource allocation method and device, electronic equipment and storage medium

Info

Publication number
CN111124682A
Authority
CN
China
Prior art keywords
node
sub
data stream
data
service system
Prior art date
Legal status
Granted
Application number
CN201911351966.9A
Other languages
Chinese (zh)
Other versions
CN111124682B (en)
Inventor
邓练兵
薛剑
陈金鹿
Current Assignee
Zhuhai Dahengqin Technology Development Co Ltd
Original Assignee
Zhuhai Dahengqin Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Dahengqin Technology Development Co Ltd
Priority to CN201911351966.9A
Publication of CN111124682A
Application granted
Publication of CN111124682B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083: Techniques for rebalancing the load in a distributed system

Abstract

The application provides an elastic resource allocation method, an elastic resource allocation device, an electronic device and a storage medium. The method comprises the following steps: a receiving node receives the data stream of a service system and sends it to a computing node; the computing node receives the data stream of the service system and processes it to obtain data blocks, which it sends to a shunting node; the shunting node receives the data blocks and classifies them to obtain at least a first classified data stream and a second classified data stream; the shunting node then transmits, through an output node, the first classified data stream to a first service platform and the second classified data stream to a second service platform. With the method and the device, each type of data can be sent to a suitable computing sub-node for processing; when the data assigned to one computing sub-node exceeds its load capacity, the excess can be sent to other computing sub-nodes, so that the computing resources are utilized to the fullest.

Description

Elastic resource allocation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of server technologies, and in particular, to a method and an apparatus for allocating elastic resources, an electronic device, and a storage medium.
Background
A smart city applies a new generation of information technology across the industries of a city. As an advanced form of urban informatization built on the knowledge society, it integrates informatization and industrialization deeply with urbanization, helps to improve the quality of urbanization, enables fine-grained and dynamic management, improves the effectiveness of city administration and raises the quality of life of citizens.
A smart city needs to cover every field of a city, including city management, social livelihood, resource environment, industrial economy, characteristic services and so on. At present, the information of these fields is scattered and unified management of city-wide information cannot be achieved; the root cause is that existing data processing resources cannot meet the processing demands of the enormous volume of data produced once the fields are brought together.
Disclosure of Invention
The present application provides a method and an apparatus for allocating elastic resources, an electronic device, and a storage medium to solve the above problems.
The application provides a method for allocating elastic resources in a first aspect, which is applied to a cloud server, wherein the cloud server comprises a receiving node, a computing node, a shunting node and an output node;
the method comprises the following steps:
the receiving node receives the data stream of the service system and sends the data stream of the service system to the computing node; the data stream of the service system at least comprises a first data stream of a first service system and a second data stream of a second service system;
the computing node receives the data stream of the service system and processes the data stream of the service system to obtain a data block; the computing node sends the data block to the shunting node;
the shunting node receives the data blocks and classifies the data blocks to at least obtain a first classified data stream and a second classified data stream; the first classified data stream corresponds to a data stream of the first service system, and the second classified data stream corresponds to a data stream of the second service system;
the shunting node transmits the first classified data stream to a first service platform through the output node, and transmits the second classified data stream to a second service platform; the first service platform corresponds to the first service system, and the second service platform corresponds to the second service system.
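For orientation, the four-node flow described above can be illustrated with a minimal Python sketch. All function names and the toy payloads below are hypothetical and chosen only for illustration; the application does not prescribe any particular implementation.

```python
from collections import defaultdict

def receiving_node(items):
    """Receive the data streams sent by the service systems and forward them."""
    return list(items)

def computing_node(items):
    """Process each received item into a 'data block' (here just a tagged dict)."""
    return [{"system": system, "block": payload.upper()} for system, payload in items]

def shunting_node(blocks):
    """Classify data blocks according to the service system they came from."""
    classified = defaultdict(list)
    for block in blocks:
        classified[block["system"]].append(block["block"])
    return classified

def output_node(classified):
    """Deliver each classified data stream to the matching service platform."""
    for system, stream in classified.items():
        print(f"platform for {system} <- {stream}")

# End-to-end run with two service systems, mirroring steps S11-S14.
raw = [("system_1", "gov record"), ("system_2", "traffic ticket"), ("system_1", "gov notice")]
output_node(shunting_node(computing_node(receiving_node(raw))))
```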
Further, the computing nodes comprise at least a first class of computing sub-nodes;
the receiving, by the compute node, the data stream of the service system, and processing the data stream of the service system to obtain a data block specifically includes:
judging whether the data volume of the first data stream exceeds a first preset threshold of the first-class computation sub-node, and when the data volume of the first data stream does not exceed the first preset threshold of the first-class computation sub-node, receiving the first data stream by the first-class computation sub-node, and processing the first data stream to obtain a data block.
Further, the computing nodes at least comprise a second type of computing sub-node;
when the data volume of the first data stream exceeds a first preset threshold of the first-class computing sub-node, dividing the first data stream into at least two first sub-data streams, so that the data volume of any one first sub-data stream is smaller than the first preset threshold of the first-class computing sub-node;
the first type of calculation sub-node receives at least one first sub-data stream and processes the at least one first sub-data stream to obtain a data block; wherein the total data volume of the first sub-data stream received by the first type of computation sub-node does not exceed a first preset threshold of the first type of computation sub-node;
judging whether the data volume of the remaining first sub-data stream exceeds a second preset threshold value of the second type of calculation sub-node; wherein the remaining first sub-stream is: in all first sub-data streams obtained by the first data stream, a first sub-data stream which is not received by the first-class computing sub-node;
and when the data volume of the remaining first sub-data stream does not exceed a second preset threshold of the second-class computing sub-node, the second-class computing sub-node receives the remaining first sub-data stream and processes the remaining first sub-data stream to obtain a data block.
Further, the cloud server further comprises a mirror image receiving node, a mirror image computing node, a mirror image shunting node and a mirror image output node;
wherein the mirror image receiving node is obtained by mirroring the receiving node; the mirror image computing node is obtained by mirroring the computing node; the mirror image shunting node is obtained by mirroring the shunting node; and the mirror image output node is obtained by mirroring the output node.
Further, the cloud server further includes a snapshot node, and the snapshot node is configured to implement a snapshot operation for the receiving node, the computing node, the shunting node, and the output node.
A second aspect of the present application provides an elastic resource allocation apparatus, including a receiving node, a computing node, a shunting node, and an output node;
the receiving node comprises a first receiving module and a first sending module, wherein the first receiving module is used for receiving the data stream of the service system; the first sending module is used for sending the data stream of the service system to the computing node; the data stream of the service system at least comprises a first data stream of a first service system and a second data stream of a second service system;
the computing node comprises a second receiving module, a processing module and a second sending module, wherein the second receiving module is used for receiving the data stream of the service system; the processing module is used for processing the data stream of the service system to obtain a data block; the second sending module is configured to send the data block to the shunting node;
the shunting node comprises a third receiving module, a classifying module and a third sending module, wherein the third receiving module is used for receiving the data block; the classification module is used for classifying the data blocks to obtain at least a first classified data stream and a second classified data stream; the first classified data stream corresponds to a data stream of the first service system, and the second classified data stream corresponds to a data stream of the second service system;
the third sending module is configured to transmit the first classified data stream to a first service platform through the output node, and transmit the second classified data stream to a second service platform; the first service platform corresponds to the first service system, and the second service platform corresponds to the second service system.
Further, the computing nodes comprise at least a first class of computing sub-nodes;
the first-class computing sub-node comprises a first judgment sub-module and a first processing sub-module, and the first judgment sub-module is used for judging whether the data volume of the first data stream exceeds a first preset threshold of the first-class computing sub-node; the first processing sub-module is configured to, when the data amount of the first data stream does not exceed a first preset threshold of the first-class computation sub-node, receive the first data stream by the first-class computation sub-node, and process the first data stream to obtain a data block.
Further, the computing nodes at least comprise a second type of computing sub-node;
the first class of computation sub-node further comprises a first dividing sub-module and a first receiving sub-module, wherein the first dividing sub-module is configured to divide the first data stream into at least two first sub-data streams when the data volume of the first data stream exceeds a first preset threshold of the first class of computation sub-node, so that the data volume of any one of the first sub-data streams is smaller than the first preset threshold of the first class of computation sub-node;
the first receiving submodule is used for receiving at least one first sub data stream; the first processing submodule is used for processing the at least one first sub-data stream to obtain a data block; wherein the total data volume of the first sub-data stream received by the first type of computation sub-node does not exceed a first preset threshold of the first type of computation sub-node;
the second type of calculation sub-node comprises a second judgment sub-module, a second receiving sub-module and a second processing sub-module; the second judgment submodule is used for judging whether the data volume of the remaining first sub-data stream exceeds a second preset threshold value of the second type of calculation sub-node; wherein the remaining first sub-stream is: in all first sub-data streams obtained by the first data stream, a first sub-data stream which is not received by the first-class computing sub-node;
the second receiving submodule is configured to receive, by the second class computing sub-node, the remaining first sub-data stream when the data size of the remaining first sub-data stream does not exceed a second preset threshold of the second class computing sub-node; and the second processing submodule is used for processing the remaining first sub-data stream to obtain a data block.
A third aspect of the present application provides an electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the elastic resource allocation method described above.
A fourth aspect of the present application provides a non-transitory computer-readable storage medium having instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the elastic resource allocation method described above.
Compared with the prior art, the method has the following advantages:
the cloud server-based data processing method and system are based on the cloud server, the receiving nodes are used for facing each business system, the data streams sent by each business system are received, then the data streams are sent to the computing nodes for processing, the computing nodes are provided with a plurality of computing sub-nodes, and various types of data can be sent to the adaptive computing nodes for processing; when the data processing of a certain calculation sub-node exceeds the load capacity, the data processing of the certain calculation sub-node can be sent to other calculation sub-nodes for processing, and then the purposes of quickly processing data and balancing the data processing load of each calculation sub-node are achieved, so that the highest utilization rate of calculation resources is achieved. In order to perform classified management on the data processed by each service system, the data blocks output from the computing nodes are sent to the shunting nodes to classify the data blocks according to different service systems, and then the data blocks are sent to corresponding service platforms, so that unified management and data sharing of each service system are realized.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a flowchart illustrating steps of a method for allocating elastic resources according to the present application;
fig. 2 is a schematic structural diagram of an elastic resource allocation apparatus provided in the present application;
FIGS. 3 and 4 are process flow diagrams of the elastic resource allocation method of the present application applied in a specific scenario;
fig. 5 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
The smart city aims to build a large data center by taking data concentration and sharing as a way, promote technical fusion, business fusion and data fusion, and realize cross-level, cross-region, cross-system, cross-department and cross-business cooperative management and service.
However, in the related art, the data of each field of a city is relatively closed: data for fields such as city management, social livelihood, resource environment, industrial economy and characteristic services basically stays inside the corresponding management unit. For technical reasons this data cannot be shared between management units. For example, the types of data handled by the management units differ, so the units adopt different systems; moreover, each unit already manages a huge amount of data, so managing all of it together means a huge total volume, many data types and a complex processing flow, and the approach proposed in the related art of computing each type of data independently cannot meet the corresponding processing requirements.
A smart city needs to cover all aspects of information in a city, spanning fields such as city management, social livelihood, resource environment, industrial economy and characteristic services. At present, the information of these fields is scattered and unified management of city-wide information cannot be achieved; the root cause is that existing data processing resources cannot meet the processing demands of the enormous volume of data produced once the fields are brought together.
In order to solve the above technical problem, the present application provides an elastic resource allocation method as shown in fig. 1, which is applied to a cloud server as shown in fig. 2, where the cloud server includes a receiving node, a computing node, a shunting node and an output node;
the method comprises the following steps:
step S11, the receiving node receives a data stream of a service system and sends the data stream of the service system to the computing node; the data stream of the service system at least comprises a first data stream of a first service system and a second data stream of a second service system;
the business system in the application can comprise business systems in various fields of city management, social and civil life, resource environment, industrial economy, special service and the like, and specifically, the business system in the city management field can be a business system in government affairs, traffic, city management, safety, fire fighting and the like; the business system in the social and civil field can be a medical, community, social security, education and other business system; the service system in the resource environment field can be an energy, environment-friendly, ecological and other service system; the business system in the industrial economic field can be business systems of tourism, finance, port, park, enterprise service and the like; the business system in the characteristic service field can be a city ecological service platform, a postdoctor management service platform, a scientific and technological enterprise management service platform, a self-service enterprise management platform and other business systems.
Because each service system is relatively independent, the number of receiving nodes of the cloud server can be adjusted according to the number of accessed service systems. For example, data generated in the city management field is received by one or more designated receiving nodes, while data generated in the social livelihood field is received by one or more other designated receiving nodes. More specifically, the number of receiving nodes corresponding to each field may be adjusted according to the amount of data generated by that field; for example, the number of receiving nodes for the city management field may be adjusted according to the amount of data that field generates.
Meanwhile, the receiving nodes may be set according to the type of data received. For example, the types of data processed by a traffic department may include documents (e.g., tickets recording vehicle violations), pictures (e.g., photographic evidence of vehicle violations) and videos (e.g., footage of vehicles or pedestrians running a red light, captured video of traffic intersections, traffic police forensic video, and so on). Adjusting the receiving nodes according to data types such as documents, pictures and videos can effectively improve data transmission and also achieves a preliminary classification of the data; transmitting the preliminarily classified data to the computing node prepares it for processing and can improve the processing speed of the computing node.
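As a concrete illustration of this type-based routing, the following sketch assigns incoming traffic-department records to receiving nodes keyed by data type. The node names, the round-robin rule and the function pick_receiving_node are assumptions made for the example only.

```python
# Hypothetical routing of traffic-department data to type-specific receiving nodes.
RECEIVING_NODES = {
    "document": ["recv_doc_0"],
    "picture":  ["recv_pic_0", "recv_pic_1"],                 # more nodes for heavier traffic
    "video":    ["recv_vid_0", "recv_vid_1", "recv_vid_2"],
}

def pick_receiving_node(data_type: str, seq: int) -> str:
    """Round-robin among the receiving nodes configured for this data type."""
    nodes = RECEIVING_NODES[data_type]
    return nodes[seq % len(nodes)]

samples = [("document", "ticket #12"), ("picture", "violation.jpg"), ("video", "intersection.mp4")]
for i, (kind, item) in enumerate(samples):
    print(item, "->", pick_receiving_node(kind, i))
```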
Step S12, the computing node receives the data stream of the service system, and processes the data stream of the service system to obtain a data block; the computing node sends the data block to the shunting node;
for a better explanation of the present application, a specific example will now be presented, which is explained in a certain application context.
The application background is as follows: as shown in fig. 3, the business systems providing data include business systems in the city management field (specifically, a government business system A1 and a traffic business system A2) and business systems in the social livelihood field (specifically, a medical business system B1 and a social security business system B2).
The receiving nodes are set according to the type of service system, and each service system is allocated one receiving node (in practice, different numbers of receiving nodes can be allocated according to the data types of the service systems). That is, the data of the government business system A1 is sent to the first receiving node, the data of the traffic business system A2 is sent to the second receiving node, the data of the medical business system B1 is sent to the third receiving node, and the data of the social security business system B2 is sent to the fourth receiving node.
The receiving node sends the received data to the computing nodes, and in order to reasonably distribute computing resources of the computing nodes, the computing nodes are divided into a plurality of computing sub-nodes. The number of the calculation sub-nodes may be determined according to the number and type of the service systems to be processed, and/or the type of data, and/or the total calculation resources, which are not described herein again.
In order to make reasonable use of the computing resources provided by the computing node, a "directional allocation manner" is adopted: the total computing resources provided by the computing node are divided into a plurality of computing sub-node classes according to the number of service systems. Following the example above, they can be divided into a first class, a second class, a third class and a fourth class of computing sub-nodes; that is, data received by the first receiving node is sent to the first-class computing sub-node for processing, data received by the second receiving node is sent to the second-class computing sub-node, data received by the third receiving node is sent to the third-class computing sub-node, and data received by the fourth receiving node is sent to the fourth-class computing sub-node.
That is, the data sent by each service system is processed by one class of computing sub-nodes, and each class can be configured according to the characteristics of the data sent by its service system, so as to improve the speed at which it processes that data.
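A minimal sketch of such a directional mapping is given below; the node names and the dispatch function are hypothetical and only illustrate the idea that each receiving node is bound to one class of computing sub-nodes.

```python
# Hypothetical directional mapping: one class of computing sub-nodes per receiving node.
DIRECTIONAL_MAP = {
    "first_receiving_node":  "first_class_sub_node",    # government business system A1
    "second_receiving_node": "second_class_sub_node",   # traffic business system A2
    "third_receiving_node":  "third_class_sub_node",    # medical business system B1
    "fourth_receiving_node": "fourth_class_sub_node",   # social security business system B2
}

def dispatch(receiving_node: str) -> str:
    """Directional allocation: a receiving node always feeds its dedicated sub-node class."""
    return DIRECTIONAL_MAP[receiving_node]

print(dispatch("second_receiving_node"))  # -> second_class_sub_node
```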
The computing resources of the various computing sub-node classes may be the same or different, and likewise their computing capabilities may be the same or different. In order to improve the processing efficiency of the computing sub-nodes, it is necessary to judge whether the data volume of the first data stream exceeds a first preset threshold of the first-class computing sub-node; when it does not, the first-class computing sub-node receives the first data stream and processes it to obtain a data block.
Specifically, when the amount of data sent by the receiving node to a computing sub-node does not exceed that sub-node's computing capability, the sub-node can process the data quickly.
However, when the amount of data sent to a computing sub-node exceeds its computing capability, processing takes a long time and blocking may even occur. Conversely, when the amount of data sent to a computing sub-node is small, it finishes quickly and then sits idle, which means that part of the computing resources is wasted and the utilization rate is low.
In order to solve the problem, the present application provides a "combination manner of directional and random allocation", specifically, when a data amount of the first data stream exceeds a first preset threshold of the first-class computation sub-node, the first data stream is divided into at least two first sub-data streams, so that the data amount of any one of the first sub-data streams is smaller than the first preset threshold of the first-class computation sub-node;
the first type of calculation sub-node receives at least one first sub-data stream and processes the at least one first sub-data stream to obtain a data block; wherein the total data volume of the first sub-data stream received by the first type of computation sub-node does not exceed a first preset threshold of the first type of computation sub-node;
judging whether the data volume of the remaining first sub-data stream exceeds a second preset threshold value of the second type of calculation sub-node; wherein the remaining first sub-stream is: in all first sub-data streams obtained by the first data stream, a first sub-data stream which is not received by the first-class computing sub-node;
and when the data volume of the remaining first sub-data stream does not exceed a second preset threshold of the second-class computing sub-node, the second-class computing sub-node receives the remaining first sub-data stream and processes the remaining first sub-data stream to obtain a data block.
As shown in fig. 4, when the data volume of the first data stream sent by the first receiving node to the first-class computing sub-node is greater than the first preset threshold of the first-class computing sub-node, the first data stream is divided into two first sub-data streams (represented by data block 1 and data block 2 in fig. 4). Data block 1 is sent to the first-class computing sub-node in the directional manner, and data block 2 is sent at random to another computing sub-node for processing; for example, when the second-class computing sub-node still has spare computing resources after processing the data sent by the second receiving node, data block 2 is sent to the second-class computing sub-node for processing.
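One way to read this combined directional-and-random allocation is the following sketch. It checks the first preset threshold, lets the first-class sub-node take what it can and offloads the remainder to another class whose second preset threshold is not exceeded. The threshold values, the function allocate and the simplification of capping the first class at exactly its threshold (rather than forming explicit sub-data streams) are assumptions for illustration.

```python
def allocate(stream_size_gb, thresholds):
    """
    Assign a data stream to computing sub-node classes.
    thresholds: {"first_class": t1, "second_class": t2, ...} preset per-class limits (GB).
    Returns a {class_name: assigned_volume} mapping.
    """
    first_limit = thresholds["first_class"]
    if stream_size_gb <= first_limit:
        # Directional allocation only: the dedicated class can handle the whole stream.
        return {"first_class": stream_size_gb}

    # Split the stream: the first class takes up to its threshold, the rest is offloaded.
    assignment = {"first_class": first_limit}
    remaining = stream_size_gb - first_limit
    for name, limit in thresholds.items():
        if name == "first_class" or remaining <= 0:
            continue
        if remaining <= limit:                         # second preset threshold check
            assignment[name] = remaining
            remaining = 0
    if remaining > 0:
        raise RuntimeError("no sub-node class has enough spare capacity")
    return assignment

print(allocate(8,  {"first_class": 10, "second_class": 6}))   # directional only
print(allocate(14, {"first_class": 10, "second_class": 6}))   # split: 10 kept, 4 offloaded
```

In a real deployment the per-class thresholds would reflect the computing capability actually provisioned for each sub-node class.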
In this way, the application allocates data either in the purely directional manner or in the combined directional-and-random manner, according to the volume of data received and the computing resources of the computing sub-nodes, thereby improving the utilization of the computing node's resources and preventing the situation where some computing sub-nodes are busy while others are idle.
Step S13, the shunting node receives the data block, and classifies the data block to obtain at least a first classified data stream and a second classified data stream; the first classified data stream corresponds to a data stream of the first service system, and the second classified data stream corresponds to a data stream of the second service system;
when a directional distribution mode is adopted, namely when the first data stream of the first receiving node is only sent to the first-class computing sub-node, the classified data stream generated by the first-class computing sub-node is sent to the first shunting node.
When the combined directional-and-random manner is adopted, that is, when the data volume of the first data stream of the first receiving node exceeds the first preset threshold of the first-class computing sub-node, the first data stream is divided into at least two first sub-data streams (data block 1 and data block 2 shown in fig. 3). Data block 1 is processed by the first-class computing sub-node to obtain classified data stream 1, which is sent to the first shunting node. Data block 2 is processed by the second-class computing sub-node to obtain classified data stream 2. Since classified data stream 1 and classified data stream 2 belong to the same class of data (i.e. to the same data source), the second-class computing sub-node sends classified data stream 2 to the first shunting node as well; the first shunting node converges classified data stream 1 and classified data stream 2 and then sends the result to the service platform through the output node.
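The re-convergence performed by the shunting node can be sketched as grouping classified streams by their source service system before handing the merged stream to the output node; the function converge and the field names are hypothetical.

```python
from collections import defaultdict

def converge(classified_streams):
    """
    Merge classified data streams produced by different computing sub-nodes
    when they originate from the same service system (same data source).
    classified_streams: list of {"source": ..., "data": [...]} dicts.
    """
    merged = defaultdict(list)
    for stream in classified_streams:
        merged[stream["source"]].extend(stream["data"])
    return merged

# Classified stream 1 (from the first-class sub-node) and classified stream 2
# (from the second-class sub-node) both come from the first service system.
streams = [
    {"source": "first_service_system", "data": ["block-1a", "block-1b"]},
    {"source": "first_service_system", "data": ["block-2a"]},
    {"source": "second_service_system", "data": ["block-3a"]},
]
for source, data in converge(streams).items():
    print(source, "->", data)   # the output node then forwards each merged stream
```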
Step S14, the shunting node transmits the first classified data stream to a first service platform through the output node, and transmits the second classified data stream to a second service platform; the first service platform corresponds to the first service system, and the second service platform corresponds to the second service system.
The output nodes can be set according to the number of service systems or according to the field. As shown in fig. 4, the present application preferably outputs data belonging to the same field to the corresponding service platform through the same output node. The general service platform in fig. 3 is the collection of the first service platform, the second service platform and so on.
In order to cope with situations such as equipment failure or paralysis of the cloud server, a mirror cloud server identical to the cloud server is constructed using mirroring technology; when the cloud server is paralyzed, the mirror cloud server can promptly take its place. The mirror cloud server is composed of a mirror image receiving node, a mirror image computing node, a mirror image shunting node and a mirror image output node.
Wherein the mirror image receiving node is obtained by mirroring the receiving node; the mirror image computing node is obtained by mirroring the computing node; the mirror image shunting node is obtained by mirroring the shunting node; and the mirror image output node is obtained by mirroring the output node.
The cloud server further comprises a snapshot node for performing snapshot operations on the receiving node, the computing node, the shunting node and the output node. Because the data volume of the cloud server is huge, and in order to cope with the loss of part of the data, the application further achieves fast data backup and recovery by means of snapshots; specifically, snapshot operations can be performed at any time as required.
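A minimal sketch of the mirroring and snapshot ideas is given below; it is purely illustrative, and the class Node, the snapshot format and the failover behaviour are assumptions, since the application does not specify them.

```python
import copy, time

class Node:
    def __init__(self, name):
        self.name = name
        self.state = {}              # whatever runtime state the node accumulates

def take_snapshot(nodes):
    """Snapshot node: capture the current state of every node for fast backup/restore."""
    return {"taken_at": time.time(),
            "states": {n.name: copy.deepcopy(n.state) for n in nodes}}

def build_mirror(nodes):
    """Mirroring technology: build a standby copy of every node of the cloud server."""
    return [copy.deepcopy(n) for n in nodes]

primary = [Node("receiving"), Node("computing"), Node("shunting"), Node("output")]
primary[1].state["pending_blocks"] = 3

snapshot = take_snapshot(primary)        # periodic snapshot for data recovery
mirror = build_mirror(primary)           # the mirror takes over if the primary fails
print([n.name for n in mirror], snapshot["states"]["computing"])
```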
Based on the cloud server, the receiving nodes face the individual business systems and receive the data streams they send, and the data streams are then forwarded to the computing node for processing. The computing node is provided with a plurality of computing sub-nodes, so each type of data can be sent to a suitable computing sub-node for processing. When the data assigned to one computing sub-node exceeds its load capacity, the excess can be sent to other computing sub-nodes, which both speeds up data processing and balances the processing load across the computing sub-nodes, so that computing resources are utilized to the fullest. In order to manage the processed data of each service system by category, the data blocks output by the computing node are sent to the shunting node, classified there according to the service system they belong to, and then delivered to the corresponding service platforms, thereby achieving unified management and data sharing across the service systems.
The application provides an elastic resource allocation device based on the same technical concept, which comprises a receiving node, a computing node, a shunting node and an output node;
the receiving node comprises a first receiving module and a first sending module, wherein the first receiving module is used for receiving the data stream of the service system; the first sending module is used for sending the data stream of the service system to the computing node; the data stream of the service system at least comprises a first data stream of a first service system and a second data stream of a second service system;
the computing node comprises a second receiving module, a processing module and a second sending module, wherein the second receiving module is used for receiving the data stream of the service system; the processing module is used for processing the data stream of the service system to obtain a data block; the second sending module is configured to send the data block to the shunting node;
the shunting node comprises a third receiving module, a classifying module and a third sending module, wherein the third receiving module is used for receiving the data block; the classification module is used for classifying the data blocks to obtain at least a first classified data stream and a second classified data stream; the first classified data stream corresponds to a data stream of the first service system, and the second classified data stream corresponds to a data stream of the second service system;
the third sending module is configured to transmit the first classified data stream to a first service platform through the output node, and transmit the second classified data stream to a second service platform; the first service platform corresponds to the first service system, and the second service platform corresponds to the second service system.
Specifically, the computing nodes at least comprise computing sub-nodes of a first type;
the first-class computing sub-node comprises a first judgment sub-module and a first processing sub-module, and the first judgment sub-module is used for judging whether the data volume of the first data stream exceeds a first preset threshold of the first-class computing sub-node; the first processing sub-module is configured to, when the data amount of the first data stream does not exceed a first preset threshold of the first-class computation sub-node, receive the first data stream by the first-class computation sub-node, and process the first data stream to obtain a data block.
Specifically, the computing nodes at least comprise a second type of computing sub-node;
the first class of computation sub-node further comprises a first dividing sub-module and a first receiving sub-module, wherein the first dividing sub-module is configured to divide the first data stream into at least two first sub-data streams when the data volume of the first data stream exceeds a first preset threshold of the first class of computation sub-node, so that the data volume of any one of the first sub-data streams is smaller than the first preset threshold of the first class of computation sub-node;
the first receiving submodule is used for receiving at least one first sub data stream; the first processing submodule is used for processing the at least one first sub-data stream to obtain a data block; wherein the total data volume of the first sub-data stream received by the first type of computation sub-node does not exceed a first preset threshold of the first type of computation sub-node;
the second type of calculation sub-node comprises a second judgment sub-module, a second receiving sub-module and a second processing sub-module; the second judgment submodule is used for judging whether the data volume of the remaining first sub-data stream exceeds a second preset threshold value of the second type of calculation sub-node; wherein the remaining first sub-stream is: in all first sub-data streams obtained by the first data stream, a first sub-data stream which is not received by the first-class computing sub-node;
the second receiving submodule is configured to receive, by the second class computing sub-node, the remaining first sub-data stream when the data size of the remaining first sub-data stream does not exceed a second preset threshold of the second class computing sub-node; and the second processing submodule is used for processing the remaining first sub-data stream to obtain a data block.
The present application also provides an electronic device as shown in fig. 5, including:
a processor 51;
a memory 52 for storing instructions executable by the processor 51;
wherein the processor 51 is configured to execute the instructions to implement the elastic resource allocation method.
The present application also provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by the processor 51 of the electronic device, the electronic device is enabled to perform the elastic resource allocation method.
For the system embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The foregoing describes in detail a method, an apparatus, an electronic device, and a storage medium for allocating elastic resources provided in the present application, and a specific example is applied in the present application to explain the principles and embodiments of the present application, and the description of the foregoing embodiments is only used to help understand the method and core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. The elastic resource allocation method is applied to a cloud server, wherein the cloud server comprises a receiving node, a computing node, a shunting node and an output node;
the method comprises the following steps:
the receiving node receives the data stream of the service system and sends the data stream of the service system to the computing node; the data stream of the service system at least comprises a first data stream of a first service system and a second data stream of a second service system;
the computing node receives the data stream of the service system and processes the data stream of the service system to obtain a data block; the computing node sends the data block to the shunting node;
the shunting node receives the data blocks and classifies the data blocks to at least obtain a first classified data stream and a second classified data stream; the first classified data stream corresponds to a data stream of the first service system, and the second classified data stream corresponds to a data stream of the second service system;
the shunting node transmits the first classified data stream to a first service platform through the output node, and transmits the second classified data stream to a second service platform; the first service platform corresponds to the first service system, and the second service platform corresponds to the second service system.
2. The method of claim 1, wherein the compute nodes include at least compute children of a first class;
the receiving, by the compute node, the data stream of the service system, and processing the data stream of the service system to obtain a data block specifically includes:
and judging whether the data volume of the first data stream exceeds a first preset threshold of the first-class computation sub-node, and when the data volume of the first data stream does not exceed the first preset threshold of the first-class computation sub-node, receiving the first data stream by the first-class computation sub-node, and processing the first data stream to obtain a data block.
3. The method of claim 2, wherein the compute nodes further comprise at least a second class of compute child nodes;
when the data volume of the first data stream exceeds a first preset threshold of the first-class computing sub-node, dividing the first data stream into at least two first sub-data streams, so that the data volume of any one first sub-data stream is smaller than the first preset threshold of the first-class computing sub-node;
the first type of calculation sub-node receives at least one first sub-data stream and processes the at least one first sub-data stream to obtain a data block; wherein the total data volume of the first sub-data stream received by the first type of computation sub-node does not exceed a first preset threshold of the first type of computation sub-node;
judging whether the data volume of the remaining first sub-data stream exceeds a second preset threshold value of the second type of calculation sub-node; wherein the remaining first sub-stream is: in all first sub-data streams obtained by the first data stream, a first sub-data stream which is not received by the first-class computing sub-node;
and when the data volume of the remaining first sub-data stream does not exceed a second preset threshold of the second-class computing sub-node, the second-class computing sub-node receives the remaining first sub-data stream and processes the remaining first sub-data stream to obtain a data block.
4. The method of claim 1, wherein the cloud server further comprises a mirror image receiving node, a mirror image computing node, a mirror image shunting node, and a mirror image output node;
wherein the mirror image receiving node is obtained by mirroring the receiving node; the mirror image computing node is obtained by mirroring the computing node; the mirror image shunting node is obtained by mirroring the shunting node; and the mirror image output node is obtained by mirroring the output node.
5. The method of claim 1, wherein the cloud server further comprises a snapshot node configured to implement snapshot operations for the receiving node, the computing node, the shunting node, and the output node.
6. An elastic resource allocation device is characterized by comprising a receiving node, a computing node, a shunting node and an output node;
the receiving node comprises a first receiving module and a first sending module, wherein the first receiving module is used for receiving the data stream of the service system; the first sending module is used for sending the data stream of the service system to the computing node; the data stream of the service system at least comprises a first data stream of a first service system and a second data stream of a second service system;
the computing node comprises a second receiving module, a processing module and a second sending module, wherein the second receiving module is used for receiving the data stream of the service system; the processing module is used for processing the data stream of the service system to obtain a data block; the second sending module is configured to send the data block to the shunting node;
the shunting node comprises a third receiving module, a classifying module and a third sending module, wherein the third receiving module is used for receiving the data block; the classification module is used for classifying the data blocks to obtain at least a first classified data stream and a second classified data stream; the first classified data stream corresponds to a data stream of the first service system, and the second classified data stream corresponds to a data stream of the second service system;
the third sending module is configured to transmit the first classified data stream to a first service platform through the output node, and transmit the second classified data stream to a second service platform; the first service platform corresponds to the first service system, and the second service platform corresponds to the second service system.
7. The apparatus of claim 6, wherein the compute nodes include at least compute children of a first class;
the first-class computing sub-node comprises a first judgment sub-module and a first processing sub-module, and the first judgment sub-module is used for judging whether the data volume of the first data stream exceeds a first preset threshold of the first-class computing sub-node; the first processing sub-module is configured to, when the data amount of the first data stream does not exceed a first preset threshold of the first-class computation sub-node, receive the first data stream by the first-class computation sub-node, and process the first data stream to obtain a data block.
8. The apparatus of claim 7, wherein the compute nodes further comprise at least a second class of compute child nodes;
the first class of computation sub-node further comprises a first dividing sub-module and a first receiving sub-module, wherein the first dividing sub-module is configured to divide the first data stream into at least two first sub-data streams when the data volume of the first data stream exceeds a first preset threshold of the first class of computation sub-node, so that the data volume of any one of the first sub-data streams is smaller than the first preset threshold of the first class of computation sub-node;
the first receiving submodule is used for receiving at least one first sub data stream; the first processing submodule is used for processing the at least one first sub-data stream to obtain a data block; wherein the total data volume of the first sub-data stream received by the first type of computation sub-node does not exceed a first preset threshold of the first type of computation sub-node;
the second type of calculation sub-node comprises a second judgment sub-module, a second receiving sub-module and a second processing sub-module; the second judgment submodule is used for judging whether the data volume of the remaining first sub-data stream exceeds a second preset threshold value of the second type of calculation sub-node; wherein the remaining first sub-stream is: in all first sub-data streams obtained by the first data stream, a first sub-data stream which is not received by the first-class computing sub-node;
the second receiving submodule is configured to receive, by the second class computing sub-node, the remaining first sub-data stream when the data size of the remaining first sub-data stream does not exceed a second preset threshold of the second class computing sub-node; and the second processing submodule is used for processing the remaining first sub-data stream to obtain a data block.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the elastic resource allocation method according to any one of claims 1 to 5.
10. A non-transitory computer-readable storage medium having instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the elastic resource allocation method according to any one of claims 1 to 5.
CN201911351966.9A 2019-12-24 2019-12-24 Elastic resource allocation method and device, electronic equipment and storage medium Active CN111124682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911351966.9A CN111124682B (en) 2019-12-24 2019-12-24 Elastic resource allocation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111124682A (en) 2020-05-08
CN111124682B (en) 2021-01-08

Family

ID=70502312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911351966.9A Active CN111124682B (en) 2019-12-24 2019-12-24 Elastic resource allocation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111124682B (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102891881A (en) * 2012-07-09 2013-01-23 北京中创信测科技股份有限公司 Method for implementing equivalence and balance of nodes under cloud environment
CN102800038A (en) * 2012-08-13 2012-11-28 南京鑫三强科技实业有限公司 Intelligence education E-card system platform based on internet of things and cloud computation
CN103093306A (en) * 2012-12-21 2013-05-08 大唐软件技术股份有限公司 Method and device of business data coprocessing
US20150363694A1 (en) * 2013-01-16 2015-12-17 Tata Consultancy Services Limited A system and method for smart public alerts and notifications
CN104424240A (en) * 2013-08-27 2015-03-18 腾讯科技(深圳)有限公司 Multi-table correlation method and system, main service node and computing node
CN103945004B (en) * 2014-05-06 2017-05-31 中国联合网络通信集团有限公司 Data dispatching method and system between a kind of data center
CN105023188A (en) * 2015-01-07 2015-11-04 泰华智慧产业集团股份有限公司 Digitized city management data sharing system based on cloud data
CN104820946A (en) * 2015-02-05 2015-08-05 宁夏赛恩科技集团股份有限公司 Cloud computing system for agricultural information integration
US20170126419A1 (en) * 2015-10-29 2017-05-04 Samsung Electronics Co., Ltd. Method and apparatus of managing guest room
CN105389766A (en) * 2015-12-17 2016-03-09 北京中科云集科技有限公司 Smart city management method and system based on cloud platform
CN105825462A (en) * 2016-03-07 2016-08-03 华侨大学 Smart city information system based on internet of things and cloud computing
CN108123886A (en) * 2016-11-29 2018-06-05 上海有云信息技术有限公司 The data forwarding method and device of a kind of cloud computing platform
CN107741955A (en) * 2017-09-15 2018-02-27 平安科技(深圳)有限公司 Business datum monitoring method, device, terminal device and storage medium
US20190146849A1 (en) * 2017-11-16 2019-05-16 Sas Institute Inc. Scalable cloud-based time series analysis
CN108449324A (en) * 2018-02-14 2018-08-24 北京明朝万达科技股份有限公司 The secure exchange method and system of data between a kind of net
CN109117320A (en) * 2018-07-05 2019-01-01 珠海许继芝电网自动化有限公司 Power distribution automation main station failure disaster tolerance processing system and method based on cloud platform
CN109391700A (en) * 2018-12-12 2019-02-26 北京华清信安科技有限公司 Internet of Things safe cloud platform based on depth traffic aware
CN110209492A (en) * 2019-03-21 2019-09-06 腾讯科技(深圳)有限公司 A kind of data processing method and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112799831A (en) * 2021-01-18 2021-05-14 金扬芳 Big data processing method, big data processing system and electronic equipment

Also Published As

Publication number Publication date
CN111124682B (en) 2021-01-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant