Background
When a cloud computing cluster executes a cloud computing task, the computing nodes for the individual task processes of the task are generally scheduled in a uniform and balanced manner, according to the occupation of computing resources on each cloud computing node and the computing resources required by the cloud computing task, so as to maintain load balance among the cloud computing nodes during computation. However, research by the inventor of the present application has found that the task process nodes in a cloud computing task do not exist as independent services; rather, certain service association relationships exist between the task process nodes. With the scheduling manner of the conventional scheme, unreasonable allocation of different task process nodes may therefore make it difficult for a key service to be effectively matched with its associated task process nodes in time, resulting in long service waiting time.
Disclosure of Invention
In view of this, an object of the present application is to provide a cloud computing task scheduling method and apparatus, a cloud computing system, and a server, which can improve the situation in which, for some key services, a task process node cannot effectively cooperate with its associated task process nodes in time due to unreasonable allocation of different task process nodes, thereby reducing service waiting time.
In a first aspect, the present application provides a cloud computing task scheduling method, which is applied to a server, where the server is in communication connection with a plurality of cloud computing nodes, and the method includes:
the method comprises the steps of obtaining a plurality of task process nodes from a target cloud computing task, and respectively extracting corresponding task process characteristics from the plurality of task process nodes, wherein the task process characteristics are used for representing the service characteristics of the resources to be computed corresponding to the task process nodes;
determining a task process association relationship among the task process nodes according to the extracted task process characteristics, and constructing a corresponding task process association network according to the calculated task process association relationships among the task process nodes;
respectively determining a scheduling process corresponding to each task process node according to the constructed task process association network;
and determining scheduling cloud computing nodes corresponding to the task process nodes in the target cloud computing task according to the scheduling processes corresponding to the task process nodes and the scheduling computing relationship among the cloud computing nodes.
In a possible design of the first aspect, the step of extracting corresponding task process features from the plurality of task process nodes respectively includes:
performing service classification processing on the task process service information corresponding to the task process nodes to obtain a service classification table entry of the task process service information;
determining the service grade of the service classification table entries obtained by the service classification processing, ranking the service classification table entries in descending order of their corresponding service grades, and then selecting the service classification table entries within a set ranking from the service classification table entries obtained by the service classification processing;
determining, according to a preset task process characteristic extraction strategy for the service classification table entries, the service classification table entries of the task process characteristics specified by the task process characteristic extraction strategy;
when the same task process service information comprises a plurality of service classification table entries belonging to different task process characteristics, counting the number of the service classification table entries of each task process characteristic in the same task process service information;
determining the task process characteristic with the largest counted number of service classification table entries, adding the characteristic attribute of the determined task process characteristic to the same task process service information, and adding the characteristic attribute of a non-task-process characteristic to the task process service information whose service classification table entries do not include any task process characteristic;
fusing task process service information to be subjected to feature extraction and the added feature attributes to obtain a first network model, inputting each service classification table item into the first network model, and outputting the confidence coefficient of each service classification table item on each task process feature;
re-determining the service classification table entry whose confidence coefficient for each task process characteristic is greater than or equal to the first confidence coefficient threshold as the service classification table entry of that task process characteristic, returning to the step of adding the characteristic attribute of the task process characteristic to the same task process service information, and continuing the processing until an iteration stop condition is met, so as to obtain the characteristic attribute of the task process service information to be subjected to characteristic extraction;
after the iteration stop condition is met, acquiring the confidence coefficient, determined by the corresponding network model, of the task process service information to be subjected to feature extraction for each task process feature, and screening out the task process service information whose confidence coefficient for each task process feature is greater than or equal to a second confidence coefficient threshold;
fusing the screened task process service information and the corresponding characteristic attributes to obtain a second network model;
determining the confidence coefficient of the task process business information to be subjected to feature extraction on each task process feature through the second network model, and updating the feature attribute of the corresponding task process business information according to the confidence coefficient of the task process business information to be subjected to feature extraction on each task process feature;
after the characteristic attribute of the corresponding task process business information is updated according to the confidence coefficient of the task process business information to be subjected to characteristic extraction on each task process characteristic, returning the step of screening the task process business information of which the confidence coefficient on each task process characteristic is greater than or equal to a second confidence coefficient threshold value to continue execution until the updating stop condition is met, and obtaining the updated characteristic attribute of the task process business information to be subjected to characteristic extraction;
obtaining the confidence coefficient, determined by the second network model after the feature attributes are updated, of the task process service information to be subjected to feature extraction for each task process feature, as well as its confidence coefficient for the non-task-process feature;
selecting task process service information which is determined after the characteristic attribute is updated and has the confidence coefficient of each task process characteristic greater than or equal to a third confidence coefficient threshold value, and fusing the task process service information and the corresponding characteristic attribute according to the selected task process service information to obtain a third network model;
determining the confidence coefficient of the task process business information to be subjected to feature extraction on each task process feature through the third network model, and determining the task process feature of the corresponding task process business information according to the confidence coefficient of each task process feature determined through the third network model;
acquiring target task process service information different from the task process service information to be feature extracted, determining the confidence coefficient of the target task process service information on each task process feature through the third network model, and then determining the task process feature corresponding to the target task process service information according to the confidence coefficient of the target task process service information on each task process feature;
and summarizing, according to the determined task process characteristics of each piece of task process service information, the task process characteristics corresponding to each task process node, so as to obtain the task process characteristics respectively extracted from the plurality of task process nodes.
In a possible design of the first aspect, the step of determining a task process association relationship between task process nodes according to the extracted task process features includes:
determining a first vector set of task feature vectors corresponding to at least two task process nodes according to the extracted task process features; wherein the task feature vector comprises a plurality of task feature vector elements;
selecting a first initial task associated network sequence; wherein the task associated network group corresponding to the first initial task associated network sequence comprises a preset first prediction node, a fusion node to be combined, and a depth extraction node;
for a first vector set corresponding to each task feature vector element, combining a first prediction node of the first initial task correlation network model and a fusion node of each order to obtain a plurality of combined node sequences;
mapping the first vector set according to the plurality of combined node sequences respectively to obtain sequence pairs of various different combined node sequences; the input parameters of the fusion nodes in the combined node sequence are task process characteristics of task process nodes corresponding to the first vector set, and the output parameters of the first prediction nodes are task process correlation parameters of the task process nodes corresponding to the first vector set;
according to the sequence pair and a plurality of depth extraction nodes with different orders of the first initial task associated network sequence, updating the first initial task associated network sequence, determining a first node combination of a task associated network group corresponding to the minimum prediction loss function value, and obtaining a first task associated network model comprising the first node combination; the task associated network group corresponding to the first initial task associated network sequence comprises a preset first prediction node, a fusion node to be combined and a depth extraction node;
after the updated model parameters of the first task associated network model are determined to meet preset conditions, comparing the prediction parameters of the task process nodes, output by the first task associated network model based on the task process associated parameters in the first vector set, with the task process associated parameters of the task process nodes, and determining a first confidence degree of the first task associated network model according to the confidence range in which the confidence degrees between the plurality of prediction parameters and the task process associated parameters are greater than a preset second threshold value;
updating a preset second initial task associated network sequence according to a parameter comparison result of the task process associated parameter and the prediction parameter of the first task associated network model, determining a second node combination of the task associated network group corresponding to the minimum prediction loss function value to obtain a second task associated network model comprising the second node combination, and determining a second confidence degree of the first vector set based on a plurality of second task associated network sequences obtained by updating; the task associated network group in the second initial task associated network model comprises the preset fusion node, a second prediction node, and a depth extraction node to be combined; the second prediction node and the first prediction node have the same order but different output parameters, the output parameter of the first prediction node is the task process associated parameter, and the output parameter of the second prediction node is the parameter comparison result of the prediction parameter of the first task associated network model and the task process associated parameter;
determining a prediction vector corresponding to the prediction parameters of the first task association network model according to the first confidence degree and the second confidence degree, generating a relation feature map of the plurality of task feature vector elements based on the constraint relations among the plurality of task feature vector elements in the vector set of the task feature vector, and calculating an association value of each level of association relationship in the relation feature map, wherein the first confidence degree and the second confidence degree determine the prediction vector corresponding to the prediction parameters of the first task association network model through their respective corresponding weight parameters;
and determining the task process association relationship between the at least two task process nodes according to the association value of each level of association relationship in the relation feature map, wherein when the association value is greater than a set association value, it is determined that the level of association relationship exists between the at least two task process nodes; otherwise, it is determined that the level of association relationship does not exist between the at least two task process nodes.
In one possible design of the first aspect, the plurality of fusion nodes of different orders of the first initial task association network sequence are determined by:
analyzing the task process association parameters and the corresponding task process characteristics for the task process association parameters corresponding to the first vector set to obtain target task process characteristics of which the correlation degree with the task process association parameters is greater than a preset first threshold;
and determining the fusion node order of the first initial task associated network sequence according to the quantity of the target task process characteristics.
In a possible design of the first aspect, the step of constructing a corresponding task process association network according to the calculated task process association relationship between the task process nodes includes:
according to the calculated task process association relationships among the task process nodes, dividing each target task process node covered by the same type of task process association relationship into a node matrix, and according to the node distribution quantity in each node matrix, reducing the matrix order of the node matrices whose node distribution quantity is greater than a preset quantity threshold and expanding the matrix order of the node matrices whose node distribution quantity is less than the preset quantity threshold, to obtain each adjusted node matrix; wherein all task process nodes in each node matrix form a network unit;
calculating the network relation between each task process node and other task process nodes in a single network unit according to the position of each task process node in the single network unit;
for a single network unit, sequencing each task process node in the single network unit according to the sequence of the network relationship between each task process node and other task process nodes to obtain a task process node sequencing list;
for a single network unit, sequentially executing the following processes for each task process node in the task process node ordered list until determining a head task process node of the single network unit:
judging whether a first task grade of the task process node in the task process node ordered list is greater than a first preset grade, and if so, taking the task process node whose first task grade is greater than the first preset grade as the head task process node of the single network unit;
for a single network unit, taking the head task process node of the single network unit as the task process node with which mapping associations are established, and determining the other task process nodes of the single network unit, except the head task process node, as member task process nodes of the single network unit, wherein each member task process node of the single network unit is a task process node that is in mapping association with the head task process node of the single network unit;
and constructing a corresponding task process association network according to the determined head task process nodes and the member task process nodes of each network unit.
In a possible design of the first aspect, the step of determining, according to the constructed task process association network, a scheduling process corresponding to each task process node respectively includes:
acquiring a scheduling process topological space of each head task process node and each member task process node according to the network connection relationship between each head task process node and the member task process nodes in the constructed task process association network, and taking the scheduling process topological space as a scheduling unit, so that each head task process node and each member task process node are expressed as a scheduling unit consisting of the scheduling process topological spaces of the head task process node and the member task process nodes;
acquiring all similar scheduling units from the scheduling units of each head task process node and member task process node according to the scheduling types of the scheduling units corresponding to the head task process node and the member task process node to form a first scheduling unit sequence;
performing decision tree processing on the scheduling units in the first scheduling unit sequence corresponding to the head task process node and the member task process nodes to obtain a decision tree structure and a decision tree hierarchy;
calculating, according to the decision tree structure and the decision tree hierarchy, a screening scheduling relationship in which the scheduling unit taking the head task process node and the member task process nodes as a reference does not contain a scheduling relationship above a preset hierarchy level;
when the screening scheduling relationship in which the scheduling unit centered on each head task process node and member task process node does not contain a scheduling relationship above the preset level has been calculated for each head task process node and member task process node, obtaining, according to these screening scheduling relationships, the head task process nodes and member task process nodes that do not contain a scheduling relationship above the preset level;
obtaining a second scheduling unit sequence according to the head task process nodes and member task process nodes that do not contain a scheduling relationship above the preset level, and performing decision tree processing on the second scheduling unit sequence to obtain a decision tree structure sequence corresponding to the second scheduling unit sequence;
calculating opportunity nodes and decision tree characteristic vectors for the decision tree structure sequence, taking the decision tree characteristic vectors as initial values, and respectively processing scheduling units corresponding to the head task process node and the member task process nodes in the second scheduling unit sequence according to the opportunity nodes to obtain corresponding topology decision trees;
and respectively determining the scheduling process corresponding to each task process node according to the decision result in the topology decision tree.
In a possible design of the first aspect, the step of determining, according to a scheduling process corresponding to each task process node and a scheduling computation relationship between the plurality of cloud computing nodes, a scheduling cloud computing node corresponding to each task process node in the target cloud computing task includes:
determining a task process node sequence under each scheduling process according to the scheduling process corresponding to each task process node;
and determining scheduling cloud computing nodes aiming at the task process node sequence under each scheduling process according to the scheduling computing relationship among the plurality of cloud computing nodes.
In a second aspect, an embodiment of the present application further provides a cloud computing task scheduling apparatus, which is applied to a server, where the server is communicatively connected to a plurality of cloud computing nodes, and the apparatus includes:
the extraction module is used for acquiring a plurality of task process nodes from a target cloud computing task and extracting corresponding task process characteristics from the plurality of task process nodes respectively, wherein the task process characteristics are used for representing service characteristics corresponding to resources to be computed and corresponding to the task process nodes;
the construction module is used for determining task process association relationships among the task process nodes according to the extracted task process characteristics and constructing a corresponding task process association network according to the calculated task process association relationships among the task process nodes;
the determining module is used for respectively determining the scheduling process corresponding to each task process node according to the constructed task process association network;
and the scheduling module is used for determining scheduling cloud computing nodes corresponding to the task process nodes in the target cloud computing task according to the scheduling processes corresponding to the task process nodes and the scheduling computing relationship among the plurality of cloud computing nodes.
In a third aspect, an embodiment of the present application further provides a cloud computing system, where the cloud computing system includes a server and a plurality of cloud computing nodes communicatively connected to the server;
the server is used for acquiring a plurality of task process nodes from a target cloud computing task and extracting corresponding task process characteristics from the task process nodes respectively, wherein the task process characteristics are used for representing service characteristics corresponding to resources to be computed and corresponding to the task process nodes;
the server is used for determining task process association relationships among the task process nodes according to the extracted task process characteristics and constructing a corresponding task process association network according to the calculated task process association relationships among the task process nodes;
the server is used for respectively determining the scheduling process corresponding to each task process node according to the constructed task process association network;
the server is used for determining scheduling cloud computing nodes corresponding to the task process nodes in the target cloud computing task according to scheduling processes corresponding to the task process nodes and scheduling computing relations among the plurality of cloud computing nodes;
each cloud computing node is used for performing cloud computing processing on task process nodes in the target cloud computing tasks distributed by the server.
In a fourth aspect, an embodiment of the present application further provides a server, where the server includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is configured to be communicatively connected to at least one cloud computing node, the machine-readable storage medium is configured to store a program, an instruction, or code, and the processor is configured to execute the program, the instruction, or the code in the machine-readable storage medium to perform the cloud computing task scheduling method in the first aspect or any possible design of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a computer, the instructions cause the computer to perform the cloud computing task scheduling method in the first aspect or any one of the possible designs of the first aspect.
According to any one of the above aspects, corresponding task process characteristics are respectively extracted from the task process nodes in the target cloud computing task to determine the task process association relationships among the task process nodes, and the task process nodes are scheduled by combining the task process association relationships among the task process nodes with the scheduling computing relationship among the cloud computing nodes, so that the situation in which certain key services cannot be effectively matched with other associated task process nodes in time due to unreasonable allocation of different task process nodes is improved, and the service waiting time is reduced.
Detailed Description
The present application will now be described in detail with reference to the drawings, and the specific operations in the method embodiments may also be applied to the apparatus embodiments or the system embodiments. In the description of the present application, "at least one" includes one or more unless otherwise specified. "Plurality" means two or more. For example, at least one of A, B and C includes: A alone, B alone, A and B in combination, A and C in combination, B and C in combination, and A, B and C in combination. In this application, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
Fig. 1 is an interaction diagram of a cloud computing system 10 according to an embodiment of the present application. The cloud computing system 10 may include a server 100 and a cloud computing node 200 communicatively connected to the server 100, and the server 100 may include a processor for executing instruction operations. The cloud computing system 10 shown in fig. 1 is merely one possible example, and in other possible embodiments, the cloud computing system 10 may include only a portion of the components shown in fig. 1 or may include other components.
In some embodiments, the server 100 may be a single server or a group of servers. The set of servers may be centralized or distributed (e.g., server 100 may be a distributed system). In some embodiments, the server 100 may be local or remote to the cloud computing node 200. For example, the server 100 may access information stored in the cloud computing node 200 and a database, or any combination thereof, via a network. As another example, the server 100 may be directly connected to at least one of the cloud computing node 200 and a database to access information and/or data stored therein. In some embodiments, the server 100 may be implemented on a cloud platform; by way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud (community cloud), a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof.
In some embodiments, the server 100 may include a processor. The processor may process information and/or data related to the service request to perform one or more of the functions described herein. A processor may include one or more processing cores (e.g., a single-core processor or a multi-core processor). Merely by way of example, a processor may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction set Processor (ASIP), a Graphics Processing Unit (GPU), a Physical Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
The network may be used for the exchange of information and/or data. In some embodiments, one or more components in the cloud computing system 10 (e.g., the server 100, the cloud computing node 200, and the database) may send information and/or data to other components. In some embodiments, the network may be any type of wired or wireless network, or a combination thereof. Merely by way of example, the network may include a wired network, a wireless network, a fiber optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network may include one or more network access points. For example, the network may include wired or wireless network access points, such as base stations and/or network switching nodes, through which one or more components of the cloud computing system 10 may connect to the network to exchange data and/or information.
The aforementioned database may store data and/or instructions. In some embodiments, the database may store data distributed to the cloud computing node 200. In some embodiments, the database may store data and/or instructions for the exemplary methods described herein. In some embodiments, the database may include mass storage, removable storage, volatile read-write memory, or Read-Only Memory (ROM), or the like, or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid state drives, and the like; removable memory may include flash drives, floppy disks, optical disks, memory cards, zip disks, tapes, and the like; volatile read-write memory may include Random Access Memory (RAM); the RAM may include Dynamic RAM (DRAM), Double Data Rate Synchronous Dynamic RAM (DDR SDRAM), Static RAM (SRAM), Thyristor-Based Random Access Memory (T-RAM), Zero-capacitor RAM (Z-RAM), and the like. By way of example, ROM may include Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), Compact Disc ROM (CD-ROM), Digital Versatile Disc ROM (DVD-ROM), and the like. In some embodiments, the database may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, the database may be connected to a network to communicate with one or more components in the cloud computing system 10 (e.g., the server 100, the cloud computing node 200, etc.). One or more components in the cloud computing system 10 may access data or instructions stored in the database via the network. In some embodiments, the database may be directly connected to one or more components in the cloud computing system 10 (e.g., the server 100, the cloud computing node 200, etc.). Alternatively, in some embodiments, the database may be part of the server 100.
In this embodiment, the cloud computing node 200 may be various computing devices for performing cloud computing tasks, such as a server, a high performance computer, and the like, and this embodiment is not limited in particular herein.
To solve the technical problem in the foregoing background art, fig. 2 is a schematic flowchart of a cloud computing task scheduling method provided in an embodiment of the present application, where the cloud computing task scheduling method provided in the present embodiment may be executed by the server 100 shown in fig. 1, and the following describes the cloud computing task scheduling method in detail.
Step S110, a plurality of task process nodes are obtained from the target cloud computing task, and corresponding task process characteristics are respectively extracted from the task process nodes.
Step S120, determining task process association relations among the task process nodes according to the extracted task process characteristics, and constructing corresponding task process association networks according to the calculated task process association relations among the task process nodes.
Step S130, according to the constructed task process association network, respectively determining the scheduling process corresponding to each task process node.
Step S140, determining the cloud computing node 200 corresponding to each task process node in the target cloud computing task according to the scheduling process corresponding to each task process node and the scheduling computing relationship between the plurality of cloud computing nodes 200.
In this embodiment, for step S110, the task process characteristics may be used to represent service characteristics corresponding to resources to be calculated corresponding to the task process node. For example, the resource to be calculated may be 3D modeling basic data, and the service feature corresponding to the 3D modeling basic data may be a service feature of an animation rendering service, which is not limited herein.
Based on the above design, in this embodiment, corresponding task process features are respectively extracted from a plurality of task process nodes in a target cloud computing task to determine task process association relationships among the task process nodes, and the task process nodes are scheduled by combining the task process association relationships among the task process nodes and the scheduling computing relationship among the cloud computing nodes 200, so that a situation that a certain task process node cannot be effectively matched with other associated task process nodes in time due to unreasonable allocation of different task process nodes in some key services is improved, and service waiting time is reduced.
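For orientation only, the following is a minimal Python sketch of the overall flow of steps S110 to S140 under strongly simplified assumptions: a task process feature is reduced to a single label carried by each node, the task process association network to groups of nodes sharing a label, and the scheduling computing relationship to a per-node load counter. All identifiers and simplifications below are introduced for illustration and do not come from the embodiment itself.

```python
from collections import defaultdict

def schedule_cloud_computing_task(process_nodes, cloud_node_load):
    """Hypothetical end-to-end sketch of steps S110-S140 (all names are illustrative)."""
    # S110: "extract" a task process feature per node (here simply carried in the input).
    features = {n["id"]: n["feature"] for n in process_nodes}

    # S120: nodes sharing a feature are treated as service-associated and grouped
    # into one network unit of the task process association network.
    units = defaultdict(list)
    for node_id, feature in features.items():
        units[feature].append(node_id)

    # S130: per network unit, the highest-level node acts as head; the whole unit
    # is treated as one scheduling process named after its feature.
    levels = {n["id"]: n["level"] for n in process_nodes}
    scheduling_processes = {
        feature: {"head": max(members, key=levels.get), "members": members}
        for feature, members in units.items()
    }

    # S140: each scheduling process (its node sequence as a unit) goes to the
    # currently least-loaded cloud computing node, keeping associated nodes together.
    assignment = {}
    for feature in sorted(scheduling_processes):
        members = scheduling_processes[feature]["members"]
        target = min(cloud_node_load, key=cloud_node_load.get)
        cloud_node_load[target] += len(members)
        for node_id in members:
            assignment[node_id] = target
    return assignment

if __name__ == "__main__":
    nodes = [{"id": "p1", "feature": "render", "level": 3},
             {"id": "p2", "feature": "render", "level": 1},
             {"id": "p3", "feature": "bigdata", "level": 2}]
    print(schedule_cloud_computing_task(nodes, {"node_a": 1, "node_b": 0}))
```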
In a possible design, for step S110, in order to reduce redundant features and improve the accuracy and reliability of the subsequent task process association relationships during task process feature extraction, in this embodiment, service classification processing may be performed on the task process service information corresponding to the plurality of task process nodes to obtain a service classification entry of each piece of task process service information. Then, the service level of each service classification entry obtained by the service classification processing is determined, the service classification entries are ranked in descending order of their corresponding service levels, and the service classification entries within a set ranking are selected from the service classification entries obtained by the service classification processing.
For example, the service classification table entry may be determined in advance according to the classification type described in the service information of each task process, and one classification type may correspond to multiple service classification table entries.
On this basis, the service classification table entries of the task process features specified by the task process feature extraction strategy can be determined according to a preset task process feature extraction strategy for the service classification table entries. When the same piece of task process service information comprises a plurality of service classification table entries belonging to different task process features, the number of service classification table entries of each task process feature in that piece of task process service information is counted. Therefore, the task process feature with the largest counted number of service classification table entries can be determined, the feature attribute of the determined task process feature is added to that piece of task process service information, and the feature attribute of a non-task-process feature is added to the task process service information whose service classification table entries do not include any task process feature.
In this embodiment, the task process feature extraction policy may specify a service classification table entry of the task process feature, and may specifically be predetermined according to the service level and the service importance, which is not specifically limited herein. In addition, the feature attribute may refer to a cloud computing type corresponding to the task process feature, such as big data processing, cloud rendering, and the like.
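As a hedged illustration of the counting step described above, the sketch below tallies the service classification table entries of each task process feature within one piece of task process service information and picks the feature attribute by majority; the entry_to_feature table and the feature names are hypothetical placeholders, not identifiers defined by the embodiment.

```python
from collections import Counter

# Hypothetical mapping from service classification entries to task process features.
entry_to_feature = {"entry_render_1": "cloud_rendering",
                    "entry_render_2": "cloud_rendering",
                    "entry_stats_1": "big_data_processing"}

def assign_feature_attribute(service_info_entries):
    # Count the service classification entries of each task process feature within
    # one piece of task process service information.
    counts = Counter(entry_to_feature[e] for e in service_info_entries
                     if e in entry_to_feature)
    if not counts:
        # No entry maps to a task process feature: add the non-task-process attribute.
        return "non_task_process_feature"
    # The feature with the largest number of service classification entries wins.
    return counts.most_common(1)[0][0]

print(assign_feature_attribute(["entry_render_1", "entry_render_2", "entry_stats_1"]))
# -> cloud_rendering
print(assign_feature_attribute(["entry_unknown"]))
# -> non_task_process_feature
```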
And then, fusing the task process service information to be subjected to feature extraction and the added feature attributes to obtain a first network model, inputting each service classification table entry into the first network model, and outputting the confidence coefficient of each service classification table entry for each task process feature.
Then, the service classification table entry whose confidence coefficient for each task process feature is greater than or equal to the first confidence coefficient threshold can be re-determined as the service classification table entry of that task process feature, and the process returns to the step of adding the feature attribute of the determined task process feature to the same task process service information and continues until the iteration stop condition is met, at which point the feature attribute of the task process service information to be subjected to feature extraction is obtained.
It is worth noting that, after the iteration stop condition is met, the confidence degree, determined by the corresponding network model, of the task process service information to be subjected to feature extraction for each task process feature is obtained, and the task process service information whose confidence degree for each task process feature is greater than or equal to a second confidence degree threshold is screened out.
On the basis, fusion can be performed according to the screened task process business information and the corresponding characteristic attribute to obtain a second network model, then the confidence coefficient of the task process business information to be subjected to characteristic extraction on each task process characteristic is determined through the second network model, and the characteristic attribute of the corresponding task process business information is updated according to the confidence coefficient of the task process business information to be subjected to characteristic extraction on each task process characteristic.
After the characteristic attribute of the corresponding task process business information is updated according to the confidence coefficient of the task process business information to be subjected to characteristic extraction on each task process characteristic, the step of screening the task process business information with the confidence coefficient larger than or equal to the second confidence coefficient threshold value on each task process characteristic is returned to be continuously executed until the updating stop condition is met, and the characteristic attribute after the task process business information to be subjected to characteristic extraction is updated is obtained.
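The iterative refinement loop described above can be pictured roughly as follows. This is a simplified self-training sketch in which any callable returning a per-feature confidence stands in for the first and second network models obtained by fusion; the names refine_attributes, toy_score, and the feature labels are hypothetical.

```python
def refine_attributes(service_infos, score, threshold, max_rounds=5):
    """service_infos: dict info_id -> current feature attribute (or None).
    score: callable(info_id, feature) -> confidence in [0, 1].
    Re-assigns an attribute whenever a feature's confidence clears the threshold,
    and stops once nothing changes (the 'update stop condition')."""
    features = ("cloud_rendering", "big_data_processing")
    for _ in range(max_rounds):
        changed = False
        for info_id in service_infos:
            best = max(features, key=lambda f: score(info_id, f))
            if score(info_id, best) >= threshold and service_infos[info_id] != best:
                service_infos[info_id] = best
                changed = True
        if not changed:  # iteration / update stop condition
            break
    return service_infos

# Toy scorer standing in for the first/second network models.
keywords = {"cloud_rendering": "render", "big_data_processing": "data"}
toy_score = lambda info_id, f: 0.9 if keywords[f] in info_id else 0.1
print(refine_attributes({"render_frames": None, "data_batch": None}, toy_score, 0.5))
# -> {'render_frames': 'cloud_rendering', 'data_batch': 'big_data_processing'}
```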
Next, the confidence degree, determined by the second network model after the feature attributes are updated, of the task process service information to be subjected to feature extraction for each task process feature, as well as its confidence degree for the non-task-process feature, can be obtained. The task process service information whose confidence degree for each task process feature, determined after the feature attributes are updated, is greater than or equal to a third confidence degree threshold is then selected, and the selected task process service information and the corresponding feature attributes are fused to obtain a third network model. Then, the confidence degree of the task process service information to be subjected to feature extraction for each task process feature is determined through the third network model, and the task process feature of the corresponding task process service information is determined according to the confidence degree of each task process feature determined through the third network model. Target task process service information different from the task process service information to be subjected to feature extraction is then obtained, the confidence degree of the target task process service information for each task process feature is determined through the third network model, and the task process feature corresponding to the target task process service information is determined according to the confidence degree of the target task process service information for each task process feature.
Therefore, according to the determined task process features of each piece of task process service information, the task process features can be aggregated by task process node, so that the corresponding task process features are respectively extracted for the plurality of task process nodes.
Therefore, based on the design, the embodiment can effectively reduce redundant features and improve the accuracy and reliability of the incidence relation of the subsequent task process through the series of data screening and the classification processing of the network model.
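Purely as an illustration of the final labeling and aggregation step, the sketch below lets a stand-in "third model" (any callable yielding a per-feature confidence) label target task process service information and then groups the resulting features by task process node; all identifiers and data are hypothetical.

```python
from collections import defaultdict

def label_and_aggregate(target_infos, third_model, threshold=0.6):
    """target_infos: list of dicts {"id": ..., "node": task process node id}.
    third_model: callable(info_id) -> {feature: confidence}."""
    node_features = defaultdict(set)
    for info in target_infos:
        confidences = third_model(info["id"])
        feature, conf = max(confidences.items(), key=lambda kv: kv[1])
        if conf >= threshold:  # keep only confident labels
            node_features[info["node"]].add(feature)
    return dict(node_features)

# Toy stand-in for the third network model.
toy_third_model = lambda info_id: {"cloud_rendering": 0.8 if "frame" in info_id else 0.2,
                                   "big_data_processing": 0.7 if "log" in info_id else 0.2}
infos = [{"id": "frame_batch_1", "node": "proc_1"},
         {"id": "log_shard_7", "node": "proc_2"}]
print(label_and_aggregate(infos, toy_third_model))
# -> {'proc_1': {'cloud_rendering'}, 'proc_2': {'big_data_processing'}}
```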
In a possible design, for step S120, in the process of determining the task process association relationship between the task process nodes, in order to avoid that a part of redundant or excessive hierarchical task process association relationships are determined, which may cause unreasonable subsequent scheduling allocation, the embodiment may determine, according to the extracted task process features, a first vector set of task feature vectors corresponding to at least two task process nodes. In this embodiment, the task feature vector may include a plurality of task feature vector elements, such as a decision association element (which is associated with another task process node if a decision condition is satisfied), a judgment association element (which is associated with another task process node if a judgment result is satisfied), and the like, which are not limited herein.
Next, a first initial task-associated network sequence may be selected. And the task associated network group corresponding to the first initial task associated network sequence comprises a preset first prediction node, a fusion node to be combined and a depth extraction node. It should be noted that the preset first prediction node, the fusion node to be combined, and the depth extraction node may select an existing general feature network structure according to actual requirements, and this embodiment is not specifically limited herein.
On the basis, for the first vector set corresponding to each task feature vector element, combining the first prediction node of the first initial task association network model and the fusion node of each order to obtain a plurality of combined node sequences. Then, the first vector set is mapped according to the plurality of combined node sequences respectively to obtain sequence pairs of various different combined node sequences.
It should be further noted that the input parameter of the fusion node in the above-mentioned combined node sequence is the task process characteristic of the task process node corresponding to the first vector set, and the output parameter of the first prediction node is the task process related parameter of the task process node corresponding to the first vector set.
Then, according to the determined sequence pair and the depth extraction nodes of the first initial task associated network sequence with the plurality of different orders, the first initial task associated network sequence is updated, a first node combination of the task associated network group corresponding to the minimum prediction loss function value is determined, and a first task associated network model including the first node combination is obtained.
It should be further explained that the task related network group corresponding to the first initial task related network sequence includes a preset first prediction node, a fusion node to be combined, and a depth extraction node.
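A rough analogue of selecting the first node combination is sketched below: candidate combinations (here, simple polynomial fits of different orders standing in for fusion nodes of different orders combined with a prediction node) are scored on the first vector set, and the one with the minimum prediction loss is kept. The polynomial stand-in is an assumption made only to keep the example concrete.

```python
import numpy as np

def select_first_node_combination(x, y, candidate_orders=(1, 2, 3)):
    """x: task process features of the first vector set; y: task process association parameters."""
    best_order, best_loss, best_coeffs = None, float("inf"), None
    for order in candidate_orders:                 # one candidate combined node sequence per order
        coeffs = np.polyfit(x, y, order)
        loss = float(np.mean((np.polyval(coeffs, x) - y) ** 2))   # prediction loss
        if loss < best_loss:
            best_order, best_loss, best_coeffs = order, loss, coeffs
    return best_order, best_coeffs, best_loss

x = np.array([0.1, 0.4, 0.5, 0.9])
y = np.array([0.2, 0.5, 0.55, 0.95])
print(select_first_node_combination(x, y))
```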
Therefore, after the updated model parameters of the first task associated network model are determined to meet the preset conditions, the predicted parameters of the task process nodes output by the first task associated network model based on the task process associated parameters in the first vector set are compared with the task process associated parameters of the task process nodes, and the first confidence degree of the first task associated network model is determined according to the confidence range that the confidence degrees between the plurality of predicted parameters and the task process associated parameters are larger than the preset second threshold value.
Meanwhile, according to a parameter comparison result of the task process correlation parameter and the prediction parameter of the first task correlation network model, a preset second initial task correlation network sequence is updated, a second node combination of the task correlation network group corresponding to the minimum prediction loss function value is determined to obtain a second task correlation network model comprising the second node combination, and a second confidence degree of the first vector set is determined based on the plurality of second task correlation network sequences obtained through updating.
It should be further noted that the task associated network group in the second initial task associated network model includes a preset fusion node, a second prediction node, and a depth extraction node to be combined, the second prediction node and the first prediction node have the same order but different output parameters, the output parameter of the first prediction node is a task process associated parameter, and the output parameter of the second prediction node is a parameter comparison result between the prediction parameter of the first task associated network model and the task process associated parameter.
Therefore, according to the first confidence degree and the second confidence degree, the prediction vector corresponding to the prediction parameter of the first task association network model is determined, the relation feature map based on the various task feature vector elements is generated based on the constraint relation among the various task feature vector elements in the vector set of the task feature vector, and the association value of each level of association relation in the relation feature map is calculated. And then, determining the task process incidence relation between the at least two task process nodes according to the incidence value of each level of incidence relation in the relation characteristic map.
The first confidence degree and the second confidence degree can determine the prediction vector corresponding to the prediction parameters of the first task association network model through their respective corresponding weight parameters. For example, if the first confidence degree is A, the second confidence degree is B, and their respective corresponding weight parameters are a1 and b1, the corresponding prediction vector can be calculated according to the weighted result A × a1 + B × b1 and the prediction parameters of the first task association network model.
It can be understood that, when the association value is greater than the set association value, it is determined that the level association relationship exists between the at least two task process nodes, otherwise, it is determined that the level association relationship does not exist between the at least two task process nodes. Therefore, the situation that a part of redundant task processes or task process association relations with excessive levels are determined can be effectively avoided, and the rationality of subsequent scheduling distribution is improved.
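The last two operations, weighting the two confidence degrees into a prediction vector and keeping only association relationships whose association value exceeds the set association value, might look like the following sketch; the weight values and data are illustrative assumptions.

```python
def prediction_vector(pred_params, conf_a, conf_b, w_a=0.6, w_b=0.4):
    # Weighted confidence A*a1 + B*b1 scales the first model's prediction parameters.
    weight = conf_a * w_a + conf_b * w_b
    return [weight * p for p in pred_params]

def filter_relations(relation_map, set_association_value=0.5):
    # relation_map: {(node_i, node_j, level): association value}; keep only the
    # levels whose association value is greater than the set association value.
    return {k: v for k, v in relation_map.items() if v > set_association_value}

vec = prediction_vector([0.3, 0.8], conf_a=0.9, conf_b=0.7)
relations = {("p1", "p2", "level_1"): 0.72, ("p1", "p3", "level_2"): 0.31}
print(vec, filter_relations(relations))
```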
Illustratively, the above-mentioned multiple fusion nodes of different orders of the first initial task correlation network sequence may be determined by:
and analyzing the task process associated parameters and the corresponding task process characteristics corresponding to the first vector set to obtain target task process characteristics of which the correlation degree with the task process associated parameters is greater than a preset first threshold, and determining the fusion node order of the first initial task associated network sequence according to the number of the target task process characteristics.
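A hedged sketch of this order-selection rule: the correlation of each task process feature with the task process association parameter is measured (here with a plain Pearson correlation, an assumption of this example), features above the preset first threshold are counted, and the count is used as the fusion node order.

```python
import numpy as np

def fusion_node_order(feature_matrix, association_params, first_threshold=0.5):
    """feature_matrix: shape (samples, features); association_params: shape (samples,)."""
    orders = 0
    for col in feature_matrix.T:
        corr = abs(np.corrcoef(col, association_params)[0, 1])
        if corr > first_threshold:       # this column is a target task process feature
            orders += 1
    return max(orders, 1)                # keep at least one fusion node order

features = np.array([[0.1, 0.9], [0.2, 0.1], [0.3, 0.8], [0.4, 0.2]])
params = np.array([0.15, 0.25, 0.33, 0.41])
print(fusion_node_order(features, params))
```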
In a possible design, for step S120, in order to further reduce the amount of computation and ensure timely scheduling of task process nodes with high priority in the process of constructing a corresponding task process association network, this embodiment may divide each target task process node covered by the same type of task process association relationship into a node matrix according to the computed task process association relationship among each task process node, reduce the matrix order of the node matrix whose node distribution number is greater than the preset number threshold according to the node distribution number in each node matrix, and expand the matrix order of the node matrix whose node distribution number is less than the preset number threshold, to obtain each adjusted node matrix. All task process nodes in each node matrix form a network unit.
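For illustration, the sketch below groups task process nodes by the type of their association relationship into node matrices and then shrinks or grows each matrix order around a preset quantity threshold; the square-root sizing rule is an assumption of the example, not part of the embodiment.

```python
import math
from collections import defaultdict

def build_node_matrices(relations, preset_threshold=4):
    """relations: list of (node_id, relation_type) pairs."""
    groups = defaultdict(list)
    for node_id, relation_type in relations:
        groups[relation_type].append(node_id)   # same relation type -> same node matrix

    matrices = {}
    for relation_type, nodes in groups.items():
        order = math.ceil(math.sqrt(len(nodes)))       # initial matrix order
        if len(nodes) > preset_threshold:
            order = max(order - 1, 1)                   # reduce the matrix order
        elif len(nodes) < preset_threshold:
            order = order + 1                           # expand the matrix order
        matrices[relation_type] = {"nodes": nodes, "order": order}
    return matrices

rels = [("p1", "render"), ("p2", "render"), ("p3", "render"),
        ("p4", "etl"), ("p5", "etl"), ("p6", "etl"), ("p7", "etl"), ("p8", "etl")]
print(build_node_matrices(rels))
```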
Then, according to the position of each task process node in a single network unit, a network relationship between each task process node and other task process nodes in the single network unit is calculated, for example, the network relationship may refer to a network unit distance between each task process node and other task process nodes.
And for a single network unit, sequencing each task process node in the single network unit according to the sequence of the network relationship between each task process node and other task process nodes to obtain a task process node sequencing list. Meanwhile, for a single network unit, sequentially executing the following processes on each task process node in the task process node ordered list until determining a head task process node of the single network unit:
on the basis, whether the first task level of the task process nodes in the task process node ordered list is larger than a first preset level or not can be judged, and if yes, the task process nodes larger than the first preset level are used as head task process nodes of a single network unit.
Further, for a single network unit, the head task process node of the single network unit is taken as the task process node with which mapping associations are established, and the other task process nodes of the single network unit, except the head task process node, are determined as member task process nodes of the single network unit, wherein each member task process node of the single network unit is a task process node in mapping association with the head task process node of the single network unit.
Therefore, the corresponding task process association network can be constructed according to the determined head task process node and the member task process node of each network unit. That is, the task process association network may be a network including a plurality of network units, each of which is formed by a head task process node and a member task process node associated with the head task process node in a mapping manner.
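The head/member selection inside one network unit can be pictured as in the sketch below, where the "network relationship" is approximated by a simple distance between node positions and the head is the first node in the ordered list whose task level exceeds the first preset level; the positions, levels, and distance metric are illustrative assumptions.

```python
def build_network_unit(nodes, first_preset_level=2):
    """nodes: list of dicts {"id": str, "pos": (x, y), "level": int}."""
    def total_distance(node):
        # Stand-in for the network relationship between a node and the other nodes.
        return sum(abs(node["pos"][0] - o["pos"][0]) + abs(node["pos"][1] - o["pos"][1])
                   for o in nodes if o is not node)

    ordered = sorted(nodes, key=total_distance)          # task process node ordered list
    head = next((n for n in ordered if n["level"] > first_preset_level), ordered[0])
    members = [n["id"] for n in nodes if n is not head]  # mapping-associated with the head
    return {"head": head["id"], "members": members}

unit_nodes = [{"id": "p1", "pos": (0, 0), "level": 1},
              {"id": "p2", "pos": (1, 0), "level": 3},
              {"id": "p3", "pos": (2, 2), "level": 2}]
print(build_network_unit(unit_nodes))
# -> {'head': 'p2', 'members': ['p1', 'p3']}
```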
Based on the above description, for step S130, a scheduling process topology space of each head task process node and member task process node in the constructed task process association network may be obtained according to the network connection relationship between each head task process node and the member task process node, and the scheduling process topology space is used as a scheduling unit, so that each head task process node and member task process node are expressed as a scheduling unit composed of the scheduling process topology spaces of the head task process node and the member task process node.
Then, all similar scheduling units are obtained from the scheduling units of each head task process node and the member task process node according to the scheduling types of the scheduling units corresponding to the head task process node and the member task process node to form a first scheduling unit sequence, and the scheduling units corresponding to the head task process node and the member task process node in the first scheduling unit sequence are subjected to decision tree processing to obtain a decision tree structure and a decision tree hierarchy.
Then, a screening scheduling relationship in which the scheduling unit based on the head task process node and the member task process nodes does not include a scheduling relationship above a preset level can be calculated according to the decision tree structure and the decision tree level.
When the screening scheduling relationship in which the scheduling unit centered on each head task process node and member task process node does not contain a scheduling relationship above the preset level has been calculated for each head task process node and member task process node, the head task process nodes and member task process nodes that do not contain a scheduling relationship above the preset level are obtained according to these screening scheduling relationships.
Then, a second scheduling unit sequence may be obtained according to the head task process node and the member task process node that do not contain the scheduling relationship above the preset level, and the decision tree processing may be performed on the second scheduling unit sequence to obtain a decision tree structure sequence corresponding to the second scheduling unit sequence. And then calculating opportunity nodes and decision tree feature vectors for the decision tree structure sequence, taking the decision tree feature vectors as initial values, and respectively processing the scheduling units corresponding to the head task process node and the member task process nodes in the second scheduling unit sequence according to the opportunity nodes to obtain the corresponding topology decision tree. Therefore, the scheduling process corresponding to each task process node can be respectively determined according to the decision result in the topology decision tree.
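As a loose illustration of this screening-and-decision step, the sketch below drops scheduling units whose scheduling relations reach beyond a preset hierarchy level and pushes the rest through a tiny hand-written decision tree that names a scheduling process per task process node; the unit attributes and the tree itself are invented for the example and do not reflect the learned topology decision tree of the embodiment.

```python
def decide_scheduling_processes(scheduling_units, preset_level=2):
    """scheduling_units: list of dicts
    {"node": str, "type": str, "relation_depth": int, "latency_sensitive": bool}."""
    # Screening: drop units containing scheduling relations above the preset level.
    second_sequence = [u for u in scheduling_units if u["relation_depth"] <= preset_level]

    def decision_tree(unit):               # stands in for the learned topology decision tree
        if unit["type"] == "render":
            return "gpu_process" if unit["latency_sensitive"] else "batch_render_process"
        return "data_process"

    return {u["node"]: decision_tree(u) for u in second_sequence}

units = [{"node": "p1", "type": "render", "relation_depth": 1, "latency_sensitive": True},
         {"node": "p2", "type": "render", "relation_depth": 3, "latency_sensitive": False},
         {"node": "p3", "type": "etl", "relation_depth": 2, "latency_sensitive": False}]
print(decide_scheduling_processes(units))
# p2 is screened out because its scheduling relation exceeds the preset level.
```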
In a possible design, for step S140, the present embodiment may determine a task process node sequence under each scheduling process according to the scheduling process corresponding to each task process node. Next, the cloud computing node 200 for the task process node sequence under each scheduling process may be determined according to the scheduling computing relationship among the plurality of cloud computing nodes 200.
For example, assume that the scheduling processes include scheduling process A, scheduling process B, scheduling process C, and scheduling process D, and that the cloud computing nodes 200 include cloud computing node a, cloud computing node b, cloud computing node c, cloud computing node d, and cloud computing node e. The scheduling computing relationship may refer to the execution time sequence among these cloud computing nodes when executing the same scheduling process, which is generally related to their current load balancing degrees. Therefore, the task process node sequences of scheduling process A, scheduling process B, scheduling process C, and scheduling process D can be scheduled according to the current load balancing degrees of cloud computing node a through cloud computing node e, taking the task process node sequence of one scheduling process as a unit. In this way, the situation in which a key service cannot be effectively matched with its associated task process nodes in time due to unreasonable allocation of different task process nodes can be improved, and the service waiting time can be reduced.
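Mirroring the example above, the sketch below dispatches each scheduling process's task process node sequence as a whole to the currently least-loaded cloud computing node; the load numbers and the largest-sequence-first ordering are assumptions of the sketch.

```python
def assign_sequences(process_sequences, node_load):
    """process_sequences: {scheduling process name: [task process node ids]}.
    node_load: {cloud computing node name: current load}."""
    placement = {}
    for proc, sequence in sorted(process_sequences.items(),
                                 key=lambda kv: len(kv[1]), reverse=True):
        target = min(node_load, key=node_load.get)       # least-loaded cloud node first
        node_load[target] += len(sequence)               # the whole sequence is scheduled as a unit
        placement[proc] = target
    return placement

sequences = {"A": ["p1", "p2", "p3"], "B": ["p4"], "C": ["p5", "p6"], "D": ["p7"]}
load = {"a": 2, "b": 0, "c": 1, "d": 0, "e": 3}
print(assign_sequences(sequences, load))
```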
Fig. 3 is a schematic functional module diagram of a cloud computing task scheduling device 300 according to an embodiment of the present disclosure. The cloud computing task scheduling device 300 may be divided into functional modules according to the foregoing method embodiments; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of the modules in the present application is schematic and is only a logical function division; other division manners may exist in actual implementation. In the case of dividing the functional modules according to the respective functions, the cloud computing task scheduling device 300 shown in fig. 3 is only a schematic diagram. The cloud computing task scheduling device 300 may include an extraction module 310, a construction module 320, a determination module 330, and a scheduling module 340, and the functions of these functional modules are described in detail below.
The extracting module 310 is configured to obtain a plurality of task process nodes from a target cloud computing task, and extract corresponding task process features from the plurality of task process nodes, where the task process features are used to represent service features corresponding to resources to be computed corresponding to the task process nodes.
The constructing module 320 is configured to determine a task process association relationship between each task process node according to the extracted task process features, and construct a corresponding task process association network according to the calculated task process association relationship between each task process node.
The determining module 330 is configured to determine, according to the constructed task process association network, a scheduling process corresponding to each task process node.
The scheduling module 340 is configured to determine, according to a scheduling process corresponding to each task process node and a scheduling computation relationship between the plurality of cloud computing nodes 200, a cloud computing node 200 corresponding to each task process node in the target cloud computing task.
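To make this division concrete, the following is a schematic sketch only (in Python, with all method names and data shapes hypothetical; the embodiment does not prescribe any particular implementation of the modules):

    class CloudTaskSchedulingDevice:
        # schematic wiring of the four functional modules of device 300
        def __init__(self, extraction, construction, determination, scheduling):
            self.extraction = extraction          # extraction module 310
            self.construction = construction      # construction module 320
            self.determination = determination    # determination module 330
            self.scheduling = scheduling          # scheduling module 340

        def run(self, target_task, cloud_nodes):
            nodes = self.extraction.get_task_process_nodes(target_task)
            features = self.extraction.extract_features(nodes)
            network = self.construction.build_association_network(nodes, features)
            processes = self.determination.determine_scheduling_processes(network)
            return self.scheduling.assign_cloud_nodes(processes, cloud_nodes)

Keeping the four modules behind a thin coordinating object mirrors the logical division described above while leaving each module free to be realized in hardware or as a software functional module.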
Further, fig. 4 is a schematic structural diagram of a server 100 for executing the cloud computing task scheduling method according to the embodiment of the present application. As shown in fig. 4, the server 100 may include a network interface 110, a machine-readable storage medium 120, a processor 130, and a bus 140. There may be one or more processors 130; one processor 130 is illustrated in fig. 4 as an example. The network interface 110, the machine-readable storage medium 120, and the processor 130 may be connected by a bus 140 or otherwise, the connection by the bus 140 in fig. 4 being taken as an example.
The machine-readable storage medium 120 is a computer-readable storage medium and can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the cloud computing task scheduling method in the embodiments of the present application (for example, the extraction module 310, the construction module 320, the determination module 330, and the scheduling module 340 of the cloud computing task scheduling apparatus 300 shown in fig. 3). The processor 130 performs the various functional applications and data processing of the server 100 by running the software programs, instructions, and modules stored in the machine-readable storage medium 120, thereby implementing the cloud computing task scheduling method described above; details are not repeated here.
The machine-readable storage medium 120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and a program required for at least one function, and the storage data area may store data created according to the use of the terminal, and the like. Further, the machine-readable storage medium 120 may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to comprise, without being limited to, these and any other suitable types of memory. In some examples, the machine-readable storage medium 120 may further include memory located remotely from the processor 130, and the remote memory may be connected to the server 100 over a network. Examples of such networks include, but are not limited to, the internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The processor 130 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor 130 or by instructions in the form of software. The processor 130 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor.
The server 100 may interact with other devices (e.g., the cloud computing node 200) via the network interface 110. Network interface 110 may be a circuit, bus, transceiver, or any other device that may be used to exchange information. Processor 130 may send and receive information using network interface 110.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center that integrates one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made to the embodiments of the present application without departing from the spirit and scope of the application. Thus, to the extent that such alterations and modifications of the embodiments of the application fall within the scope of the claims and their equivalents, the application is intended to embrace them.