CN114780224A - Resource scheduling method and system applied to the metaverse - Google Patents

Resource scheduling method and system applied to the metaverse

Info

Publication number
CN114780224A
CN114780224A
Authority
CN
China
Prior art keywords
resource
fog
learning model
fog node
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210475560.7A
Other languages
Chinese (zh)
Inventor
Yang Liang (杨亮)
Li Xiaoyan (李晓燕)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Gujian Technology Co., Ltd.
Original Assignee
Shenzhen Gujian Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Gujian Technology Co., Ltd.
Priority to CN202210475560.7A
Publication of CN114780224A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44594 Unloading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a resource scheduling method applied to the metaverse, which comprises the following steps: a cloud server acquires resource index parameters of a plurality of fog nodes; early warning is performed on the resource occupation of the plurality of fog nodes based on a distributed federated learning model; and a computation task offloading instruction is provided to a first fog node based on the early-warning information of the resource occupation, so as to complete task offloading from the first fog node, wherein the predicted resource occupancy of the first fog node exceeds a preset threshold, and the task is a metaverse application.

Description

Resource scheduling method and system applied to the metaverse
Technical Field
The invention belongs to the technical field of information, and particularly relates to a resource scheduling method and system applied to the metaverse.
Background
The metaverse is the virtualization and digitization of the real world, and it requires extensive changes to content production, economic systems, user experience, and physical-world content. The development of the metaverse is gradual, however: it will ultimately be shaped by the continuous fusion and evolution of many tools and platforms, supported by shared infrastructure, standards, and protocols. It provides immersive experience based on augmented reality technology, generates a mirror of the real world based on digital twin technology, builds an economic system based on blockchain technology, tightly fuses the virtual world and the real world across economic, social, and identity systems, and allows every user to produce content and edit the world.
The popularization of the metaverse depends on the upgrading and fusion of many technologies, such as 5G communication and big-data processing. Current metaverse application scenarios demand low-latency, high-reliability task processing, which the traditional cloud computing and fog computing processing modes cannot satisfy; a better resource scheduling solution is therefore needed.
Disclosure of Invention
The invention provides a resource scheduling method and system applied to the metaverse, which effectively solve the problem that the prior art cannot meet the low-latency, high-reliability task processing requirements of metaverse application scenarios, improve the efficiency of resource scheduling, and improve the reliability of task processing.
In order to achieve the above object, the present invention provides a resource scheduling method applied to the metaverse, including:
the cloud server acquires resource index parameters of a plurality of fog nodes;
performing early warning on the resource occupation of the plurality of fog nodes based on a distributed federated learning model;
and providing a computation task offloading instruction to a first fog node based on the early-warning information of the resource occupation, so as to complete task offloading from the first fog node, wherein the predicted resource occupancy of the first fog node exceeds a preset threshold, and the task is a metaverse application.
Optionally, the performing of early warning on the resource occupation of the plurality of fog nodes based on the distributed federated learning model includes:
analyzing the resource occupation index parameters in a first historical period by using a Transformer, and outputting a first resource anomaly feature vector;
performing quadratic interpolation on the first resource anomaly feature vector, and outputting a second resource anomaly feature vector;
outputting the second resource anomaly feature vector to the distributed federated learning model so as to train the distributed federated learning model and obtain associations across different time and space dimensions;
and performing early warning on the resource occupation of the plurality of fog nodes based on the trained distributed federated learning model.
Optionally, the training of the distributed federated learning model to obtain associations across different time and space dimensions includes:
performing space-time feature splitting on the second resource anomaly feature vector, splitting it into features of a time dimension and features of a space dimension;
for the features of the time dimension, performing time-series encoding using Position Encoding, and exploring the feature associations of the time dimension using an Attention method;
and for the features of the space dimension, extracting features of multiple spatial dimensions using Multi-Head Attention.
Optionally, the performing of quadratic interpolation on the first resource anomaly feature vector and outputting of a second resource anomaly feature vector includes:
performing mean processing on the first resource anomaly feature vector;
and performing interpolation on every 3 adjacent points of the mean-processed data to obtain the second resource anomaly feature vector.
Optionally, before the early warning on the resource occupation of the plurality of fog nodes based on the distributed federated learning model, the method further includes:
the central cloud establishes a federated learning model;
the central cloud obtains a heartbeat packet sent by the cloud server, wherein the heartbeat packet includes a federated learning model training flag;
and if the federated learning model training flag is 0, distributing the federated learning model to the cloud server.
Optionally, after the providing of the computation task offloading instruction to the first fog node based on the early-warning information of the resource occupation to complete task offloading from the first fog node, the method further includes:
formulating a game strategy, and issuing a total service reward package and a subcontracting policy to a fog center node based on the game strategy;
the fog center node distributes the total service reward package and the subcontracting policy to a second fog node;
and receiving feedback from the second fog node, and scheduling the task offloaded from the first fog node to the second fog node for execution.
Optionally, the total service reward package is the total reward package of the fog center cluster, and the subcontracting policy is the proportion between the reward allocated to a single fog node and the total reward package.
Optionally, the resource index parameters include disk occupancy, CPU occupancy, GPU occupancy, memory occupancy, and connection count.
The embodiment of the invention also provides a resource scheduling system applied to the metaverse, which comprises:
the acquisition unit is used for acquiring resource index parameters of a plurality of fog nodes;
an early-warning unit, used for performing early warning on the resource occupation of the plurality of fog nodes based on a distributed federated learning model;
and a task offloading unit, used for providing a computation task offloading instruction to a first fog node based on the early-warning information of the resource occupation, so as to complete task offloading from the first fog node, wherein the predicted resource occupancy of the first fog node exceeds a preset threshold, and the task is a metaverse application.
The method and the system of the embodiment of the invention have the following advantages:
in the embodiment of the invention, the resource occupation of each fog node is analyzed in advance by a distributed federated learning model, and the tasks of the first fog node are offloaded in advance when its resource occupation is predicted to exceed the early-warning threshold, preventing the adverse effects of increased latency and reduced reliability caused by excessive tasks on the first fog node, and effectively improving network assurance for metaverse application scenarios.
Drawings
FIG. 1 is an architecture diagram of a resource scheduling system applied to the metaverse in one embodiment;
FIG. 2 is a flow diagram of a resource scheduling method applied to the metaverse in one embodiment;
FIG. 3 is a schematic diagram of the Transformer logic in one embodiment;
FIG. 4 is a block diagram of the components of a resource scheduling system applied to the metaverse in one embodiment;
FIG. 5 is a diagram illustrating the hardware components of the system in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is an architecture diagram of a resource scheduling system applied to the metaverse in an embodiment of the present invention. As shown in fig. 1, the system is a cloud architecture 10 comprising a central cloud 11, a plurality of cloud servers 12, and a plurality of fog node clusters 13. Each fog node cluster includes at least one central fog node 131 and the remaining fog nodes 132; the central fog node 131 implements operations such as message forwarding and control of the fog nodes, and the remaining fog nodes connect to user terminals to answer user terminal messages.
In the above architecture, the central cloud 11 and the plurality of cloud servers 12 form a distributed architecture, in which the cloud servers 12 may be distributed across different regions, for example deployed at different densities in East China, South China, North China, and so on; the central cloud 11 connects to each cloud server 12 to implement operations such as signaling control and scheduling.
Metaverse applications rely on this infrastructure to play or execute smoothly. The metaverse places high demands on network reliability and latency; under the current network architecture, once a fog node's tasks are fully loaded, new tasks may be answered late, responses may be interrupted, or the server may even go down. Ensuring that fog node tasks do not become abnormal is therefore a very important problem.
As shown in fig. 2, an embodiment of the present invention provides a resource scheduling method applied to the metaverse, applied to the cloud architecture shown in fig. 1, and including:
s101, a cloud server acquires resource index parameters of a plurality of fog nodes;
in the architecture shown in fig. 1, the cloud server may obtain the resource index parameters of the fog nodes for the current time period via heartbeat packets: either each fog node sends its resource index parameters to the cloud server in a heartbeat packet, or the central fog node collects the resource index parameters of the fog nodes in its cluster and sends the collected parameters to the cloud server.
In the embodiment of the present invention, the resource index parameters include the disk occupancy, CPU occupancy, GPU occupancy, memory occupancy, connection count, and so on of the fog node.
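For illustration only, a minimal sketch of how a fog node might assemble these resource index parameters into a heartbeat payload; the field names and the use of the psutil library are assumptions of this sketch, not part of the patent:

```python
import time
import psutil  # assumed available for host metrics; any metrics source works

def build_heartbeat(node_id: str, connection_count: int) -> dict:
    """Collect the resource index parameters named above into one payload."""
    return {
        "node_id": node_id,
        "timestamp": time.time(),
        "disk_occupancy": psutil.disk_usage("/").percent,    # disk occupancy (%)
        "cpu_occupancy": psutil.cpu_percent(interval=0.1),   # CPU occupancy (%)
        "gpu_occupancy": None,  # GPU occupancy needs a vendor API (e.g. NVML); stubbed here
        "memory_occupancy": psutil.virtual_memory().percent, # memory occupancy (%)
        "connections": connection_count,                     # active connection count
    }
```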
S102, performing early warning on the resource occupation of the plurality of fog nodes based on a distributed federated learning model;
federated learning is essentially a distributed machine learning technique, or machine learning framework. Its goal is to achieve joint modeling and improve the effectiveness of AI models while guaranteeing data privacy, security, and regulatory compliance. Federated learning was first proposed by Google in 2016, originally to let Android phone users update models locally. Each enterprise participating in joint modeling is called a participant, and according to how data is distributed among the participants, federated learning falls into three categories: horizontal federated learning, vertical federated learning, and federated transfer learning.
The essence of horizontal federated learning is the union of samples. It suits scenarios where participants run the same kind of business but serve different customers, that is, features overlap heavily while users overlap little. For example, banks in different regions have similar business (similar features) but different users (different samples). Horizontal federated learning proceeds in four steps:
Step 1: each participant downloads the latest model from server A;
Step 2: each participant trains the model on local data, encrypts the gradients, and uploads them to server A, which aggregates the gradients of all users to update the model parameters;
Step 3: server A returns the updated model to each participant;
Step 4: each participant updates its own model.
In conventional machine learning modeling, the data required for model training is generally gathered into a data center, the model is trained there, and predictions are then made. Horizontal federated learning can instead be regarded as sample-based distributed model training: all data stays distributed across different machines, each machine downloads the model from the server, trains it on local data, and returns the parameters to be updated to the server; the server aggregates the parameters returned by each machine, updates the model, and feeds the latest model back to each machine. Throughout this process each machine holds the same, complete model, the machines are mutually independent, and at prediction time each machine can predict independently. Google initially solved local model updating for Android phone users in this horizontal federated manner. A minimal sketch of the loop follows.
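The four steps above correspond to a FedAvg-style training round. A minimal sketch under simplifying assumptions — a linear least-squares model and plaintext gradient exchange (the patent's Step 2 encrypts the gradients before upload):

```python
import numpy as np

def local_gradient(model: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Step 2 (simplified): gradient of a linear least-squares model on local data.
    In the patent this gradient would be encrypted before upload."""
    return 2.0 * X.T @ (X @ model - y) / len(y)

def federated_round(model: np.ndarray, participants: list, lr: float = 0.01) -> np.ndarray:
    """One round of Steps 1-4: each participant pulls the model and computes a
    local gradient; server A averages the gradients, updates the parameters,
    and the updated model is returned to every participant."""
    grads = [local_gradient(model, X, y) for X, y in participants]  # Steps 1-2
    return model - lr * np.mean(grads, axis=0)                      # Steps 3-4
```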
The essence of vertical federated learning is the union of features. It suits scenarios where users overlap heavily but features overlap little; for example, a supermarket and a bank in the same region reach the same residents (same samples) but offer different services (different features). Vertical federated learning joins the features of shared users across different lines of business: for a supermarket A and a bank B, the conventional modeling process would gather both datasets into one data center and train the model on the joined features of each user as a single record, so the two parties need an intersection of users (modeling on the join result) and one party must hold the labels. The learning is divided into two major steps:
the first step is as follows: the encrypted samples are aligned. This is done at the system level so that non-intersecting users are not exposed at the enterprise-aware level.
The second step: aligning samples and carrying out model encryption training:
step 1: a third party C sends a public key to A and B to encrypt the data to be transmitted;
step 2: A and B each compute the intermediate feature results relevant to themselves and exchange them in encrypted form to obtain their respective gradients and losses;
step 3: A and B each compute their encrypted gradients, add a mask, and send them to C; B also computes the encrypted loss and sends it to C;
step 4: C decrypts the gradients and loss and returns them to A and B, which remove the masks and update the model.
Federated transfer learning may be considered when features and samples overlap little between the participants, for example a union of a bank and a supermarket in different regions. It is mainly suitable for scenarios that use a deep neural network as the base model.
The embodiment of the invention mainly adopts horizontal federated learning: the central cloud establishes the federated learning model, each participant (each cloud server) downloads and trains the federated learning model, and finally uploads the gradients. In the federated learning process, how the federated learning model is trained is key and is one of the cores of the invention. In the embodiment of the invention, the input vectors are optimized by artificial intelligence before the distributed federated learning model is trained, making the training results more accurate.
The early warning on the resource occupation of the plurality of fog nodes based on the distributed federated learning model may specifically proceed as follows:
s1021, analyzing the resource occupation index parameter in the first history period by using a transformer, and outputting a first resource abnormal feature vector;
the Transformer analyzes the resource occupation data over a continuous period of time and discovers the latent features of the resource occupation within the data sequence, making the resource occupation early warning more accurate.
As shown in FIG. 3, the Transformer has 6 encoders and 6 decoders. Unlike Seq2Seq, each encoder contains two sub-layers: a multi-head self-attention layer and a fully connected layer. Each decoder contains three sub-layers: a multi-head self-attention layer, an additional layer that performs multi-head attention over the encoder output, and a fully connected layer. Each sub-layer in the encoder and decoder has a residual connection, followed by layer normalization.
Input to the encoder and decoder: the input and output tokens of all encoders/decoders are converted into vectors using learned embeddings. Position encoding is then added to these input embeddings.
Position coding: the Transformer architecture contains no recursion or convolution, and therefore has no built-in notion of order. All resource parameters in the input sequence enter the network without a specific order or position, since they all flow through the encoder and decoder stacks simultaneously.
A position code is added to the model to help inject information about the relative or absolute position associated with each parameter index. The position code has the same dimension as the input embedding, so the two can be added.
Self-attention: attention helps the model better understand the meaning and contextual relationships of the parameters.
Self-attention is an attention mechanism that relates different positions of a single sequence in order to compute a representation of that sequence. A self-attention layer connects all positions with a constant number of sequential operations, and is therefore faster than a recurrent layer. The attention function in the Transformer maps a query and a set of key-value pairs to an output; queries, keys, and values are all vectors. The attention weights are obtained by computing the dot-product attention of each parameter in the sequence, and the final output is a weighted sum of the values.
A typical method for analyzing the resource occupation index parameters in the first historical period with a Transformer and outputting a first resource anomaly feature vector includes the following steps:
Step 1, take the dot product of each resource index parameter. The dot product determines the weights of the different index parameters.
Step 2, scaling: the dot product is scaled by dividing by the square root of the key vector dimension. The dimension is 64, so the dot product is divided by 8.
Step 3, apply softmax. Softmax normalizes the scaled values; after softmax, all values are positive and sum to 1.
Step 4, compute the weighted sum of all values: apply the dot product between the normalized scores and the value vectors, then sum, obtaining the first resource anomaly feature vector.
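These four steps are standard scaled dot-product attention. A minimal NumPy sketch, assuming the resource index parameters have already been embedded as the rows of Q, K, and V:

```python
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Steps 1-4 above: dot product, scale by sqrt(d_k), softmax, weighted sum."""
    d_k = K.shape[-1]                        # key vector dimension (64 in the text)
    scores = Q @ K.T / np.sqrt(d_k)          # Steps 1-2: dot products, divided by 8 when d_k=64
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # Step 3: softmax; rows sum to 1
    return weights @ V                       # Step 4: weighted sum of the values
```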
S1022, performing quadratic interpolation on the first resource anomaly feature vector, and outputting a second resource anomaly feature vector;
to fit the model, data sampled unevenly at different nodes must be made regular. A quadratic interpolation method is adopted in which every 3 adjacent points are used for interpolation, producing the early-warning data optimized by the artificial intelligence algorithm. This scheme makes the sampling intervals uniform, which better matches the Transformer's sequence processing. Specifically, in S1022, mean processing is performed on the first resource anomaly feature vector, and interpolation is then performed on every 3 adjacent points of the mean-processed data to obtain the second resource anomaly feature vector.
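A sketch of one reading of this step — fitting a quadratic through each window of 3 adjacent samples and resampling onto a uniform grid. The exact windowing is an assumption of this sketch, since the text does not fix it:

```python
import numpy as np

def quadratic_resample(t: np.ndarray, x: np.ndarray, step: float) -> np.ndarray:
    """Interpolate unevenly sampled data (t, x) onto a uniform grid by fitting
    a quadratic through every 3 adjacent points around each target time."""
    t_uniform = np.arange(t[0], t[-1], step)
    out = np.empty_like(t_uniform)
    for j, tu in enumerate(t_uniform):
        i = min(max(np.searchsorted(t, tu) - 1, 0), len(t) - 3)  # window of 3 points
        coeffs = np.polyfit(t[i:i + 3], x[i:i + 3], deg=2)       # exact quadratic fit
        out[j] = np.polyval(coeffs, tu)
    return out
```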
S1023, outputting the second resource anomaly feature vector to the distributed federated learning model so as to train it and obtain associations across different time and space dimensions;
the distributed federated learning model is trained to obtain associations across different time and space dimensions, specifically:
performing space-time feature splitting on the second resource anomaly feature vector, splitting it into features of a time dimension and features of a space dimension;
for the features of the time dimension, performing time-series encoding using Position Encoding, and exploring the feature associations of the time dimension using an Attention method;
the formulas are as follows:
PositionEncoding = cos²(pos / N)
Parameter description: N is an adjustable length.
Attention_output = Attention(Q, K, V)
Parameter description: Q is the query feature map, K is the feature map to be matched, and V is the monitoring data map.
And for the features of the space dimension, extracting features of multiple spatial dimensions using Multi-Head Attention.
The formulas are as follows:
head_i = Attention(Q_i, K_i, V_i)
MultiHead(Q, K, V) = Concat(head_1, ..., head_h) · W^O
where W^O is a custom constant, head_i is the result obtained by attention for head i, i is a positive integer between 1 and h, and h is a positive integer greater than 1; MultiHead fuses the features of the multiple spatial dimensions.
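A sketch tying the two formulas together, reusing the scaled_dot_product_attention helper from the earlier sketch; the per-head projection matrices W_heads are an assumption of this sketch (the text names only Q_i, K_i, V_i and W^O):

```python
import numpy as np

def position_encoding(pos: np.ndarray, N: float) -> np.ndarray:
    """PositionEncoding = cos^2(pos / N), with N an adjustable length."""
    return np.cos(pos / N) ** 2

def multi_head(Q: np.ndarray, K: np.ndarray, V: np.ndarray,
               W_heads: list, W_O: np.ndarray) -> np.ndarray:
    """MultiHead(Q,K,V) = Concat(head_1, ..., head_h) · W^O,
    with head_i = Attention(Q_i, K_i, V_i)."""
    heads = [
        scaled_dot_product_attention(Q @ Wq, K @ Wk, V @ Wv)  # one head per spatial dimension
        for Wq, Wk, Wv in W_heads
    ]
    return np.concatenate(heads, axis=-1) @ W_O  # fuse the multi-spatial-dimension features
```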
In addition, before the early warning on the resource occupation of the plurality of fog nodes with the distributed federated learning model, the central cloud establishes a federated learning model; the central cloud obtains a heartbeat packet sent by the cloud server, the heartbeat packet including a federated learning model training flag; and if the federated learning model training flag is 0, the federated learning model is distributed to the cloud server. If the flag is not 0, the cloud server has already obtained the federated learning model and need not obtain it again.
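A minimal sketch of the central cloud's handling of the training flag; the heartbeat field names and the send_model callback are assumptions of this sketch:

```python
def handle_heartbeat(federated_model, heartbeat: dict, send_model) -> None:
    """Central cloud side: distribute the federated learning model only when
    the heartbeat's training flag is 0; a non-zero flag means the cloud
    server already holds the model, so nothing is sent."""
    if heartbeat.get("fl_training_flag", 0) == 0:
        send_model(heartbeat["sender_id"], federated_model)  # first-time distribution
```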
S1024, performing early warning on the resource occupation of the plurality of fog nodes based on the trained distributed federated learning model.
In the embodiment of the present invention, one or more preset thresholds, here called early-warning thresholds, may be set for the early warning on the resource occupation of the fog nodes. For example, when CPU occupancy exceeds 90%, tasks are considered saturated, so 90% is the CPU early-warning threshold. Early warning of resource occupation therefore amounts to predicting the resource occupancy at N future times and comparing it with the preset threshold: if, say, the predicted CPU occupancy exceeds the preset threshold, an early-warning signal is generated; otherwise none is needed. Predicting parameters with federated learning is prior art and is not described again.
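A minimal sketch of this threshold comparison, using the 90% CPU example above; the predicted series themselves would come from the trained federated model:

```python
def early_warnings(predicted: dict, cpu_threshold: float = 90.0) -> list:
    """Compare predicted occupancy at N future times against the preset threshold.
    `predicted` maps a fog node id to its list of predicted CPU occupancies (%)."""
    alerts = []
    for node_id, series in predicted.items():
        if any(v > cpu_threshold for v in series):
            alerts.append(node_id)  # early-warning signal: schedule task offloading
    return alerts
```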
S103, providing a computation task offloading instruction to the first fog node based on the early-warning information of the resource occupation, so as to complete task offloading from the first fog node, wherein the predicted resource occupancy of the first fog node exceeds the preset threshold, and the task is a metaverse application.
In the embodiment of the invention, if the early-warning analysis determines that the predicted resource occupancy of the first fog node at the Nth future time exceeds the preset threshold, it is determined that a certain amount of task offloading is needed. The cloud server therefore provides a task offloading instruction to the first fog node, completes the task offloading from the first fog node, and transfers the offloaded tasks to other fog nodes without early warnings. The task is a metaverse application, including but not limited to a metaverse application scenario or application program.
In this embodiment of the present invention, after the computation task offloading instruction is provided to the first fog node based on the early-warning information to complete task offloading from the first fog node, the method further includes:
formulating a game strategy, and issuing a total service reward package and a subcontracting policy to the fog center node based on the game strategy;
the embodiment of the invention introduces a game-based incentive mechanism that jointly considers the characteristics and network environment of the cloud server and the fog nodes. While motivating the fog nodes to participate in the offloading service, the game process leads the cloud server to pay the fog nodes the optimal task offloading reward, and the fog nodes to provide the cloud server with their optimal computing capacity. This reduces the cloud server's task offloading delay, maximizes the respective net incomes of the cloud server and the fog nodes, and reaches the global optimum. In the embodiment of the invention, a game strategy is generated, an incentive-bearing service reward is formulated based on the game strategy, and the service reward and the computation task information are distributed to one or more fog nodes. This incentive mechanism makes different fog nodes willing to take on multi-task scheduling, effectively improving the scheduling efficiency of delay-sensitive tasks and the timeliness of task processing.
The game strategy is defined as follows: the cloud server, considering its own situation, offers the optimal service reward; the fog nodes, considering their own situations, participate in offloading and provide the optimal computing capacity. Specifically, this includes the following three aspects:
(1) The cloud server, as the game leader, first predicts the computing capacity the fog nodes will provide, computes the net income between income and cost according to its own characteristics and the network environment, then optimizes its net income and quotes a task offloading reward price to the fog nodes according to the optimal net income.
(2) The fog nodes, as the game followers, compute their net income between income and cost according to the offloading reward price given by the cloud server, their own characteristics, and the network environment, then optimize their net income and determine the optimal computing capacity to provide to the cloud server.
(3) The cloud server pays the fog nodes at the optimal offloading service price, and the fog nodes participate in the offloading service with the optimal computing capacity.
Specifically, formulating the service reward based on the game strategy includes:
the cloud server computes income and cost and obtains the net income, where the net income is the income minus the cost; it then optimizes the net income and quotes the service reward to each service node according to the optimized net income.
Wherein the cloud server optimizes the net income based on the following formula:

max_{ω_i} (F_t − G_t), s.t. ω_i ≥ 0

where the income is F_t, the cost is G_t, and the service reward is ω_i.
The income is determined by the task computing unit price paid by the cloud server, the size of the task data volume, and the computing rate of the service node; the cost is determined by the computing capacity provided to the cloud server, the size of the task data volume, the service node's CPU cycles, and the service node's per-bit data processing energy consumption. For example, the higher the computing rate, the higher the income; and the smaller the delay, the higher the income.
The cost is determined by the task computing unit price paid to the fog node, the size of the task data volume, and the computing rate of the fog node. For example, with the task computing unit price paid to the fog node held constant, the higher the fog node's computing rate and the larger the task data volume, the more reward is paid to the fog node, and the higher the cloud server's cost.
The cloud server therefore generates a total service reward package and a subcontracting policy through the design of the incentive mechanism and sends them to the fog center node. The total service reward package is the total reward package of the fog center cluster, and the subcontracting policy is the proportion between the reward allocated to a single fog node and the total reward package; for example, a single fog node receives 5% of the total package's reward.
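A toy sketch of the leader-follower structure in (1)-(3): the cloud server searches for the reward price that maximizes its net income F_t − G_t while anticipating the fog node's capacity response. The linear response function and the income/cost forms are illustrative assumptions of this sketch, not the patent's formulas:

```python
import numpy as np

def follower_capacity(price: float, unit_cost: float, cap: float) -> float:
    """Fog node (follower): offer more computing capacity as the offload price
    rises above its unit cost; the linear response is an assumed toy model."""
    return float(np.clip((price - unit_cost) * cap, 0.0, cap))

def leader_best_price(data_volume: float, value_per_unit: float,
                      unit_cost: float, cap: float) -> float:
    """Cloud server (leader): grid-search the reward price maximizing its net
    income F_t - G_t, anticipating the follower's capacity response."""
    best_price, best_net = 0.0, -np.inf
    for price in np.linspace(0.0, value_per_unit, 101):
        rate = follower_capacity(price, unit_cost, cap)  # predicted capacity
        income = value_per_unit * data_volume * rate     # F_t: value of served tasks
        cost = price * data_volume * rate                # G_t: reward paid out
        if income - cost > best_net:
            best_price, best_net = price, income - cost
    return best_price
```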
The fog center node distributes the total service reward package and the subcontracting policy to a second fog node;
feedback from the second fog node is then received, and the task offloaded from the first fog node is scheduled to the second fog node for execution.
If the second fog node agrees to the reward distribution scheme, it sends feedback information to the cloud server; the cloud server then schedules the task offloaded from the first fog node to the second fog node, and the second fog node completes the task.
As shown in fig. 4, an embodiment of the present invention further provides a resource scheduling system 40 applied to the metaverse, the system comprising:
an obtaining unit 41, configured to obtain resource index parameters of a plurality of fog nodes;
the obtaining unit 41 may obtain the resource index parameters of the fog nodes for the current time period via heartbeat packets: either each fog node sends its resource index parameters to the cloud server in a heartbeat packet, or the central fog node collects the resource index parameters of the fog nodes in its cluster and sends the collected parameters to the obtaining unit 41.
In the embodiment of the present invention, the resource index parameters include the disk occupancy, CPU occupancy, GPU occupancy, memory occupancy, connection count, and so on of the fog node.
The early-warning unit 42 is used for performing early warning on the resource occupation of the plurality of fog nodes based on a distributed federated learning model;
federated learning is essentially a distributed machine learning technique, or machine learning framework. Its goal is to achieve joint modeling and improve the effectiveness of AI models while guaranteeing data privacy, security, and regulatory compliance. Federated learning was first proposed by Google in 2016, originally to let Android phone users update models locally. Each enterprise participating in joint modeling is called a participant, and according to how data is distributed among the participants, federated learning falls into three categories: horizontal federated learning, vertical federated learning, and federated transfer learning.
The essence of horizontal federated learning is the union of samples. It suits scenarios where participants run the same kind of business but serve different customers, that is, features overlap heavily while users overlap little. For example, banks in different regions have similar business (similar features) but different users (different samples). Horizontal federated learning proceeds in four steps:
Step 1: each participant downloads the latest model from server A;
Step 2: each participant trains the model on local data, encrypts the gradients, and uploads them to server A, which aggregates the gradients of all users to update the model parameters;
Step 3: server A returns the updated model to each participant;
Step 4: each participant updates its own model.
The embodiment of the invention mainly adopts horizontal federated learning: the central cloud establishes the federated learning model, each participant (each cloud server) downloads and trains the federated learning model, and finally uploads the gradients. In the federated learning process, how the federated learning model is trained is key and is one of the cores of the invention. In the embodiment of the invention, the input vectors are optimized by artificial intelligence before the distributed federated learning model is trained, making the training results more accurate.
The early warning on the resource occupation of the plurality of fog nodes based on the distributed federated learning model may specifically proceed as follows:
analyzing the resource occupation index parameters in a first historical period by using a Transformer, and outputting a first resource anomaly feature vector;
the Transformer analyzes the resource occupation data over a continuous period of time and discovers the latent features of the resource occupation within the data sequence, making the resource occupation early warning more accurate.
The Transformer has 6 encoders and 6 decoders. Unlike Seq2Seq, each encoder contains two sub-layers: a multi-head self-attention layer and a fully connected layer. Each decoder contains three sub-layers: a multi-head self-attention layer, an additional layer that performs multi-head attention over the encoder output, and a fully connected layer. Each sub-layer in the encoder and decoder has a residual connection, followed by layer normalization.
Input to the encoder and decoder: the input and output tokens of all encoders/decoders are converted into vectors using learned embeddings. Position encoding is then added to these input embeddings.
Position coding: the Transformer architecture contains no recursion or convolution, and therefore has no built-in notion of order. All resource parameters in the input sequence enter the network without a specific order or position, since they all flow through the encoder and decoder stacks simultaneously.
A position code is added to the model to help inject information about the relative or absolute position associated with each parameter index. The position code has the same dimension as the input embedding, so the two can be added.
Self-attention: attention helps the model better understand the meaning and contextual relationships of the parameters.
Self-attention is an attention mechanism that relates different positions of a single sequence in order to compute a representation of that sequence. A self-attention layer connects all positions with a constant number of sequential operations, and is therefore faster than a recurrent layer. The attention function in the Transformer maps a query and a set of key-value pairs to an output; queries, keys, and values are all vectors. The attention weights are obtained by computing the dot-product attention of each parameter in the sequence, and the final output is a weighted sum of the values.
In an exemplary embodiment, the early-warning unit 42 is configured to analyze the resource occupation index parameters in the first historical period with a Transformer and output a first resource anomaly feature vector, as follows:
take the dot product of each resource index parameter; the dot product determines the weights of the different index parameters.
Scaling: the dot product is scaled by dividing by the square root of the key vector dimension. The dimension is 64, so the dot product is divided by 8.
Apply softmax. Softmax normalizes the scaled values; after softmax, all values are positive and sum to 1.
Compute the weighted sum of all values: apply the dot product between the normalized scores and the value vectors, then sum, obtaining the first resource anomaly feature vector.
Quadratic interpolation is then performed on the first resource anomaly feature vector, and a second resource anomaly feature vector is output;
to fit the model, data sampled unevenly at different nodes must be made regular. A quadratic interpolation method is adopted in which every 3 adjacent points are used for interpolation, producing the early-warning data optimized by the artificial intelligence algorithm. This scheme makes the sampling intervals uniform, which better matches the Transformer's sequence processing. Specifically, mean processing is performed on the first resource anomaly feature vector, and interpolation is then performed on every 3 adjacent points of the mean-processed data to obtain the second resource anomaly feature vector.
The second resource anomaly feature vector is output to the distributed federated learning model so as to train it and obtain associations across different time and space dimensions;
the distributed federated learning model is trained to obtain associations across different time and space dimensions, specifically:
performing space-time feature splitting on the second resource anomaly feature vector, splitting it into features of a time dimension and features of a space dimension;
for the features of the time dimension, performing time-series encoding using Position Encoding, and exploring the feature associations of the time dimension using an Attention method;
the formulas are as follows:
PositionEncoding = cos²(pos / N)
Parameter description: N is an adjustable length.
Attention_output = Attention(Q, K, V)
Parameter description: Q is the query feature map, K is the feature map to be matched, and V is the monitoring data map.
And for the features of the space dimension, extracting features of multiple spatial dimensions using Multi-Head Attention.
The formulas are as follows:
head_i = Attention(Q_i, K_i, V_i)
MultiHead(Q, K, V) = Concat(head_1, ..., head_h) · W^O
where W^O is a custom constant, head_i is the result obtained by attention for head i, i is a positive integer between 1 and h, and h is a positive integer greater than 1; MultiHead fuses the features of the multiple spatial dimensions.
In addition, before the early warning on the resource occupation of the plurality of fog nodes with the distributed federated learning model, the central cloud establishes a federated learning model; the central cloud obtains a heartbeat packet sent by the cloud server, the heartbeat packet including a federated learning model training flag; and if the federated learning model training flag is 0, the federated learning model is distributed to the cloud server. If the flag is not 0, the cloud server has already obtained the federated learning model and need not obtain it again.
Early warning is then performed on the resource occupation of the plurality of fog nodes based on the trained distributed federated learning model.
In the embodiment of the present invention, one or more preset thresholds, here called early-warning thresholds, may be set for the early warning on the resource occupation of the fog nodes. For example, when CPU occupancy exceeds 90%, tasks are considered saturated, so 90% is the CPU early-warning threshold. Early warning of resource occupation therefore amounts to predicting the resource occupancy at N future times and comparing it with the preset threshold: if, say, the predicted CPU occupancy exceeds the preset threshold, an early-warning signal is generated; otherwise none is needed. Predicting parameters with federated learning is prior art and is not described again.
The task offloading unit 43 is used for providing a computation task offloading instruction to the first fog node based on the early-warning information of the resource occupation, so as to complete task offloading from the first fog node, wherein the predicted resource occupancy of the first fog node exceeds a preset threshold, and the task is a metaverse application.
In the embodiment of the present invention, if, after the early-warning analysis, the task offloading unit 43 determines that the predicted resource occupancy of the first fog node at the Nth future time exceeds the preset threshold, it determines that a certain amount of task offloading is needed on the first fog node. The cloud server therefore provides a task offloading instruction to the first fog node, completes the task offloading from the first fog node, and transfers the offloaded tasks to other fog nodes without early warnings.
In this embodiment of the present invention, after the computation task offloading instruction is provided to the first fog node based on the early-warning information to complete task offloading from the first fog node, the task offloading unit 43 is further configured to:
formulate a game strategy, and issue a total service reward package and a subcontracting policy to the fog center node based on the game strategy;
the embodiment of the invention introduces a game-based incentive mechanism that jointly considers the characteristics and network environment of the cloud server and the fog nodes. While motivating the fog nodes to participate in the offloading service, the game process leads the cloud server to pay the fog nodes the optimal task offloading reward, and the fog nodes to provide the cloud server with their optimal computing capacity. This reduces the cloud server's task offloading delay, maximizes the respective net incomes of the cloud server and the fog nodes, and reaches the global optimum. In the embodiment of the invention, a game strategy is generated, an incentive-bearing service reward is formulated based on the game strategy, and the service reward and the computation task information are distributed to one or more fog nodes. This incentive mechanism makes different fog nodes willing to take on multi-task scheduling, effectively improving the scheduling efficiency of delay-sensitive tasks and the timeliness of task processing.
The game strategy is defined as follows: the cloud server, considering its own situation, offers the optimal service reward; the fog nodes, considering their own situations, participate in offloading and provide the optimal computing capacity. Specifically, this includes the following three aspects:
(1) The cloud server, as the game leader, first predicts the computing capacity the fog nodes will provide, computes the net income between income and cost according to its own characteristics and the network environment, then optimizes its net income and quotes a task offloading reward price to the fog nodes according to the optimal net income.
(2) The fog nodes, as the game followers, compute their net income between income and cost according to the offloading reward price given by the cloud server, their own characteristics, and the network environment, then optimize their net income and determine the optimal computing capacity to provide to the cloud server.
(3) The cloud server pays the fog nodes at the optimal offloading service price, and the fog nodes participate in the offloading service with the optimal computing capacity.
Specifically, formulating the service reward based on the game strategy includes:
the cloud server computes income and cost and obtains the net income, where the net income is the income minus the cost; it then optimizes the net income and quotes the service reward to each service node according to the optimized net income.
Wherein the cloud server optimizes the net income based on the following formula:

max_{ω_i} (F_t − G_t), s.t. ω_i ≥ 0

where the income is F_t, the cost is G_t, and the service reward is ω_i.
The income is determined by the task computing unit price paid by the cloud server, the size of the task data volume, and the computing rate of the service node; the cost is determined by the computing capacity provided to the cloud server, the size of the task data volume, the service node's CPU cycles, and the service node's per-bit data processing energy consumption. For example, the higher the computing rate, the higher the income; and the smaller the delay, the higher the income.
The cost is determined by the task computing unit price paid to the fog node, the size of the task data volume, and the computing rate of the fog node. For example, with the task computing unit price paid to the fog node held constant, the higher the fog node's computing rate and the larger the task data volume, the more reward is paid to the fog node, and the higher the cloud server's cost.
The cloud server therefore generates a total service reward package and a subcontracting policy through the design of the incentive mechanism and sends them to the fog center node. The total service reward package is the total reward package of the fog center cluster, and the subcontracting policy is the proportion between the reward allocated to a single fog node and the total reward package; for example, a single fog node receives 5% of the total package's reward.
The fog center node distributes the total service reward package and the subcontracting policy to a second fog node;
feedback from the second fog node is then received, and the task offloaded from the first fog node is scheduled to the second fog node for execution.
If the second fog node agrees with the reward distribution scheme, it sends feedback information to the cloud server; the cloud server then schedules the task offloaded from the first fog node to the second fog node, and the second fog node completes the task.
In the embodiment of the invention, the resource occupation of each fog node is analyzed in advance by a distributed federated learning model, and the tasks of the first fog node are offloaded in advance when its resource occupation is predicted to exceed the early-warning threshold, preventing the adverse effects of increased latency and reduced reliability caused by excessive tasks on the first fog node, and effectively improving network assurance for metaverse application scenarios.
The embodiment of the invention also provides a resource scheduling system applied to the metaverse, comprising a memory and a processor, the memory storing computer-executable instructions; when the processor runs the computer-executable instructions on the memory, the method described above is implemented.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon computer-executable instructions for performing the method in the foregoing embodiments.
An embodiment of the present invention further provides a system, as shown in fig. 5, including a memory and a processor, where the memory has stored thereon computer-executable instructions, and the processor executes the computer-executable instructions on the memory to implement the method described above.
In practical applications, the systems may also respectively include other necessary elements, including but not limited to any number of input/output systems, processors, controllers, memories, etc., and all systems that can implement the resource scheduling method of the embodiments of the present application are within the protection scope of the present application.
The memory includes, but is not limited to, Random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or portable read-only memory (CD-ROM), which is used for storing instructions and data.
The input system is for inputting data and/or signals and the output system is for outputting data and/or signals. The output system and the input system may be separate devices or may be an integral device.
The processor may include one or more processors, for example, one or more Central Processing Units (CPUs), and in the case of one CPU, the CPU may be a single-core CPU or a multi-core CPU. The processor may also include one or more special purpose processors, which may include GPUs, FPGAs, etc., for accelerated processing.
The memory is used to store program codes and data of the network device.
The processor is used for calling the program codes and data in the memory and executing the steps in the method embodiment. Specifically, reference may be made to the description of the method embodiment, which is not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the described division into units is only a logical functional division, and other divisions are possible in practice; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, systems, or units, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are wholly or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable system. The computer instructions may be stored on or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a read-only memory (ROM) or random access memory (RAM), a magnetic medium such as a floppy disk, hard disk, magnetic tape, or magnetic disk, an optical medium such as a digital versatile disc (DVD), or a semiconductor medium such as a solid state disk (SSD).
The above is only a specific embodiment of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and such modifications or substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A resource scheduling method applied to a meta universe, comprising:
the cloud server acquires resource index parameters of a plurality of fog nodes;
performing early warning on the resource occupancy of the plurality of fog nodes based on a distributed federated learning model;
and providing a computing task offloading instruction to a first fog node based on the early warning information of the resource occupancy, so as to complete task offloading of the first fog node, wherein the predicted resource occupancy rate of the first fog node exceeds a preset threshold, and the task is a metaverse application.
2. The method of claim 1, wherein performing early warning on the resource occupancy of the plurality of fog nodes based on the distributed federated learning model comprises:
analyzing the resource occupancy index parameters within a first historical period by using a Transformer, and outputting a first resource anomaly feature vector;
performing quadratic interpolation on the first resource anomaly feature vector, and outputting a second resource anomaly feature vector;
outputting the second resource anomaly feature vector to the distributed federated learning model, so as to train the distributed federated learning model and obtain associations across different temporal and spatial dimensions;
and performing early warning on the resource occupancy of the plurality of fog nodes based on the trained distributed federated learning model.
3. The method of claim 2, wherein training the distributed federated learning model to obtain associations across different temporal and spatial dimensions comprises:
performing spatio-temporal feature splitting on the second resource anomaly feature vector, splitting it into features of a temporal dimension and features of a spatial dimension;
for the features of the temporal dimension, performing time-series encoding by using Position-Encoding, and exploring feature associations in the time-series dimension by using an Attention mechanism;
and for the features of the spatial dimension, extracting features of different spatial dimensions by using Multi-Head Attention.
4. The method of claim 2, wherein performing quadratic interpolation on the first resource anomaly feature vector and outputting the second resource anomaly feature vector comprises:
performing mean processing on the first resource anomaly feature vector;
and performing interpolation over every 3 adjacent points of the mean-processed data to obtain the second resource anomaly feature vector.
5. The method of claim 1, wherein before performing early warning on the resource occupancy of the plurality of fog nodes based on the distributed federated learning model, the method further comprises:
establishing, by a central cloud, a federated learning model;
acquiring, by the central cloud, a heartbeat package sent by the cloud server, wherein the heartbeat package carries a federated learning model training identifier;
and if the federated learning model training identifier is 0, distributing the federated learning model to the cloud server.
6. The method of claim 1, wherein after providing the computing task offloading instruction to the first fog node based on the early warning information of the resource occupancy to complete task offloading of the first fog node, the method further comprises:
formulating a game strategy, and issuing a service reward total package and a sub-package strategy to a fog center node based on the game strategy;
distributing, by the fog center node, the service reward total package and the sub-package strategy to a second fog node;
and receiving feedback from the second fog node, and scheduling the task offloaded by the first fog node to the second fog node for execution.
7. The method of claim 6, wherein the service reward total package is the reward total package of the fog center cluster, and the sub-package strategy is a proportion strategy specifying the share of the reward total package allocated to a single fog node.
8. The method of claim 1, wherein the resource index parameters include disk occupancy, CPU occupancy, GPU occupancy, memory occupancy, and number of connections.
9. A resource scheduling system applied to a meta universe, the system comprising:
an acquisition unit, configured to acquire resource index parameters of a plurality of fog nodes;
an early warning unit, configured to perform early warning on the resource occupancy of the plurality of fog nodes based on a distributed federated learning model;
and a task offloading unit, configured to provide a computing task offloading instruction to a first fog node based on the early warning information of the resource occupancy, so as to complete task offloading of the first fog node, wherein the predicted resource occupancy rate of the first fog node exceeds a preset threshold, and the task is a metaverse application.
10. A resource scheduling system applied to the meta universe, comprising a memory and a processor, wherein the memory stores computer-executable instructions, and the processor, when executing the computer-executable instructions on the memory, implements the method of any one of claims 1 to 8.
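To make the interpolation step recited in claims 2 and 4 concrete, the following Python sketch fits a quadratic through every 3 adjacent points of a mean-processed sequence and inserts one sample between neighbouring points. The Lagrange form and the midpoint sampling position are assumptions chosen for the example; the claims do not fix them.

    # Quadratic (3-point) interpolation used to densify a resource anomaly
    # feature vector before it is fed to the distributed federated learning model.
    from typing import List, Sequence

    def lagrange_quadratic(xs: Sequence[float], ys: Sequence[float], x: float) -> float:
        # Evaluate the unique quadratic through three points (Lagrange form).
        (x0, x1, x2), (y0, y1, y2) = xs, ys
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

    def quadratic_interpolate(values: List[float]) -> List[float]:
        # Slide over every 3 adjacent points and insert one interpolated
        # sample between the first two points of each triple.
        out = [values[0]]
        for i in range(len(values) - 2):
            xs, ys = (i, i + 1, i + 2), values[i:i + 3]
            out.append(lagrange_quadratic(xs, ys, i + 0.5))
            out.append(values[i + 1])
        out.append(values[-1])
        return out

    print(quadratic_interpolate([0.2, 0.5, 0.4, 0.7]))
    # [0.2, 0.4, 0.5, 0.4, 0.4, 0.7]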
CN202210475560.7A 2022-04-29 2022-04-29 Resource scheduling method and system applied to meta universe Withdrawn CN114780224A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210475560.7A CN114780224A (en) 2022-04-29 2022-04-29 Resource scheduling method and system applied to meta universe


Publications (1)

Publication Number Publication Date
CN114780224A (en) 2022-07-22

Family

ID=82435813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210475560.7A Withdrawn CN114780224A (en) 2022-04-29 2022-04-29 Resource scheduling method and system applied to meta universe

Country Status (1)

Country Link
CN (1) CN114780224A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115858182A (en) * 2023-03-01 2023-03-28 深圳市卡妙思电子科技有限公司 Intelligent adaptation method and system applied to edge computing nodes of metauniverse



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (Application publication date: 20220722)