CN116820762A - Edge-cloud cooperative computing method based on power edge chip - Google Patents


Publication number
CN116820762A
CN116820762A (application CN202310751564.8A)
Authority
CN
China
Prior art keywords
subtasks
task
cloud server
training
power edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310751564.8A
Other languages
Chinese (zh)
Inventor
辛明勇
徐长宝
王宇
杨婧
林呈辉
高吉普
祝健杨
冯起辉
何雨旻
徐玉韬
李博文
古庭赟
刘斌
张后谊
汪明媚
邓松
谈竹奎
文贤馗
孟令雯
张历
冯义
周洋
代奇迹
毛均毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Power Grid Co Ltd
Original Assignee
Guizhou Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Power Grid Co Ltd filed Critical Guizhou Power Grid Co Ltd
Priority to CN202310751564.8A priority Critical patent/CN116820762A/en
Publication of CN116820762A publication Critical patent/CN116820762A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The application discloses an edge-cloud cooperative computing method based on a power edge chip, comprising the following steps: decomposing a task to be executed into a plurality of subtasks, dividing the subtasks into task levels according to their complexity, and sending the complex-level subtasks to a cloud server; according to the task processing instruction fed back by the cloud server, transmitting the subtasks to their respective subtask network models for training and calculating the training weight of each subtask network model; and sending the training results and training weights to the cloud server. The method enables a power edge chip to make full use of its spare computing capability, addressing two problems at once: individual power edge chips lack sufficient computing power, while other chips sit idle with redundant capacity that goes to waste. It also mitigates the fact that a cloud server responds to tasks more slowly than a power edge chip.

Description

Edge-cloud cooperative computing method based on a power edge chip
Technical Field
The application relates to the technical field of power edge computing chips, and in particular to an edge-cloud collaborative computing method based on a power edge chip.
Background
After more than a decade of development, we have entered an era of explosive growth in cloud applications. The cloud offers enterprises many benefits in cost, efficiency, scale, automation, interoperability, and centralization, so the services of a large number of IT companies now run entirely on, or depend heavily on, the cloud. In a cloud-only world, however, data may travel hundreds or even thousands of miles, making delay unavoidable; edge computing can effectively alleviate this problem.
In power edge computing scenarios, terminal devices are numerous, varied in type, and widely distributed, and much of their data involves training neural network models. The computing capability of an edge computing chip is limited, so it struggles to compute over very large data volumes locally; sending the data to a cloud server for execution, on the other hand, makes the transmission path too long and introduces delay.
Cooperative work between power edge chips and the cloud server therefore remains a problem to be solved.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the application and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, in the abstract, and in the title of the application; they shall not be used to limit the scope of the application.
The present application has been made in view of the above-described problems.
Therefore, the technical problem solved by the application is as follows: an existing edge computing chip cannot process huge data volumes, while sending such tasks to a cloud server for processing introduces delay because the transmission path is too long.
To solve this problem, the application provides the following technical scheme: an edge-cloud collaborative computing method based on a power edge chip, comprising: decomposing a task to be executed into a plurality of subtasks, dividing the subtasks into task levels according to their complexity, and sending the complex-level subtasks to a cloud server;
according to the task processing instruction fed back by the cloud server, transmitting the subtasks to their respective subtask network models for training, and calculating the training weight of each subtask network model;
and sending the training results and training weights to the cloud server.
As a preferable scheme of the edge-cloud collaborative computing method based on the power edge chip of the application: decomposing the task to be executed into a plurality of subtasks includes: dividing a data set in the task to be executed into a plurality of data blocks using a MapReduce model, starting a Map task for each data block, and feeding the data blocks into the Map function for processing;
the Map function outputs key-value pairs, the MapReduce model aggregates and sorts the key-value pairs output by all Map tasks and records the key of each pair, and the keys are transmitted to the cloud server so that it can identify the subtasks by key and finally carry out integrated training.
As a preferable scheme of the edge-cloud collaborative computing method based on the power edge chip of the application: the complexity of a subtask is positively correlated with its file size, expressed as:
where x denotes the file size of the subtask; f(x) denotes the complexity value of the subtask; α denotes the upper limit of task complexity; σ denotes the rate at which task complexity rises; exp denotes the exponential function with base e.
As a preferable scheme of the edge-cloud collaborative computing method based on the power edge chip of the application: the task levels include complex, medium, and simple.
When a subtask's level is complex, it is sent to the cloud server; when its level is medium or simple, it is processed locally.
As a preferable scheme of the edge-cloud collaborative computing method based on the power edge chip of the application: the task processing instructions include:
when the computing-capability value and processing redundancy of the power edge chip uploading the subtask are sufficient, feeding back a local-processing task instruction; when they are insufficient, feeding back a remote-processing task instruction and sending the subtask to a power edge chip that does have the processing capacity.
As a preferable scheme of the edge-cloud collaborative computing method based on the power edge chip of the application: the subtask network model is a recurrent neural network, expressed as:
h_t = f(W_xh · x_t + W_hh · h_(t-1) + b_h)
y_t = g(W_hy · h_t + b_y)
where x_t denotes the input data at time t; h_t the hidden state at time t; y_t the output at time t; W_xh, W_hh, and W_hy are the input-to-hidden, hidden-to-hidden, and hidden-to-output weight matrices; b_h and b_y are the bias vectors of the hidden and output layers; f and g are activation functions.
As a preferable scheme of the edge-cloud collaborative computing method based on the power edge chip of the application: the training model of the cloud server is a convolutional neural network, expressed as:
Y_(i,j,k) = f(Z_(i,j,k))
where i and j denote the abscissa and ordinate of the feature map; k denotes the number of convolution kernels; l the number of channels of the convolution kernel; m the width and n the height of the convolution kernel; X denotes the input image data; W the convolution-kernel weight matrix; b the bias vector; f the activation function; Y the output of the convolution layer; the activation function is the ReLU function.
In a second aspect, the application also provides an apparatus for the edge-cloud collaborative computing method based on a power edge chip, comprising:
a complexity evaluation unit, configured to decompose the task to be executed into a plurality of subtasks, evaluate the complexity of each subtask, jointly evaluate the complexity results against the computing capacity required by each subtask to obtain task levels, and send the complex-level subtasks to the cloud server;
a task level feedback unit, configured to respond to the task processing instruction fed back by the cloud server according to the complex task level and process the corresponding subtasks;
a model unit, configured to respond to the task processing instruction, transmit the subtasks to their respective subtask network models for training, calculate the training weight of each model, and send the training results and training weights to the cloud server so that the cloud server's training model can carry out integrated training.
In a third aspect, the application also provides a computing device, comprising a memory and a processor;
the memory is configured to store computer-executable instructions which, when executed by the processor, perform the steps of the edge-cloud collaborative computing method based on a power edge chip.
In a fourth aspect, the application also provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the edge-cloud collaborative computing method based on a power edge chip.
The application has the following beneficial effects: the edge-cloud collaborative computing method based on the power edge chip enables a power edge chip to make full use of its spare computing capability. It addresses both the insufficient computing power of individual power edge chips and the waste of redundant capacity that leaves other chips idle, while also mitigating the cloud server's slower task response compared with the power edge chip.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
fig. 1 is an overall flowchart of an edge-cloud collaborative computing method based on a power edge chip according to a first embodiment of the application;
fig. 2 is a schematic flowchart of a system of an edge-cloud collaborative computing method based on a power edge chip according to a second embodiment of the application;
fig. 3 is a schematic flowchart of decomposing a task to be executed into a plurality of subtasks in an edge-cloud collaborative computing method based on a power edge chip according to the second embodiment of the application.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present application can be understood in detail, a more particular description of the application, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present application is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the application. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
While the embodiments of the present application have been illustrated and described in detail in the drawings, the cross-sectional view of the device structure is not to scale in the general sense for ease of illustration, and the drawings are merely exemplary and should not be construed as limiting the scope of the application. In addition, the three-dimensional dimensions of length, width and depth should be included in actual fabrication.
Also in the description of the present application, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art.
Example 1
Referring to fig. 1, one embodiment of the application provides an edge-cloud collaborative computing method based on a power edge chip, comprising:
s1: and decomposing the task to be executed into a plurality of subtasks, dividing the subtasks into a plurality of task levels according to the complexity of the subtasks, and sending the subtasks with the complexity levels to the cloud server.
Further, the task to be executed is decomposed into a plurality of subtasks: a MapReduce model divides the data set in the task into a plurality of data blocks, then a Map task is started for each data block and the data blocks are fed into the Map function for processing.
Still further, the complexity of a subtask is positively correlated to the subtask's file size.
The complexity of a subtask is evaluated as follows:
where x denotes the file size of the subtask; f(x) denotes the complexity value of the subtask; α denotes the upper limit of task complexity; σ denotes the rate at which task complexity rises, a smaller σ meaning the complexity approaches its upper limit sooner; α and σ can be determined by curve fitting.
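The patent does not reproduce the formula image itself. As an illustration, the sketch below uses one hypothetical saturating form, f(x) = α·(1 − exp(−x/σ)), chosen only because it matches every stated property: it rises with file size x, is capped by the upper limit α, and approaches that cap sooner for smaller σ. The level thresholds are likewise assumed.

```python
import math

# Hypothetical saturating complexity function consistent with the description:
# f(x) = alpha * (1 - exp(-x / sigma)); the patent's own formula is not shown.
def subtask_complexity(x: float, alpha: float, sigma: float) -> float:
    """Complexity value of a subtask with file size x (arbitrary units)."""
    return alpha * (1.0 - math.exp(-x / sigma))

def task_level(x: float, alpha: float, sigma: float) -> str:
    """Map a complexity value onto the three task levels (thresholds assumed)."""
    c = subtask_complexity(x, alpha, sigma)
    if c < 0.4 * alpha:
        return "simple"
    if c < 0.8 * alpha:
        return "medium"
    return "complex"
```

With α and σ fitted from measured task data, `task_level` reproduces the complex/medium/simple split the text describes.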
The task levels of the subtasks are complex, medium, and simple: complex subtasks are sent to the cloud server, while subtasks at the medium or simple level are processed locally. Classifying subtasks this way lets the power edge chip recognize simple or medium tasks and give priority to local processing; a subtask recognized as complex is transmitted to the cloud server, subject to the chip's own processing capacity.
It should be noted that MapReduce is a computing model that decomposes a large data-processing task into many individual tasks that can execute in parallel on a server cluster; the results of these tasks are then combined to produce the final result. In short, Hadoop MapReduce is an easy-to-program software framework that processes large amounts of data quickly and in parallel on large clusters (thousands of nodes), deployed on commodity machines in a reliable, fault-tolerant manner.
The term MapReduce comes from two basic data-transformation operations: the map process and the reduce process. A map operation converts the elements of a collection from one form to another; here, each input key-value pair is converted into zero or more output key-value pairs, where the output keys may differ from the input keys and the output values may differ from the input values.
All key-value pairs sharing a key are routed to the same reduce operation; specifically, the key and all values corresponding to it are passed to the same Reducer. The purpose of the reduce process is to collapse a set of values into a single value (for example, a sum or an average) or into another set, and the Reducer ultimately produces key-value pairs. The reduce process may be omitted if it is not required.
The Map function is a common functional-programming tool that applies a function to each element of a list or other iterable object and returns a new list containing the results. It has very wide application in data processing and conversion tasks. In data processing and modeling we often need to transform data into different forms or map it into different spaces; Map functions help perform such conversions quickly, such as converting data into matrix form or mapping it into a high-dimensional space, and can be implemented with custom functions or with functions built into Python.
The Map function outputs key-value pairs; the MapReduce model aggregates and sorts the key-value pairs output by all Map tasks, records the key of each pair, and transmits the keys to the cloud server so that it can match keys to subtasks and finally carry out integrated training. The cloud server identifies each subtask by its key and integrates the subtasks according to their characteristics.
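The MapReduce flow described above can be sketched in a few lines. The block size and the particular map and reduce functions here are illustrative stand-ins, not the patent's own:

```python
from collections import defaultdict

def split_into_blocks(dataset: list, block_size: int) -> list:
    """Divide the data set of a task into fixed-size data blocks."""
    return [dataset[i:i + block_size] for i in range(0, len(dataset), block_size)]

def map_task(block: list) -> list:
    """Example Map function: emit one (key, value) pair per element."""
    return [(item, 1) for item in block]

def shuffle_and_reduce(pairs: list) -> dict:
    """Aggregate and sort the key-value pairs from all Map tasks, then reduce."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    # Reduce step: collapse each key's values into one value (here, a sum);
    # it may be omitted when only the keys need forwarding to the cloud server.
    return {key: sum(values) for key, values in sorted(grouped.items())}
```

Running one Map task per block and merging the emitted pairs mirrors the aggregate-and-sort step the text attributes to the MapReduce model.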
S2: according to the task processing instruction fed back by the cloud server, transmit the subtasks to their respective subtask network models for training and calculate the training weight of each model.
Further, the cloud server judges according to the computing-capability value and processing redundancy of each power edge chip whether to feed back a local-processing task instruction or a remote-processing task instruction.
That is, based on the computing capability of the sending power edge chip and the spare processing capacity of the other power edge chips, the cloud server uniformly decides whether a subtask is processed by the chip that originally sent it (local processing) or by another power edge computing chip (remote processing).
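A minimal sketch of that dispatch decision follows. The names `EdgeChip` and `dispatch`, the single `demand` threshold, and the cloud fallback are all hypothetical, since the patent only states that capacity and redundancy are compared:

```python
from dataclasses import dataclass

@dataclass
class EdgeChip:
    chip_id: str
    capacity: float     # computing-capability value
    redundancy: float   # spare (redundant) processing headroom

def dispatch(sender: EdgeChip, others: list, demand: float) -> str:
    """Return the id of the chip that should run the subtask."""
    # Local processing: the uploading chip itself has enough capacity.
    if sender.capacity >= demand and sender.redundancy >= demand:
        return sender.chip_id
    # Remote processing: forward to another chip with spare capacity.
    for chip in others:
        if chip.capacity >= demand and chip.redundancy >= demand:
            return chip.chip_id
    # Fallback (assumed): no chip qualifies, so the cloud handles it.
    return "cloud"
```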
Still further, the subtask network model is a recurrent neural network, expressed as:
h_t = f(W_xh · x_t + W_hh · h_(t-1) + b_h)
y_t = g(W_hy · h_t + b_y)
where x_t denotes the input data at time t; h_t the hidden state at time t; y_t the output at time t; W_xh, W_hh, and W_hy are the input-to-hidden, hidden-to-hidden, and hidden-to-output weight matrices; b_h and b_y are the bias vectors of the hidden and output layers; f and g are activation functions.
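One forward step of this recurrent network can be written directly from the two equations. Using tanh for f and the identity for g is an illustrative choice, since the patent only says both are activation functions:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, W_hy, b_h, b_y):
    """One step: h_t = f(W_xh x_t + W_hh h_(t-1) + b_h), y_t = g(W_hy h_t + b_y)."""
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)  # hidden-state update (f = tanh)
    y_t = W_hy @ h_t + b_y                           # output at time t (g = identity)
    return h_t, y_t

def rnn_forward(xs, h0, params):
    """Unroll the recurrence over a whole input sequence xs."""
    h, ys = h0, []
    for x_t in xs:
        h, y = rnn_step(x_t, h, *params)
        ys.append(y)
    return h, ys
```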
In deep learning, training weights refer to weights of each neuron connection in the neural network, and are optimized by a back propagation algorithm. These weights determine how the neurons respond to the inputs and control the output of the overall network. During training, the optimization algorithm will continuously adjust these weights to minimize the gap between model predictions and actual labels until the best results are achieved. After model training is completed, these training weights are saved for later use.
S3: and sending the training result and the training weight to the cloud server, so that the training model of the cloud server integrates training.
Further, the training model of the cloud server is a convolutional neural network, expressed as:
Y_(i,j,k) = f(Z_(i,j,k))
where i and j denote the abscissa and ordinate of the feature map; k denotes the number of convolution kernels; l the number of channels of the convolution kernel; m the width and n the height of the convolution kernel; X denotes the input image data; W the convolution-kernel weight matrix; b the bias vector; f the activation function; Y the output of the convolution layer;
the activation function is a ReLU function, expressed as:
f(x)=max(0,x)
where x is the input and f(x) the output: when x > 0, f(x) = x; when x ≤ 0, f(x) = 0. The ReLU thus zeroes out non-positive inputs and passes positive inputs through unchanged. Used as the nonlinear activation of the convolutional neural network, the ReLU strengthens the model's expressive capacity while remaining fast to compute.
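The patent does not reproduce the expression for Z_(i,j,k). The sketch below assumes the standard convolution Z[i,j,k] = Σ_{l,m,n} X[i+m, j+n, l]·W[m,n,l,k] + b[k] over a "valid" window, followed by the ReLU just defined:

```python
import numpy as np

def relu(x):
    """f(x) = max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def conv2d(X, W, b):
    """Y[i,j,k] = relu(Z[i,j,k]) with an assumed standard 'valid' convolution."""
    H, Wd, L = X.shape       # input height, width, channel count
    m, n, _, K = W.shape     # kernel size m x n, L channels, K kernels
    Z = np.zeros((H - m + 1, Wd - n + 1, K))
    for i in range(Z.shape[0]):
        for j in range(Z.shape[1]):
            for k in range(K):
                Z[i, j, k] = np.sum(X[i:i + m, j:j + n, :] * W[:, :, :, k]) + b[k]
    return relu(Z)
```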
This embodiment also provides a computing device comprising a memory and a processor; the memory stores computer-executable instructions, and the processor executes them to implement the edge-cloud collaborative computing method based on a power edge chip described in this embodiment.
This embodiment also provides a storage medium storing a computer program which, when executed by a processor, implements the edge-cloud collaborative computing method based on a power edge chip proposed in the above embodiment.
The storage medium proposed in this embodiment belongs to the same inventive concept as the method proposed above; technical details not described here can be found in the above embodiment, and the storage medium has the same beneficial effects.
From the above description of embodiments it is clear to a person skilled in the art that the application may be implemented by software together with necessary general-purpose hardware, or of course by hardware alone, although the former is the preferred embodiment in many cases. On this understanding, the technical solution of the application, or the part of it that contributes over the prior art, may be embodied as a software product stored in a computer-readable storage medium such as a floppy disk, read-only memory (ROM), random-access memory (RAM), flash memory, hard disk, or optical disk, including several instructions that cause a computer device (a personal computer, a server, a network device, etc.) to execute the methods of the embodiments of the application.
Example 2
Referring to fig. 2 and fig. 3, a second embodiment of the application provides an edge-cloud collaborative computing method based on a power edge chip; its beneficial effects are demonstrated through simulation experiments.
The experimental procedure was as follows: Step 1: upload the MNIST handwritten-digit recognition data set to the cloud server.
Step 2: run a CNN model on the cloud server and train it as the basic model, obtaining the basic model's weight parameters.
In this model, the first layer has 64 convolution kernels of 3×3, the second layer 128 kernels of 3×3, and the third layer 256 kernels of 3×3; a pooling layer is added between the convolution layers to reduce overfitting, and the activation function is the ReLU function.
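As a sanity check on this architecture, the sketch below walks a 28×28×1 MNIST image through the three convolution layers. The "valid" padding, stride 1, and 2×2 pooling after each convolution are assumptions, as the patent does not state them:

```python
def conv_shape(h, w, k=3):
    """Spatial size after a 'valid' k x k convolution with stride 1 (assumed)."""
    return h - k + 1, w - k + 1

def pool_shape(h, w, p=2):
    """Spatial size after p x p pooling (assumed) following each convolution."""
    return h // p, w // p

def example_shapes(h=28, w=28):
    """Feature-map shapes after each of the three convolution layers above."""
    shapes = []
    for channels in (64, 128, 256):
        h, w = conv_shape(h, w)
        shapes.append((h, w, channels))
        h, w = pool_shape(h, w)
    return shapes
```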
Step 3: decompose the task to be executed into a plurality of subtasks using the MapReduce model, and divide the subtasks into three task levels according to picture complexity: simple, medium, and difficult.
Step 4: send the difficult-level subtasks to the cloud server and the other subtasks to the edge devices.
Step 5: after receiving the difficult-level subtasks, the cloud server distributes them to high-performance computing nodes for training and calculates the training weight of the model.
Step 6: after receiving their tasks, the edge devices distribute them to different local network models for training according to task level and calculate each model's training weight.
Step 7: send the training result and training weight of each subtask to the cloud server, which integrates the results and weights of all models and retrains the model.
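Step 7's integration rule is not spelled out in the patent. A weighted average of the returned parameters, in the style of federated averaging, is one natural reading and is sketched below as an assumption:

```python
import numpy as np

def integrate_weights(model_weights, training_weights):
    """Combine per-model parameter lists into one list, weighting each model
    by its reported training weight (assumed: weighted average)."""
    total = float(sum(training_weights))
    merged = []
    for layer_group in zip(*model_weights):  # same layer across all models
        merged.append(sum(w * tw for w, tw in zip(layer_group, training_weights)) / total)
    return merged
```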
Step 8: evaluate the performance of the new model. The basic model, the cloud-server-only training model, the edge-device-only training model, and the edge-cloud co-training model were tested on the test set to compare their accuracy and time.
The experimental results are as follows:
Basic model accuracy: 93.5%;
cloud-server-only training model accuracy: 93.9%;
edge-only training model accuracy: 92.2%;
edge-cloud co-training model accuracy: 94.1%;
comparing the training time of each model: only the cloud server trains the model longest, and the edge cloud cooperative training model shortest. It can be seen that Bian Yun synergistic computing mode has significant advantages in terms of accuracy and training time.
It should be noted that the above embodiments are only for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present application may be modified or substituted without departing from the spirit and scope of the technical solution of the present application, which is intended to be covered in the scope of the claims of the present application.

Claims (10)

1. An edge-cloud collaborative computing method based on a power edge chip, comprising:
decomposing a task to be executed into a plurality of subtasks, dividing the subtasks into a plurality of task levels according to their complexity, and sending the complex-level subtasks to a cloud server;
according to the task processing instruction fed back by the cloud server, respectively transmitting a plurality of subtasks to each subtask network model for training, and calculating the training weight of each subtask network model;
and sending the training result and the training weight to a cloud server.
2. The edge-cloud collaborative computing method based on a power edge chip according to claim 1, wherein decomposing the task to be executed into a plurality of subtasks comprises: dividing a data set in the task to be executed into a plurality of data blocks using a MapReduce model, starting a Map task for each data block, and feeding the data blocks into the Map function for processing;
the Map function outputs key-value pairs, the MapReduce model aggregates and sorts the key-value pairs output by all Map tasks and records the key of each pair, and the keys are transmitted to the cloud server so that it can identify the subtasks by key and finally carry out integrated training.
3. The edge-cloud collaborative computing method based on a power edge chip according to claim 2, wherein the complexity of a subtask is positively correlated with its file size, expressed as:
where x denotes the file size of the subtask; f(x) denotes the complexity value of the subtask; α denotes the upper limit of task complexity; σ denotes the rate at which task complexity rises; exp denotes the exponential function with base e.
4. The edge-cloud collaborative computing method based on a power edge chip according to claim 3, wherein the task levels include complex, medium, and simple;
when the task level is complex, sending subtasks corresponding to the task level to a cloud server;
when the task level is medium and simple, subtasks corresponding to the task level are processed locally.
5. The edge-cloud collaborative computing method based on a power edge chip according to claim 4, wherein the task processing instructions include:
when the computing-capability value and processing redundancy of the power edge chip uploading the subtask are sufficient, feeding back a local-processing task instruction;
when the computing-capability value and processing redundancy of the power edge chip uploading the subtask are insufficient, feeding back a remote-processing task instruction and sending the subtask to a power edge chip that has the processing capacity.
6. The power edge chip-based Bian Yun collaborative computing method according to claim 5, wherein: the subtask network model is a recurrent neural network, expressed as:
h_t = f(W_xh · x_t + W_hh · h_{t-1} + b_h)
y_t = g(W_hy · h_t + b_y)
wherein x_t represents the input data at time t; h_t represents the hidden state at time t; y_t represents the output at time t; W_xh, W_hh and W_hy represent the input-to-hidden, hidden-to-hidden and hidden-to-output weight matrices, respectively; b_h represents the bias vector of the hidden layer and b_y the bias vector of the output layer; f and g represent activation functions.
7. The power edge chip-based Bian Yun collaborative computing method of claim 6, wherein: the training model of the cloud server is a convolutional neural network, expressed as:
Z_{i,j,k} = Σ_l Σ_m Σ_n X_{i+m, j+n, l} · W_{m,n,l,k} + b_k
Y_{i,j,k} = f(Z_{i,j,k})
wherein i and j represent the abscissa and ordinate of the feature map; k indexes the convolution kernels; l indexes the channels of the convolution kernel; m and n index the width and height of the convolution kernel; X represents the input image data; W represents the convolution kernel weight matrix; b represents the bias vector; Y represents the output of the convolution layer; f represents the activation function;
the activation function is a ReLU function.
8. An apparatus employing the power edge chip-based Bian Yun collaborative computing method according to any one of claims 1-7, comprising:
a complexity evaluation unit, configured to decompose the task to be executed into a plurality of subtasks, evaluate the complexity of each subtask, jointly evaluate the complexity evaluation results and the computing capacity required by the subtasks to obtain the task grade of each subtask, and send the subtasks of the complex grade to the cloud server;
a task grade feedback unit, configured to respond to the task processing instruction fed back by the cloud server according to the complex task grade and process the corresponding subtasks;
a model unit, configured to, in response to the task processing instruction, transmit the plurality of subtasks to respective subtask network models for training, calculate the training weight of each subtask network model, and transmit the training results and training weights to the cloud server so that the training model of the cloud server performs integrated training.
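The cloud-side integrated training described for the model unit can be sketched as a weighted combination of the edge training results, in the style of federated averaging; the exact combination rule is an assumption, not stated in the patent:

```python
import numpy as np

def integrate_training(results, weights):
    """Combine the training results of the subtask network models, each scaled
    by its training weight (a federated-averaging-style sketch)."""
    total = float(sum(weights))
    return sum(w * r for r, w in zip(results, weights)) / total

params_a = np.array([1.0, 2.0])   # parameters trained on subtask A
params_b = np.array([3.0, 4.0])   # parameters trained on subtask B
merged = integrate_training([params_a, params_b], [1.0, 3.0])
```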
9. A computing device, comprising: a memory and a processor;
the memory is configured to store computer-executable instructions which, when executed by the processor, implement the steps of the power edge chip-based Bian Yun collaborative computing method according to any one of claims 1-7.
10. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the power edge chip-based Bian Yun collaborative computing method according to any one of claims 1-7.
CN202310751564.8A 2023-06-25 2023-06-25 Bian Yun cooperative computing method based on power edge chip Pending CN116820762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310751564.8A CN116820762A (en) 2023-06-25 2023-06-25 Bian Yun cooperative computing method based on power edge chip

Publications (1)

Publication Number Publication Date
CN116820762A true CN116820762A (en) 2023-09-29

Family

ID=88125159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310751564.8A Pending CN116820762A (en) 2023-06-25 2023-06-25 Bian Yun cooperative computing method based on power edge chip

Country Status (1)

Country Link
CN (1) CN116820762A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118118526A (en) * 2024-04-23 2024-05-31 浙江大学 Cloud edge cooperative data acquisition and control method for new energy power station

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination