CN115543619A - Edge calculation method for realizing multi-network data heterogeneous fusion

Edge calculation method for realizing multi-network data heterogeneous fusion

Info

Publication number
CN115543619A
Authority
CN
China
Prior art keywords
edge
data
task
calculation
data group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211220786.9A
Other languages
Chinese (zh)
Inventor
陈万胜
昂少强
常先久
朱前进
孙朋
朱全胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wansn Technology Co ltd
Original Assignee
Wansn Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wansn Technology Co ltd filed Critical Wansn Technology Co ltd
Priority to CN202211220786.9A
Publication of CN115543619A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/20 Network architectures or network communication protocols for network security for managing network security; network security policies in general
    • H04L 63/205 Network architectures or network communication protocols for network security for managing network security; network security policies in general involving negotiation or determination of the one or more network security mechanisms to be used, e.g. by negotiation between the client and the server or between peers or by selection according to the capabilities of the entities involved
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/502 Proximity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5021 Priority
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/508 Monitor

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Storage Device Security (AREA)

Abstract

The invention relates to the technical field of edge computing and particularly discloses an edge computing method for realizing heterogeneous fusion of multi-network data, which comprises the following steps: S100, establishing an edge task allocation unit, wherein the edge task allocation unit is connected to the edge task processors, the cloud platform and the edge devices; S200, receiving heterogeneous resource data through the edge task allocation unit, and dividing the data of each task into identified data groups according to a preset segmentation strategy based on the encryption level corresponding to that task in the heterogeneous resource data; S300, allocating the data groups for computation according to a preset allocation strategy based on the running state of the edge task processors, and obtaining an identification sequence that records the allocation; and S400, returning the computed data of the data groups to the edge task allocation unit, wherein the edge task allocation unit obtains the computation result of each task according to the identification sequence and transmits it to the cloud platform and the corresponding edge device.

Description

Edge calculation method for realizing multi-network data heterogeneous fusion
Technical Field
The invention relates to the technical field of edge computing, in particular to an edge computing method for realizing multi-network data heterogeneous fusion.
Background
With the development of internet technology and the wide application of the Internet of Things, the demand for computing power keeps growing: 5G, the industrial internet, IoT services and their scenarios evolve ever faster, and intelligent terminal devices multiply, so more and more computation needs to sink down to edge computing services. A traditional cloud computing model that places all processing in a central data center can hardly keep up with the growth of large-scale intelligent edge devices; transmitting all data or video back to the cloud for processing is costly overall, introduces high latency, and cannot meet efficiency requirements. In industrial application scenarios, bandwidth congestion, latency problems and unpredictable network interruptions are unacceptable, so besides unified control from the cloud, the network nodes on an industrial site need a certain computing capability of their own to judge and solve problems autonomously and to detect and predict abnormal conditions in real time. Edge computing is therefore the trend of future computing models.
Edge computing is an extension of cloud computing toward the edge. Whereas cloud computing concentrates software and hardware resources in large data centers far away from users, edge computing places computing resources at the edge, close to users or devices, which reduces latency and bandwidth consumption and provides real-time processing near the data source. On the one hand, decentralizing IT services to the edge side keeps computation efficient and relieves the computing pressure on the cloud; on the other hand, because edge services and facilities are autonomous, they retain a certain control capability and edge hosting capability even when the network between cloud and edge is disconnected, which keeps this form of computing stable.
In the prior art, edge computing resources change dynamically. When edge computation is performed, an unreasonable invocation of edge computing resources may congest an edge computing server and affect other computing tasks, while invoking too few resources may leave a computing task without effective processing.
Disclosure of Invention
The invention aims to provide an edge computing method for realizing multi-network data heterogeneous fusion, which solves the following technical problem:
how to improve the rationality and security of computing task allocation in the edge computing process.
The purpose of the invention can be realized by the following technical scheme:
An edge computing method for realizing multi-network data heterogeneous fusion, the method comprising:
S100, establishing an edge task allocation unit, wherein the edge task allocation unit is connected to the edge task processors, the cloud platform and the edge devices respectively;
S200, receiving heterogeneous resource data through the edge task allocation unit, and dividing the data of each task into identified data groups according to a preset segmentation strategy based on the encryption level corresponding to that task in the heterogeneous resource data;
S300, allocating the data groups for computation according to a preset allocation strategy based on the running state of the edge task processors, and obtaining an identification sequence that records the allocation;
S400, returning the computed data of the data groups to the edge task allocation unit, wherein the edge task allocation unit obtains the computation result of each task according to the identification sequence and transmits it to the cloud platform and the corresponding edge device.
In an embodiment, the preset segmentation strategy is as follows:
grouping the task data by the data interval length L, and identifying each grouped data group in sequence;
the data interval length L is determined according to the encryption level.
In one embodiment, the preset allocation policy is:
obtaining the predicted computation amount C_for of all tasks, and obtaining the running state of each edge computing device, the running state including the maximum computation amount C_max within a first specific duration and the computation amount C_await currently waiting to be processed;
calculating the remaining computation amount C_edge of the edge task processors according to the formula
C_edge = Σ_{i=1}^{n} (α_i · C_max,i − C_await,i),
where n is the number of edge task processors, C_max,i and C_await,i are the values of C_max and C_await for the i-th edge task processor, and α_i is its available-computation coefficient, with α_i < 1;
comparing the predicted computation amount C_for with the remaining computation amount C_edge:
if C_for ≤ C_edge, the data groups are allocated to the edge task processors;
if C_for > C_edge, the non-overflow portion of the data groups is allocated to the edge task processors and the overflow portion of the data groups is allocated to the cloud platform.
In one embodiment, the method for allocating the data groups to the edge task processors comprises:
SS100, obtaining the edge task processors that are idle at time point t_i, where t_i is the time point one second specific duration after t_{i−1}, counted from the current time point t_0;
SS200, randomly allocating data groups to each idle edge task processor according to its computation amount per unit time and the duration from its calculation starting point to t_i;
wherein i = 1, 2, …, m, t_m − t_0 = the first specific duration, and t_i − t_{i−1} = the second specific duration;
SS300, repeating steps SS100 and SS200 for increasing i until i = m.
In one embodiment, the identification sequence is generated as follows:
when the data groups are randomly allocated, recording for each data group the number of the processing platform it is allocated to and its processing order on that platform, and linking the recorded data with the identifier corresponding to the data group;
for the data groups of each task, forming a sequence of the recorded data in identifier order as the identification sequence.
In one embodiment, the calculation result is obtained as follows:
recombining the computed data of the data groups according to the processing platform number and the processing order on that platform recorded in the identification sequence of each task, thereby obtaining the calculation result.
In an embodiment, the step S300 further includes:
encrypting the allocated data groups, and sending the encrypted data groups to the edge task processors and the cloud platform.
In one embodiment, in steps SS100 and SS300, the idle edge task processors are sorted according to the priority coefficient of each edge task processor and are allocated data groups in that order.
In an embodiment, the priority coefficient of the edge task processor is calculated as follows:
the priority coefficient P of the edge task processor is calculated by the formula
P = δ_1 · v_t / L + δ_2 · v_c / C_th,
wherein δ_1 and δ_2 are the transmission weight coefficient and the computation weight coefficient respectively, L is the path distance between the edge task processor and the edge task allocation unit, v_t is the transmission speed from the edge task processor to the edge task allocation unit, C_th is a preset task amount, and v_c is the computation speed of the edge task processor.
The invention has the beneficial effects that:
(1) By segmenting tasks into data groups, the invention makes fuller use of the computing resources during allocation and improves the rationality of edge computing allocation; moreover, the segmentation and backtracking of the data groups provide an encryption mechanism for the data fusion process, which improves the security of data transmission.
(2) By analyzing the computation amount to be processed, the invention allocates the computing tasks to the edge task processors and the cloud platform reasonably according to that amount, ensuring computational efficiency and a rational distribution of the computational load.
(3) The invention continuously adjusts the allocation of tasks according to the specific state of the edge task processors, so that data allocation is adjusted in time as the tasks of the edge task processors change dynamically, further ensuring the rationality of data allocation.
(4) The edge task processors associated with the edge devices are sorted by priority and allocated data groups in that order, so that processors with higher priority are served first; the computing tasks are therefore distributed more reasonably, ensuring the rationality and efficiency of edge computing allocation.
Drawings
The invention will be further described with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating the steps of an edge computing method for implementing heterogeneous fusion of multi-network data according to the present invention;
FIG. 2 is a flow chart of the steps of a method of assigning data groups to edge task processors in accordance with the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, in an embodiment, an edge computing method for implementing heterogeneous fusion of multi-network data is provided, the method comprising:
S100, establishing an edge task allocation unit, wherein the edge task allocation unit is connected to the edge task processors, the cloud platform and the edge devices respectively;
S200, receiving heterogeneous resource data through the edge task allocation unit, and dividing the data of each task into identified data groups according to a preset segmentation strategy based on the encryption level corresponding to that task in the heterogeneous resource data;
S300, allocating the data groups for computation according to a preset allocation strategy based on the running state of the edge task processors, and obtaining an identification sequence that records the allocation;
S400, returning the computed data of the data groups to the edge task allocation unit, wherein the edge task allocation unit obtains the computation result of each task according to the identification sequence and transmits it to the cloud platform and the corresponding edge device.
With the above technical scheme, an edge task allocation unit is established and connected to the edge task processors, the cloud platform and the edge devices, which realizes the receiving, sending and allocation of computing tasks. Heterogeneous computing resources are received through the edge task allocation unit, so that data from different network forms and different protocols can be allocated uniformly. Each computing task is divided into several data groups, each data group is identified, the data groups are allocated according to the running state of the edge task processors, and an identification sequence recording the allocation mode is obtained.
In step S100, the edge task allocation unit is implemented by an intelligent device with DTS in the prior art, which is not described in detail herein; in step S200, the specific segmentation and packing process of the data group is implemented by the prior art, which is not described in detail herein; in step S300, the form of data group transmission depends on the specific application scenario, which is not limited herein; in step S400, the process of receiving and backtracking the data set is prior art and will not be described in detail herein.
As an embodiment of the present invention, the preset segmentation strategy is:
grouping the task data by the data interval length L, and identifying each grouped data group in sequence;
the data interval length L is determined according to the encryption level.
With the above technical scheme, the data groups are produced by the preset segmentation strategy: the task data are grouped by the data interval length L, and each resulting data group is identified in order. Segmenting in this way makes the computation required by each data group roughly equal. In addition, the choice of the data interval length L depends on the encryption level: the higher the encryption level, the shorter the chosen L, which increases the complexity of data fusion and raises the level of confidentiality. The specific value of L is not limited here.
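As a purely illustrative sketch of this segmentation strategy, the following Python snippet groups task data by an interval length L chosen from the encryption level and identifies the groups in order; the mapping from encryption level to L and the identifier format are assumptions introduced for the example, not values prescribed by the patent.

```python
from typing import List, Tuple

# Assumed mapping: higher encryption level -> shorter data interval length L (in bytes).
LEVEL_TO_INTERVAL = {1: 1024, 2: 512, 3: 256}

def split_task(task_id: str, data: bytes, encryption_level: int) -> List[Tuple[str, bytes]]:
    """Group the task data by the interval length L and identify each group in order."""
    L = LEVEL_TO_INTERVAL[encryption_level]
    groups = []
    for seq, start in enumerate(range(0, len(data), L)):
        group_id = f"{task_id}-{seq}"            # identifier records the task and the position
        groups.append((group_id, data[start:start + L]))
    return groups
```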
As an embodiment of the present invention, the preset allocation policy is:
obtaining the predicted computation amount C_for of all tasks, and obtaining the running state of each edge computing device, the running state including the maximum computation amount C_max within a first specific duration and the computation amount C_await currently waiting to be processed;
calculating the remaining computation amount C_edge of the edge task processors according to the formula
C_edge = Σ_{i=1}^{n} (α_i · C_max,i − C_await,i),
where n is the number of edge task processors, C_max,i and C_await,i are the values of C_max and C_await for the i-th edge task processor, and α_i is its available-computation coefficient, with α_i < 1;
comparing the predicted computation amount C_for with the remaining computation amount C_edge:
if C_for ≤ C_edge, the data groups are allocated to the edge task processors;
if C_for > C_edge, the non-overflow portion of the data groups is allocated to the edge task processors and the overflow portion of the data groups is allocated to the cloud platform.
With the above technical scheme, by analyzing the computation amount to be processed, the computing tasks are reasonably allocated to the edge task processors and the cloud platform according to that amount. Specifically, the maximum computation amount of all edge task processors is estimated over the first specific duration, which can be chosen according to the transmission and processing time of cloud computing; this ensures computational efficiency and a rational distribution of the computational load.
In the above technical scheme, α_i in the formula for the remaining computation amount C_edge is the available-computation coefficient, which depends on the specific state of each edge task processor. Multiplying the coefficient α_i by the maximum computation amount C_max gives the available computation amount of each edge task processor; this setting reserves a basic computing capacity for every edge task processor and guarantees the operation of its basic functions.
As an embodiment of the present invention, referring to fig. 2, the method for allocating the data groups to the edge task processors includes:
SS100, obtaining the edge task processors that are idle at time point t_i, where t_i is the time point one second specific duration after t_{i−1}, counted from the current time point t_0;
SS200, randomly allocating data groups to each idle edge task processor according to its computation amount per unit time and the duration from its calculation starting point to t_i;
wherein i = 1, 2, …, m, t_m − t_0 = the first specific duration, and t_i − t_{i−1} = the second specific duration;
SS300, repeating steps SS100 and SS200 for increasing i until i = m.
With the above technical scheme, the first specific duration is divided evenly into several second specific durations. Taking the current time point as the reference, the edge task processors that are idle at time point t_1 are obtained, and data groups are randomly allocated to each of them according to its computation amount per unit time C_unit and its calculation starting point t_b, where the calculation starting point is the time point at which that edge task processor finishes its previous computing task and therefore differs from processor to processor; the formula C_unit · (t_1 − t_b) then gives the amount of computation a single edge task processor can complete in the interval from t_0 to t_1. In addition, in this embodiment steps SS100 and SS200 are repeated at intervals of the second specific duration, so the allocation of tasks is continuously adjusted according to the specific state of the edge task processors; data allocation is thus adjusted in time as the tasks of the edge task processors change dynamically, further ensuring the rationality of data allocation.
In the above technical scheme, the random allocation of the data groups is generated by a random algorithm; this randomness keeps the distribution of the task groups unordered and thereby encrypts the data fusion process.
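As an illustration of the time-slotted random allocation in SS100 to SS300, the following Python sketch could be used; the Processor structure, the fixed per-group cost group_cost and the per-processor bookkeeping are assumptions introduced for the example.

```python
import random
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Processor:
    name: str
    c_unit: float        # computation amount per unit time
    t_busy_until: float  # calculation starting point t_b: when its last task finishes

def allocate_groups(groups: List[str], processors: List[Processor], t0: float,
                    first_duration: float, m: int, group_cost: float) -> Dict[str, List[str]]:
    """Randomly distribute data groups over m time points t_1..t_m inside the first specific duration."""
    second_duration = first_duration / m
    random.shuffle(groups)                             # random order hides how the task was split
    plan: Dict[str, List[str]] = {p.name: [] for p in processors}
    for i in range(1, m + 1):
        t_i = t0 + i * second_duration
        idle = [p for p in processors if p.t_busy_until <= t_i]   # SS100: processors idle at t_i
        random.shuffle(idle)
        for p in idle:                                            # SS200: fill each idle processor
            start = max(p.t_busy_until, t0)
            fits = int(p.c_unit * (t_i - start) / group_cost)     # budget from C_unit * (t_i - t_b)
            take = min(fits, len(groups))
            for _ in range(take):
                plan[p.name].append(groups.pop())
            p.t_busy_until = start + take * group_cost / p.c_unit
        if not groups:                                            # SS300 ends early if nothing is left
            break
    return plan
```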
As an embodiment of the present invention, the identification sequence is generated as follows:
when the data groups are randomly allocated, recording for each data group the number of the processing platform it is allocated to and its processing order on that platform, and linking the recorded data with the identifier corresponding to the data group;
for the data groups of each task, forming a sequence of the recorded data in identifier order as the identification sequence.
With the above technical scheme, when a data group is allocated, the number of its processing platform and its processing order on that platform are recorded and linked to the identifier of the data group, so the random allocation is fully recorded. Then, for the data groups of each task, a sequence of the recorded data is formed in identifier order, which yields the identification sequence of that task.
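The bookkeeping could look like the sketch below, which reuses the allocation plan and the "<task>-<seq>" identifier format assumed in the earlier sketches; none of these names come from the patent itself.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def build_identification_sequences(
        plan: Dict[int, List[str]]) -> Dict[str, List[Tuple[str, int, int]]]:
    """plan maps a processing platform number to the ordered list of group identifiers it received."""
    sequences: Dict[str, List[Tuple[str, int, int]]] = defaultdict(list)
    for platform_no, group_ids in plan.items():
        for order, group_id in enumerate(group_ids):
            task_id = group_id.rsplit("-", 1)[0]            # link the record to the identifier
            sequences[task_id].append((group_id, platform_no, order))
    for task_id, records in sequences.items():              # identification sequence = identifier order
        records.sort(key=lambda r: int(r[0].rsplit("-", 1)[1]))
    return dict(sequences)
```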
As an embodiment of the present invention, the calculation result is obtained as follows:
recombining the computed data of the data groups according to the processing platform number and the processing order on that platform recorded in the identification sequence of each task, thereby obtaining the calculation result.
With the above technical scheme, when the computed data of the data groups are returned, the pieces are matched back together according to the information recorded in the identification sequence of each task, and the calculation result of the task is thereby obtained.
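A matching backtracking sketch is given below; fetch is an assumed callback that returns the computed data of one group from the recorded platform, and the record layout follows the earlier sketch.

```python
from typing import Callable, List, Tuple

def reassemble(sequence: List[Tuple[str, int, int]],
               fetch: Callable[[int, int], bytes]) -> bytes:
    """Recombine the computed data of one task using its identification sequence."""
    # Each record is (group identifier, processing platform number, order on that platform);
    # the sequence is already in identifier order, so concatenation restores the task result.
    return b"".join(fetch(platform_no, order) for _, platform_no, order in sequence)
```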
As an embodiment of the present invention, step S300 further includes:
encrypting the allocated data groups, and sending the encrypted data groups to the edge task processors and the cloud platform.
With the above technical scheme, encrypting the allocated data groups provides a double layer of encryption for the data groups and ensures data security during edge computation.
In the above embodiments, the encryption uses an asymmetric encryption algorithm; its specific implementation is prior art and is not described in detail here.
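The patent only states that an asymmetric algorithm is used; as one possible sketch, a small data group could be protected with RSA-OAEP from the Python cryptography library, as below. Real data groups would normally require a hybrid scheme (an RSA-wrapped symmetric key), which is omitted here for brevity, so this is an assumption rather than the patented mechanism.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pair held by the receiving side (edge task processor or cloud platform).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

group_plain = b"example data group"                     # must stay below the RSA-OAEP size limit
group_cipher = public_key.encrypt(group_plain, oaep)    # sender encrypts the allocated group
assert private_key.decrypt(group_cipher, oaep) == group_plain
```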
In one embodiment of the present invention, in steps SS100 and SS300, the idle edge task processors are sorted according to the priority coefficient of each edge task processor and are allocated data groups in that order.
With the above technical scheme, the edge task processors associated with the edge devices are sorted by priority and allocated data groups in that order, so that processors with higher priority are served first; the computing tasks are therefore distributed more reasonably, ensuring the rationality and efficiency of edge computing allocation.
As an embodiment of the present invention, the priority coefficient of the edge task processor is calculated as follows:
the priority coefficient P of the edge task processor is calculated by the formula
P = δ_1 · v_t / L + δ_2 · v_c / C_th,
wherein δ_1 and δ_2 are the transmission weight coefficient and the computation weight coefficient respectively, L is the path distance between the edge task processor and the edge task allocation unit, v_t is the transmission speed from the edge task processor to the edge task allocation unit, C_th is a preset task amount, and v_c is the computation speed of the edge task processor.
The above technical scheme provides a method for calculating the priority coefficient P: the influence of transmission time on the priority is judged from the path distance and transmission speed between the edge task processor and the edge task allocation unit, the influence of computation time is judged from the preset task amount and the computation speed, and the two are weighted by the transmission weight coefficient δ_1 and the computation weight coefficient δ_2. The preset task amount C_th is a fixed quantity, and a larger P means a higher priority, so the priority coefficient calculated in this way ranks the edge task processors by combining their data transmission speed and computation speed and lets the processors with higher priority be allocated first.
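The ranking could be sketched as below. The exact form of P is reconstructed from the description (larger P for faster transfer and faster computation), so the formula, the default weights and all field names should be read as assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProcessorLink:
    name: str
    path_distance: float   # L: distance to the edge task allocation unit
    transfer_speed: float  # v_t: transfer speed to the allocation unit
    compute_speed: float   # v_c: computation speed of the processor

def priority(p: ProcessorLink, delta_1: float, delta_2: float, c_th: float) -> float:
    """P = delta_1 * v_t / L + delta_2 * v_c / C_th (higher P means higher priority)."""
    return delta_1 * p.transfer_speed / p.path_distance + delta_2 * p.compute_speed / c_th

def sort_by_priority(idle: List[ProcessorLink], delta_1: float = 0.5,
                     delta_2: float = 0.5, c_th: float = 1.0) -> List[ProcessorLink]:
    return sorted(idle, key=lambda p: priority(p, delta_1, delta_2, c_th), reverse=True)
```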
While one embodiment of the present invention has been described in detail, the description is only a preferred embodiment of the present invention and should not be taken as limiting the scope of the invention. All equivalent changes and modifications made within the scope of the present invention shall fall within the scope of the present invention.

Claims (9)

1. An edge computing method for realizing multi-network data heterogeneous fusion, characterized by comprising the following steps:
S100, establishing an edge task allocation unit, wherein the edge task allocation unit is connected to the edge task processors, the cloud platform and the edge devices respectively;
S200, receiving heterogeneous resource data through the edge task allocation unit, and dividing the data of each task into identified data groups according to a preset segmentation strategy based on the encryption level corresponding to that task in the heterogeneous resource data;
S300, allocating the data groups for computation according to a preset allocation strategy based on the running state of the edge task processors, and obtaining an identification sequence that records the allocation;
S400, returning the computed data of the data groups to the edge task allocation unit, wherein the edge task allocation unit obtains the computation result of each task according to the identification sequence and transmits it to the cloud platform and the corresponding edge device.
2. The edge computing method for implementing heterogeneous fusion of multi-network data according to claim 1, wherein the preset segmentation policy is:
grouping the task data by the data interval length L, and identifying each grouped data group in sequence;
the data interval length L is determined according to the encryption level.
3. The edge computing method for implementing heterogeneous fusion of multi-network data according to claim 2, wherein the preset allocation policy is:
obtaining the predicted computation amount C_for of all tasks, and obtaining the running state of each edge computing device, the running state including the maximum computation amount C_max within a first specific duration and the computation amount C_await currently waiting to be processed;
calculating the remaining computation amount C_edge of the edge task processors according to the formula
C_edge = Σ_{i=1}^{n} (α_i · C_max,i − C_await,i),
where n is the number of edge task processors, C_max,i and C_await,i are the values of C_max and C_await for the i-th edge task processor, and α_i is its available-computation coefficient, with α_i < 1;
comparing the predicted computation amount C_for with the remaining computation amount C_edge:
if C_for ≤ C_edge, the data groups are allocated to the edge task processors;
if C_for > C_edge, the non-overflow portion of the data groups is allocated to the edge task processors and the overflow portion of the data groups is allocated to the cloud platform.
4. The edge computing method for realizing multi-network data heterogeneous fusion according to claim 3, wherein the method for allocating the data groups to the edge task processors is as follows:
SS100, obtaining the edge task processors that are idle at time point t_i, where t_i is the time point one second specific duration after t_{i−1}, counted from the current time point t_0;
SS200, randomly allocating data groups to each idle edge task processor according to its computation amount per unit time and the duration from its calculation starting point to t_i;
wherein i = 1, 2, …, m, t_m − t_0 = the first specific duration, and t_i − t_{i−1} = the second specific duration;
SS300, repeating steps SS100 and SS200 for increasing i until i = m.
5. The edge computing method for implementing multi-network data heterogeneous fusion according to claim 4, wherein the identification sequence is generated as follows:
when the data groups are randomly allocated, recording for each data group the number of the processing platform it is allocated to and its processing order on that platform, and linking the recorded data with the identifier corresponding to the data group;
for the data groups of each task, forming a sequence of the recorded data in identifier order as the identification sequence.
6. The edge computing method for realizing multi-network data heterogeneous fusion according to claim 5, wherein the calculation result is obtained by:
recombining the computed data of the data groups according to the processing platform number and the processing order on that platform recorded in the identification sequence of each task, thereby obtaining the calculation result.
7. The edge computing method for implementing heterogeneous fusion of multi-network data according to claim 2, wherein step S300 further includes:
encrypting the allocated data groups, and sending the encrypted data groups to the edge task processors and the cloud platform.
8. The edge computing method for realizing multi-network data heterogeneous fusion according to claim 3, wherein in steps SS100 and SS300, the idle edge task processors are sorted according to the priority coefficient of each edge task processor and are allocated data groups in that order.
9. The edge computing method for implementing heterogeneous fusion of multi-network data according to claim 8, wherein the priority coefficient of the edge task processor is calculated as follows:
the priority coefficient P of the edge task processor is calculated by the formula
P = δ_1 · v_t / L + δ_2 · v_c / C_th,
wherein δ_1 and δ_2 are the transmission weight coefficient and the computation weight coefficient respectively, L is the path distance between the edge task processor and the edge task allocation unit, v_t is the transmission speed from the edge task processor to the edge task allocation unit, C_th is a preset task amount, and v_c is the computation speed of the edge task processor.
CN202211220786.9A 2022-10-08 2022-10-08 Edge calculation method for realizing multi-network data heterogeneous fusion Pending CN115543619A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211220786.9A CN115543619A (en) 2022-10-08 2022-10-08 Edge calculation method for realizing multi-network data heterogeneous fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211220786.9A CN115543619A (en) 2022-10-08 2022-10-08 Edge calculation method for realizing multi-network data heterogeneous fusion

Publications (1)

Publication Number Publication Date
CN115543619A true CN115543619A (en) 2022-12-30

Family

ID=84731939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211220786.9A Pending CN115543619A (en) 2022-10-08 2022-10-08 Edge calculation method for realizing multi-network data heterogeneous fusion

Country Status (1)

Country Link
CN (1) CN115543619A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116170446A (en) * 2023-04-20 2023-05-26 成都致学教育科技有限公司 Data processing method and system based on edge cloud
CN116170446B (en) * 2023-04-20 2023-06-27 成都致学教育科技有限公司 Data processing method and system based on edge cloud
CN117791877A (en) * 2024-02-23 2024-03-29 北京智芯微电子科技有限公司 Control method, device, equipment and medium for power distribution Internet of things
CN117791877B (en) * 2024-02-23 2024-05-24 北京智芯微电子科技有限公司 Control method, device, equipment and medium for power distribution Internet of things
CN117851789A (en) * 2024-03-05 2024-04-09 北京珞安科技有限责任公司 Industrial control equipment operation quality evaluation system based on artificial intelligence
CN117851789B (en) * 2024-03-05 2024-05-31 北京珞安科技有限责任公司 Industrial control equipment operation quality evaluation system based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN111246586B (en) Method and system for distributing smart grid resources based on genetic algorithm
CN113315700B (en) Computing resource scheduling method, device and storage medium
CN103986715B (en) A kind of method and device of control of network flow quantity
CN110717300B (en) Edge calculation task allocation method for real-time online monitoring service of power internet of things
CN110570075B (en) Power business edge calculation task allocation method and device
CN113784373B (en) Combined optimization method and system for time delay and frequency spectrum occupation in cloud edge cooperative network
US8090814B2 (en) Method for determining distribution of a shared resource among a plurality of nodes in a network
Harutyunyan et al. Latency and mobility–aware service function chain placement in 5G networks
CN112217725B (en) Delay optimization method based on edge calculation
Kliazovich et al. CA-DAG: Communication-aware directed acyclic graphs for modeling cloud computing applications
CN115543619A (en) Edge calculation method for realizing multi-network data heterogeneous fusion
Mai et al. On the use of supervised machine learning for assessing schedulability: application to Ethernet TSN
CN110602180A (en) Big data user behavior analysis method based on edge calculation and electronic equipment
CN110535705B (en) Service function chain construction method capable of adapting to user time delay requirement
EP2863597B1 (en) Computer-implemented method, computer system, computer program product to manage traffic in a network
Tkachova et al. A load balancing algorithm for SDN
CN100380890C (en) Data transmission method for matching upper protocol layer to high-speed serial bus
Nishanbayev et al. Evaluating the effectiveness of a software-defined cloud data center with a distributed structure
CN114138453B (en) Resource optimization allocation method and system suitable for edge computing environment
CN115550983A (en) Hierarchical control-based mobile environment communication transmission method
CN112817732B (en) Stream data processing method and system suitable for cloud-edge collaborative multi-data-center scene
CN114710288A (en) Network switch safety monitoring method, device and medium based on artificial intelligence
CN110336758B (en) Data distribution method in virtual router and virtual router
CN111752707A (en) Mapping relation-based power communication network resource allocation method
Lo et al. SDN-based QoS architectures in Edge-IoT Systems: A Comprehensive Analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination