CN112202603A - Interactive service entity placement method in edge environment - Google Patents
Interactive service entity placement method in edge environment
- Publication number
- CN112202603A (application number CN202011029031.1A)
- Authority
- CN
- China
- Prior art keywords
- service entity
- service
- users
- interaction
- placement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0823—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/33—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
- A63F13/335—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
- A63F13/352—Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0823—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
- H04L41/083—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for increasing network speed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/142—Network analysis or design using statistical or mathematical methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/40—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of platform network
- A63F2300/407—Data transfer via internet
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/51—Server architecture
Abstract
The invention discloses an interaction-oriented service entity placement method and device for an edge environment, which achieve efficient placement of interaction-oriented service entities. The method comprises the following steps: establishing a system model comprising a time delay model, a placement cost model and a service entity association model; establishing, according to the model, an objective function for the interaction-oriented service entity placement problem and describing it as an optimization problem; and solving the optimization problem with a greedy heuristic algorithm to obtain a service entity placement scheme.
Description
Technical Field
The invention relates to the field of edge computing, and in particular to a method and device for interaction-oriented service entity placement in an edge environment.
Background
Distributed Interactive Applications (DIAs), such as multiplayer online games and virtual or augmented reality applications, allow geographically dispersed users to communicate interactively for cooperative or competitive purposes. A DIA typically comprises two components: service entities and clients. A service entity maintains the meta information of the DIA (e.g., user account information, current user state, application state), while a client issues user instructions to its service entity and receives instructions and updates from it. One service entity can typically serve multiple users simultaneously. Service entities may be placed on edge servers. A client can be a user's smartphone, tablet, or laptop.
An interaction in a DIA comprises three phases: first, user A's client sends an instruction to user A's service entity; second, after the necessary computation, user A's service entity sends a corresponding instruction to user B's service entity; finally, after the necessary computation, user B's service entity sends the corresponding information to user B's client. A DIA centers on user-to-user interaction, so optimizing the interaction latency is essential.
Disclosure of Invention
Purpose of the invention: to address the shortcomings of the prior art, the invention provides an interaction-oriented service entity placement method and device for an edge environment, enabling efficient placement of interaction-oriented service entities.
The technical solution is as follows: according to a first aspect of the present invention, there is provided an interaction-oriented service entity placement method in an edge environment, comprising the following steps:
S1, establishing a system model comprising a time delay model, a placement cost model and a service entity association model.
S2, establishing, according to the model, an objective function for the interaction-oriented service entity placement problem and describing it as an optimization problem.
S3, solving the optimization problem with a greedy heuristic algorithm to obtain a service entity placement scheme.
According to a second aspect of the present invention, there is provided a computer apparatus comprising: one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and when executed by the processors they perform the steps of the method of the first aspect.
Beneficial effects: for distributed interactive applications in an edge computing environment, the invention provides a service entity placement method aimed at minimizing the weighted average user interaction delay.
Drawings
FIG. 1 is a flow chart of a method for placing interaction-oriented service entities in an edge environment
FIG. 2 is a specific example of the operation of the algorithm
Detailed Description
The present invention is described in further detail below with reference to examples, which are illustrative and do not limit the scope of the invention.
As described in the background, a distributed interactive application allows geographically dispersed users to communicate interactively through service entities placed on edge servers, with each interaction traversing three phases: client to service entity, service entity to service entity, and service entity to client. Optimizing the interaction latency is therefore the central concern of the placement problem.
Referring to fig. 1, the invention provides an interaction-oriented service entity placement method in an edge environment, comprising the following steps:
Step S1, establishing a system model comprising a time delay model, a placement cost model and a service entity association model.
The network model of the edge computing environment comprises n edge servers and m users, where the set of edge servers is S = {s1, s2, ..., sn} and the set of users is U = {u1, u2, ..., um}. Servers and users are connected through network equipment such as access points, base stations, and metropolitan area network routers. The invention uses d(·,·) to denote the delay between two users or servers; for example, d(u4, s3) denotes the delay between user u4 and server s3. In particular, d(u4, C) denotes the delay between user u4 and the cloud data center C.
The cost of placing a service entity on edge server si is wi. Moreover, placing a service entity on any edge server consumes a fixed amount b of the server's physical resources. Each service entity can serve at most K users simultaneously; K can thus be regarded as the service capability of a service entity. When no suitable service entity exists, or the available service capability is insufficient to serve all users, the cloud data center C serves the remaining users. Because the cloud data center has abundant physical resources, it can serve any number of users simultaneously.
When more than one service entity is placed in the edge computing environment, each user must select an appropriate service entity to serve its application requirements. Users could select a service entity according to various criteria; in this method, each user selects the service entity with the minimum delay among those whose service capability is not yet exhausted.
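This association rule can be sketched in Python; the function name and data layout below are illustrative assumptions, not from the patent, and processing users in a fixed order is a simplification (the patent does not specify an ordering when capacity is contended):

```python
def associate_users(users, entities, delay, capacity_k):
    """Bind each user to the lowest-delay placed service entity with spare capacity.

    users      -- list of user ids, e.g. ["u1", "u2", "u3"]
    entities   -- list of edge-server ids, one entry per placed entity (a server may repeat)
    delay      -- dict: delay[(user, server)] is the latency between them
    capacity_k -- K, the maximum number of users one service entity can serve
    """
    load = [0] * len(entities)           # users currently served by each entity
    assignment = {}
    for u in users:
        free = [i for i, l in enumerate(load) if l < capacity_k]
        if free:
            best = min(free, key=lambda i: delay[(u, entities[i])])
            load[best] += 1
            assignment[u] = entities[best]
        else:
            assignment[u] = "C"          # no capacity left: the cloud serves the user
    return assignment
```

For example, with a single entity on s1 and K = 1, the nearer user gets the edge entity and the other falls back to the cloud C.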
Step S2, establishing, according to the model, an objective function for the interaction-oriented service entity placement problem and describing it as an optimization problem.
Let xi denote the number of service entities placed on edge server si. The total placement cost is bounded:
Σ_{i=1..n} wi·xi ≤ Q
where Q is the upper bound on the service entity placement cost. Let Bi denote the upper bound on the physical resources of edge server si; since each service entity consumes b resources,
b·xi ≤ Bi for every i = 1, 2, ..., n.
Let X = [x1, x2, ..., xn] denote a service entity placement scheme, and let s(ui, X) denote the service entity that serves user ui under X. When two users ui and uj interact, the interaction passes through both of their service entities, so the interaction delay D(ui, uj, X) can be expressed as
D(ui, uj, X) = d(ui, s(ui, X)) + d(s(ui, X), s(uj, X)) + d(uj, s(uj, X))
Let fij denote the interaction frequency of users ui and uj, i.e., the number of interactions between these two users per unit time. In general, the interaction frequency satisfies the following conditions: first, fij = fji; second, fii = 0; finally, without loss of generality, the frequencies are normalized so that Σ_{i=1..m} Σ_{j=1..m} fij = 1.
the present invention seeks to minimize the weighted average interaction delay e (x) for m users, defined as follows:
the constraint conditions include:
and
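The delay and objective definitions above can be sketched in Python as follows; the names are illustrative, `serving` plays the role of s(·, X), and the `delay` dict must contain user-to-server and server-to-server entries (with d(s, s) = 0):

```python
def interaction_delay(ui, uj, serving, delay):
    """D(ui, uj, X): client-to-entity, entity-to-entity, entity-to-client delays summed."""
    si, sj = serving[ui], serving[uj]
    return delay[(ui, si)] + delay[(si, sj)] + delay[(uj, sj)]

def weighted_avg_delay(users, serving, delay, freq):
    """E(X): interaction delays of all user pairs, weighted by frequency f_ij."""
    total = 0.0
    for ui in users:
        for uj in users:
            f = freq.get((ui, uj), 0)
            if f:  # skip pairs that never interact (in particular f_ii = 0)
                total += f * interaction_delay(ui, uj, serving, delay)
    return total
```

With the frequencies normalized to sum to 1, the weighted sum is itself the weighted average.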
and step S3, solving the optimization problem by adopting a greedy heuristic algorithm to obtain a service entity placement scheme.
S3-1, algorithm initialization: all the service entities of the m users are initialized to be a cloud data center C; the current service entity placement scheme is denoted as X.
S3-2, adding a new service entity: for each edge server siIf its physical resources remain, then attempt at siA service entity is added, and the new service entity placement scheme is marked as X'.
S3-3, calculating new weighted average interaction time delay: and calculating the weighted average interaction time delay E (X ') under the new service entity placement scheme X', and updating the current service entity placement scheme X to X 'if E (X') is less than E (X).
S3-4, circulation: the steps S3-2 and S3-3 are repeated until all edge servers are traversed.
S3-5, outputting: and outputting the current service entity arrangement scheme X as a final service entity placement scheme.
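Steps S3-1 through S3-5 can be sketched as follows. This is one plausible reading in which the sweep over servers (S3-2 to S3-4) repeats until no single added entity lowers E(X), consistent with the worked example where two entities end up placed; all names and the dict-based inputs are illustrative assumptions:

```python
def evaluate(placed, users, delay, freq, K):
    """E(X) for a placement: min-delay association with capacity K, then the
    frequency-weighted sum of pairwise interaction delays."""
    load = [0] * len(placed)
    serving = {}
    for u in users:                      # each user takes the nearest entity with room
        free = [i for i, l in enumerate(load) if l < K]
        if free:
            i = min(free, key=lambda i: delay[(u, placed[i])])
            load[i] += 1
            serving[u] = placed[i]
        else:
            serving[u] = "C"             # otherwise the cloud data center serves the user
    total = 0.0
    for ui in users:
        for uj in users:
            f = freq.get((ui, uj), 0)
            if f:
                si, sj = serving[ui], serving[uj]
                total += f * (delay[(ui, si)] + delay[(si, sj)] + delay[(uj, sj)])
    return total

def greedy_placement(servers, users, delay, freq, w, b, B, Q, K):
    """Repeatedly add the one service entity that most reduces E(X), while
    respecting each server's resource budget B[s] and the total cost budget Q."""
    placed = []                          # current scheme X, one server id per entity
    while True:
        best_server, best_e = None, evaluate(placed, users, delay, freq, K)
        for s in servers:
            if (placed.count(s) + 1) * b > B[s]:        # server resources exhausted
                continue
            if sum(w[t] for t in placed) + w[s] > Q:    # placement cost bound Q
                continue
            e = evaluate(placed + [s], users, delay, freq, K)
            if e < best_e:
                best_server, best_e = s, e
        if best_server is None:
            return placed                # no improving addition remains
        placed.append(best_server)
```

The `delay` dict is assumed to hold user-to-server, server-to-server (d(s, s) = 0), user-to-cloud, and cloud-to-cloud entries.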
This embodiment also provides an edge computing device comprising a processor and a memory storing computer-executable instructions which, when executed by the processor, implement the steps of any method provided by this embodiment.
This embodiment also provides a computer-readable storage medium storing one or more programs which, when executed by an edge computing device, cause the device to implement the steps of any method provided by this embodiment.
The following describes, with reference to fig. 2 and without loss of generality, the edge computing network environment of this embodiment and the service entity placement scheme obtained by the heuristic algorithm provided herein:
fig. 2 provides a specific example containing 2 edge servers and 3 users. The upper bound on the service entity placement cost is Q = 6; the physical resource limits of the edge servers are B1 = B2 = 6; the resources required to run a single service entity are b = 2; the placement cost on s1 is w1 = 2 and on s2 is w2 = 3; a single service entity can serve K = 1 user at a time. The number on each link in fig. 2 is the delay of that link.
When the heuristic algorithm is executed, in the initialization stage the service entities of all 3 users are initialized to the cloud data center C.
The algorithm then tries to place a service entity on edge server s1. Among the three users, the delay between u2 and s1 is the shortest, so the newly added service entity would serve u2; the total weighted average interaction delay in this case evaluates to 107.6. The algorithm next tries to place a service entity on edge server s2. Among the three users, the delay between u2 and s2 is the shortest, so the newly added service entity would serve u2; the total weighted average interaction delay in this case evaluates to 106. The algorithm therefore places the 1st service entity on server s2.
Since the placement so far is within the cost bound Q = 6, another service entity can be placed. The algorithm tries to place a new service entity on edge server s1. For s2, the delay between u2 and s2 is still the shortest among the three users, so s2 continues to serve u2; for s1, the delay between u3 and s1 is shorter than that between u1 and s1, so s1 would serve u3. The total weighted average interaction delay in this case evaluates to 584. The algorithm then tries to place the service entity on edge server s2 and, likewise, computes the total weighted average interaction delay in that case, obtaining 48.4. The algorithm therefore places the 2nd service entity on server s2 as well.
Finally, it should be noted that the above embodiments illustrate rather than limit the technical solutions of the invention. Although the invention is described in detail with reference to these embodiments, those of ordinary skill in the art will understand that modifications and equivalents may be made to the embodiments without departing from the spirit and scope of the invention, which are covered by the claims.
Claims (5)
1. An interaction-oriented service entity placement method in an edge environment is characterized by comprising the following steps:
S1, establishing a system model comprising a time delay model, a placement cost model and a service entity association model;
S2, establishing, according to the model, an objective function for the interaction-oriented service entity placement problem and describing it as an optimization problem;
S3, solving the optimization problem with a greedy heuristic algorithm to obtain a service entity placement scheme.
2. The interaction-oriented service entity placement method in an edge environment according to claim 1, wherein step S1 comprises:
S1-1, establishing a time delay model: the network model of the edge computing environment comprises n edge servers and m users, where the set of edge servers is S = {s1, s2, ..., sn} and the set of users is U = {u1, u2, ..., um}; servers and users are connected through network equipment such as access points, base stations, and metropolitan area network routers; d(·,·) denotes the delay between two users or servers, e.g., d(u4, s3) denotes the delay between user u4 and server s3, and d(u4, C) denotes the delay between user u4 and the cloud data center C.
S1-2, establishing a placement cost model: the cost of placing a service entity on edge server si is wi; placing a service entity on any edge server consumes a fixed amount b of the server's physical resources; each service entity can serve at most K users simultaneously, so K can be regarded as the service capability of a service entity; when no suitable service entity exists, or the available service capability is insufficient to serve all users, the cloud data center C serves the remaining users, and because the cloud data center has abundant physical resources, it can serve any number of users simultaneously.
S1-3, establishing a service entity association model: when more than one service entity is placed in the edge computing environment, each user must select a service entity to serve its application requirements; in the invention, each user selects the service entity with the minimum delay among those whose service capability is not yet exhausted.
3. The method according to claim 2, wherein step S2 comprises:
S2-1, placement cost constraint: let xi denote the number of service entities placed on edge server si; the following constraint holds:
Σ_{i=1..n} wi·xi ≤ Q
where Q is the upper bound on the service entity placement cost; let Bi denote the physical resource limit of edge server si and b the physical resources required to run a single service entity; thus
b·xi ≤ Bi for every i = 1, 2, ..., n.
X = [x1, x2, ..., xn] denotes a service entity placement scheme.
S2-2, interaction delay and frequency: s(ui, X) denotes the service entity serving user ui; when two users ui and uj interact, the interaction passes through both of their service entities, so the interaction delay D(ui, uj, X) can be expressed as
D(ui, uj, X) = d(ui, s(ui, X)) + d(s(ui, X), s(uj, X)) + d(uj, s(uj, X))
Let fij denote the interaction frequency of users ui and uj, i.e., the number of interactions between these two users per unit time. In general, the interaction frequency satisfies: first, fij = fji; second, fii = 0; finally, without loss of generality, the frequencies are normalized so that Σ_{i=1..m} Σ_{j=1..m} fij = 1.
S2-3, optimization objective: minimize the weighted average interaction delay E(X) of the m users, defined as
E(X) = Σ_{i=1..m} Σ_{j=1..m} fij·D(ui, uj, X)
subject to
Σ_{i=1..n} wi·xi ≤ Q and b·xi ≤ Bi for every i = 1, 2, ..., n.
4. the interaction-oriented optimization problem under the edge environment of claim 3, wherein the heuristic algorithm designed in the step S3 comprises:
s3-1, algorithm initialization: all the service entities of the m users are initialized to be a cloud data center C; the current service entity placement scheme is denoted as X.
S3-2, adding newThe service entity: for each edge server siIf its physical resources remain, then attempt at siA service entity is added, and the new service entity placement scheme is marked as X'.
S3-3, calculating new weighted average interaction time delay: and calculating the weighted average interaction time delay E (X ') under the new service entity placement scheme X', and updating the current service entity placement scheme X to X 'if E (X') is less than E (X).
S3-4, circulation: the steps S3-2 and S3-3 are repeated until all edge servers are traversed.
S3-5, outputting: and outputting the current service entity arrangement scheme X as a final service entity placement scheme.
5. A computer device, comprising: one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and when executed by the processors implement the steps of the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011029031.1A CN112202603A (en) | 2020-09-25 | 2020-09-25 | Interactive service entity placement method in edge environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011029031.1A CN112202603A (en) | 2020-09-25 | 2020-09-25 | Interactive service entity placement method in edge environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112202603A true CN112202603A (en) | 2021-01-08 |
Family
ID=74007024
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011029031.1A Pending CN112202603A (en) | 2020-09-25 | 2020-09-25 | Interactive service entity placement method in edge environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112202603A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190373016A1 (en) * | 2018-05-29 | 2019-12-05 | Cloudflare, Inc. | Providing cross site request forgery protection at an edge server |
CN110968920A (en) * | 2019-11-29 | 2020-04-07 | 江苏方天电力技术有限公司 | Method for placing chain type service entity in edge computing and edge computing equipment |
CN111580978A (en) * | 2020-05-12 | 2020-08-25 | 中国联合网络通信集团有限公司 | Edge computing server layout method and task allocation method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106056529B (en) | Method and equipment for training convolutional neural network for picture recognition | |
Sheikhalishahi et al. | A multi-dimensional job scheduling | |
US20080104609A1 (en) | System and method for load balancing distributed simulations in virtual environments | |
US10130885B1 (en) | Viewport selection system | |
CN111259019B (en) | Resource allocation method, device, equipment and storage medium | |
Kim et al. | Enabling Digital Earth simulation models using cloud computing or grid computing–two approaches supporting high-performance GIS simulation frameworks | |
KR101882383B1 (en) | A container resource allocation device and method in virtual desktop infrastructure | |
Wang et al. | DCCP: an effective data placement strategy for data-intensive computations in distributed cloud computing systems | |
CN116431282A (en) | Cloud virtual host server management method, device, equipment and storage medium | |
CN111200525A (en) | Network shooting range scene re-engraving method and system, electronic equipment and storage medium | |
JP2017507395A (en) | Multi-mode gaming server | |
CN108289115B (en) | Information processing method and system | |
CN112221151B (en) | Map generation method and device, computer equipment and storage medium | |
WO2012047310A1 (en) | Levering geo-ip information to select default avatar | |
CN112202603A (en) | Interactive service entity placement method in edge environment | |
US8589475B2 (en) | Modeling a cloud computing system | |
Tseng et al. | A discrete electromagnetism-like mechanism for parallel machine scheduling under a grade of service provision | |
CN114885199B (en) | Real-time interaction method, device, electronic equipment, storage medium and system | |
CN110287025A (en) | A kind of resource allocation methods, device and equipment | |
US10104173B1 (en) | Object subscription rule propagation | |
US20220050729A1 (en) | Clustering Processes Using Traffic Data | |
CN114047918A (en) | Task processing method, device, equipment, storage medium and product | |
CN109617954B (en) | Method and device for creating cloud host | |
CN111552715A (en) | User query method and device | |
CN116166202B (en) | Method, device, equipment and medium for placing copies in big data environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210108