CN112491978A - Scheduling method and device - Google Patents

Scheduling method and device

Info

Publication number
CN112491978A
CN112491978A (application CN202011261448.0A)
Authority
CN
China
Prior art keywords
rendering server
terminal
target
server cluster
target service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011261448.0A
Other languages
Chinese (zh)
Other versions
CN112491978B (en)
Inventor
翟颖奇
冯毅
李洁
邓煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202011261448.0A priority Critical patent/CN112491978B/en
Publication of CN112491978A publication Critical patent/CN112491978A/en
Application granted granted Critical
Publication of CN112491978B publication Critical patent/CN112491978B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 - Server selection for load balancing
    • H04L 67/101 - Server selection for load balancing based on network conditions
    • H04L 67/14 - Session management
    • H04L 67/141 - Setup of application sessions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a scheduling method and device, relates to the field of communications technologies, and aims to solve the problem of how to allocate the rendering server with the lowest time delay to a terminal. The method comprises the following steps: receiving a service application reported by a terminal; determining a first target rendering server according to a pre-stored network architecture of a rendering server cluster, the area where the terminal is located, and the target service; sending a first message to the terminal; and sending a second message to the first target rendering server. The service application comprises the target service and the area where the terminal is located. The target service is the service the terminal applies for. The first target rendering server is the rendering server, among the second target rendering servers, with the lowest time delay to the terminal. The second target rendering server is a server in the rendering server cluster that can provide the target service. The first message is used for instructing the terminal to connect to the first target rendering server. The second message is used for instructing the first target rendering server to provide the target service to the terminal.

Description

Scheduling method and device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a scheduling method and apparatus.
Background
In recent years, virtual reality technology has gained increasing acceptance. Virtual reality technology requires rendering. In the prior art, the rendering methods used in virtual reality technologies are generally classified into local rendering and cloud rendering. Local rendering performs three-dimensional (3D) rendering with the device's own resources and therefore places high demands on device performance. Cloud rendering performs rendering on a dedicated rendering server and then transmits the rendered image to the terminal, which requires a relatively low time delay between the terminal and the rendering server.
Most existing cloud rendering systems contain multiple rendering servers and generally allocate the nearest rendering server to a terminal with a rendering requirement. However, the nearest rendering server is not necessarily the one, among the multiple rendering servers of the rendering system, with the lowest time delay to that terminal. As a result, the optimal service may not be provided to the user, which affects user experience.
Disclosure of Invention
The invention provides a scheduling method and a scheduling device for solving the problem of how to allocate the rendering server with the lowest time delay to a terminal. In order to achieve this purpose, the invention adopts the following technical scheme:
In a first aspect, the present invention provides a scheduling method, including: first, a service application reported by a terminal is received. Then, a first target rendering server is determined according to a pre-stored network architecture of the rendering server cluster, the area where the terminal is located, and the target service. Then, a first message is sent to the terminal. Finally, a second message is sent to the first target rendering server. The service application includes the target service and the area where the terminal is located. The target service is the service the terminal applies for. The first target rendering server is the rendering server, among the second target rendering servers, with the lowest time delay to the terminal. The second target rendering server is a server in the rendering server cluster that can provide the target service. The first message is used for instructing the terminal to connect to the first target rendering server. The second message is used for instructing the first target rendering server to provide the target service to the terminal.
Since time delay depends on the end-to-end distance in the network architecture, and the shortest end-to-end distance in the network architecture is not necessarily the shortest physical distance, an existing scheduling method that allocates a rendering server to a terminal only according to the physical distance between them may fail to allocate the rendering server with the lowest time delay. The present invention instead allocates rendering servers to terminals according to the network architecture rather than the physical distance between the terminal and the rendering server. Therefore, compared with the existing scheduling method, the probability of allocating the rendering server with the lowest time delay to the terminal is improved, which solves the problem of how to allocate that rendering server. A toy numerical example follows.
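The following minimal sketch illustrates this point numerically. Every topology, distance, and delay value below is invented purely for illustration and is not taken from the invention.

```python
# Toy illustration only: every number below is an assumption.
# It shows that the physically nearest rendering server is not necessarily
# the one with the lowest end-to-end network delay.
candidates = {
    # physical distance to the terminal (km) and network delay to the terminal (ms)
    "server_A": {"physical_km": 2.0,  "network_delay_ms": 18.0},  # nearby, but reached via the core network
    "server_B": {"physical_km": 15.0, "network_delay_ms": 6.0},   # farther away, but one hop in the network architecture
}

nearest = min(candidates, key=lambda s: candidates[s]["physical_km"])
lowest_delay = min(candidates, key=lambda s: candidates[s]["network_delay_ms"])

print("nearest:", nearest)            # server_A
print("lowest delay:", lowest_delay)  # server_B
# Scheduling by physical distance alone would pick server_A and miss the
# lowest-delay server, which is what the method above is designed to avoid.
```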
In a second aspect, the present invention provides a scheduling apparatus, including: a receiving unit, a first determining unit, a first sending unit, and a second sending unit. The receiving unit is configured to receive a service application reported by a terminal, where the service application includes a target service and the area where the terminal is located, and the target service is the service the terminal applies for. The first determining unit is configured to determine a first target rendering server according to a pre-stored network architecture of the rendering server cluster, the area where the terminal is located, and the target service, where the first target rendering server is the rendering server, among the second target rendering servers, with the lowest time delay to the terminal, and the second target rendering server is a server in the rendering server cluster that can provide the target service. The first sending unit is configured to send a first message to the terminal, where the first message is used for instructing the terminal to connect to the first target rendering server. The second sending unit is configured to send a second message to the first target rendering server, where the second message is used for instructing the first target rendering server to provide the target service to the terminal.
In a third aspect, the present invention provides a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a scheduling apparatus, cause the scheduling apparatus to perform the scheduling method as described in the first aspect.
In a fourth aspect, the present invention provides a computer program product comprising instructions which, when run on a scheduling apparatus, cause the scheduling apparatus to perform the scheduling method according to the first aspect.
In a fifth aspect, the present invention provides a scheduling apparatus, including: a processor and a memory, the memory being arranged to store a program, the processor calling the program stored by the memory to perform the scheduling method as described in the first aspect.
Reference may be made to the detailed description of the first aspect and various implementations thereof for specific descriptions of the second to fifth aspects and various implementations thereof in the present disclosure; moreover, the beneficial effects of the second aspect to the fifth aspect and the various implementation manners thereof may refer to the beneficial effect analysis of the first aspect and the various implementation manners thereof, and are not described herein again.
These and other aspects of the invention will be more readily apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a communication system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a rendering server cluster according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a scheduling device according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a scheduling method according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating a scheduling method according to an embodiment of the present invention;
fig. 6 is a flowchart illustrating a scheduling method according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a scheduling apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
The terms "first" and "second" and the like in the description of the present invention and the drawings are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "comprising" and "having" and any variations thereof as referred to in the description of the invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be noted that in the description of the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations or illustrations. Any embodiment or design described as "exemplary" or "e.g.," an embodiment of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the present invention, the meaning of "a plurality" means two or more unless otherwise specified.
The embodiment of the invention provides a scheduling method, which is used for solving the problem of how to allocate the rendering server with the lowest time delay to a terminal. The scheduling method is suitable for a communication system. Fig. 1 shows an architecture of the communication system; as shown in fig. 1, it comprises: a scheduling apparatus 100, a rendering server cluster 200, and a terminal 300 (which may also be referred to as a virtual reality terminal). The scheduling apparatus 100 may be connected to the rendering server cluster 200 and the terminal 300, and the rendering server cluster 200 may be connected to the terminal 300.
The illustrated structure of the embodiments of the present invention does not limit the communication system. It may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The rendering server cluster 200 includes a plurality of rendering server clusters, and each of these clusters includes one or more rendering servers (not shown). Fig. 2 shows an architecture of the rendering server cluster 200; as shown in fig. 2, the rendering server cluster 200 includes a core-side rendering server cluster, an area-side rendering server cluster, and a base-station-side rendering server cluster. It is worth mentioning that, besides the rendering function, a rendering server also has a resource storage function and other functions.
The structure illustrated in the embodiment of the present invention does not limit the rendering server cluster 200. It may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Fig. 3 shows a hardware configuration of the scheduling apparatus 100 described above. As shown in fig. 3, the scheduling apparatus 100 may include a processor 101, a communication line 102, a memory 103, and a communication interface 104.
The illustrated structure of the embodiment of the present invention does not limit the scheduling apparatus 100. It may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 101 may include one or more processing units, such as: the processor 101 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a Neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a decision maker that directs the various components of the scheduling apparatus 100 to work in concert as instructed; it is the neural center and command center of the scheduling apparatus 100. The controller generates operation control signals according to the instruction operation code and timing signals, thereby controlling instruction fetching and instruction execution.
A memory may also be provided in the processor 101 for storing instructions and data. In some embodiments, the memory in the processor is a cache that may hold instructions or data the processor has just used or recycled. If the processor needs to reuse the instructions or data, it can call them directly from this memory, which avoids repeated accesses and reduces processor waiting time, thereby improving system efficiency.
In some embodiments, the processor 101 may include an interface. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
A communication line 102 for transmitting information between the processor 101 and the memory 103.
The memory 103 is used for storing computer-executable instructions, and the processor 101 controls their execution.
The memory 103 may be separate and coupled to the processor via the communication line 102. The memory 103 may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM). It should be noted that the memory of the systems and devices described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
A communication interface 104 for communicating with other devices or a communication network. The communication network may be an ethernet, a Radio Access Network (RAN), or a Wireless Local Area Network (WLAN), a Bluetooth (BT), a Global Navigation Satellite System (GNSS), a Frequency Modulation (FM), a Near Field Communication (NFC), an infrared technology (infrared, IR), or the like.
In a specific implementation, the embodiment of the present invention does not specifically limit the specific form of the scheduling apparatus 100.
The following describes a scheduling method provided by an embodiment of the present invention with reference to the scheduling apparatus 100 shown in fig. 3.
As shown in fig. 4, a scheduling method provided in the embodiment of the present invention includes:
s401, the scheduling device 100 receives the service application reported by the terminal.
The service application comprises a target service and an area where the terminal is located, and the target service is a service to be applied by the terminal.
Alternatively, the service application may be in an internet protocol version 4 (IPv4) datagram format.
Optionally, the service application may further include: time delay required by the target service, bandwidth required by the target service and network type required by the target service.
Optionally, the service application may further include: user information.
After receiving the service application reported by the terminal, the scheduling device 100 may verify whether the service application is complete and verify whether the user corresponding to the user information in the service application has an application right.
When the service application is incomplete or the user corresponding to the user information in the service application does not have an application right, the scheduling device 100 may refuse to allocate a rendering server to the terminal and return a failure response to the terminal, where the failure response may include a service application failure reason (e.g., the service application is incomplete, no application right, etc.).
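As a non-normative illustration of the checks in S401, the sketch below models one possible shape of the service application and the completeness and permission verification described above. All field names, function names, and the permission model (e.g. ServiceApplication, an authorized-user set) are assumptions made for illustration and are not defined by this embodiment.

```python
# Illustrative sketch only: field names and the permission model are assumptions.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class ServiceApplication:
    target_service: str                             # service the terminal applies for
    terminal_area: str                              # area where the terminal is located
    required_delay_ms: Optional[float] = None       # optional fields, per the description
    required_bandwidth_mbps: Optional[float] = None
    required_network_type: Optional[str] = None     # e.g. "fixed network" or "5G"
    user_info: Optional[str] = None

def validate_application(app: ServiceApplication,
                         authorized_users: Set[str]) -> Optional[str]:
    """Return a failure reason if the application should be rejected, else None."""
    if not app.target_service or not app.terminal_area:
        return "the service application is incomplete"
    if app.user_info is not None and app.user_info not in authorized_users:
        return "no application right"
    return None
```

A scheduling device following this reading would return the failure reason to the terminal instead of allocating a rendering server, as described above.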
Alternatively, the scheduling apparatus 100 may periodically acquire the resource list in each rendering server from the rendering server cluster.
S402, the scheduling device 100 determines a first target rendering server according to a pre-stored network architecture of the rendering server cluster, a terminal located area and a target service.
And the first target rendering server is the rendering server, among the second target rendering servers, with the lowest time delay to the terminal.
The second target rendering server is a server in the rendering server cluster that can provide the target service.
Specifically, S402 may include:
the scheduling device 100 determines a first target rendering server cluster according to a pre-stored network architecture of the rendering server cluster, a terminal located area and a target service.
And the first target rendering server cluster is the rendering server cluster, among the second target rendering server clusters, with the lowest time delay to the terminal. The second target rendering server cluster is a rendering server cluster, among the rendering server clusters, that can provide the target service.
It is worth mentioning that the scheduling apparatus 100 may determine whether a rendering server (or rendering server cluster) can provide the target service according to the resource list in the rendering server.
Specifically, the step in which the scheduling device 100 determines the first target rendering server cluster according to the pre-stored network architecture of the rendering server cluster, the area where the terminal is located, and the target service may include the following steps:
the scheduling device 100 determines a base station side rendering server cluster of the terminal, an area side rendering server cluster of the terminal, a core side rendering server cluster of the terminal, and a cross-area side rendering server cluster of the terminal according to a pre-stored network architecture of the rendering server cluster and an area where the terminal is located.
The scheduling device 100 determines that the base station side rendering server cluster of the terminal is the first target rendering server cluster when the base station side rendering server cluster of the terminal can provide the target service.
The scheduling device 100 determines that the area-side rendering server cluster of the terminal is the first target rendering server cluster when the base-station-side rendering server cluster of the terminal cannot provide the target service and the area-side rendering server cluster of the terminal can provide the target service.
The scheduling device 100 determines that the core-side rendering server cluster of the terminal is the first target rendering server cluster under the condition that the base-side rendering server cluster of the terminal cannot provide the target service, the area-side rendering server cluster of the terminal cannot provide the target service, and the core-side rendering server cluster of the terminal can provide the target service.
The scheduling device 100 determines that the cross-regional rendering server cluster of the terminal is the first target rendering server cluster under the condition that the base station rendering server cluster of the terminal cannot provide the target service, the regional rendering server cluster of the terminal cannot provide the target service, the core rendering server cluster of the terminal cannot provide the target service, and the cross-regional rendering server cluster of the terminal can provide the target service.
Optionally, the scheduling device 100 may send a failure response to the terminal, notifying it that the rendering server cluster cannot provide the target service, when the base station side rendering server cluster of the terminal, the area side rendering server cluster of the terminal, the core side rendering server cluster of the terminal, and the cross-area side rendering server cluster of the terminal all cannot provide the target service.
The scheduling apparatus 100 determines a first target rendering server from a first cluster of target rendering servers.
Specifically, when only one second target rendering server exists in the first target rendering server cluster, it is determined that the second target rendering server in the first target rendering server cluster is the first target rendering server.
And under the condition that a plurality of second target rendering servers exist in the first target rendering server cluster, determining the second target rendering server with the lowest time delay between the second target rendering server and the terminal in the first target rendering server cluster as the first target rendering server.
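The tier-by-tier selection of S402 can be summarized in a short sketch. Only the fallback order (base station side, then area side, then core side, then cross-area side) and the lowest-delay choice within the selected cluster come from the description; the data structures, the resource-list check, and the per-server delay table are assumptions made for illustration.

```python
# Illustrative sketch of S402; only the tier order and the lowest-delay rule
# follow the description, everything else is an assumption.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

@dataclass
class RenderingServer:
    server_id: str
    services: Set[str]                      # resource list: services this server can provide

    def can_provide(self, service: str) -> bool:
        return service in self.services

@dataclass
class Cluster:
    servers: List[RenderingServer] = field(default_factory=list)

    def can_provide(self, service: str) -> bool:
        return any(s.can_provide(service) for s in self.servers)

def select_first_target_server(clusters_by_tier: Dict[str, Cluster],
                               target_service: str,
                               delay_to_terminal_ms: Dict[str, float]) -> Optional[str]:
    """Fall back tier by tier; within the chosen cluster pick the candidate
    (the "second target rendering server") with the lowest delay to the terminal."""
    for tier in ("base_station_side", "area_side", "core_side", "cross_area_side"):
        cluster = clusters_by_tier.get(tier)
        if cluster is None or not cluster.can_provide(target_service):
            continue
        candidates = [s for s in cluster.servers if s.can_provide(target_service)]
        return min(candidates,
                   key=lambda s: delay_to_terminal_ms.get(s.server_id, float("inf"))).server_id
    return None  # no cluster can provide the target service: return a failure response
```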
S403, the scheduling device 100 sends a first message to the terminal.
The first message is used for indicating the terminal to be connected with the first target rendering server.
It is noted that the first message may include the address of the first target rendering server and identity authentication information. Correspondingly, the terminal can carry the identity authentication information when applying for the target service to the first target rendering server, and receives the target service after passing identity authentication.
Under the condition that the service application further includes the time delay required by the target service, S403 specifically includes:
the scheduling device 100 sends the first message to the terminal when the delay between the terminal and the first target rendering server is less than the delay required by the target service.
Correspondingly, when the time delay between the terminal and the first target rendering server is larger than the time delay required by the target service, a failure response is returned to the terminal.
S404, the scheduling device 100 sends a second message to the first target rendering server.
And the second message is used for instructing the first target rendering server to provide the target service for the terminal.
It is noted that the second message may include the identity authentication information in the first message, so that the first target rendering server performs identity authentication on the terminal, and after passing the identity authentication, the first target rendering server provides the target service to the terminal.
When the service application further includes the time delay required by the target service, S404 is specifically:
the scheduling device 100 sends the second message to the first target rendering server when the delay between the terminal and the first target rendering server is smaller than the delay required by the target service.
It can be seen from S401-S404 that, in the embodiment of the present invention, the rendering server is allocated to the terminal according to the network architecture rather than according to the physical distance between the terminal and the rendering server. Therefore, compared with the existing scheduling method, this improves the probability of allocating the rendering server with the lowest time delay to the terminal and solves the problem of how to allocate that rendering server.
Referring to fig. 4, as shown in fig. 5, the scheduling method provided in the embodiment of the present invention may further include:
s405, the scheduling device 100 determines the type of the network slice required by the terminal according to the time delay required by the target service, the bandwidth required by the target service, the area where the terminal is located and the area where the first target rendering server is located.
Specifically, S405 may include:
the scheduling device 100 determines the delay type of the network slice required by the terminal according to the delay required by the target service.
The latency types of the network slice can be divided into high latency (500 milliseconds (ms)), medium latency (200 ms), and low latency (70 ms).
The scheduling device 100 determines the bandwidth type of the network slice required by the terminal according to the bandwidth required by the target service.
Among them, the bandwidth types of the network slice can be divided into high bandwidth (1000 megabits per second (Mbps)), medium bandwidth (500 Mbps), and low bandwidth (200 Mbps).
The scheduling device 100 determines the region type of the network slice required by the terminal according to the region where the terminal is located and the region where the first target rendering server is located.
The region type of the network slice can be divided into a base station, a region and a core.
Optionally, S405 may also be:
the scheduling device 100 determines the type of the network slice required by the terminal according to the time delay required by the target service, the bandwidth required by the target service, the type of the network required by the target service, the area where the terminal is located, and the area where the first target rendering server is located.
The scheduling device 100 may determine the network type of the network slice required by the terminal according to the network type required by the target service.
The network types of the network slice can be divided into a fixed network and the fifth-generation mobile communication technology (5th generation wireless systems, 5G).
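A compact way to read S405 is as independent classifications of latency, bandwidth, region, and optionally network type. The category labels and reference values (500 ms / 200 ms / 70 ms, 1000 / 500 / 200 Mbps, base station / area / core, fixed network / 5G) come from the description; the concrete mapping rules, including the region decision, are assumptions made for illustration.

```python
# Illustrative sketch of S405; the mapping rules below are assumptions, only the
# category labels and reference values come from the description.
from typing import Optional

def classify_slice(required_delay_ms: float,
                   required_bandwidth_mbps: float,
                   terminal_area: str,
                   server_area: str,
                   required_network_type: Optional[str] = None) -> dict:
    # Latency type: the service's delay requirement decides which slice class suffices.
    if required_delay_ms >= 500:
        delay_type = "high latency (500 ms)"
    elif required_delay_ms >= 200:
        delay_type = "medium latency (200 ms)"
    else:
        delay_type = "low latency (70 ms)"

    # Bandwidth type, using the reference values from the description.
    if required_bandwidth_mbps >= 1000:
        bandwidth_type = "high bandwidth (1000 Mbps)"
    elif required_bandwidth_mbps >= 500:
        bandwidth_type = "medium bandwidth (500 Mbps)"
    else:
        bandwidth_type = "low bandwidth (200 Mbps)"

    # Region type (base station / area / core): the description derives it from the
    # terminal's area and the first target rendering server's area; the concrete
    # rule is not given, so a same-area check stands in for it here.
    region_type = "base station" if terminal_area == server_area else "area or core"

    slice_type = {"delay": delay_type, "bandwidth": bandwidth_type, "region": region_type}
    if required_network_type is not None:   # e.g. "fixed network" or "5G"
        slice_type["network"] = required_network_type
    return slice_type
```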
Optionally, the scheduling apparatus 100 may also pre-allocate resources according to the performance priority.
The following describes the scheduling method provided by the embodiment of the present invention with reference to fig. 6 by taking an example of a terminal applying for a target service.
S601, the terminal 300 reports the service application to the scheduling device 100.
Correspondingly, the scheduling device 100 receives the service application reported by the terminal 300.
S602, the scheduling device 100 determines a first target rendering server according to a pre-stored network architecture of the rendering server cluster, the area where the terminal 300 is located and the target service.
Specifically, S602 may refer to the description of S402, which is not described herein again.
S603, the scheduling apparatus 100 sends the first message to the terminal 300.
Specifically, S603 may refer to the description of S403, and will not be described herein again.
S604, the scheduling device 100 sends a second message to the first target rendering server 201.
Specifically, S604 may refer to the description of S404, which is not described herein again.
S605, the terminal 300 applies for a service to the first target rendering server 201.
S606, the first target rendering server 201 establishes a link with the terminal 300, and starts providing a service.
The scheme provided by the embodiment of the invention is mainly introduced from the perspective of the method. To implement the above functions, it includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that, for the exemplary units and algorithm steps described in connection with the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The embodiment of the present invention may perform functional module division on the scheduling apparatus 100 according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, the division of the modules in the embodiment of the present invention is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
An embodiment of the present invention provides a scheduling apparatus 100, configured to execute the foregoing scheduling method, as shown in fig. 7, where the scheduling apparatus 100 includes: a receiving unit 701, a first determining unit 702, a first transmitting unit 703 and a second transmitting unit 704.
The receiving unit 701 is configured to receive a service application reported by a terminal, where the service application includes a target service and an area where the terminal is located, and the target service is a service to be applied by the terminal. For example, in conjunction with fig. 4, the receiving unit 701 may be configured to perform S401.
A first determining unit 702, configured to determine a first target rendering server according to a pre-stored network architecture of a rendering server cluster, the area where the terminal is located, and the target service, where the first target rendering server is the rendering server, among the second target rendering servers, with the lowest time delay to the terminal, and the second target rendering server is a server in the rendering server cluster that can provide the target service. For example, in conjunction with fig. 4, the first determining unit 702 may be configured to perform S402.
A first sending unit 703 is configured to send a first message to the terminal, where the first message is used to instruct the terminal to connect to the first target rendering server. For example, in conjunction with fig. 4, the first sending unit 703 may be configured to perform S403.
A second sending unit 704, configured to send a second message to the first target rendering server, where the second message is used to instruct the first target rendering server to provide the target service to the terminal. For example, in conjunction with fig. 4, the second sending unit 704 may be configured to perform S404.
The first determining unit 702 may include: a first determining subunit and a second determining subunit.
The first determining subunit is configured to determine a first target rendering server cluster according to a pre-stored network architecture of the rendering server cluster, the area where the terminal is located, and the target service, where the first target rendering server cluster is the rendering server cluster, among the second target rendering server clusters, with the lowest time delay to the terminal, and the second target rendering server cluster is a rendering server cluster, among the rendering server clusters, that can provide the target service.
A second determining subunit, configured to determine the first target rendering server from the first cluster of target rendering servers.
The first determining unit is specifically configured to:
and determining a base station side rendering server cluster of the terminal, an area side rendering server cluster of the terminal, a core side rendering server cluster of the terminal and a cross-area side rendering server cluster of the terminal according to a pre-stored network architecture of the rendering server cluster and an area where the terminal is located.
And determining the base station side rendering server cluster of the terminal as a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal can provide the target service.
And determining the area side rendering server cluster of the terminal as a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal cannot provide the target service and the area side rendering server cluster of the terminal can provide the target service.
And determining the core side rendering server cluster of the terminal as a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal cannot provide the target service, the area side rendering server cluster of the terminal cannot provide the target service, and the core side rendering server cluster of the terminal can provide the target service.
And determining the cross-area side rendering server cluster of the terminal as the first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal cannot provide the target service, the area side rendering server cluster of the terminal cannot provide the target service, the core side rendering server cluster of the terminal cannot provide the target service, and the cross-area side rendering server cluster of the terminal can provide the target service.
Optionally, the service application may further include: the time delay required by the target service and the bandwidth required by the target service.
As shown in fig. 7, the scheduling apparatus 100 may further include: a second determination unit 705.
A second determining unit 705, configured to determine the type of the network slice required by the terminal according to the time delay required by the target service, the bandwidth required by the target service, the area where the terminal is located, and the area where the first target rendering server is located. For example, in conjunction with fig. 5, the second determining unit 705 may be configured to perform S405.
The first sending unit 703 is specifically configured to:
and under the condition that the time delay between the terminal and the first target rendering server is less than the time delay required by the target service, sending a first message to the terminal.
The second sending unit 704 is specifically configured to:
and sending a second message to the first target rendering server under the condition that the time delay between the terminal and the first target rendering server is less than the time delay required by the target service.
Specifically, as shown in fig. 3 and fig. 7, the functions of the receiving unit 701, the first determining unit 702, the first sending unit 703, the second sending unit 704, and the second determining unit 705 in fig. 7 are implemented by the processor 101 in fig. 3 calling, via the communication line 102, the program stored in the memory 103 to execute the scheduling method described above.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented using a software program, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The processes or functions according to embodiments of the present invention occur, in whole or in part, when computer-executable instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). Computer-readable storage media can be any available media that can be accessed by a computer or can comprise one or more data storage devices, such as servers, data centers, and the like, that can be integrated with the media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided by the present invention, it should be understood that the disclosed system, device and method can be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A method of scheduling, comprising:
receiving a service application reported by a terminal, wherein the service application comprises a target service and an area where the terminal is located, and the target service is a service to be applied by the terminal;
determining a first target rendering server according to a network architecture of a pre-stored rendering server cluster, an area where the terminal is located and the target service, wherein the first target rendering server is the rendering server, among second target rendering servers, with the lowest time delay to the terminal, and a second target rendering server is a server in the rendering server cluster which can provide the target service;
sending a first message to the terminal, wherein the first message is used for indicating the terminal to be connected with the first target rendering server;
and sending a second message to the first target rendering server, wherein the second message is used for instructing the first target rendering server to provide the target service for the terminal.
2. The scheduling method according to claim 1, wherein the determining a first target rendering server according to a pre-stored network architecture of a rendering server cluster, an area where the terminal is located, and the target service includes:
determining a first target rendering server cluster according to a network architecture of a pre-stored rendering server cluster, an area where the terminal is located and the target service, wherein the first target rendering server cluster is the rendering server cluster, among second target rendering server clusters, with the lowest time delay to the terminal, and a second target rendering server cluster is a rendering server cluster, among the rendering server clusters, which can provide the target service;
determining the first target rendering server from the first cluster of target rendering servers.
3. The scheduling method according to claim 2, wherein the determining a first target rendering server cluster according to a pre-stored network architecture of the rendering server cluster, an area where the terminal is located, and the target service includes:
determining a base station side rendering server cluster of the terminal, an area side rendering server cluster of the terminal, a core side rendering server cluster of the terminal and a cross-area side rendering server cluster of the terminal according to a pre-stored network architecture of the rendering server cluster and an area where the terminal is located;
determining that the base station side rendering server cluster of the terminal is a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal can provide the target service;
determining the area side rendering server cluster of the terminal as a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal cannot provide the target service and the area side rendering server cluster of the terminal can provide the target service;
determining that a core side rendering server cluster of the terminal is a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal cannot provide the target service, the area side rendering server cluster of the terminal cannot provide the target service, and the core side rendering server cluster of the terminal can provide the target service;
and determining the cross-regional rendering server cluster of the terminal as a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal cannot provide the target service, the regional side rendering server cluster of the terminal cannot provide the target service, the core side rendering server cluster of the terminal cannot provide the target service, and the cross-regional side rendering server cluster of the terminal can provide the target service.
4. The scheduling method according to any one of claims 1-3, wherein the service application further comprises: the time delay required by the target service and the bandwidth required by the target service;
the scheduling method further comprises the following steps:
and determining the type of the network slice required by the terminal according to the time delay required by the target service, the bandwidth required by the target service, the area where the terminal is located and the area where the first target rendering server is located.
5. The scheduling method of claim 4, wherein the sending the first message to the terminal comprises:
under the condition that the time delay between the terminal and the first target rendering server is smaller than the time delay required by the target service, sending a first message to the terminal;
the sending a second message to the first target rendering server comprises:
and sending a second message to the first target rendering server under the condition that the time delay between the terminal and the first target rendering server is less than the time delay required by the target service.
6. A scheduling apparatus, comprising: the device comprises a receiving unit, a first determining unit, a first sending unit and a second sending unit;
the receiving unit is configured to receive a service application reported by a terminal, where the service application includes a target service and an area where the terminal is located, and the target service is a service to be applied by the terminal;
the first determining unit is configured to determine a first target rendering server according to a pre-stored network architecture of a rendering server cluster, an area where the terminal is located, and the target service, wherein the first target rendering server is the rendering server, among second target rendering servers, with the lowest time delay to the terminal, and a second target rendering server is a server in the rendering server cluster that can provide the target service;
the first sending unit is configured to send a first message to the terminal, where the first message is used to instruct the terminal to connect to the first target rendering server;
the second sending unit is configured to send a second message to the first target rendering server, where the second message is used to instruct the first target rendering server to provide the target service to the terminal.
7. The scheduling apparatus of claim 6, wherein the first determining unit comprises: a first determining subunit and a second determining subunit;
the first determining subunit is configured to determine a first target rendering server cluster according to a pre-stored network architecture of the rendering server cluster, an area where the terminal is located, and the target service, wherein the first target rendering server cluster is the rendering server cluster, among second target rendering server clusters, with the lowest time delay to the terminal, and the second target rendering server cluster is a rendering server cluster, among the rendering server clusters, that can provide the target service;
the second determining subunit is configured to determine the first target rendering server from the first cluster of target rendering servers.
8. The scheduling device of claim 7, wherein the first determining unit is specifically configured to:
determining a base station side rendering server cluster of the terminal, an area side rendering server cluster of the terminal, a core side rendering server cluster of the terminal and a cross-area side rendering server cluster of the terminal according to a pre-stored network architecture of the rendering server cluster and an area where the terminal is located;
determining that the base station side rendering server cluster of the terminal is a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal can provide the target service;
determining the area side rendering server cluster of the terminal as a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal cannot provide the target service and the area side rendering server cluster of the terminal can provide the target service;
determining that a core side rendering server cluster of the terminal is a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal cannot provide the target service, the area side rendering server cluster of the terminal cannot provide the target service, and the core side rendering server cluster of the terminal can provide the target service;
and determining the cross-regional rendering server cluster of the terminal as a first target rendering server cluster under the condition that the base station side rendering server cluster of the terminal cannot provide the target service, the regional side rendering server cluster of the terminal cannot provide the target service, the core side rendering server cluster of the terminal cannot provide the target service, and the cross-regional side rendering server cluster of the terminal can provide the target service.
9. The scheduling apparatus according to any one of claims 6-8, wherein the service application further comprises: the time delay required by the target service and the bandwidth required by the target service;
the scheduling device further includes: a second determination unit;
and the second determining unit is used for determining the type of the network slice required by the terminal according to the time delay required by the target service, the bandwidth required by the target service, the area where the terminal is located and the area where the first target rendering server is located.
10. The scheduling device of claim 9, wherein the first sending unit is specifically configured to:
under the condition that the time delay between the terminal and the first target rendering server is smaller than the time delay required by the target service, sending a first message to the terminal;
the second sending unit is specifically configured to:
and sending a second message to the first target rendering server under the condition that the time delay between the terminal and the first target rendering server is less than the time delay required by the target service.
11. A scheduling apparatus, characterized in that the scheduling apparatus comprises: one or more processors, and a memory;
the memory is coupled with the one or more processors; the memory is configured to store computer program code comprising instructions which, when executed by the one or more processors, cause the scheduling apparatus to perform the scheduling method of any of claims 1-5.
12. A computer-readable storage medium comprising instructions that, when executed on a scheduling apparatus, cause the scheduling apparatus to perform the scheduling method of any one of claims 1-5.
CN202011261448.0A 2020-11-12 2020-11-12 Scheduling method and device Active CN112491978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011261448.0A CN112491978B (en) 2020-11-12 2020-11-12 Scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011261448.0A CN112491978B (en) 2020-11-12 2020-11-12 Scheduling method and device

Publications (2)

Publication Number Publication Date
CN112491978A true CN112491978A (en) 2021-03-12
CN112491978B CN112491978B (en) 2022-02-18

Family

ID=74929976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011261448.0A Active CN112491978B (en) 2020-11-12 2020-11-12 Scheduling method and device

Country Status (1)

Country Link
CN (1) CN112491978B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125038A (en) * 2021-11-26 2022-03-01 中国联合网络通信集团有限公司 Service scheduling method, device and storage medium
CN114125936A (en) * 2021-11-29 2022-03-01 中国联合网络通信集团有限公司 Resource scheduling method, device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407014A (en) * 2016-10-10 2017-02-15 深圳市彬讯科技有限公司 Realization method of straddle machine room cluster rendering allocation
CN106534240A (en) * 2015-09-11 2017-03-22 中国移动通信集团公司 CDN resource scheduling method, server and client
CN109560952A (en) * 2017-09-27 2019-04-02 华为技术有限公司 A kind of network slice management method and equipment
US20190295309A1 (en) * 2018-03-20 2019-09-26 Lenovo (Beijing) Co., Ltd. Image rendering method and system
CN110488977A (en) * 2019-08-21 2019-11-22 京东方科技集团股份有限公司 Virtual reality display methods, device, system and storage medium
CN110856103A (en) * 2019-11-18 2020-02-28 腾讯科技(深圳)有限公司 Scheduling method, communication method and related equipment
CN111669444A (en) * 2020-06-08 2020-09-15 南京工业大学 Cloud game service quality enhancement method and system based on edge calculation
CN111739141A (en) * 2020-08-12 2020-10-02 绿漫科技有限公司 3D cloud rendering method for light terminal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106534240A (en) * 2015-09-11 2017-03-22 中国移动通信集团公司 CDN resource scheduling method, server and client
CN106407014A (en) * 2016-10-10 2017-02-15 深圳市彬讯科技有限公司 Realization method of straddle machine room cluster rendering allocation
CN109560952A (en) * 2017-09-27 2019-04-02 华为技术有限公司 A kind of network slice management method and equipment
US20190295309A1 (en) * 2018-03-20 2019-09-26 Lenovo (Beijing) Co., Ltd. Image rendering method and system
CN110488977A (en) * 2019-08-21 2019-11-22 京东方科技集团股份有限公司 Virtual reality display methods, device, system and storage medium
CN110856103A (en) * 2019-11-18 2020-02-28 腾讯科技(深圳)有限公司 Scheduling method, communication method and related equipment
CN111669444A (en) * 2020-06-08 2020-09-15 南京工业大学 Cloud game service quality enhancement method and system based on edge calculation
CN111739141A (en) * 2020-08-12 2020-10-02 绿漫科技有限公司 3D cloud rendering method for light terminal

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
" "S1-190254-wasS1-190419-190129-PCR-NCIS-cloud gaming-v3"", 《3GPP TSG_SA\WG1_SERV》 *
刘奕: "Research on Improving 4G Network Performance with 5G Network Technology", 《数码世界》 *
朱瑜坚等: "A Multi-Tenant-Oriented Networking Method for Linux Container Clusters", 《计算机科学》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125038A (en) * 2021-11-26 2022-03-01 中国联合网络通信集团有限公司 Service scheduling method, device and storage medium
CN114125936A (en) * 2021-11-29 2022-03-01 中国联合网络通信集团有限公司 Resource scheduling method, device and storage medium
CN114125936B (en) * 2021-11-29 2023-09-05 中国联合网络通信集团有限公司 Resource scheduling method, device and storage medium

Also Published As

Publication number Publication date
CN112491978B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN110267327B (en) Service transmission method and device
CN106358245B (en) Method and controller for sharing load of mobile edge computing application
CN113596191B (en) Data processing method, network element equipment and readable storage medium
CN108683613B (en) Resource scheduling method, device and computer storage medium
CN112491978B (en) Scheduling method and device
KR102329095B1 (en) Network access method, related devices and systems
CN111654852B (en) Data card switching method and device, terminal and storage medium
US11182210B2 (en) Method for resource allocation and terminal device
EP4057730A1 (en) Data processing system, method, and apparatus, device, and readable storage medium
KR20200062793A (en) Electronic device for managing bearer and operation method thereof
KR20180088880A (en) Use network-supported protocols to improve network utilization
EP3422799B1 (en) Lte-nr interworking procedure for scg split bearer configurations
CN113259260A (en) Method and device for deploying application instance and scheduling application instance
CN111194098B (en) Link establishment method, device, communication system and computer readable medium
CN110933758B (en) Interference coordination method and device, and base station
CN106162573B (en) Cluster group call processing method, related equipment and system
CN112688886B (en) Determination method and device
CN112272108B (en) Scheduling method and device
CN112995922B (en) Group establishing method and device
CN113535376A (en) Calculation power scheduling method, centralized control equipment and calculation power application equipment
CN112312577B (en) Communication method and device
EP4255100A1 (en) Service response method and apparatus, terminal, and storage medium
CN112511272B (en) Communication method and device
WO2021087909A1 (en) Signal transmission method and device, and mobile terminal and storage medium
CN113170521B (en) Electronic device for managing bearers and method of operating the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant