CN112910988A - Resource acquisition method and resource scheduling device - Google Patents

Resource acquisition method and resource scheduling device

Info

Publication number
CN112910988A
CN112910988A
Authority
CN
China
Prior art keywords
line
resource
uplink
cache server
weight value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110120805.XA
Other languages
Chinese (zh)
Inventor
丁再发
苏小杰
陈俊生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wangsu Science and Technology Co Ltd
Original Assignee
Wangsu Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wangsu Science and Technology Co Ltd filed Critical Wangsu Science and Technology Co Ltd
Priority to CN202110120805.XA
Publication of CN112910988A
Legal status: Pending (Current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/104: Peer-to-peer [P2P] networks
    • H04L67/1074: Peer-to-peer [P2P] networks for supporting data block transmission mechanisms

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a resource acquisition method and a resource scheduling device. The method includes: acquiring bandwidth information of each line in a cache server, and receiving a resource acquisition task, issued by a content scheduling center, that points to a target resource; if the target resource exists in the cache server, determining an uplink from the lines based on the bandwidth information; and notifying the cache server of the uplink information so that the cache server provides the target resource to the user through the uplink. The technical scheme can improve the resource utilization rate of the lines in the cache server.

Description

Resource acquisition method and resource scheduling device
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a resource acquisition method and a resource scheduling apparatus.
Background
In existing home broadband network architectures, the required network resources are typically provided to the client by a cache server. The cache servers can be managed uniformly by a content scheduling center, which selectively issues resource acquisition tasks to the cache servers according to the bandwidth occupation and load of each cache server.
With the advent of multi-network-card cache servers, the same cache server can provide network resources to users over multiple lines simultaneously. In the existing architecture, however, a cache server tends to execute the resource acquisition tasks issued by the content scheduling center over the line of its default network card. As a result, the bandwidth of that line is heavily occupied while the lines of the other network cards sit idle, so the line resources of a multi-network-card cache server are not used efficiently.
Disclosure of Invention
The present application aims to provide a resource acquisition method and a resource scheduling apparatus that can improve the utilization of the multiple lines in a cache server.
To achieve the above object, one aspect of the present application provides a resource acquisition method. The method includes: acquiring bandwidth information of each line in a cache server, and receiving a resource acquisition task, issued by a content scheduling center, that points to a target resource; if the target resource exists in the cache server, determining an uplink from the lines based on the bandwidth information; and notifying the cache server of the uplink information so that the cache server provides the target resource to the user through the uplink.
To achieve the above object, another aspect of the present application further provides a resource scheduling apparatus. The apparatus includes a memory and a processor, where the memory is used for storing a computer program which, when executed by the processor, implements the above resource acquisition method.
As can be seen from the above, according to the technical solutions provided in one or more embodiments of the present application, for a cache server with multiple network cards, bandwidth information of a line corresponding to each network card may be acquired by a resource scheduling device. Therefore, when the content scheduling center issues a resource acquisition task, the resource scheduling device can select a proper uplink from all lines according to the acquired bandwidth information. Subsequently, the cache server can provide corresponding resources for the user through the selected uplink. By monitoring the bandwidth information of each line, the resource acquisition tasks executed by each line can be effectively balanced, and the resource utilization rate of each line in the multi-network card cache server is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a system architecture diagram of resource acquisition in an embodiment of the present invention;
FIG. 2 is a flowchart of a resource acquisition method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a resource scheduling apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, the technical solutions of the present application will be clearly and completely described below with reference to the detailed description of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present application are within the scope of protection of the present application.
The resource acquisition method provided by the application can be applied to the system architecture shown in fig. 1. The system architecture may include user equipment, a content scheduling center, cache servers and a source station server. The user equipment may be a terminal device used by a user, for example a smartphone, a tablet computer, a desktop computer, a smart wearable device, or the like. The content scheduling center can receive a resource query request sent by the user equipment and generate a corresponding resource acquisition task, which can be issued to one of the cache servers. A cache server can be provided with a plurality of network cards, each corresponding to one line. In addition, a resource scheduling apparatus may be integrated in the cache server; the resource scheduling apparatus can monitor the bandwidth information of each line and determine from the lines an uplink for uploading resources and, when needed, a downlink for downloading resources. Thus, the cache server can provide the required resources to the user equipment through the selected uplink, and can also obtain the corresponding resources from the source station server through the downlink.
It should be noted that the above-mentioned cache server and resource scheduling apparatus may take various implementation forms in practical applications. In one embodiment, the resource scheduling apparatus may be integrated in the cache server as a component, so that the two form a whole. The content scheduling center, the user equipment and the source station server can all perform data interaction with this whole, and the whole can provide multiple lines externally. The advantage of this structure is that communication between the resource scheduling apparatus and the cache server is efficient, the resource scheduling apparatus can collect the bandwidth information of each line more conveniently and accurately, and the selected uplink and downlink can be notified to the cache server in time.
Referring to fig. 2, the resource acquisition method provided in the present application may be applied to the resource scheduling apparatus, and may include the following steps.
S1: acquire bandwidth information of each line in the cache server, and receive a resource acquisition task, issued by the content scheduling center, that points to a target resource.
In this embodiment, the resource scheduling apparatus may include a bandwidth calculating module that collects the bandwidth information of each line in the cache server. In practical applications, the bandwidth information may include one or more of the following parameters: the uplink bandwidth upper limit, the downlink bandwidth upper limit, the total upper limit of uplink and downlink bandwidth, the real-time uplink bandwidth, the real-time downlink bandwidth, the uplink redundant bandwidth, the downlink redundant bandwidth, and the sum of uplink and downlink redundant bandwidth. Here, uplink refers to the cache server providing resources to the user equipment, and downlink refers to the cache server downloading resources from the source station server. The same line can carry uplink and downlink traffic at the same time, so the collected bandwidth information can include both uplink and downlink bandwidth parameters.
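For illustration only, the parameters above could be grouped into a per-line record along the following lines; this is a minimal sketch, and the names (LineBandwidth, up_limit and so on) are assumptions of this edit, not terms from the filing.

```python
from dataclasses import dataclass

@dataclass
class LineBandwidth:
    """Bandwidth information collected for one line (one network card) of the cache server.

    Field names are illustrative; the text only enumerates the parameters.
    """
    line_id: int
    up_limit: float        # uplink bandwidth upper limit
    down_limit: float      # downlink bandwidth upper limit
    total_limit: float     # total upper limit of uplink + downlink bandwidth
    up_realtime: float     # real-time uplink bandwidth
    down_realtime: float   # real-time downlink bandwidth

    @property
    def up_redundant(self) -> float:
        """Uplink redundant (spare) bandwidth."""
        return max(self.up_limit - self.up_realtime, 0.0)

    @property
    def down_redundant(self) -> float:
        """Downlink redundant (spare) bandwidth."""
        return max(self.down_limit - self.down_realtime, 0.0)

    @property
    def total_redundant(self) -> float:
        """Sum of uplink and downlink redundant bandwidth."""
        return self.up_redundant + self.down_redundant
```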
In this embodiment, the resource scheduling apparatus may collect the bandwidth information of each line periodically and, after collecting it, report it to the content scheduling center together with the identifier of the cache server. In this way, the content scheduling center can store the bandwidth information in association with the identifier of the cache server, broken down by line. After the content scheduling center receives a resource query request sent by the user equipment, it can generate the corresponding resource acquisition task and analyze the bandwidth information of each cache server to determine which cache server should process the task. The content scheduling center may then issue the resource acquisition task to the selected cache server, so that the resource scheduling apparatus in that cache server receives the resource acquisition task pointing to the target resource. The target resource is the resource that the user equipment wants to acquire; its resource identifier is carried in the resource query request sent by the user equipment, so after extracting the resource identifier, the content scheduling center can construct a resource acquisition task containing it.
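A hedged sketch of the periodic collect-and-report loop described in this paragraph, reusing the LineBandwidth record above. The helper callables collect_line_bandwidth and report_to_scheduling_center, and the 30-second period, are assumptions standing in for whatever sampling and transport mechanisms an implementation would actually use.

```python
import time
from typing import Callable, List

def bandwidth_report_loop(
    server_id: str,
    collect_line_bandwidth: Callable[[], List[LineBandwidth]],
    report_to_scheduling_center: Callable[[str, List[LineBandwidth]], None],
    period_seconds: float = 30.0,  # reporting period is an assumption; the text only says "periodically"
) -> None:
    """Periodically collect per-line bandwidth and report it, tagged with the
    cache server identifier, so the content scheduling center can store it
    per server and per line."""
    while True:
        lines = collect_line_bandwidth()               # sample every line's bandwidth
        report_to_scheduling_center(server_id, lines)  # report together with the server identifier
        time.sleep(period_seconds)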
S3: and if the target resource exists in the cache server, determining an uplink from each line based on the bandwidth information.
In this embodiment, after receiving the resource acquisition task, the resource scheduling device may identify a resource identifier of a target resource therein, and then interact with the cache server to determine whether the target resource exists in the cache server. Specifically, the resource scheduling apparatus may send the resource identifier of the target resource to the cache server, so as to retrieve the stored resource through the cache server, and if the corresponding resource can be retrieved, it indicates that the target resource exists in the cache server; otherwise, if the corresponding resource is not retrieved, it indicates that the target resource does not exist in the cache server.
In this embodiment, if the target resource exists in the cache server, the cache server may provide the target resource to the user equipment through the uplink. If the target resource does not exist in the cache server, the cache server needs to download the target resource from the source station server through the downlink and provide the target resource to the user equipment through the uplink.
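The branch just described could be sketched as follows; the cache and scheduler objects and their methods (has_resource, select_uplink, select_downlink, serve, download_from_origin) are hypothetical names used only to make the control flow concrete.

```python
def handle_acquisition_task(resource_id: str, cache, scheduler) -> None:
    """Decide which lines serve a resource acquisition task.

    `cache` is assumed to expose has_resource/serve/download_from_origin;
    `scheduler` is assumed to expose select_uplink/select_downlink based on
    the collected bandwidth information.
    """
    uplink = scheduler.select_uplink()
    if cache.has_resource(resource_id):
        # Target resource is already cached: serve it to the user over the uplink.
        cache.serve(resource_id, line=uplink)
    else:
        # Not cached: download from the source station server over the downlink,
        # then serve it to the user over the uplink.
        downlink = scheduler.select_downlink()
        cache.download_from_origin(resource_id, line=downlink)
        cache.serve(resource_id, line=uplink)
```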
In this embodiment, the resource scheduling apparatus may include a line scheduling module, which can determine an uplink or a downlink from the lines according to their bandwidth information by using a dynamic load-balancing algorithm. It should be noted that the uplink determination process and the downlink determination process may be the same, and the two are independent of each other. The process is explained below taking the determination of the uplink as an example.
In practical applications, the line scheduling module may set a line weight value and an effective weight value for each line in advance. The effective weight value represents the capability of the line to process resource acquisition tasks in the current period, while the line weight value is the sum of the effective weight values accumulated by the line over previous periods. Specifically, for a cache server that processes a resource acquisition task for the first time, the initial line weight value of each line may be 0, and the initial effective weight value may be an integer greater than 1; in general the initial effective weight value of each line may be the same, for example 3. Each time the cache server processes a resource acquisition task, the line weight value and the effective weight value of each line can be updated synchronously.
After receiving the resource acquisition task issued by the content scheduling center, the line scheduling module may determine, based on the bandwidth information collected in the current period, whether each line of the cache server has redundant bandwidth for processing the resource acquisition task. According to the result, the current effective weight value of each line in the current period can then be determined. Specifically, the line scheduling module may first identify the initial effective weight value of the line in the current period, for example 3. If the line has redundant bandwidth for processing the resource acquisition task, the initial effective weight value is used directly as the current effective weight value of the line. If the line does not have such redundant bandwidth, the initial effective weight value is reduced, and the reduced result is taken as the current effective weight value of the line. For example, if the initial effective weight value of a line is 3 and the line has no redundant bandwidth to process the resource acquisition task, the initial effective weight value may be reduced by 1, giving a current effective weight value of 2.
After the current effective weight value of each line is determined, the line weight value of each line can be updated based on its current effective weight value. Specifically, the initial line weight value of a line may be identified (in a cache server that processes a resource acquisition task for the first time, the initial line weight value of each line is 0), and the sum of the initial line weight value and the current effective weight value of the line is used as the updated line weight value of the line. For example, if a line has redundant bandwidth for processing the resource acquisition task, its updated line weight value is 0 + 3 = 3; if a line does not have such redundant bandwidth, its updated line weight value is 0 + 2 = 2.
In this embodiment, after the line weight value of each line has been updated, the uplink can be identified from the lines according to the updated line weight values. Specifically, the maximum line weight value may be selected from the updated line weight values, and the line corresponding to it may be taken as the determined uplink. If multiple lines share the maximum line weight value, the first of them in the line ordering can be selected as the uplink. Of course, the uplink may also be selected in other manners (e.g., randomly), which is not limited in this application.
In one embodiment, since the selected uplink will subsequently process the resource acquisition task, its redundant bandwidth will be further consumed. To avoid placing too heavy a burden on it, the line weight value of the uplink may be reduced appropriately after the uplink is determined, so that it has a lower probability of being selected as the uplink again for the next resource acquisition task.
Specifically, after the uplink is determined, the sum of the updated line weight values of all lines may be calculated. For example, if the updated line weight values of three lines are all 3, the calculated sum is 9. Then the line weight value of the line selected as the uplink may be reduced according to this sum, and the reduced value is used as the initial line weight value of the uplink for the next period. In practical applications, the difference between the updated line weight value of the uplink and the calculated sum may be taken as the reduced line weight value. In the example above, after the first line is selected as the uplink, its line weight value may be reduced to 3 - 9 = -6, so -6 is used as the initial line weight value of the first line in the next period.
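Taken together, the steps in the preceding paragraphs amount to a smooth weighted round-robin over the lines. The sketch below follows those stated rules (initial effective weight of 3, decrement by 1 when a line has no redundant bandwidth, accumulate into the line weight, pick the maximum with ties going to the first line, then subtract the sum of the updated weights from the winner); the class and method names are illustrative assumptions, not the patented implementation.

```python
from typing import List

class LineScheduler:
    """Select an uplink (or downlink) among the cache server's lines using the
    weight rules described above. A sketch under stated assumptions."""

    def __init__(self, num_lines: int, initial_effective_weight: int = 3):
        # Per-period capability of each line; the text uses 3 as an example.
        self.effective = [initial_effective_weight] * num_lines
        # Accumulated line weight; starts at 0 for a server handling its first task.
        self.line_weight = [0] * num_lines

    def select(self, has_redundant_bandwidth: List[bool]) -> int:
        """Return the index of the line chosen for the current resource acquisition task."""
        # 1. Current effective weight: keep the initial value if the line has spare
        #    bandwidth for the task, otherwise reduce it (by 1, as in the example).
        current_effective = [
            eff if spare else eff - 1
            for eff, spare in zip(self.effective, has_redundant_bandwidth)
        ]
        # 2. Update each line weight by adding its current effective weight.
        self.line_weight = [
            lw + ce for lw, ce in zip(self.line_weight, current_effective)
        ]
        # 3. Pick the line with the largest updated line weight (the first one on ties).
        chosen = max(range(len(self.line_weight)), key=lambda i: self.line_weight[i])
        # 4. Subtract the sum of all updated line weights from the winner, so it is
        #    less likely to be chosen again for the next task.
        self.line_weight[chosen] -= sum(self.line_weight)
        return chosen
```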
Assume that the cache server currently has three lines, the initial line weight value of each line is 0, and the initial effective weight value of each line in the current period is 3. The cache server needs to process three resource acquisition tasks in sequence, and during the processing of these three tasks the line weight values of the three lines change as follows (assuming that all three lines have the corresponding redundant bandwidth each time a resource acquisition task is processed):
[Table omitted in this text: bracketed triples giving the line weight values of the three lines as each of the three resource acquisition tasks is processed.]
The three numbers in each bracketed triple correspond to the three lines; after the three resource acquisition tasks, the line weight value of each line returns to zero.
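Running the LineScheduler sketch above on the scenario just described reproduces the behaviour the omitted table illustrates; the bracketed triples in the expected output are computed from the stated rules rather than copied from the original figure.

```python
sched = LineScheduler(num_lines=3, initial_effective_weight=3)
for task in range(1, 4):
    chosen = sched.select(has_redundant_bandwidth=[True, True, True])
    print(f"task {task}: line {chosen + 1} selected, line weights -> {sched.line_weight}")

# Expected output:
# task 1: line 1 selected, line weights -> [-6, 3, 3]
# task 2: line 2 selected, line weights -> [-3, -3, 6]
# task 3: line 3 selected, line weights -> [0, 0, 0]
```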
S5: and notifying the information of the uplink to the cache server so that the cache server provides the target resource for the user through the uplink.
In this embodiment, after determining the uplink, the resource scheduling apparatus may notify the cache server of the uplink information. The information of the uplink may be, for example, the number of the uplink in the cache server. Thus, the cache server can establish connection with the user equipment through the uplink and provide the target resources required by the user to the user.
It should be noted that the line scheduling module may also determine the downlink from the lines according to the method in step S3 (in that case only the downlink redundant bandwidth is considered). The processes of determining the uplink and determining the downlink are independent of each other and do not interfere, so the downlink determination process is not described in detail here.
In this embodiment, if the cache server does not hold the target resource required by the user equipment, the resource scheduling apparatus needs to determine both an uplink and a downlink from the lines based on the bandwidth information. The cache server then acquires the target resource from the source station server through the downlink and provides the acquired target resource to the user through the uplink. The processes of determining the uplink and the downlink are not repeated here.
In the present application, the resource scheduling apparatus not only collects the bandwidth information and determines the uplink and downlink for transmitting the target resource, it can also report the caching situation of resources in the cache server to the content scheduling center. In this way, when the content scheduling center receives a resource query request sent by the user equipment, it can combine the bandwidth information with the caching situation to select a suitable cache server to process the resource acquisition task.
Specifically, the cache server may send a resource reporting request to the resource reporting module of the resource scheduling apparatus. The resource reporting request may carry cache server information and a resource identifier to be reported; the cache server information may include the identifier of the cache server and may also include its various bandwidth information, and the resource corresponding to the reported resource identifier is already cached in the cache server. The resource reporting module can upload the resource identifier and the cache server information corresponding to the resource reporting request to the content scheduling center, so that the content scheduling center stores the correspondence between the resource identifier and the cache server information. When the content scheduling center receives a resource query request sent by the user equipment, it can extract the identifier of the target resource from the request and then query the stored correspondences for cache server information matching that identifier. If such information exists, the content scheduling center can issue the generated resource acquisition task to the corresponding cache server. The purpose of this is to ensure that the cache server processing the resource acquisition task can obtain the target resource locally instead of fetching it from the source station server, which removes both the downlink determination process and the download of the target resource. If the content scheduling center cannot find cache server information corresponding to the identifier of the target resource, no current cache server holds the target resource, and the content scheduling center can select the cache server with relatively more redundant bandwidth to process the resource acquisition task according to the bandwidth information of each cache server.
In addition, as the content in the cache server is updated, part of the cached resources may be deleted. In that case the cache server can send a resource deletion request to the resource reporting module of the resource scheduling apparatus. The resource deletion request may carry the identifier of the target resource to be deleted and the information of the cache server that originally stored it, where the cache server information may include the identifier of the cache server. After receiving the resource deletion request, the resource reporting module can upload the corresponding resource identifier and cache server information to the content scheduling center, so that the content scheduling center deletes the stored correspondence between them. This prevents the content scheduling center from mistakenly assuming that the target resource is still cached in that cache server and issuing a resource acquisition task pointing to the target resource to it.
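A minimal sketch, under assumed names, of the resource-identifier-to-cache-server correspondence that the content scheduling center could maintain and that the reporting and deletion requests keep up to date.

```python
from collections import defaultdict
from typing import Dict, Optional, Set

class ResourceIndex:
    """Mapping kept by the content scheduling center: resource identifier ->
    identifiers of cache servers that report having the resource cached."""

    def __init__(self) -> None:
        self._index: Dict[str, Set[str]] = defaultdict(set)

    def handle_report(self, resource_id: str, cache_server_id: str) -> None:
        """Resource reporting request: remember that this server caches the resource."""
        self._index[resource_id].add(cache_server_id)

    def handle_delete(self, resource_id: str, cache_server_id: str) -> None:
        """Resource deletion request: forget the correspondence so that no task for
        this resource is wrongly sent to a server that no longer caches it."""
        self._index[resource_id].discard(cache_server_id)
        if not self._index[resource_id]:
            del self._index[resource_id]

    def find_server(self, resource_id: str) -> Optional[str]:
        """Return one cache server that already holds the resource, if any; the caller
        falls back to bandwidth-based selection when this returns None."""
        servers = self._index.get(resource_id)
        return next(iter(servers)) if servers else None
```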
In this scheme, the bandwidth information of each line is collected by the resource scheduling apparatus on the cache server side, and the uplink or downlink is selected according to the collected bandwidth information, which greatly reduces the monitoring and analysis burden on the content scheduling center. At the same time, because the resource scheduling apparatus and the cache server are located in the same network environment, the resource scheduling apparatus can obtain accurate bandwidth information, so the selected uplink and downlink are accurate and the efficiency and stability of resource provision are guaranteed.
Referring to fig. 3, the present application further provides a resource scheduling apparatus. The resource scheduling apparatus includes a memory and a processor, where the memory is used to store a computer program which, when executed by the processor, implements the above resource acquisition method.
In this application, the memory may include a physical device for storing information, typically a medium that digitizes the information and stores it in an electrical, magnetic or optical manner. The memory may include: devices that store information using electrical energy, such as RAM or ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic-core memories, bubble memories or USB flash drives; and devices that store information optically, such as CDs or DVDs. Of course, other types of memory are possible, such as quantum memory or graphene memory.
In the present application, the processor may be implemented in any suitable way. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth.
As can be seen from the above, according to the technical solutions provided in one or more embodiments of the present application, for a cache server with multiple network cards, bandwidth information of a line corresponding to each network card may be acquired by a resource scheduling device. Therefore, when the content scheduling center issues a resource acquisition task, the resource scheduling device can select a proper uplink from all lines according to the acquired bandwidth information. Subsequently, the cache server can provide corresponding resources for the user through the selected uplink. By monitoring the bandwidth information of each line, the resource acquisition tasks executed by each line can be effectively balanced, and the resource utilization rate of each line in the multi-network card cache server is improved.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the device, reference may be made to the introduction of embodiments of the method described above for comparison.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an embodiment of the present application, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (13)

1. A method for resource acquisition, the method comprising:
acquiring bandwidth information of each line in a cache server, and receiving a resource acquisition task pointing to a target resource and issued by a content scheduling center;
if the target resource exists in the cache server, determining an uplink from each line based on the bandwidth information;
and notifying the information of the uplink to the cache server so that the cache server provides the target resource for the user through the uplink.
2. The method of claim 1, further comprising:
if the target resource does not exist in the cache server, respectively determining an uplink and a downlink from each line based on the bandwidth information, so that the cache server provides the obtained target resource to a user through the uplink after obtaining the target resource from a source station server through the downlink.
3. The method of claim 1 or 2, wherein determining an uplink from the lines based on the bandwidth information comprises:
after receiving the resource acquisition task, judging whether each line has a redundant bandwidth for processing the resource acquisition task or not based on the bandwidth information;
determining the current effective weight value of each line according to the judgment result;
updating respective line weight values of the lines based on the effective weight values, and determining an uplink line from the lines according to the updated line weight values.
4. The method of claim 3, wherein determining the current valid weight value for each line comprises:
identifying an initial effective weight value for the line;
if the line has the redundant bandwidth for processing the resource acquisition task, taking the initial effective weight value as the current effective weight value of the line;
and if the line does not have the redundant bandwidth for processing the resource acquisition task, reducing the initial effective weight value, and taking the reduced result as the current effective weight value of the line.
5. The method of claim 3, wherein updating the respective line weight values for the respective lines comprises:
and identifying an initial line weight value of the line, and taking the sum of the initial line weight value and the current effective weight value of the line as the updated line weight value of the line.
6. The method of claim 3 or 5, wherein determining an uplink from the respective lines comprises:
and selecting the maximum line weight value from the updated line weight values, and taking the line corresponding to the maximum line weight value as the determined uplink line.
7. The method of claim 3, wherein after determining an uplink from the lines, the method further comprises:
and calculating the sum of the updated line weight values of each line, reducing the line weight value of the uplink line according to the sum of the line weight values, and taking the reduced line weight value as the initial line weight value of the uplink line.
8. The method of claim 7, wherein reducing the uplink line weight value comprises:
and calculating a difference value between the updated uplink line weight value and the sum of the line weight values, and taking the difference value as the reduced uplink line weight value.
9. The method of claim 1, wherein the bandwidth information comprises at least one of:
the uplink bandwidth upper limit, the downlink bandwidth upper limit, the total upper limit of the uplink bandwidth and the downlink bandwidth, the real-time uplink bandwidth, the real-time downlink bandwidth, the uplink redundant bandwidth, the downlink redundant bandwidth and the sum of the uplink redundant bandwidth and the downlink redundant bandwidth.
10. The method of claim 1, wherein after collecting bandwidth information of each line in the cache server, the method further comprises:
and reporting the bandwidth information to a content scheduling center, so that the content scheduling center selects a target cache server for processing a resource acquisition task from a plurality of cache servers according to each acquired bandwidth information.
11. The method of claim 1, further comprising:
receiving a resource reporting request of the cache server, and uploading a resource identifier and cache server information corresponding to the resource reporting request to the content scheduling center, so as to store a corresponding relationship between the resource identifier and the cache server information through the content scheduling center, where the cache server information at least includes an identifier of the cache server.
12. The method of claim 1, further comprising:
receiving a resource deletion request of the cache server, and uploading a resource identifier and cache server information corresponding to the resource deletion request to the content scheduling center, so as to delete the stored corresponding relationship between the resource identifier and the cache server information through the content scheduling center, wherein the cache server information at least comprises the identifier of the cache server.
13. A resource scheduling apparatus, characterized in that the resource scheduling apparatus comprises a memory for storing a computer program and a processor, which computer program, when executed by the processor, implements the method according to any of claims 1 to 12.
CN202110120805.XA 2021-01-28 2021-01-28 Resource acquisition method and resource scheduling device Pending CN112910988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110120805.XA CN112910988A (en) 2021-01-28 2021-01-28 Resource acquisition method and resource scheduling device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110120805.XA CN112910988A (en) 2021-01-28 2021-01-28 Resource acquisition method and resource scheduling device

Publications (1)

Publication Number Publication Date
CN112910988A 2021-06-04

Family

ID=76119938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110120805.XA Pending CN112910988A (en) 2021-01-28 2021-01-28 Resource acquisition method and resource scheduling device

Country Status (1)

Country Link
CN (1) CN112910988A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101488918A (en) * 2009-01-09 2009-07-22 杭州华三通信技术有限公司 Multi-network card server access method and system
CN102664934A (en) * 2012-04-06 2012-09-12 北京华夏电通科技股份有限公司 Multi-thread control method and system for adaptive self-feedback of server
US8880635B1 (en) * 2012-07-31 2014-11-04 Google Inc. Selective requesting of cached resources
CN104239149A (en) * 2012-08-31 2014-12-24 南京工业职业技术学院 Server multithread parallel data processing method and load balancing method
CN103188165A (en) * 2013-03-12 2013-07-03 神州数码网络(北京)有限公司 Intelligent router multipath output load balancing method and router
CN103763209A (en) * 2014-01-03 2014-04-30 上海聚力传媒技术有限公司 Scheduling method and device of CDN servers
CN106302168A (en) * 2016-09-18 2017-01-04 东软集团股份有限公司 A kind of ISP route selecting method, device and gateway
CN107943594A (en) * 2016-10-13 2018-04-20 北京京东尚科信息技术有限公司 Data capture method and device
CN107181804A (en) * 2017-05-25 2017-09-19 腾讯科技(深圳)有限公司 The method for down loading and device of resource
CN108449388A (en) * 2018-02-25 2018-08-24 心触动(武汉)科技有限公司 A kind of multinode idleness of equipment aggregated bandwidth utilizes method and system
CN109995881A (en) * 2019-04-30 2019-07-09 网易(杭州)网络有限公司 The load-balancing method and device of cache server
CN111652681A (en) * 2020-05-29 2020-09-11 平安医疗健康管理股份有限公司 Receipt processing method, server and computer readable storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810461A (en) * 2021-08-04 2021-12-17 网宿科技股份有限公司 Bandwidth control method, device, equipment and readable storage medium

Similar Documents

Publication Publication Date Title
US10560465B2 (en) Real time anomaly detection for data streams
CN107872489B (en) File slice uploading method and device and cloud storage system
US8069224B2 (en) Method, equipment and system for resource acquisition
US10572285B2 (en) Method and apparatus for elastically scaling virtual machine cluster
CN108279974B (en) Cloud resource allocation method and device
US8843632B2 (en) Allocation of resources between web services in a composite service
CN107087031B (en) Storage resource load balancing method and device
CN107729570B (en) Data migration method and device for server
JP2015508543A (en) Processing store visit data
US20170153909A1 (en) Methods and Devices for Acquiring Data Using Virtual Machine and Host Machine
CN110389715B (en) Data storage method, storage server and cloud storage system
CN106598738A (en) Computer cluster system and parallel computing method thereof
CN110008029B (en) ceph metadata cluster directory distribution method, system, device and readable storage medium
CN111008071A (en) Task scheduling system, method and server
CN112910988A (en) Resource acquisition method and resource scheduling device
JP6432407B2 (en) NODE, INFORMATION PROCESSING SYSTEM, METHOD, AND PROGRAM
CN112861031B (en) URL refreshing method, device and equipment in CDN and CDN node
CN109286532B (en) Management method and device for alarm information in cloud computing system
CN105763508B (en) Data access method and application server
CN108023920B (en) Data packet transmission method, equipment and application interface
CN110134547B (en) Middleware-based repeated data deleting method and related device
CN110321133B (en) H5 application deployment method and device
CN113238836A (en) Distributed content scheduling method, scheduling system and central server
US20150379548A1 (en) Method and System for Data Processing
CN111464600A (en) Point-to-point storage network establishing method, device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210604)