CN110740148B - Content resource processing method, device, equipment and medium - Google Patents


Info

Publication number
CN110740148B
CN110740148B (granted publication); application number CN201810796471.6A
Authority
CN
China
Prior art keywords
cache server
content resource
identification information
cache
queried
Prior art date
Legal status (assumed; not a legal conclusion): Active
Application number
CN201810796471.6A
Other languages
Chinese (zh)
Other versions
CN110740148A (en)
Inventor
王磊
王莹
黄玉宝
陈乜
林森
梁虹
吴海柳
谢彩虹
陈勇
Current Assignee (the listed assignees may be inaccurate)
China Mobile Communications Group Co Ltd
China Mobile Group Hainan Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Hainan Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Group Hainan Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201810796471.6A
Publication of CN110740148A
Application granted
Publication of CN110740148B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

The invention discloses a content resource processing method, apparatus, device and medium that can improve the efficiency of querying content resources. The method comprises the following steps: determining a cache server to be queried in a cache server cluster; if the cache server to be queried contains identification information of a content resource identical to the identification information of the target content resource, determining to query the target content resource in that cache server; if it does not, determining the cache server corresponding to the identification information of the target reference content resource in the content resource routing table of the cache server to be queried as the new cache server to be queried, and repeating the judgment until a cache server to be queried contains the identification information of the target content resource, in which the target content resource is then queried.

Description

Content resource processing method, device, equipment and medium
Technical Field
The present invention relates to the field of communications, and in particular, to a method, an apparatus, a device, and a medium for processing content resources.
Background
The cache server is an important component of the core network. In the core network, the cache server is mainly responsible for providing content resources, wherein the cache server stores a large amount of content resources, including: internet content resources, mobile multimedia content resources, game acceleration content resources, video loading content resources, and the like.
At present, cache servers (Cache) serving as service nodes in the core network are deployed centrally, mostly in larger cities. When a user requests a content resource, the base station serving the user applies for the content resource from a nearby cache server. When the requested content resource does not exist in that cache server, or when the cache server fails, the other cache servers serving as service nodes must be queried one by one until the content resource requested by the user is found.
In particular, when the content resource requested by the user is stored in a cache server far from the user, the cache servers must be queried in order of their distance from the user, from nearest to farthest. This query pattern forces the user's request to traverse a long round-trip query path, so the efficiency of querying content resources is low.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a medium for processing content resources, which can improve the query efficiency of the content resources.
The embodiment of the invention provides a method for processing content resources, which comprises the following steps:
determining a cache server to be queried in a cache server cluster, wherein the cache server cluster comprises a plurality of cache servers, and each cache server stores identification information of a plurality of content resources;
judging whether the cache server to be inquired contains the identification information of the content resource which is the same as the identification information of the target content resource;
if the cache server to be queried contains the identification information of the content resource which is the same as the identification information of the target content resource, determining to query the target content resource in the cache server to be queried;
if the cache server to be queried does not contain identification information of a content resource identical to the identification information of the target content resource, determining the cache server corresponding to the identification information of the target reference content resource in the content resource routing table of the cache server to be queried as a new cache server to be queried, and judging whether the new cache server to be queried contains the identification information of the target content resource, repeating this step until the new cache server to be queried contains the identification information of the target content resource, and determining to query the target content resource in that cache server to be queried,
the content resource routing table contains the corresponding relation between the identification information of the reference content resource and the cache server, and the identification information of the target reference content resource is the identification information of the reference content resource with the maximum correlation degree with the identification information of the target content resource.
In an alternative embodiment, the identification information of the reference content resource is a number of the reference content resource,
the numbers of the plurality of reference content resources contained in the content resource routing table are arranged in ascending order, wherein the differences between the numbers of every two adjacent reference content resources form a geometric sequence.
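The spacing rule above (gaps between adjacent reference numbers forming a geometric sequence) resembles a Chord-style finger table, where logarithmically many entries cover the whole identifier space. A minimal Python sketch; the function name, the ratio of 2, and the first gap of 1 are illustrative assumptions, not taken from the patent:

```python
def build_routing_table(base_number, max_number, ratio=2, first_gap=1):
    """Build a sorted list of reference numbers starting at base_number,
    where the gaps between adjacent entries form a geometric sequence:
    first_gap, first_gap*ratio, first_gap*ratio**2, ...
    """
    refs = []
    number, gap = base_number, first_gap
    while number <= max_number:
        refs.append(number)
        number += gap
        gap *= ratio
    return refs

# Gaps 1, 2, 4, 8, ... yield logarithmically many entries over the range.
table = build_routing_table(base_number=0, max_number=100)
# table == [0, 1, 3, 7, 15, 31, 63]
```

With ratio 2 this gives O(log N) routing entries per server, which is what makes a lookup visit only a handful of servers rather than all of them.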
In an optional implementation manner, a plurality of cache servers included in the cache server cluster are sequentially connected to form a closed loop, and the identification information of the content resource stored by any cache server in the cache server cluster is backed up in a next cache server of any cache server.
In an alternative embodiment, the identification information of the content resource is a number of the content resource, all cache servers in the cache server cluster are arranged in sequence,
the numbers of the plurality of content resources stored by any cache server are all smaller than the numbers of the plurality of content resources stored in the next cache server of any cache server.
In an alternative embodiment, if the number of a reference content resource is not greater than the number of the target content resource, and the number of the next reference content resource is greater than the number of the target content resource, then that reference content resource's number has the greatest correlation with the number of the target content resource.
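The correlation rule above selects the largest reference number that does not exceed the target number. A sketch using Python's standard `bisect` module; the function name is hypothetical:

```python
import bisect

def target_reference(refs, target):
    """Return the reference number r with r <= target whose successor in
    refs is > target, i.e. the reference most correlated with target.
    refs must be sorted in ascending order."""
    i = bisect.bisect_right(refs, target) - 1
    if i < 0:
        raise ValueError("target is smaller than every reference number")
    return refs[i]

refs = [0, 1, 3, 7, 15, 31, 63]
# A target of 20 falls between references 15 and 31, so 15 is chosen.
```

`bisect_right` keeps the selection at O(log K) for a table of K references, matching the ascending-order layout required of the routing table.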
In an optional implementation manner, the method for processing a content resource further includes:
receiving a removal instruction indicating removal of the cache server or indicating failure of the cache server, wherein the cache server indicated by the removal instruction and other cache servers contained in the cache server cluster except the cache server indicated by the removal instruction are arranged in sequence;
and responding to the removal instruction, and replacing the mapping relation between the identification information of the reference content resource in the content resource routing tables of all the cache servers and the cache server indicated by the removal instruction with the mapping relation between the identification information of the reference content resource and the next cache server of the cache server indicated by the removal instruction.
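The removal handling above amounts to rewriting, in every server's routing table, each entry that points at the removed (or failed) server so that it points at that server's next server on the ring. A toy illustration; the data shapes (dicts keyed by server name) are assumptions for the sketch, not specified by the patent:

```python
def handle_removal(routing_tables, removed, successor_of):
    """Replace every routing-table mapping to `removed` with a mapping
    to its next server on the ring.
    routing_tables: {server: {reference_number: server}}."""
    replacement = successor_of[removed]
    for table in routing_tables.values():
        for ref, srv in table.items():
            if srv == removed:
                table[ref] = replacement

# Example: N3 is removed; entries pointing at N3 now point at N4.
tables = {"N1": {10: "N3", 20: "N5"}, "N2": {10: "N3"}}
handle_removal(tables, "N3", successor_of={"N3": "N4"})
# tables == {"N1": {10: "N4", 20: "N5"}, "N2": {10: "N4"}}
```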
In an optional implementation manner, the processing method of the content resource further includes:
receiving an adding instruction for indicating that a cache server is added in a cache server cluster;
in response to the adding instruction, adding the cache server indicated by the adding instruction at the cache server cluster, and constructing a content resource routing table of the cache server indicated by the adding instruction,
the content resource routing table of the cache server indicated by the adding instruction comprises a corresponding relation between the reference content resource and the original cache server, and the original cache server is other cache servers which are contained in the cache server cluster and are except the cache server indicated by the adding instruction.
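The addition step above builds the new server's routing table entirely from the cache servers that were already in the cluster. A sketch under stated assumptions: the `owner_of` ownership rule used here (reference number divided by 100) is purely hypothetical, chosen only to make the example concrete:

```python
def add_cache_server(ring, new_server, reference_numbers, owner_of):
    """Add new_server to the cluster and build its routing table.
    Every entry maps a reference number to one of the *original*
    servers; owner_of consults only the pre-existing ring."""
    originals = list(ring)            # servers present before the addition
    routing_table = {ref: owner_of(ref, originals)
                     for ref in reference_numbers}
    ring.append(new_server)
    return routing_table

ring = ["N1", "N2", "N3"]
table = add_cache_server(
    ring, "N4", reference_numbers=[0, 100, 200],
    owner_of=lambda r, servers: servers[r // 100],
)
# table == {0: "N1", 100: "N2", 200: "N3"}, and ring now includes "N4".
```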
An embodiment of the present invention provides a content resource processing apparatus, including:
the system comprises a first determining module, a second determining module and a searching module, wherein the first determining module is used for determining a cache server to be queried in a cache server cluster, the cache server cluster comprises a plurality of cache servers, and each cache server stores identification information of a plurality of content resources;
the judging module is used for judging whether the cache server to be inquired contains the identification information of the content resource which is the same as the identification information of the target content resource;
the second determining module is used for determining to query the target content resource in the cache server to be queried if the cache server to be queried contains the identification information of the content resource which is the same as the identification information of the target content resource;
a third determining module, configured to: if the cache server to be queried does not contain identification information of a content resource identical to the identification information of the target content resource, determine the cache server corresponding to the identification information of the target reference content resource in the content resource routing table of the cache server to be queried as a new cache server to be queried, and judge whether the new cache server to be queried contains the identification information of the target content resource, repeating this step until the new cache server to be queried contains the identification information of the target content resource, and determine to query the target content resource in that cache server to be queried,
the content resource routing table contains the corresponding relation between the identification information of the reference content resource and the cache server, and the identification information of the target reference content resource is the identification information of the reference content resource with the maximum correlation degree with the identification information of the target content resource.
In an alternative embodiment, the identification information of the reference content resource is a number of the reference content resource,
the numbers of the plurality of reference content resources contained in the content resource routing table are arranged in ascending order, wherein the differences between the numbers of every two adjacent reference content resources form a geometric sequence.
In an alternative embodiment, the identification information of the content resource is a number of the content resource, all cache servers in the cache server cluster are arranged in sequence,
the numbers of the plurality of content resources stored by any cache server are all smaller than the numbers of the plurality of content resources stored in the next cache server of any cache server.
In an optional implementation manner, a plurality of cache servers included in the cache server cluster are sequentially connected to form a closed loop, and the identification information of the content resource stored by any cache server in the cache server cluster is backed up in a next cache server of any cache server.
In an alternative embodiment, if the number of a reference content resource is not greater than the number of the target content resource, and the number of the next reference content resource is greater than the number of the target content resource, then that reference content resource's number has the greatest correlation with the number of the target content resource.
In an optional implementation manner, the processing apparatus of the content resource further includes:
the first receiving module is used for receiving a removal instruction which indicates to remove the cache server or indicates that the cache server fails, wherein the cache server indicated by the removal instruction and other cache servers, except the cache server indicated by the removal instruction, contained in the cache server cluster are arranged in sequence;
and the replacing module is used for responding to the removing instruction, and replacing the mapping relation between the identification information of the reference content resource in the content resource routing table of all the cache servers and the cache server indicated by the removing instruction with the mapping relation between the identification information of the reference content resource and the next cache server of the cache server indicated by the removing instruction.
In an optional implementation manner, the processing apparatus of the content resource further includes:
the second receiving module is used for receiving an adding instruction for indicating that the cache server is added in the cache server cluster;
and the adding module is used for responding to the adding instruction, adding the cache server indicated by the adding instruction in the cache server cluster, and constructing a content resource routing table of the cache server indicated by the adding instruction, wherein the content resource routing table of the cache server indicated by the adding instruction comprises a corresponding relation between the reference content resource and the original cache server, and the original cache server is other cache servers which are contained in the cache server cluster and are except for the cache server indicated by the adding instruction.
An embodiment of the present invention provides a processing device for content resources, including:
a memory for storing a program;
and the processor is used for operating the program stored in the memory so as to execute the processing method of the content resource provided by the embodiment of the invention.
The embodiment of the invention provides a computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the content resource processing method provided by the embodiment of the invention.
According to the content resource processing method, apparatus, device and medium in the embodiments of the invention, when the cache server to be queried does not contain the identification information of the target content resource, the target content resource can be queried in a new cache server to be queried, namely the cache server corresponding to the identification information of the target reference content resource in the content resource routing table. Because the identification information of the target reference content resource serves as reference information for selecting the new cache server to be queried, and its correlation with the identification information of the target content resource is relatively high, taking the corresponding cache server as the next cache server to be queried reduces the number of cache servers that must be visited and increases the speed of querying the target content resource. Therefore, the content resource processing method, apparatus, device and medium in the embodiments of the invention improve the query efficiency of content resources.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating a method of processing a content asset in accordance with an embodiment of the present invention;
FIG. 2 is a diagram of a cache server cluster in an example of an embodiment of the invention;
FIG. 3 is a schematic diagram of a cache server cluster in another example of an embodiment of the invention;
FIG. 4 is a schematic diagram of a cache server cluster in yet another example of an embodiment of the invention;
fig. 5 is a schematic structural diagram showing a processing apparatus of a content asset according to an embodiment of the present invention;
fig. 6 is a block diagram illustrating an exemplary hardware architecture of a processing device of a content resource that can implement the processing method and apparatus of the content resource according to the embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments of the invention provide a content resource processing method, apparatus, device and medium, which can be applied to a cache server cluster composed of cache servers. Because the cache server cluster contains a plurality of content resources, when a target content resource is queried, its identification information can be looked up among the identification information of all content resources contained in the cluster. Each cache server in the cluster stores identification information of a plurality of content resources, and a content resource routing table is arranged in each cache server. When the identification information of the target content resource is not found in a cache server, the next cache server may be determined by referring to that cache server's content resource routing table, so as to continue querying the identification information of the target content resource in the next cache server.
The cache server in the embodiment of the present invention may be located in a network edge area, such as a small city, county, or district. In particular, the method is applicable to the construction and use of fifth-generation (5G) mobile communication networks.
For better understanding of the present invention, a method, an apparatus, a device and a medium for processing a content resource according to embodiments of the present invention will be described in detail below with reference to the accompanying drawings, and it should be noted that these embodiments are not intended to limit the scope of the present disclosure.
Fig. 1 is a schematic flow chart illustrating a processing method of a content asset according to an embodiment of the present invention. As shown in fig. 1, the processing method 100 of the content resource in the present embodiment may include S110 to S140:
s110, determining a cache server to be queried in the cache server cluster.
The cache server cluster comprises a plurality of cache servers, and each cache server stores identification information of a plurality of content resources.
In S110, the plurality of cache servers may be divided into a cache server cluster according to the geographical location information of the cache servers. For example, all cache servers in a certain area are divided into a cache server cluster.
In some embodiments of the present invention, the identification information of a content resource in the cache server cluster is designed to be 48 bits. In other words, the theoretical upper limit on the identification information stored in one cache server cluster is 2^48 entries.
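A small worked calculation of the 48-bit identifier space described above:

```python
ID_BITS = 48                 # width of one content-resource identifier
MAX_IDS = 2 ** ID_BITS       # theoretical cap per cache server cluster
# MAX_IDS == 281_474_976_710_656, i.e. roughly 2.8 * 10**14 identifiers
```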
In some embodiments of the present invention, the identification information of the content asset in S110 is a number of the content asset. If all the cache servers in the cache server cluster are arranged in sequence, the number of the content resources stored by any cache server is smaller than the number of the content resources stored by the next cache server of any cache server.
As an example, suppose the cache server cluster contains cache servers N1 to N200, and the cluster holds content resources numbered 1 to 2×10^14, with each cache server caching 10^12 content resource numbers. For example, the numbers 1 to 10^12 may be stored in N1, the numbers 10^12+1 to 2×10^12 in N1's next cache server N2, ..., and the numbers 199×10^12+1 to 2×10^14 in N200.
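The range partition in this example reduces to a direct arithmetic mapping from a content-resource number to the index of the server that stores it; the function name is hypothetical:

```python
PER_SERVER = 10 ** 12        # content-resource numbers per cache server
NUM_SERVERS = 200            # N1 .. N200 in the example

def server_for(number):
    """Map a 1-based content-resource number to its cache server index:
    1..10**12 -> 1 (N1), 10**12+1..2*10**12 -> 2 (N2), and so on."""
    if not 1 <= number <= PER_SERVER * NUM_SERVERS:
        raise ValueError("number lies outside the cluster's range")
    return (number - 1) // PER_SERVER + 1

# server_for(10**12 + 5 * 10**8) == 2, matching the later example where
# the resource numbered 10^12+5*10^8 is cached on N2.
```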
It should be noted that, in an actual implementation, the number of cache servers in one cache server cluster is generally far greater than the 200 used in this example.
In some embodiments of the present invention, the plurality of cache servers included in the cache server cluster are sequentially connected to form a closed loop.
In some embodiments, the closed loop may be a logically end-to-end loop.
As an example, fig. 2 is a schematic diagram of a closed loop in an example of an embodiment of the present invention. Taking a cache server cluster comprising 200 cache servers N1 to N200 as an example, as shown in FIG. 2, N1 to N200 are connected in sequence by serial number, and the first cache server N1 is connected with the last cache server N200 to form a closed logical loop.
In other words, in the clockwise direction on the logical ring, the next cache server of N1 is N2, and the previous cache server of N1 is N200. The next cache server of N200 is N1, and the previous cache server of N200 is N199.
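The successor/predecessor relations on the closed ring can be sketched with modular arithmetic over 1-based server indices (function names hypothetical):

```python
NUM_SERVERS = 200

def next_server(i):
    """Successor of server Ni on the logical ring of N1..N200."""
    return i % NUM_SERVERS + 1

def prev_server(i):
    """Predecessor of server Ni on the logical ring."""
    return (i - 2) % NUM_SERVERS + 1

# next_server(200) == 1 and prev_server(1) == 200: the ring closes.
```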
For convenience of description, the following embodiments of the present invention take an example in which a plurality of cache servers included in a cache server cluster are logically connected in sequence to form a closed loop, that is, a next cache server of a last cache server in the cache server cluster is a first cache server in the cache server cluster. And the last cache server of the first cache server in the cache server cluster is the last cache server in the cache server cluster.
It should be noted that the closed loop formed by the plurality of cache servers is a logical loop; no restriction is imposed on whether adjacent cache servers must be physically connected, nor on the geographical relationship among the cache servers.
In some embodiments of the invention, the identification information of the content asset comprises a number of the content asset.
In some embodiments of the present invention, if a query request of a target content resource of a user is received, a cache server closest to the user may be determined in a cache server cluster according to geographic location information of the user, and the determined cache server is used as an initial cache server to be queried.
S120, judging whether the cache server to be inquired contains the identification information of the content resource which is the same as the identification information of the target content resource.
The execution subject of S120 may be a cache server to be queried or a processor of the entire cache server cluster.
In some embodiments of the invention, the target content resource is a content resource requested to be queried by a user through a query request of the target content resource.
In some embodiments of the invention, the identification information of the target content asset comprises a number of the target content asset.
In some embodiments of the present invention, the specific implementation of S120 may include:
and judging whether the cache server contains the number of the content resource which is the same as the number of the target content resource.
For convenience of explanation, the following sections of the embodiments of the present invention will explain identification information of a target content resource and identification information of a content resource, respectively, by taking the number of the target content resource and the number of the content resource as examples.
S130, if the cache server to be queried contains the identification information of the content resource which is the same as the identification information of the target content resource, determining to query the target content resource in the cache server to be queried.
As an example, suppose the target content resource is numbered 10^12+5×10^8, and the cache server N2 to be queried contains content resources numbered 10^12+1 to 2×10^12. Since the number 10^12+5×10^8 of a content resource contained in cache server N2 is the same as the number of the target content resource, it is determined that the target content resource is queried in cache server N2. In other words, the target content resource numbered 10^12+5×10^8 is cached in the cache server N2 to be queried.
In the embodiment of the present invention, in addition to the target content resource that can be queried in the cache server to be queried in S130, there is also a case that the target content resource cannot be queried in the cache server to be queried, specifically, as shown in S140.
S140, if the cache server to be queried does not contain identification information of a content resource identical to the identification information of the target content resource, determining the cache server corresponding to the identification information of the target reference content resource in the content resource routing table of the cache server to be queried as the new cache server to be queried, and judging whether the new cache server to be queried contains the identification information of the target content resource, repeating this step until a cache server to be queried contains the identification information of the target content resource, and determining to query the target content resource in that cache server.
The content resource routing table contains the corresponding relation between the identification information of the reference content resource and the cache server, and the identification information of the target reference content resource is the identification information of the reference content resource with the maximum correlation degree with the identification information of the target content resource.
As an example, if 200 cache servers are included in the cache server cluster, in the prior art the content resource would need to be queried one by one across all 200 cache servers.
By using the technical contents in S110 to S140 in the embodiment of the present invention, if the 8 th cache server is used as the cache server to be queried, the 44 th cache server may be determined as the second cache server to be queried directly according to the content resource routing table of the 8 th cache server. And then, determining the 49 th cache server as a third cache server to be queried by using the content resource routing table of the 44 th cache server. And finally, determining that the 50 th cache server is the fourth cache server to be queried by using the content resource routing table of the 49 th cache server, and if the target content resource is queried in the 50 th cache server, finishing the query only by 4 cache servers.
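The multi-hop walk above (server 8 → 44 → 49 → 50) can be sketched as an iterative lookup that consults each server's routing table until a server containing the target is reached. All callables and the toy routing tables below are placeholders standing in for the mechanisms the patent describes:

```python
def lookup(start, target, contains, routing_table_of, choose_reference):
    """Walk cache servers until one contains the target's number.
    contains(server, target) -> bool
    routing_table_of(server) -> {reference_number: server}
    choose_reference(sorted_refs, target) -> most correlated reference."""
    server, hops = start, [start]
    while not contains(server, target):
        table = routing_table_of(server)
        ref = choose_reference(sorted(table), target)
        server = table[ref]
        hops.append(server)
    return server, hops

# Toy data reproducing the example walk: 8 -> 44 -> 49 -> 50.
tables = {8: {40: 44}, 44: {48: 49}, 49: {49: 50}}
holder = {50}                       # only server 50 stores the target
server, hops = lookup(
    start=8,
    target=50,
    contains=lambda s, t: s in holder,
    routing_table_of=lambda s: tables[s],
    choose_reference=lambda refs, t: max(r for r in refs if r <= t),
)
# hops == [8, 44, 49, 50]: four servers visited instead of dozens.
```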
According to the content resource processing method in the embodiment of the invention, when the cache server to be queried does not contain the identification information of the target content resource, the target content resource can be queried in a new cache server to be queried. The new cache server to be queried is the cache server corresponding to the identification information of the target reference content resource in the content resource routing table. Because the identification information of the target reference content resource can be used as the reference information for selecting a new cache server to be queried, and the correlation between the identification information of the target reference content resource and the identification information of the target content resource is relatively high, taking the cache server corresponding to the identification information of the target reference content resource as the next cache server to be queried reduces the number of cache servers that must be queried and increases the speed of querying the target content resource. Therefore, the content resource processing method in the embodiment of the invention improves the query efficiency of content resources.
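As a minimal sketch, the lookup loop of S110 to S140 can be expressed as follows. All names and data structures here are illustrative assumptions, not part of the patent: each cache server is modeled as a set of content resource numbers plus a routing table sorted by reference content resource number, and at each hop the server with the greatest correlation (the largest reference number not exceeding the target, wrapping to the last entry otherwise) is chosen.

```python
def lookup(servers, routing_tables, start, target):
    """Follow routing tables from `start` until a server holding `target` is found.

    servers: dict server_id -> set of content resource numbers it caches
    routing_tables: dict server_id -> list of (reference_number, server_id),
                    sorted by reference_number in ascending order
    Returns (server_id, path) where path lists the servers visited.
    """
    current, path = start, [start]
    while target not in servers[current]:
        table = routing_tables[current]
        # Pick the reference number with the greatest correlation: the largest
        # reference number not exceeding the target, wrapping to the last table
        # entry when the target precedes every reference number.
        best = table[-1][1]
        for ref_num, server in table:
            if ref_num <= target:
                best = server
            else:
                break
        current = best
        path.append(current)
    return current, path
```

With a toy three-server ring, a lookup that starts at the wrong server hops along routing-table entries until it reaches the server whose number set contains the target, mirroring the N8 to N44 to N49 to N50 walk described above.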
As an example, suppose the target content resource is numbered 2×10^12 + 5×10^8, and the cache server to be queried N2 contains content resources numbered 10^12+1 to 2×10^12. Since the cache server to be queried N2 does not contain a content resource number identical to the target content resource number 2×10^12 + 5×10^8, it is determined that the target content resource cannot be queried in cache server N2.
In some embodiments of the present invention, the identification information of the reference content resources is the identification information of a plurality of content resources selected, for use as references, from the identification information of the content resources contained in the cache server cluster.
As an example, the identification information of the reference content resource included in the content resource routing table of a cache server may be selected from identification information of content resources included in other cache servers except for the cache server.
In some embodiments of the present invention, the content resource routing table includes a plurality of correspondences between identification information of reference content resources and the cache server. The correlation degrees of the identification information of the multiple reference content resources and the identification information of the target content resource can be respectively determined, the identification information of the reference content resource with the maximum correlation degree with the identification information of the target content resource is used as the identification information of the target reference content resource, and the cache server corresponding to the identification information of the target reference content resource is determined as a new cache server to be queried.
In some embodiments, the numbers of the plurality of reference content resources in the content resource routing table of the cache server to be queried are arranged in ascending order. If the number of the target content resource is not less than the number of a reference content resource, and the number of the next reference content resource is greater than the number of the target content resource, then the correlation between the number of that reference content resource and the number of the target content resource is the largest.
As an example, the content resource routing table of the cache server to be queried is shown in Table 1. The content resource routing table records the correspondence between the reference content resource numbered a and the cache server A, the correspondence between the reference content resource numbered b and the cache server B, and the correspondence between the reference content resource numbered c and the cache server C.
TABLE 1
Number of reference content resource    Cache server
a                                       A
b                                       B
c                                       C
As shown in Table 2, when the number of the target content resource falls in the interval [a, b), the number of the target content resource is not less than the number a of the reference content resource and is less than the number b of the next reference content resource, and the correlation between the number a of the reference content resource and the number of the target content resource is the largest. Accordingly, the number a of the reference content resource is the number of the target reference content resource.
When the number of the target content resource falls in the interval [b, c), the number of the target content resource is not less than the number b of the reference content resource, and the number c of the next reference content resource is greater than the number of the target content resource; at this time, the correlation between the number b of the reference content resource and the number of the target content resource is the largest. Accordingly, the number b of the reference content resource is the number of the target reference content resource.
It should be noted that, if the cache server cluster contains d content resources in total, then when the number of the target content resource falls in the interval [c, d] ∪ [1, a), the number of the target content resource is regarded as not less than the number c of the reference content resource, and the number a of the next reference content resource is regarded as greater than the number of the target content resource; at this time, the correlation between the number c of the reference content resource and the number of the target content resource is the largest. Accordingly, the number c of the reference content resource is the number of the target reference content resource. Wherein a, b, c and d are positive integers, and a < b < c < d.
TABLE 2
Value range of target content resource number    Number of target reference content resource
[a, b)                                           a
[b, c)                                           b
[c, d] ∪ [1, a)                                  c
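The selection rule of Table 2, including the wrap-around case [c, d] ∪ [1, a), can be sketched with a binary search. This helper function and its name are assumptions for illustration, not from the patent:

```python
import bisect

def target_reference_number(reference_numbers, target):
    """reference_numbers: reference content resource numbers, sorted ascending.

    Returns the reference number with the greatest correlation to `target`:
    the largest reference number not exceeding it, or the largest reference
    number overall when the target precedes all of them (the wrap-around case).
    """
    i = bisect.bisect_right(reference_numbers, target) - 1
    return reference_numbers[i]  # i == -1 wraps around to the last element
```

With reference numbers [a, b, c], a target in [a, b) yields a, a target in [b, c) yields b, and a target in [c, d] ∪ [1, a) yields c, matching Table 2.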
In some embodiments of the present invention, in order to further improve the query efficiency of the content resources, the numbers of the plurality of reference content resources in the content resource routing table of each cache server are arranged in ascending order, wherein the differences between the numbers of every two adjacent reference content resources form a geometric sequence. For example, the common ratio of the geometric sequence is 2.
In some embodiments, the number of a reference content resource in the content resource routing table of a cache server is the sum of a fixed value and a power of 2. Specifically, the fixed value may be the maximum number of the content resources cached in that cache server; that is, the number of a reference content resource in the content resource routing table is the sum of the maximum number of the content resources cached in the cache server and a power of 2. For example, the number of the i-th reference content resource is the sum of the maximum number of the content resources cached in the cache server and 2^(i-1), where i is a positive integer. It should be noted that, when the number of a reference content resource in the content resource routing table is the sum of a fixed value and a power of 2, the power-of-2 term corresponding to the maximum reference content resource number should be smaller than the caching upper limit of the identification information of the content resources in the cache server cluster. For example, if the caching upper limit of the identification information of the content resources in the cache server cluster is 2^48, the maximum reference content resource number is f + 2^47, with 2^47 < 2^48, where f is the fixed value.
It should be noted that, in order to reduce the storage pressure of the content resource routing table on the cache server and increase the query speed, the content resource routing table may contain only part of the reference content resources described above. For example, the number of the 1st reference content resource is the sum of the maximum number of the content resources cached in the cache server and 2^k.
Wherein k is a positive integer greater than 1, and the value of 2^k is smaller than the quantity of content resource numbers contained in one cache server. For example, if each cache server contains 10^12 content resource numbers, the value of 2^k is less than 10^12.
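A hedged sketch of constructing such a routing table follows; the function name and the locate_server callback are assumptions for illustration, not elements of the patent:

```python
def build_routing_table(max_cached_number, first_power, upper_limit, locate_server):
    """Build routing-table entries (reference_number, cache_server).

    Reference numbers are max_cached_number + 2**p for p = first_power,
    first_power + 1, ..., kept while 2**p stays below upper_limit.
    locate_server maps a content resource number to the server caching it.
    """
    table = []
    p = first_power
    while 2 ** p < upper_limit:
        ref = max_cached_number + 2 ** p
        table.append((ref, locate_server(ref)))
        p += 1
    return table
```

With first_power = 0 and upper_limit = 2^48 this would reproduce a 48-entry table such as the one described for N8 below; with first_power = k > 1 it yields the truncated table discussed in the preceding paragraphs.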
As an example, suppose the caching upper limit of the identification information of the content resources in the cache server cluster is 2^48, the cache server cluster contains 2×10^14 content resources numbered 1 to 2×10^14, and the common ratio is 2. Cache server N1 caches the content resources numbered 1 to 10^12, cache server N2 caches the content resources numbered 10^12+1 to 2×10^12, ..., and cache server N200 caches the content resources numbered 199×10^12+1 to 2×10^14.
The content resource routing table of cache server N8 is shown in Table 3. The maximum number of the content resources cached in N8 is 8×10^12. The content resource routing table contains the numbers of 48 reference content resources, and the power-of-2 term of the 48th reference content resource number is 2^47, which is less than 2^48. Each reference content resource number is the sum of the maximum cached content resource number 8×10^12 and a power of 2; read from top to bottom, the powers of 2 in Table 3 are 2^0, 2^1, ..., 2^47.
TABLE 3
Number of reference content resource    Corresponding cache server
8×10^12+2^0                             N9
8×10^12+2^1                             N9
…                                       …
8×10^12+2^40                            N10
8×10^12+2^41                            N11
8×10^12+2^42                            N13
8×10^12+2^43                            N17
8×10^12+2^44                            N26
8×10^12+2^45                            N44
8×10^12+2^46                            N79
8×10^12+2^47                            N149
Table 4 shows how a new cache server to be queried is selected. As shown in Table 4, when the number of the target content resource is 50×10^12, the number of the target content resource falls in the range [8×10^12+2^45, 8×10^12+2^46). In this case, the number of the target reference content resource can be determined, according to the content resource routing table of Table 3, to be 8×10^12+2^45. The cache server corresponding to this target reference content resource number is N44, and at this time N44 may be taken as the new cache server to be queried.
TABLE 4
Value range of target content resource number    Number of target reference content resource    New cache server to be queried
[8×10^12+2^45, 8×10^12+2^46)                     8×10^12+2^45                                   N44
When N44 is taken as the new cache server to be queried, the content resource routing table of N44 is shown in Table 5. The maximum number of the content resources cached in N44 is 44×10^12. Since 50×10^12 falls in [44×10^12+2^42, 44×10^12+2^43), it is determined from the content resource routing table of N44 that, for the target content resource numbered 50×10^12, the number of the target reference content resource is 44×10^12+2^42; accordingly, the cache server N49 corresponding to the reference content resource number 44×10^12+2^42 is determined as the new cache server to be queried.
TABLE 5
Number of reference content resource    Corresponding cache server
44×10^12+2^0                            N45
44×10^12+2^1                            N45
…                                       …
44×10^12+2^40                           N46
44×10^12+2^41                           N47
44×10^12+2^42                           N49
44×10^12+2^43                           N53
44×10^12+2^44                           N62
44×10^12+2^45                           N80
44×10^12+2^46                           N115
44×10^12+2^47                           N185
The search for the target content resource numbered 50×10^12 continues in the new cache server to be queried, N49. The content resource routing table of N49 is shown in Table 6. Since 50×10^12 falls in [49×10^12+2^39, 49×10^12+2^40), it can be determined from Table 6 that, for the target content resource numbered 50×10^12, the number of the target reference content resource is 49×10^12+2^39, and the cache server N50 corresponding to the reference content resource number 49×10^12+2^39 can be taken as the new cache server to be queried. Since N50 contains a content resource number identical to the number of the target content resource, it is determined that the target content resource is queried in N50.
TABLE 6
Number of reference content resource    Corresponding cache server
49×10^12+2^0                            N50
…                                       …
49×10^12+2^39                           N50
49×10^12+2^40                           N51
…                                       …
49×10^12+2^47                           N190
When the number of each reference content resource in the content resource routing table of each cache server in the cache server cluster is the sum of the maximum number of the content resources cached in that cache server and a power of 2, then, if there are n cache servers in the cluster, the query path length for the number of the target content resource is not more than log2(n). For example, in the above embodiment, to query the target content resource numbered 50×10^12 starting from the cache server to be queried N8, only 3 further hops, to the cache servers N44, N49 and N50, are needed before the target content resource number is found.
Compared with the prior art that the cache servers in the cache server cluster are queried one by one, the content resource processing method in the embodiment greatly shortens the query path length and correspondingly improves the query efficiency of the content resources.
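The log2(n) bound and the N8 to N44 to N49 to N50 walk can be checked numerically with a simplified simulation. This sketch assumes the target lies ahead of the starting server so that wrap-around can be ignored, and all names are illustrative, not from the patent:

```python
def query_hops(start_server, target_number, per_server):
    """Count routing hops when server s caches the per_server consecutive
    numbers (s-1)*per_server+1 .. s*per_server, and its routing table contains
    reference numbers max_cached_number + 2**i."""
    def owner(num):
        return (num - 1) // per_server + 1

    current, hops = start_server, 0
    while owner(target_number) != current:
        max_num = current * per_server          # largest number cached here
        step = 1                                # find the largest 2**i such that
        while max_num + 2 * step <= target_number:  # max_num + 2**i <= target
            step *= 2
        current = owner(max_num + step)         # hop to the matching server
        hops += 1
    return hops
```

For the example above, query_hops(8, 50 * 10**12, 10**12) returns 3, the hops to N44, N49 and N50, which is well within log2(200) ≈ 7.6.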
In some embodiments of the present invention, in order to balance the disaster tolerance of the cache servers in the cache server cluster against the utilization rate of the cache servers, the identification information of the content resources stored by any cache server in the cache server cluster is backed up in the next cache server of that cache server.
As an example, the numbers of the content resources numbered 1 to 10 are stored in cache server N1; the next cache server N2 of N1 stores the numbers of the content resources numbered 11 to 20 and, in addition, backs up the numbers of the content resources numbered 1 to 10.
It should be noted that, in some specific embodiments, in addition to backing up the identification information of the content resources stored by any cache server in the next cache server of that cache server, the content resources themselves stored by that cache server may also be backed up in its next cache server. By backing up the numbers of the content resources stored by each cache server in its next cache server, every cache server in the cache server cluster serves both as a primary cache server and as a backup cache server, so that the cache servers can be fully utilized. Moreover, if any cache server in the cache server cluster fails, the content resource numbers and content resources backed up in its next cache server can be activated, reducing the risk caused by a cache server failing or being removed. Here, a primary cache server is a cache server used for querying content resources under normal conditions; a backup cache server is a cache server that backs up content resource numbers and content resources, and is used for querying content resources when the corresponding primary cache server fails or is removed.
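The backup scheme described above, in which every cache server's content resource numbers are also held by its next cache server on the closed ring, can be sketched as follows (names and data structures are illustrative assumptions):

```python
def next_server(i, n):
    """Next cache server on the closed ring of servers numbered 1..n."""
    return i % n + 1

def build_backups(assignments, n):
    """assignments: dict server -> set of content resource numbers it caches.

    Returns dict server -> set of numbers it backs up on behalf of its
    previous server on the ring."""
    backups = {}
    for server, numbers in assignments.items():
        backups[next_server(server, n)] = set(numbers)
    return backups
```

For the example in the text, N1 holds numbers 1 to 10, and N2 both holds numbers 11 to 20 and backs up numbers 1 to 10; the last server's numbers wrap around to be backed up on N1.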
In some embodiments of the present invention, when a cache server in a cache server cluster fails or needs to be removed, the processing method 100 of a content resource of this embodiment further includes S150 and S160:
s150, receiving a removal instruction which indicates to remove the cache server or indicates the failure of the cache server.
The cache server indicated by the removal instruction and other cache servers included in the cache server cluster except the cache server indicated by the removal instruction are arranged in sequence.
In some embodiments of the present invention, the execution subject of S150 and S160 may be a processor of a cache server cluster.
In some embodiments of the present invention, when it is detected that a cache server in the cache server cluster fails, or that a cache server needs to be moved out of the cache server cluster to which it belongs, a removal instruction is sent to the execution subject of S150.
In some embodiments of the invention, the removal instruction comprises the identification information of the cache server indicated by the removal instruction, such as the number of the cache server indicated by the removal instruction.
And S160, responding to the removal instruction, replacing the mapping relation between the identification information of the reference content resource in the content resource routing table of all the cache servers and the cache server indicated by the removal instruction with the mapping relation between the identification information of the reference content resource and the next cache server of the cache server indicated by the removal instruction.
In some embodiments of the present invention, after the cache server indicated by the removal instruction is removed from the cache server cluster to which it belongs, the next cache server of the previous cache server of the removed cache server is changed to the next cache server of the removed cache server.
As an example, fig. 3 is a schematic diagram of a cache server cluster in another example according to an embodiment of the present invention. As shown in fig. 3, after the cache server N9 indicated by the removal instruction is removed, the next cache server of N8, the previous cache server of N9, changes from the original N9 to N10.
In some embodiments of the present invention, the content resource routing table of the cache server other than the cache server indicated by the removal instruction may include: and referring to the corresponding relation between the content resource and the cache server indicated by the removal instruction.
After the removal instruction is received, the correspondence between a reference content resource and the cache server indicated by the removal instruction is replaced with the correspondence between that reference content resource and the next cache server of the cache server indicated by the removal instruction.
As an example, as shown in fig. 3, after cache server N9 is removed, N9 in the content resource routing table of cache server N8 in Table 3 needs to be replaced with N10. Part of the updated content resource routing table of N8 is shown in Table 7; for example, the correspondence between the reference content resource number 8×10^12+2^1 and cache server N9 is replaced with the correspondence between the reference content resource number 8×10^12+2^1 and cache server N10.
TABLE 7
Number of reference content resource    Corresponding cache server
8×10^12+2^0                             N10
8×10^12+2^1                             N10
…                                       …
8×10^12+2^40                            N10
8×10^12+2^41                            N11
…                                       …
8×10^12+2^47                            N149
In some embodiments of the present invention, the specific implementation manner of S160 further includes:
and moving the identification information of the content resource cached in the cache server indicated by the removing instruction to the next cache server of the cache server. For example, the identification information of the content resource cached by the cache server indicated by the removal instruction and the content resource cached by the cache server are moved to the next cache server of the cache server.
In some embodiments of the present invention, if the content resource of any cache server in the cache server cluster is backed up at the next cache server of any cache server, the specific implementation manner of S160 further includes:
and backing up the identification information of the content resource of the last cache server belonging to the cache server indicated by the removal instruction and the next cache server of the cache server indicated by the removal instruction. For example, the content resource of the previous cache server and the identification information of the content resource of the previous cache server are backed up with the next cache server. Through S150 and S160, when the cache server indicated by the removal instruction is removed from the cache server cluster, the mapping relationship between the identification information of all the reference content resources in the content resource routing table of the cache server and the cache server indicated by the removal instruction can be replaced in quick response to the removal instruction. The influence on the search of the content resource caused by removing the cache server or the cache server fault is reduced.
It should be noted that the execution sequence between S150 and S160 and the other steps except S150 and S160 in the processing method 100 for content resources is not limited herein.
It should also be noted that, when the number of each reference content resource in each content resource routing table in the cache server cluster is the sum of the maximum number of the content resources cached in the cache server and a power of 2, then, if there are n cache servers in the cluster, when the removal instruction indicates that 1 cache server is removed, at most log2(n) correspondences between reference content resource numbers and cache servers need to be updated in total across all content resource routing tables of the cluster.
In some embodiments of the present invention, when a new cache server needs to be added to the cache server cluster, for example when the content resources to be cached in a certain area exceed the maximum storage load of all the cache servers contained in the cache server cluster of that area, so that the cache server cluster needs to be expanded, the content resource processing method 100 of this embodiment further includes S150' and S160':
s150', an adding instruction for adding the cache server in the cache server cluster is received.
In some embodiments of the present invention, the execution subject of S150' may be a processor of a cache server cluster.
S160', responding to the adding instruction, adding the cache server indicated by the adding instruction in the cache server cluster, and constructing a content resource routing table of the cache server indicated by the adding instruction.
The content resource routing table of the cache server indicated by the adding instruction comprises a corresponding relation between the reference content resource and the original cache server, and the original cache server is other cache servers which are contained in the cache server cluster and are except the cache server indicated by the adding instruction.
As an example, fig. 4 is a schematic diagram of a cache server cluster in another example of an embodiment of the present invention. As shown in fig. 4, if the cache server indicated by the add instruction is N201, the cache server N201 can be added between N1 and N200 of the original cache server cluster to form a new closed logical ring.
It should be noted that, in the newly constructed logical ring, the previous cache server of the cache server N201 indicated by the add instruction is N200, and the next cache server of N201 is N1.
In some embodiments of the present invention, when the cache server indicated by the add instruction is added in order to expand the cache capacity of the cache server cluster, before the content resource routing table of the cache server indicated by the add instruction is constructed in S160', the method further includes:
the number of the new content asset is stored in the cache server indicated by the add instruction. For example, the new content resource and the number of the content resource are stored in the cache server indicated by the add instruction.
In some embodiments, the characteristics of the content resource routing table constructed in S160' are the same as those of the content resource routing table in S140.
As an example, suppose the original cache server cluster includes 200 cache servers N1 to N200, each cache server caches 10^12 content resource numbers, and the common ratio is 2. Cache server N1 caches the content resources numbered 1 to 10^12, cache server N2 caches the content resources numbered 10^12+1 to 2×10^12, ..., and cache server N200 caches the content resources numbered 199×10^12+1 to 200×10^12.
After the cache server N201 indicated by the add instruction is added to the cache server cluster, if the cluster includes newly added content resources, their numbers are 200×10^12+1 to 201×10^12. The content resource routing table of the cache server indicated by the add instruction is shown in Table 8.
TABLE 8
[Table 8 lists the reference content resource numbers 201×10^12 + 2^0 through 201×10^12 + 2^47 of cache server N201 and their corresponding cache servers.]
It should be noted that, since the next cache server of N201 is N1, the numbering wraps around after the maximum content resource number in N201: the position after that maximum number corresponds to 0, one less than the content resource number 1 in N1.
In other embodiments of the present invention, when the cache server indicated by the add instruction is added in order to relieve the cache pressure of the original cache servers, before S160', the processing method 100 of the content resource further includes:
first, the identification information of one or more content resources of the adjacent cache server of the cache server indicated by the adding instruction is moved to the cache server indicated by the adding instruction.
In some embodiments, the neighboring cache server of the cache server indicated by the add instruction is a next cache server of the cache server indicated by the add instruction or a previous cache server of the cache server indicated by the add instruction.
As an example, as shown in fig. 4, the next cache server of the cache server N201 indicated by the add instruction is N1, and the previous cache server of N201 is N200.
In some embodiments, the identification information of the part of the content resource of the next cache server may be moved to the cache server indicated by the adding instruction, and/or the identification information of the part of the content resource of the last cache server may be moved to the cache server indicated by the adding instruction.
Secondly, in the content resource routing tables of all cache servers, the correspondences between the identification information of the one or more moved content resources and the neighboring cache server are replaced with correspondences between that identification information and the cache server indicated by the add instruction.
In some embodiments, if the identification information of a content resource is moved from the neighboring server to the cache server indicated by the add instruction, then, in the content resource routing tables of all cache servers, the cache server corresponding to the identification information of that content resource is replaced, from the neighboring server, with the cache server indicated by the add instruction.
As an example, when the content resource numbered 199×10^12+2^8 is moved from cache server N200 to the cache server N201 indicated by the add instruction, the correspondence, in all content resource routing tables, between the reference content resource number 199×10^12+2^8 and cache server N200 is replaced with the correspondence between the reference content resource number 199×10^12+2^8 and cache server N201.
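The two steps above, moving identification information to the added server and then remapping the routing entries for the moved numbers, can be sketched as follows; all structures and names are illustrative assumptions, not part of the patent:

```python
def add_server(new_server, neighbor, moved_numbers, assignments, routing_tables):
    """Move `moved_numbers` from `neighbor` to `new_server`, then redirect
    every routing-table entry whose reference number was moved.

    assignments: dict server -> set of content resource numbers it caches
    routing_tables: dict server -> list of (reference_number, server) pairs
    """
    moved = set(moved_numbers)
    assignments[neighbor] -= moved
    assignments.setdefault(new_server, set()).update(moved)
    for table in routing_tables.values():
        for i, (ref, server) in enumerate(table):
            if server == neighbor and ref in moved:
                table[i] = (ref, new_server)   # remap only the moved numbers
```

Only entries whose reference number was actually moved change owner, matching the example of the number 199×10^12+2^8 moving from N200 to N201 while the rest of N200's entries stay put.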
It should be noted that, when the number of each reference content resource in the content resource routing table of each cache server in the cluster is the sum of the maximum number of the content resources cached in that cache server and a power of 2, then, if there are n cache servers in the cluster, when 1 cache server is added to the cluster, at most log2(n) correspondences between reference content resource numbers and cache servers need to be updated in total across all content resource routing tables of the cluster.
In some embodiments of the present invention, if the content resource of any cache server in the cache server cluster is backed up at the next cache server of any cache server, after S160', the processing method 100 of the content resource further includes:
and backing up the identification information of the content resource of the last cache server of the cache server indicated by the adding instruction to the cache server indicated by the adding instruction.
And backing up the identification information of the content resource in the cache server indicated by the adding instruction to the next cache server of the cache server indicated by the adding instruction.
As an example, the identification information of the content resource originally backed up by the next cache server of the cache server indicated by the addition instruction is deleted, and the identification information of the content resource of the cache server indicated by the addition instruction is backed up to the next cache server.
It should be noted that, the execution sequence between S150 'and S160' and other steps in the processing method 100 for content resources except S150 'and S160' is not limited herein.
It should be further noted that, through S150 'and S160', the cache server cluster can be quickly and conveniently expanded. Furthermore, the 5G network capacity expansion is simplified.
Based on the same inventive concept, an embodiment of the present invention provides a processing apparatus for content resources. Fig. 5 is a schematic structural diagram of a processing apparatus for providing a content resource according to an embodiment of the present invention. As shown in fig. 5, the processing apparatus 500 of the content resource includes 510 to 540:
a first determining module 510, configured to determine a cache server to be queried in the cache server cluster.
The cache server cluster comprises a plurality of cache servers, and each cache server stores identification information of a plurality of content resources.
The determining module 520 is configured to determine whether the cache server to be queried includes the identification information of the content resource that is the same as the identification information of the target content resource.
The second determining module 530 is configured to determine to query the target content resource in the cache server to be queried if the cache server to be queried contains the identification information of the content resource that is the same as the identification information of the target content resource.
A third determining module 540, configured to: if the cache server to be queried does not contain identification information of a content resource identical to the identification information of the target content resource, determine the cache server corresponding to the identification information of the target reference content resource in the content resource routing table of the cache server to be queried as a new cache server to be queried, and judge whether the new cache server to be queried contains the identification information of the target content resource, repeating until a cache server to be queried contains the identification information of the target content resource, and determine to query the target content resource in that cache server.
The content resource routing table contains the corresponding relation between the identification information of the reference content resource and the cache server, and the identification information of the target reference content resource is the identification information of the reference content resource with the maximum correlation degree with the identification information of the target content resource.
In some embodiments of the invention, the identification information of the reference content resource is a number of the reference content resource.
The numbers of the plurality of reference content resources contained in the content resource routing table are arranged in ascending order, and the differences between the numbers of every two adjacent reference content resources form a geometric sequence.
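Reference numbers with geometrically growing gaps can be built much like a Chord-style finger table. The following Python sketch is illustrative only: `build_routing_table` and `server_for` are hypothetical names, and a ratio of 2 is assumed, since the patent does not fix a particular ratio.

```python
def build_routing_table(own_number, total, server_for, ratio=2):
    """Build a routing table whose reference-resource numbers start at
    own_number + 1 and whose gaps grow geometrically (1, ratio, ratio**2, ...),
    wrapping around a number space of size `total`.

    server_for(n) returns the cache server responsible for resource number n
    (a hypothetical callback, standing in for the cluster's assignment rule).
    """
    table = {}
    offset = 1
    while offset < total:
        number = (own_number + offset) % total
        table[number] = server_for(number)  # reference number -> cache server
        offset *= ratio  # gaps between entries: 1, ratio, ratio**2, ...
    return table
```

With `total=16` and `ratio=2`, a server numbered 0 gets references 1, 2, 4, and 8; their successive differences 1, 2, 4 form the geometric sequence described above, so the table stays small while still covering the whole number space.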
In some embodiments of the present invention, a plurality of cache servers included in a cache server cluster are sequentially connected to form a closed loop, and identification information of a content resource stored in any cache server in the cache server cluster is backed up in a next cache server of any cache server.
In some embodiments of the present invention, the identification information of a content resource is the number of the content resource, all cache servers in the cache server cluster are arranged in sequence, and the numbers of the content resources stored on any cache server are all smaller than the numbers of the content resources stored on the next cache server of that cache server.
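A minimal sketch of this numbering scheme, assuming each server is summarized by the largest resource number it stores; the helper name `assign_server` and the bound list are illustrative, not from the patent.

```python
import bisect

def assign_server(resource_number, server_bounds):
    """Return (primary_index, backup_index) for a resource number.

    server_bounds[i] is the largest resource number stored on server i and the
    list is sorted ascending, so every number on server i is smaller than every
    number on server i + 1.  The backup copy lives on the next server of the
    closed ring, wrapping from the last server back to the first.
    """
    primary = bisect.bisect_left(server_bounds, resource_number)
    primary %= len(server_bounds)                 # wrap numbers past the last bound
    backup = (primary + 1) % len(server_bounds)   # successor on the ring holds the copy
    return primary, backup
```

For bounds `[9, 19, 29]`, resource 15 lands on server 1 with its backup on server 2, while resource 29 lands on the last server and its backup wraps around to server 0, matching the closed-loop backup described above.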
In some embodiments of the present invention, if the number of a reference content resource is not greater than the number of the target content resource, and the number of the next reference content resource is greater than the number of the target content resource, then the correlation between the number of that reference content resource and the number of the target content resource is the largest.
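Under this rule, the maximum-correlation entry is simply the largest reference number that does not exceed the target. A sketch of one lookup step follows; the wrap-around fallback when every reference exceeds the target is an assumption motivated by the closed ring, not stated explicitly in the text.

```python
def next_hop(target_number, routing_table):
    """Pick the routing-table entry with maximum correlation to the target:
    the largest reference number not greater than the target number.  If every
    reference number is greater, fall back to the largest reference, treating
    the number space as a closed ring (an assumption).
    """
    candidates = [n for n in routing_table if n <= target_number]
    best = max(candidates) if candidates else max(routing_table)
    return routing_table[best]
```

For a table with references 1, 2, 4, and 8, a target numbered 6 routes via reference 4, because 4 is not greater than 6 while the next reference, 8, is.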
In some embodiments of the present invention, the processing apparatus 500 of the content resource further includes:
the first receiving module is used for receiving a removal instruction which indicates that the cache server is removed or indicates that the cache server fails.
The cache server indicated by the removal instruction and other cache servers included in the cache server cluster except the cache server indicated by the removal instruction are arranged in sequence.
And the replacing module is used for responding to the removing instruction, and replacing the mapping relation between the identification information of the reference content resource in the content resource routing table of all the cache servers and the cache server indicated by the removing instruction with the mapping relation between the identification information of the reference content resource and the next cache server of the cache server indicated by the removing instruction.
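The remapping performed by the replacing module can be sketched as follows; the names are illustrative, and each routing table is modeled as a dict from reference number to server.

```python
def remove_server(routing_tables, removed, successor):
    """On removal or failure of `removed`, rewrite every routing-table entry
    that pointed at it so the entry points at `successor`, the next cache
    server of the removed server in the cluster's ordering."""
    for table in routing_tables:
        for ref_number, server in table.items():
            if server == removed:
                table[ref_number] = successor
```

If the cluster also backs up each server's identification information on its successor, as in the closed-ring embodiment above, queries remapped this way continue to resolve without rebuilding the tables from scratch.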
In some embodiments of the present invention, the processing apparatus 500 of the content resource further includes:
and the second receiving module is used for receiving an adding instruction for indicating that the cache server is added in the cache server cluster.
And the adding module is used for responding to the adding instruction, adding the cache server indicated by the adding instruction in the cache server cluster, and constructing a content resource routing table of the cache server indicated by the adding instruction.
The content resource routing table of the cache server indicated by the adding instruction comprises a corresponding relation between the reference content resource and the original cache server, and the original cache server is other cache servers which are contained in the cache server cluster and are except the cache server indicated by the adding instruction.
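A sketch of the addition step, modeling the cluster as a dict from server name to routing table; `build_table` is a hypothetical callback that maps the list of original servers to the new server's routing table.

```python
def add_server(cluster, new_name, build_table):
    """Add the server named `new_name` to the cluster and build its routing
    table from the original servers only, i.e. every server already present
    in the cluster before the addition."""
    original = [name for name in cluster if name != new_name]
    cluster[new_name] = build_table(original)
    return cluster
```

Only the new server's table is constructed here; the existing servers' tables are untouched, which matches the description that the added table references the original cache servers.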
Fig. 6 is a block diagram of an exemplary hardware architecture of a processing device of a content resource in an embodiment of the present invention.
As shown in fig. 6, the processing device 600 of the content resource includes an input device 601, an input interface 602, a central processor 603, a memory 604, an output interface 605, and an output device 606. The input interface 602, the central processor 603, the memory 604, and the output interface 605 are connected to each other via a bus 610, and the input device 601 and the output device 606 are connected to the bus 610 through the input interface 602 and the output interface 605, respectively, and thereby to the other components of the processing device 600.
Specifically, the input device 601 receives input information from the outside, and transmits the input information to the central processor 603 through the input interface 602; the central processor 603 processes input information based on computer-executable instructions stored in the memory 604 to generate output information, stores the output information temporarily or permanently in the memory 604, and then transmits the output information to the output device 606 through the output interface 605; the output device 606 outputs the output information to the outside of the processing device 600 of the content resource for use by the user.
That is, the processing device of the content resource shown in fig. 6 may also be implemented as: a memory storing computer-executable instructions; and a processor which, when executing the computer-executable instructions, may implement the content resource processing methods and apparatuses described in connection with figs. 1 to 5.
In one embodiment, the processing device 600 of the content resource shown in fig. 6 may be implemented as a device that may include: a memory for storing a program; and the processor is used for operating the program stored in the memory so as to execute the processing method of the content resource of the embodiment of the invention.
It is to be understood that the invention is not limited to the specific configurations and instrumentalities described above and shown in the drawings. For brevity, detailed descriptions of known methods are omitted here. Several specific steps are described and shown as examples in the above embodiments; however, the method processes of the present invention are not limited to those steps, and those skilled in the art may make changes, modifications, and additions, or reorder the steps, within the spirit of the invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

Claims (15)

1. A method for processing content resources, the method comprising:
determining a cache server to be queried in a cache server cluster, wherein the cache server cluster comprises a plurality of cache servers, and each cache server stores identification information of a plurality of content resources;
judging whether the cache server to be inquired contains the identification information of the content resource which is the same as the identification information of the target content resource;
if the cache server to be queried contains the identification information of the content resource which is the same as the identification information of the target content resource, determining to query the target content resource in the cache server to be queried;
if the cache server to be queried does not contain the identification information of the content resource which is the same as the identification information of the target content resource, determining a cache server corresponding to the identification information of the target reference content resource in a content resource routing table of the cache server to be queried as a new cache server to be queried, and judging whether the new cache server to be queried contains the identification information of the target content resource until the new cache server to be queried contains the identification information of the target content resource, determining to query the target content resource in the cache server to be queried,
the content resource routing table includes a correspondence between identification information of a reference content resource and the cache server, and the identification information of the target reference content resource is the identification information of the reference content resource with the highest degree of correlation with the identification information of the target content resource.
2. The method of claim 1,
the identification information of the reference content resource is the number of the reference content resource,
the numbers of the reference content resources contained in the content resource routing table are arranged in ascending order, and the differences between the numbers of every two adjacent reference content resources form a geometric sequence.
3. The method of claim 1,
the cache server cluster comprises a plurality of cache servers which are sequentially connected to form a closed loop, and the identification information of the content resource stored by any cache server in the cache server cluster is backed up in the next cache server of any cache server.
4. The method of claim 1, wherein
The identification information of the content resource is the number of the content resource, all the cache servers in the cache server cluster are arranged in sequence,
the numbers of the plurality of content resources stored by any cache server are all smaller than the numbers of the plurality of content resources stored in the next cache server of any cache server.
5. The method of claim 2,
and if the number of the reference content resource is not greater than the number of the target content resource and the number of the next reference content resource of the reference content resource is greater than the number of the target content resource, the correlation degree between the number of the reference content resource and the number of the target content resource is the largest.
6. The method according to any one of claims 1 to 5, wherein the method for processing the content resource further comprises:
receiving a removal instruction indicating removal of a cache server or indicating a fault of the cache server, wherein the cache server indicated by the removal instruction is sequentially arranged with other cache servers contained in the cache server cluster except the cache server indicated by the removal instruction;
and in response to the removal instruction, replacing the mapping relationship between the identification information of the reference content resource in the content resource routing tables of all the cache servers and the cache server indicated by the removal instruction with the mapping relationship between the identification information of the reference content resource and the next cache server of the cache servers indicated by the removal instruction.
7. The method according to any one of claims 1 to 5, wherein the processing method of the content resource further comprises:
receiving an adding instruction for indicating to add a cache server in the cache server cluster;
in response to the adding instruction, adding the cache server indicated by the adding instruction in the cache server cluster, and constructing a content resource routing table of the cache server indicated by the adding instruction,
the content resource routing table of the cache server indicated by the adding instruction includes a corresponding relation between a reference content resource and an original cache server, and the original cache server is another cache server included in the cache server cluster except the cache server indicated by the adding instruction.
8. An apparatus for processing a content resource, the apparatus comprising:
the system comprises a first determining module, a second determining module and a searching module, wherein the first determining module is used for determining a cache server to be queried in a cache server cluster, the cache server cluster comprises a plurality of cache servers, and each cache server stores identification information of a plurality of content resources;
the judging module is used for judging whether the cache server to be inquired contains the identification information of the content resource which is the same as the identification information of the target content resource;
a second determining module, configured to determine to query the target content resource in the cache server to be queried if the cache server to be queried contains the identifier information of the content resource that is the same as the identifier information of the target content resource;
a third determining module, configured to determine, if the cache server to be queried does not include the identifier information of the content resource that is the same as the identifier information of the target content resource, a cache server corresponding to the identifier information of the target reference content resource in a content resource routing table of the cache server to be queried as a new cache server to be queried, and determine whether the new cache server to be queried includes the identifier information of the target content resource until the new cache server to be queried includes the identifier information of the target content resource, and determine to query the target content resource in the cache server to be queried,
the content resource routing table includes a correspondence between identification information of a reference content resource and the cache server, and the identification information of the target reference content resource is the identification information of the reference content resource with the highest degree of correlation with the identification information of the target content resource.
9. The apparatus of claim 8,
the identification information of the reference content resource is the number of the reference content resource,
the numbers of the reference content resources contained in the content resource routing table are arranged in ascending order, and the differences between the numbers of every two adjacent reference content resources form a geometric sequence.
10. The apparatus of claim 8, wherein
The identification information of the content resource is the number of the content resource, all the cache servers in the cache server cluster are arranged in sequence,
the numbers of the plurality of content resources stored by any cache server are all smaller than the numbers of the plurality of content resources stored in the next cache server of any cache server.
11. The apparatus of claim 9,
and if the number of the reference content resource is not greater than the number of the target content resource and the number of the next reference content resource of the reference content resource is greater than the number of the target content resource, the correlation degree between the number of the reference content resource and the number of the target content resource is the largest.
12. The apparatus according to any one of claims 8 to 11, wherein the processing apparatus of the content resource further comprises:
a first receiving module, configured to receive a removal instruction indicating to remove a cache server or indicating that a cache server fails, where the cache server indicated by the removal instruction is sequentially arranged with other cache servers included in the cache server cluster except the cache server indicated by the removal instruction;
and a replacing module, configured to replace, in response to the removal instruction, the mapping relationship between the identification information of the reference content resource in the content resource routing tables of all the cache servers and the cache server indicated by the removal instruction with the mapping relationship between the identification information of the reference content resource and a next cache server of the cache servers indicated by the removal instruction.
13. The apparatus according to any one of claims 8 to 11, wherein the processing apparatus of the content resource further comprises:
a second receiving module, configured to receive an addition instruction indicating that a cache server is added in the cache server cluster;
an adding module, configured to add, in response to the adding instruction, the cache server indicated by the adding instruction in the cache server cluster, and construct a content resource routing table of the cache server indicated by the adding instruction,
the content resource routing table of the cache server indicated by the adding instruction includes a corresponding relation between a reference content resource and an original cache server, and the original cache server is another cache server included in the cache server cluster except the cache server indicated by the adding instruction.
14. A device for processing a content resource, the device comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to perform the processing method of the content resource according to any one of claims 1 to 7.
15. A computer storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the method of processing a content resource of any of claims 1-7.
CN201810796471.6A 2018-07-19 2018-07-19 Content resource processing method, device, equipment and medium Active CN110740148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810796471.6A CN110740148B (en) 2018-07-19 2018-07-19 Content resource processing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810796471.6A CN110740148B (en) 2018-07-19 2018-07-19 Content resource processing method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN110740148A CN110740148A (en) 2020-01-31
CN110740148B true CN110740148B (en) 2022-03-29

Family

ID=69235688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810796471.6A Active CN110740148B (en) 2018-07-19 2018-07-19 Content resource processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN110740148B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102394880A (en) * 2011-10-31 2012-03-28 北京蓝汛通信技术有限责任公司 Method and device for processing jump response in content delivery network
CN102884775A (en) * 2012-06-25 2013-01-16 华为技术有限公司 Method and apparatus for accessing resources
CN104471528A (en) * 2012-04-23 2015-03-25 谷歌公司 Associating a file type with an application in a network storage service
CN105187308A (en) * 2015-05-07 2015-12-23 深圳市迪菲特科技股份有限公司 Resource node searching method and device
CN106789142A (en) * 2015-11-25 2017-05-31 北京国双科技有限公司 The method and apparatus of resource distribution

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8620894B2 (en) * 2010-12-21 2013-12-31 Microsoft Corporation Searching files
US9514132B2 (en) * 2012-01-31 2016-12-06 International Business Machines Corporation Secure data migration in a dispersed storage network
US10353911B2 (en) * 2016-06-19 2019-07-16 Data.World, Inc. Computerized tools to discover, form, and analyze dataset interrelations among a system of networked collaborative datasets


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Research on Content Scheduling Technology in Content Networks"; Yang Lei; China Doctoral Dissertations Full-text Database, Information Science and Technology Series; 20160715; I139-20 *
"Research on Cache-Aware Routing Mechanisms"; Hu Xiaoyan; China Doctoral Dissertations Full-text Database, Information Science and Technology Series; 20160815; I139-5 *

Also Published As

Publication number Publication date
CN110740148A (en) 2020-01-31

Similar Documents

Publication Publication Date Title
CN103019960B (en) Distributed caching method and system
CN105512320B (en) User ranking obtaining method and device and server
CN107391033B (en) Data migration method and device, computing equipment and computer storage medium
CN109254981B (en) Data management method and device of distributed cache system
CN102571936B (en) Method, device and system for searching data
CN108512726A (en) A kind of method and apparatus of data monitoring
CN114185678A (en) Data storage method, device, equipment and storage medium
CN111639140A (en) Distributed data storage method, device and storage medium
CN110740148B (en) Content resource processing method, device, equipment and medium
CN106777285B (en) Method and device for clustering labels of user communication consumption data
CN104166649A (en) Caching method and device for search engine
CN112162987A (en) Data processing method, device, equipment and storage medium
CN111046004B (en) Data file storage method, device, equipment and storage medium
CN107547605A (en) A kind of message reading/writing method and node device based on node queue
CN115361295B (en) TOPSIS-based resource backup method, device, equipment and medium
CN114756385B (en) Elastic distributed training method under deep learning scene
CN114791912A (en) Data processing method, system, electronic equipment and storage medium
CN105025042A (en) Method of determining data information, system and proxy servers
CN116126928A (en) Information searching system based on variable fingerprint cuckoo filter
CN108600354B (en) System response time fluctuation suppression method and system
CN106202303A (en) A kind of Chord routing table compression method and optimization file search method
CN108809916B (en) Service processing method and device
CN111782346A (en) Distributed transaction global ID generation method and device based on same-library mode
CN114490744B (en) Data caching method, storage medium and electronic device
CN113296687A (en) Data processing method, device, computing equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant