WO2013128840A1 - Traffic control device, traffic control system, traffic control method, and traffic control program - Google Patents

Traffic control device, traffic control system, traffic control method, and traffic control program

Info

Publication number
WO2013128840A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
request
cache server
cache
traffic control
Prior art date
Application number
PCT/JP2013/000883
Other languages
English (en)
Japanese (ja)
Inventor
徹 大須賀
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Publication of WO2013128840A1 publication Critical patent/WO2013128840A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23103 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion using load balancing strategies, e.g. by placing or distributing content on different disks, different memories or different servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2405 Monitoring of the internal components or processes of the server, e.g. server load

Definitions

  • the present invention relates to a traffic control apparatus, method, and program for reducing traffic on a communication network (hereinafter simply referred to as “network”).
  • a central site such as a data center performs distribution in response to content distribution requests from many users
  • a large amount of traffic occurs on all the routes from the central site to each user's user terminal.
  • this large amount of traffic places a high load on a wide range of networks and adversely affects other communications.
  • delivery using a central site as described above uses a best-effort network such as the Internet over a long distance, so it is prone to significant decreases in throughput and increases in delay time due to congestion of the communication path.
  • in a hierarchical content distribution system, a distributed site such as a proxy server or an edge server capable of storing part of the content is installed in the vicinity of the user terminals, so that traffic for content with a high access frequency can be concentrated in the vicinity of the user terminals.
  • "access frequency": the frequency of accessing the distributed site.
  • to achieve this, the traffic needs to pass through the proxy server or edge server without requiring explicit user settings.
  • in a related technique for solving the above-mentioned problem, a traffic control device through which traffic passes, such as a router or a gateway, executes a transparent proxy function that automatically distributes traffic to a cache server or the like based on information included in a content request from a user.
  • the method disclosed in Patent Document 1 classifies traffic and distributes it based on information included in the packet header, such as communication protocol information, a source IP (Internet Protocol) address, a source port number, a destination IP address, and a destination port number.
  • the method as disclosed in Patent Document 2 identifies traffic contents in detail by using Deep Packet Inspection technology that inspects the payload data that is the contents of the packet, and distributes the traffic.
  • a filter device such as a router or a gateway determines unnecessary contents and does not distribute such unnecessary contents to the cache server, thereby reducing the load on the cache server or the cache controller.
  • since the above-described traffic control device distributes content based only on information included in a content request from a user, it can make only a rough prediction as to whether the content is cacheable or whether it belongs to a popular site. Therefore, the traffic control device may be unable to effectively reduce the load on the cache server or the cache controller.
  • the present invention has been made in view of the above-described problems, and its main purpose is to provide a traffic control device, a traffic control system, a traffic control method, and a traffic control program for reducing the load on a cache server and a cache controller.
  • a traffic control device according to the present invention includes: a transfer necessity determination unit that specifies the degree of cache effect obtained when a request for content is transferred to a cache server that temporarily caches the content, based on the popularity of the content specified from the number of times the request is received; and a distribution determination unit that determines, based on the degree of the cache effect, whether to transfer the content request to the cache server or to an origin server that stores the content.
  • a traffic control system according to the present invention includes: a cache server that temporarily caches content; an origin server that stores the content; a terminal that transmits a content request; and a traffic control device. The traffic control device includes: a communication unit that receives the content request from the terminal; a transfer necessity determination unit that specifies the degree of cache effect obtained when the request for content received by the communication unit is transferred to the cache server, based on the popularity of the content specified from the number of times the request is received; and a distribution determination unit that determines, based on the degree of the cache effect, whether to transfer the content request to the cache server or to the origin server.
  • a traffic control method according to the present invention includes: specifying the degree of cache effect obtained when a content request is transferred to a cache server that temporarily caches the content, based on the popularity of the content specified from the number of times the request is received; and determining, based on the degree of the cache effect, whether to transfer the content request to the cache server or to an origin server that stores the content.
  • a traffic control program according to the present invention causes a computer to execute: a process of specifying the degree of cache effect obtained when a content request is transferred to a cache server that temporarily caches the content, based on the popularity of the content specified from the number of times the request is received; and a process of determining, based on the degree of the cache effect, whether to transfer the content request to the cache server or to an origin server that stores the content.
  • FIG. 1 is a block diagram showing a configuration of a traffic control device 1 according to a first embodiment for carrying out the present invention.
  • the traffic control device 1 includes a distribution determination unit 111 and a transfer necessity determination unit 112.
  • the traffic control device 1 specifies the degree of cache effect obtained when a content request is transferred to a cache server that caches the content, based on the popularity of the content specified from the number of times the content request is received.
  • the degree of the cache effect indicates the (expected) extent to which traffic flowing in the network between the origin server and the traffic control device is reduced by having the cache server cache the content. The traffic control device 1 then determines, based on the degree of the cache effect, whether to transfer the content request to the cache server or to the origin server that stores the content.
  • since the traffic control device 1 transfers to the cache server only requests for the small portion of content that is worth caching because of its high cache effect, the load on the cache server and the cache controller can be effectively reduced.
  • the transfer necessity determination unit 112 acquires the local popularity of each content, which is specified from the number of requests for the content passing through the traffic control device 1.
  • the popularity may be the frequency of requests for each content, or the rank of each content in terms of request frequency or request count.
  • the transfer necessity determination unit 112 determines whether or not it is necessary to transfer a request for the content to the cache server based on the popularity of the local content.
  • the transfer necessity determination unit 112 may compare the local popularity of the content with one or more predetermined thresholds and specify the degree of the cache effect based on the relationship with the thresholds.
  • when the popularity is represented by the rank of each content in terms of the number of requests, the content may be classified based on this rank. For example, content ranked in the top 10 may be placed in a top class, content ranked 10th to 100th in a middle class, and content ranked below 100th in a long-tail class. The transfer necessity determination unit 112 may then specify the degree of the cache effect for each class: for example, the top class as a "high cache effect" class, the middle class as a "moderate (medium) cache effect" class, and the long-tail class as a "low cache effect" class.
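  • As an illustrative sketch only (not part of the patent text), the rank-based classification described above could be implemented as follows in Python; the class names and rank boundaries mirror the example in the text, while the function name and data layout are assumptions:

        # Hedged sketch: divide contents into cache-effect classes by popularity rank,
        # using the example ranges from the text (top 10 / 10th-100th / below 100th).
        def classify_by_rank(request_counts):
            """request_counts: dict mapping content ID -> observed number of requests."""
            ranked = sorted(request_counts, key=request_counts.get, reverse=True)
            classes = {}
            for rank, content_id in enumerate(ranked, start=1):
                if rank <= 10:
                    classes[content_id] = "top"        # treated as "high cache effect"
                elif rank <= 100:
                    classes[content_id] = "middle"     # "moderate cache effect"
                else:
                    classes[content_id] = "long_tail"  # "low cache effect"
            return classes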
  • the present invention is not limited to the above-described method.
  • alternatively, the transfer necessity determination unit 112 may compare the popularity with one or more predetermined thresholds and specify the degree of the cache effect based on the relationship with the thresholds.
  • for example, content with a popularity (request frequency) of 1000 or more requests per hour may be placed in a "high cache effect" content class, content with 100 to 1000 requests per hour in a "moderate cache effect" class, and content with fewer than 100 requests per hour in a "low cache effect" class. The degree of the cache effect corresponding to each class is then specified for each content request.
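  • As an illustrative sketch only (not taken from the patent), the threshold-based classification above could look like the following; the numeric thresholds are the example values from the text, and the function name is an assumption:

        # Hedged sketch: map a content's request frequency (requests per hour)
        # to a cache-effect degree using the example thresholds from the text.
        def cache_effect_degree(requests_per_hour):
            if requests_per_hour >= 1000:
                return "high"      # "high cache effect" content class
            if requests_per_hour >= 100:
                return "moderate"  # "moderate cache effect" content class
            return "low"           # "low cache effect" content class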
  • the cache server is a server that temporarily caches content
  • the origin server is a server that stores content.
  • the distribution determination unit 111 may forward requests for content with a high cache effect to the cache server, and forward requests for content with a moderate cache effect and requests for content with a low cache effect to the origin server.
  • alternatively, the distribution determination unit 111 may forward requests for content with a high cache effect and requests for content with a moderate cache effect to the cache server, and forward requests for content with a low cache effect to the origin server. That is, the distribution determination unit 111 may determine a single transfer destination for each degree of cache effect.
  • the processing of the distribution determination unit 111 is not limited to the method described above.
  • for example, the distribution determination unit 111 may transfer requests for the class with a high cache effect to the cache server, probabilistically transfer requests for the class with a moderate cache effect to either the cache server or the origin server, and transfer requests for the class with a low cache effect to the origin server. That is, the distribution determination unit 111 may adaptively determine the transfer destination for each degree of cache effect, and may decide, based on a predetermined probability, whether to transfer a request for content of the moderate cache effect class to the cache server or the origin server.
  • the administrator of the traffic control device 1 may set in advance at least one of the probability of transferring a request for content of the moderate cache effect class to the cache server and the probability of transferring it to the origin server. Based on the probability set by the administrator, the distribution determination unit 111 determines whether to transfer such a request to the cache server or to the origin server.
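  • A minimal sketch of the probabilistic distribution decision described above, assuming the cache-effect degrees "high", "moderate", and "low" from the earlier classification; the default probability of 0.5 is a placeholder for the administrator-set value:

        import random

        # Hedged sketch: choose a transfer destination per cache-effect degree.
        # Requests in the "moderate" class are forwarded to the cache server with
        # probability p_cache_for_moderate (an administrator-set value).
        def choose_destination(cache_effect, p_cache_for_moderate=0.5):
            if cache_effect == "high":
                return "cache_server"
            if cache_effect == "moderate":
                if random.random() < p_cache_for_moderate:
                    return "cache_server"
                return "origin_server"
            return "origin_server"  # low cache effect (long-tail content)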
  • FIG. 2 is a diagram showing a hardware configuration of the traffic control device 1 and its peripheral devices in the first embodiment of the present invention.
  • the traffic control device 1 includes a CPU (Central Processing Unit) 191, a communication I/F 192 (communication interface 192) for network connection, a memory 193, and a storage device 194, such as a hard disk, for storing programs.
  • the traffic control device 1 is connected to an input device 195 and an output device 196 via a bus 197.
  • the CPU 191 runs the operating system and controls the entire traffic control device 1 according to the first embodiment of the present invention. The CPU 191 also reads a program and data into the memory 193 from a recording medium mounted on, for example, a drive device, and in accordance with the program the traffic control device 1 according to the first embodiment executes various processes as the distribution determination unit 111 and the transfer necessity determination unit 112.
  • the storage device 194 is, for example, an optical disk, a flexible disk, a magneto-optical disk, an external hard disk, or a semiconductor memory, and records a computer program in a computer-readable manner.
  • the computer program may be downloaded from an external computer (not shown) connected to the communication network.
  • the input device 195 is realized by, for example, a mouse, a keyboard, a built-in key button, etc., and is used for an input operation.
  • the input device 195 is not limited to a mouse, a keyboard, and a built-in key button, but may be a touch panel, an accelerometer, a gyro sensor, a camera, or the like.
  • the output device 196 is realized by a display, for example, and is used for confirming the output.
  • the block diagram (FIG. 1) used in the description of the first embodiment shows functional unit blocks instead of hardware unit configurations. These functional blocks are realized by the hardware configuration shown in FIG.
  • the means for realizing each unit included in the traffic control device 1 is not particularly limited. That is, the traffic control device 1 may be realized by a single physically integrated device, or by two or more physically separated devices connected by wire or wirelessly.
  • the CPU 191 may read a computer program recorded in the storage device 194 and operate as the distribution determination unit 111 and the transfer necessity determination unit 112 according to the program.
  • a recording medium (or storage medium) in which the above-described program code is recorded may be supplied to the traffic control device 1, and the traffic control device 1 may read and execute the program code stored in the recording medium. That is, the present invention also includes a recording medium 198 that temporarily or non-temporarily stores software (information processing program) to be executed by the traffic control device 1 according to the first embodiment.
  • FIG. 3 is a flowchart showing an outline of the operation of the traffic control device 1 according to the first embodiment of the present invention.
  • the transfer necessity determination unit 112 specifies the degree of the cache effect obtained when a content request is transferred to a cache server that temporarily caches the content, based on the popularity of each content specified from the number of times the content request is received (step S101).
  • the distribution determination unit 111 determines whether to transfer the content request to the cache server that temporarily caches the content or to the origin server that stores the content, based on the degree of the cache effect specified by the transfer necessity determination unit 112 (step S102).
  • as described above, the traffic control device 1 specifies the degree of cache effect obtained when a content request is transferred to a cache server that caches the content, based on the popularity of the content specified from the number of times the content request is received. The traffic control device 1 then determines, based on the degree of the cache effect, whether to transfer the content request to the cache server or to the origin server that stores the content.
  • the traffic control device 1 transfers a request to the cache server for some contents that should be cached because the cache effect is high.
  • the traffic control device 1 transfers content requests to the origin server for long-tail content with a low cache effect, that is, the large amount of content with a low access frequency.
  • the traffic control device 1 in the first embodiment can effectively reduce the load on the cache server and the cache controller.
  • FIG. 4 is a block diagram showing the configuration of the traffic control device 1 according to the second embodiment of the present invention.
  • the traffic control device 1 according to the second embodiment includes a distribution determination unit 211, a transfer necessity determination unit 112, and a cache server load estimation unit 213.
  • the traffic control device 1 in the second embodiment estimates the load on the cache server based on the number of requests distributed to the cache server, the number of sessions to which the cache server is distributing data, and the like. Then, the traffic control device 1 determines whether or not the cache server can afford to receive the request. The traffic control device 1 determines whether to distribute the request to the cache server or to the origin server based on the determination result and the degree of the cache effect when the content request is transferred.
  • the traffic control device 1 transfers a request to the cache server for a part of the cacheable content having a high cache effect.
  • the traffic control device 1 transfers content requests to the origin server for long-tail content with a low cache effect, that is, the large amount of content with a low access frequency.
  • the traffic control device 1 adaptively changes the request transfer destination based on the load of the cache server. That is, the traffic control device 1 in the second embodiment can effectively reduce the load on the cache server and the cache controller.
  • the cache server load estimation unit 213 acquires from the cache server, for example, information on the number of sessions in which the cache server is currently distributing data and on its transmission/reception throughput. The cache server load estimation unit 213 may then compare the acquired information with a predetermined threshold or with the maximum value it can take, and determine, based on the relationship between these values, whether the cache server is in a high-load state.
  • the cache server load estimation unit 213 may also compare the acquired session-count and transmission/reception throughput information with a predetermined threshold or with the maximum value it can take, and derive the load ratio of the cache server, for example as a percentage, based on the relationship between these values.
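  • The following is a sketch, under assumed capacity figures, of how a load ratio and a high-load flag could be derived from the session-count and throughput information mentioned above; the maximum values and the 80% threshold are placeholders, not values from the patent:

        # Hedged sketch: derive a cache-server load ratio (percent) by comparing
        # observed values with assumed maximum capacities, then flag high load
        # against a threshold.
        def estimate_load_percent(active_sessions, throughput_bps,
                                  max_sessions=10_000, max_throughput_bps=10e9):
            session_ratio = active_sessions / max_sessions
            throughput_ratio = throughput_bps / max_throughput_bps
            return 100.0 * max(session_ratio, throughput_ratio)

        def is_high_load(load_percent, threshold_percent=80.0):
            return load_percent >= threshold_percent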
  • alternatively, for a request that the distribution determination unit 211 has decided to distribute to the cache server, the cache server load estimation unit 213 may obtain from the distribution determination unit 211 information indicating the time at which the decision was made and the size of the requested content. The cache server load estimation unit 213 may then estimate, from the acquired information, the number of sessions in which the cache server is currently distributing data and its transmission/reception throughput, and may determine whether the cache server is in a high-load state or derive its load ratio by the same method as described above.
  • the cache server load estimation unit 213 passes to the distribution determination unit 211 the load information it has acquired, or load information indicating the load of the cache server that it has estimated.
  • the distribution determination unit 211 receives load information indicating the load on the cache server from the cache server load estimation unit 213. Then, the distribution determination unit 211 determines whether to transfer the above request to the cache server or the origin server based on the received information.
  • for example, the distribution determination unit 211 may transfer requests for content with a high cache effect and requests for content with a moderate cache effect to the cache server, and transfer requests for content with a low cache effect to the origin server. Alternatively, the distribution determination unit 211 may transfer only requests for the class with a high cache effect to the cache server, and transfer requests for the class with a moderate cache effect and requests for the class with a low cache effect to the origin server. That is, the distribution determination unit 211 may determine a single transfer destination for each degree of cache effect based on the load information of the cache server.
  • the processing of the distribution determination unit 211 is not limited to the method described above.
  • for example, the distribution determination unit 211 may transfer requests for the class with a high cache effect to the cache server, probabilistically transfer requests for the class with a moderate cache effect to either the cache server or the origin server, and transfer requests for the class with a low cache effect to the origin server. That is, the distribution determination unit 211 may adaptively determine the transfer destination for each degree of cache effect based on the load information of the cache server, and may decide, based on a predetermined probability, whether to transfer a request for content of the moderate cache effect class to the cache server or the origin server.
  • the administrator of the traffic control device 1 may set in advance at least one of the probability of transferring a request for content of the moderate cache effect class to the cache server and the probability of transferring it to the origin server. Based on the probability set by the administrator, the distribution determination unit 211 determines whether to transfer such a request to the cache server or to the origin server.
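  • As one possible reading of the second embodiment (a sketch under assumptions, not the patent's definitive policy), the distribution determination unit 211 could combine the cache-effect class with the estimated load as follows; the 90% cutoff and the default probability are placeholders:

        import random

        # Hedged sketch: combine the cache-effect class with the estimated
        # cache-server load to pick a transfer destination.
        def decide_destination(cache_effect, load_percent, p_cache_for_moderate=0.5):
            if cache_effect == "low":
                return "origin_server"      # long-tail content bypasses the cache
            if load_percent >= 90.0:
                return "origin_server"      # assumed cutoff: no spare cache capacity
            if cache_effect == "high":
                return "cache_server"
            # moderate cache effect: forward probabilistically while load permits
            if random.random() < p_cache_for_moderate:
                return "cache_server"
            return "origin_server"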
  • FIG. 5 is a flowchart showing an outline of the operation of the traffic control device 1 according to the second embodiment.
  • the transfer necessity determination unit 112 specifies the degree of the cache effect obtained when a content request is transferred to a cache server that temporarily caches the content, based on the popularity of each content specified from the number of times the content request is received (step S101).
  • the cache server load estimation unit 213 acquires load information indicating the load on the cache server, or estimates the load on the cache server. This process may be performed at regular time intervals or each time the traffic control device 1 receives a content request. The cache server load estimation unit 213 then passes the acquired load information, or load information indicating the estimated load of the cache server, to the distribution determination unit 211 (step S201).
  • the distribution determination unit 211 receives load information from the cache server load estimation unit 213, and receives information indicating the degree of the cache effect of the content corresponding to the content request from the transfer necessity determination unit 112. Based on the received information, the distribution determination unit 211 determines whether to transfer the content request to the cache server that temporarily caches the content or the origin server that stores the content (step S202).
  • the traffic control device 1 in the second embodiment estimates the load on the cache server based on the number of requests distributed to the cache server, the number of sessions to which the cache server is distributing data, and the like. Then, the traffic control device 1 determines whether or not the cache server can afford to receive the request. The traffic control device 1 determines whether to distribute the request to the cache server or to the origin server based on the determination result and the degree of the cache effect when the content request is transferred.
  • the traffic control device 1 transfers a request to the cache server for some contents that should be cached due to the high cache effect.
  • the traffic control device 1 transfers content requests to the origin server for long-tail content with a low cache effect, that is, the large amount of content with a low access frequency.
  • the traffic control device 1 adaptively changes the request transfer destination based on the load of the cache server. As a result, the traffic control device 1 in the second embodiment can effectively reduce the load on the cache server and the cache controller.
  • FIG. 6 is a block diagram showing the configuration of the traffic control system S in the third embodiment.
  • the traffic control system S includes a traffic control device 1, a cache server 2, an origin server 3, and a user terminal 4.
  • the traffic control device 1, the cache server 2, the origin server 3 and the user terminal 4 are communicably connected via a network 5.
  • the traffic control device 1 is, for example, a router or a gateway, and relays content requests from the user terminal 4 to the origin server 3 and the content traffic corresponding to those requests from the origin server 3 to the user terminal 4. The traffic control device 1 transfers part or all of this traffic to the cache server 2 in accordance with preset conditions and algorithms.
  • the cache server 2 is communicably connected to the traffic control device 1 and stores part or all of the content traffic transferred from the traffic control device 1 as a cache in a storage unit (not shown) provided in the cache server 2.
  • the cache server 2 reads the content corresponding to the request from the cache stored in the cache server 2 and distributes the cache.
  • the origin server 3 exists on the network 5 and stores content requested from the user terminal 4.
  • the user terminal 4 is, for example, a PC (Personal Computer) or a mobile terminal, and acquires content via the network 5. The user terminal 4 then outputs the acquired content so that the user can listen to or browse it.
  • the traffic control device 1 includes a request distribution unit 11, a cache server transmission / reception unit 12, an origin server transmission / reception unit 13, a user terminal transmission / reception unit 14, a local popularity measurement unit 15, and a local popularity storage unit 16. .
  • the request distribution unit 11 includes a distribution determination unit 311, a transfer necessity determination unit 312, a cache server load estimation unit 313, and a requested content identification unit 314.
  • when the request distribution unit 11 receives a content request from the user terminal 4 via the user terminal transmission / reception unit 14, it transfers the request to the cache server transmission / reception unit 12 or to the origin server transmission / reception unit 13 according to the determination result of the distribution determination unit 311.
  • when the request distribution unit 11 receives the content corresponding to the above content request from the cache server 2 via the cache server transmission / reception unit 12, or from the origin server 3 via the origin server transmission / reception unit 13, it performs the following operation: the request distribution unit 11 transfers the content to the requesting user terminal 4 via the user terminal transmission / reception unit 14.
  • the request distribution unit 11 executes the following operation when the content request is transferred to the origin server 3. That is, the request distribution unit 11 transmits the above-mentioned content distributed from the origin server 3 via the origin server transmission / reception unit 13 to the cache server 2 via the cache server transmission / reception unit 12.
  • ID information: Identity Information
  • URL: Uniform Resource Locator
  • IP address: Internet Protocol Address
  • the requested content identification unit 314 is not limited to the above-described identification method, and may identify content based on information included in the request by using, for example, URL normalization technology.
  • the URL normalization technique is a technique for generating ID information that uniquely identifies content by processing a URL included in a request.
  • the requested content identification unit 314 may generate a unique ID by converting the URL included in the request into a hash value, and identify the content based on this ID information.
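  • A small sketch of the kind of URL normalization and hashing mentioned above; the specific normalization rules (dropping the query and fragment, lowercasing the scheme and host) and the use of SHA-256 are assumptions for illustration:

        import hashlib
        from urllib.parse import urlsplit, urlunsplit

        # Hedged sketch: normalize a URL and hash it to obtain a fixed-length
        # content ID, so that variant URLs for the same content map to one ID.
        def content_id_from_url(url):
            parts = urlsplit(url)
            normalized = urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                                     parts.path or "/", "", ""))
            return hashlib.sha256(normalized.encode("utf-8")).hexdigest()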
  • the requested content identification unit 314 passes the ID information of the identified content to the transfer necessity determination unit 312.
  • the transfer necessity determination unit 312 determines whether or not it is necessary to transfer the content request to the cache server 2 based on the acquired information indicating the degree of popularity.
  • the transfer necessity determination unit 312 may compare the local popularity of the content with one or more predetermined thresholds, and specify the degree of the cache effect based on the relationship with the thresholds.
  • the specific determination method is the same method as the processing performed by the transfer necessity determination unit 112 in the first embodiment.
  • the transfer necessity determination unit 312 passes information indicating the specified level of the cache effect to the distribution determination unit 311.
  • the cache server load estimation unit 313 acquires, for example, information on the number of sessions in which the cache server 2 is currently distributing data and on its transmission/reception throughput, from the cache server 2 or from the cache server transmission / reception unit 12. The cache server load estimation unit 313 may then compare the acquired information with a predetermined threshold or with the maximum value it can take, and determine, based on the relationship between these values, whether the cache server 2 is in a high-load state.
  • the cache server load estimation unit 313 may also compare the acquired session-count and transmission/reception throughput information with a predetermined threshold or with the maximum value it can take, and derive the load ratio of the cache server 2, for example as a percentage, based on the relationship between these values.
  • alternatively, for a request that the distribution determination unit 311 has decided to distribute to the cache server 2, the cache server load estimation unit 313 may obtain from the distribution determination unit 311 information indicating the time at which the decision was made and the size of the requested content. The cache server load estimation unit 313 may then estimate, from the acquired information, the number of sessions in which the cache server 2 is currently distributing data and its transmission/reception throughput, and, based on the estimated information, may determine whether the cache server 2 is in a high-load state or derive its load ratio by the same method as described above.
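  • The estimation from decision times and content sizes could be sketched as follows; the assumed per-session delivery rate used to guess each transfer's duration is a placeholder, not a value given in the patent:

        import time

        # Hedged sketch: estimate how many transfers the cache server is probably
        # still serving, using only the records kept by the distribution
        # determination unit (decision time and requested content size).
        ASSUMED_RATE_BPS = 10e6  # assumed average delivery rate per session

        def estimate_active_sessions(forwarded, now=None):
            """forwarded: iterable of (decision_time_epoch_s, content_size_bytes)."""
            now = time.time() if now is None else now
            active = 0
            for decided_at, size_bytes in forwarded:
                expected_duration = (size_bytes * 8) / ASSUMED_RATE_BPS
                if decided_at + expected_duration > now:
                    active += 1
            return active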
  • the cache server load estimation unit 313 may pass to the distribution determination unit 311 the load information it has acquired, or load information indicating the load of the cache server 2 that it has estimated.
  • the information indicating the necessity of transfer is information indicating the degree of cache effect.
  • the distribution determination unit 311 receives load information indicating the load on the cache server 2 from the cache server load estimation unit 313. Then, the distribution determination unit 311 determines whether to transfer the above request to the cache server 2 or the origin server 3 based on the received information.
  • the method by which the distribution determination unit 311 determines the transfer destination of the request is the same as the determination method in the distribution determination unit 211 in the second embodiment.
  • the cache server transmission / reception unit 12 executes the following operation when receiving the content request from the cache server 2. That is, the cache server transmission / reception unit 12 passes the content request to the request distribution unit 11. This request is transmitted to the origin server 3 via the request distribution unit 11 and the origin server transmission / reception unit 13.
  • when the cache server transmission / reception unit 12 receives, as a response to the request, the content corresponding to the request from the origin server 3 via the request distribution unit 11, it transmits the content to the cache server 2.
  • when the cache server 2 holds the content requested by the user, the cache server transmission / reception unit 12 performs the following operation on receiving the requested content from the cache server 2: it transmits the content to the user terminal 4 via the request distribution unit 11.
  • the cache server transmission / reception unit 12 may store, for the requests and content exchanged with the cache server 2, distribution load information indicating the time at which the data was transmitted or received and the data distribution throughput. When the distribution load information is requested by the cache server load estimation unit 313, the cache server transmission / reception unit 12 passes the distribution load information to the cache server load estimation unit 313.
  • when the user terminal transmission / reception unit 14 receives a content request from the user terminal 4, it stores, as log information of the request, information such as the time at which the request was sent (the time at which the content was requested) and information specifying the content corresponding to the request (information specifying the requested content). When log information is requested by the local popularity measurement unit 15, the user terminal transmission / reception unit 14 transmits the stored log information to the local popularity measurement unit 15.
  • the information for specifying the content may be, for example, a content URL.
  • the information specifying the content is not limited to the above-described URL; it may be, for example, an ID assigned to each content by the service site that distributes the content, or a combination of that ID and the URL indicating the service site.
  • the local popularity measurement unit 15 acquires from the user terminal transmission / reception unit 14 request log information indicating the time at which each request was sent (the time at which the content was requested), information identifying the content corresponding to the request (information identifying the requested content), and the like. This process may be performed at regular time intervals or each time the user terminal transmission / reception unit 14 receives a content request from the user terminal 4.
  • the local popularity measuring unit 15 derives the popularity of local content in traffic via the traffic control device 1 based on the acquired log information.
  • the local popularity measurement unit 15 stores the derived popularity in the local popularity storage unit 16.
  • the local popularity measurement unit 15 may derive the above-mentioned popularity by, for example, counting the number of requests for each content within a certain period up to the current time and dividing that number by the length of the period.
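  • A minimal sketch of this windowed derivation, assuming request log entries of the form (timestamp, content ID); the one-hour window is a placeholder:

        from collections import Counter

        # Hedged sketch: derive local popularity as requests per second over a
        # fixed window ending at `now`, from (timestamp, content_id) log entries.
        def popularity_from_log(log_entries, now, window_seconds=3600):
            counts = Counter(content_id for ts, content_id in log_entries
                             if now - window_seconds <= ts <= now)
            return {content_id: n / window_seconds
                    for content_id, n in counts.items()}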
  • alternatively, the local popularity measurement unit 15 may multiply the number of requests M(n) newly generated for the content in a predetermined period by a predetermined smoothing coefficient α (where α is a value from 0.0 to 1.0), and obtain a new local popularity E(n) of the content by adding the obtained value to the current local popularity E(n-1). That is, the local popularity measurement unit 15 may obtain the local popularity E(n) of the content by the exponential moving average calculation represented by equation (1).
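  • Equation (1) is not reproduced in this text; assuming it is the standard exponential moving average implied by the description above, the update could be sketched as:

        # Hedged sketch: exponential-moving-average update of local popularity,
        # assuming the standard form E(n) = alpha * M(n) + (1 - alpha) * E(n-1),
        # where M(n) is the number of new requests in the latest interval.
        def update_popularity(previous_e, new_request_count, alpha=0.2):
            return alpha * new_request_count + (1.0 - alpha) * previous_e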
  • the local popularity measurement unit 15 may also obtain the local popularity E(n) of the content based on an evaluation index used in a content replacement algorithm such as LRU or LFU.
  • LRU: Least Recently Used
  • LFU: Least Frequently Used
  • the local popularity storage unit 16 stores the popularity E (n) of local content derived by the local popularity measurement unit 15 in association with information for identifying content.
  • the cache server 2 is connected to the traffic control device 1 so as to be communicable.
  • the cache server 2 determines whether the content is stored as a cache in a storage unit included in its own device (cache server 2).
  • when the cache server 2 stores the above-described content as a cache in the storage unit, the cache server 2 acquires the cache from the storage unit and distributes it to the user terminal 4 via the traffic control device 1. On the other hand, if the above-described content is not stored in the storage unit as a cache, the cache server 2 acquires the content from the origin server 3 or from another cache server 2 connected to another traffic control device, and distributes the acquired content to the user terminal 4 via the traffic control device 1.
  • when the cache server 2 distributes content to the user terminal 4 and the content is not yet stored as a cache in the storage unit, the cache server 2 stores the content as a cache according to a preset condition or algorithm.
  • when the cache server 2 stores content as a cache in the storage unit and the free capacity of the storage unit is insufficient (below a predetermined threshold), the cache server 2 may execute the following operation: it may delete a predetermined cache from the caches in the storage unit according to a preset condition or algorithm, and then store the content as a cache.
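  • As an illustrative sketch of one possible "preset condition or algorithm" for making room, the cache store could evict entries in least-recently-used order; the capacity figure and class design are assumptions:

        from collections import OrderedDict

        # Hedged sketch: a capacity-bounded cache store that evicts the
        # least-recently-used entries when space runs short. Sizes are in bytes.
        class BoundedCache:
            def __init__(self, capacity_bytes=100 * 1024 * 1024):
                self.capacity = capacity_bytes
                self.used = 0
                self.entries = OrderedDict()  # content_id -> content bytes

            def get(self, content_id):
                if content_id in self.entries:
                    self.entries.move_to_end(content_id)  # mark as recently used
                    return self.entries[content_id]
                return None

            def put(self, content_id, content):
                if content_id in self.entries:
                    self.used -= len(self.entries.pop(content_id))
                while self.used + len(content) > self.capacity and self.entries:
                    _, evicted = self.entries.popitem(last=False)  # evict LRU entry
                    self.used -= len(evicted)
                self.entries[content_id] = content
                self.used += len(content)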
  • the cache server 2 may store, as necessary, distribution load information indicating the time at which data was transmitted or received, the distribution throughput of the data, and the like, for the requests and content exchanged with the cache server transmission / reception unit 12. When the distribution load information is requested by the cache server load estimation unit 313, the cache server 2 may transmit the distribution load information to the cache server load estimation unit 313.
  • <Origin server 3> One or more origin servers 3 are connected to the network 5, and each origin server 3 stores content.
  • when the origin server 3 receives a content request from the user terminal 4 or the cache server 2 via the traffic control device 1, the origin server 3 acquires the content corresponding to the request from a storage unit (not shown) included in its own device (origin server 3). Then, the origin server 3 distributes the acquired content to the user terminal 4 or the cache server 2 described above.
  • the user terminal 4 transmits the above-described content request to the origin server 3 via the traffic control device 1 when a request regarding the use of the content such as trial listening or browsing of the content is made by the user.
  • the content request may be data, a message, a signal, or the like requesting transmission of the content.
  • the request transmitted by the user terminal 4 is transferred to the origin server 3 or the cache server 2 via the traffic control device 1.
  • the content is distributed from the origin server 3 or the cache server 2 to the user terminal 4 via the traffic control device 1.
  • the user terminal 4 receives the above content and outputs the content.
  • the user terminal 4 transmits a content request to the user terminal transmission / reception unit 14 when a request regarding the use of the content such as trial listening or browsing of the content is received from the user (step S301).
  • the user terminal transmission / reception unit 14 receives a request for content requested by the user from the user terminal 4, and passes the received request to the request distribution unit 11.
  • the requested content identification unit 314 receives from the request distribution unit 11 the content request that the request distribution unit 11 received from the user terminal transmission / reception unit 14. The requested content identification unit 314 then identifies the content corresponding to the received request (step S302).
  • the transfer necessity determination unit 312 acquires the above-described content identification result from the requested content identification unit 314. Then, the transfer necessity determination unit 312 reads the local popularity degree of the content corresponding to the request in the traffic passing through the traffic control device 1 from the local popularity degree storage unit 16 based on the acquired content identification result. (Step S303).
  • the transfer necessity determination unit 312 identifies the degree of effect (cache effect degree) when caching the content request based on the read local popularity of the content (step S304).
  • the cache server load estimation unit 313 acquires load information indicating the load of the cache server 2 or estimates the load of the cache server 2 (step S305). This process may be performed at regular time intervals, or may be performed each time the request distribution unit 11 receives a content request from the user terminal transmission / reception unit 14. Then, the cache server load estimation unit 313 determines the degree of allowance for the cache server 2 that can respond to the content request from the user (step S306).
  • the distribution determination unit 311 acquires from the transfer necessity determination unit 312 information indicating the necessity of transferring the request to the cache server 2, and receives the load information of the cache server 2 from the cache server load estimation unit 313. Based on these pieces of information, the distribution determination unit 311 determines whether to transfer the above request to the cache server 2 or to the origin server 3 (step S307).
  • the request distribution unit 11 passes the request to the origin server transmission / reception unit 13.
  • the origin server transmission / reception unit 13 receives a request for content from the request distribution unit 11, and transmits the received request to the origin server 3 (step S308).
  • the origin server 3 receives the content request from the origin server transmission / reception unit 13 and reads the content corresponding to the received request from the storage unit included in the own device (origin server 3). The origin server 3 transmits the read content to the origin server transmission / reception unit 13 (step S309).
  • the origin server transmission / reception unit 13 receives content from the origin server 3 and passes the received content to the request distribution unit 11.
  • the request distribution unit 11 receives content from the origin server transmission / reception unit 13 and passes the received content to the user terminal transmission / reception unit 14.
  • the user terminal transmission / reception unit 14 receives the content from the request distribution unit 11, and transmits the received content to the user terminal 4 (step S310).
  • the user terminal 4 receives the content from the user terminal transmission / reception unit 14 and outputs the received content (step S311).
  • the request distribution unit 11 passes the request to the cache server transmission / reception unit 12.
  • the cache server transmission / reception unit 12 receives a request for content from the request distribution unit 11, and transmits the received request to the cache server 2 (step S312).
  • the cache server 2 receives a request for content from the cache server transmission / reception unit 12, and determines whether or not the content corresponding to the received request is stored as a cache in a storage unit included in the own device (cache server 2) ( Step S313).
  • if the cache server 2 determines in step S313 that the content is stored as a cache ("Yes" in step S313), the cache server 2 reads the content from the storage unit and transmits it to the cache server transmission / reception unit 12 (step S314).
  • the cache server transmission / reception unit 12 receives content from the cache server 2 and passes the received content to the request distribution unit 11.
  • the request distribution unit 11 receives content from the cache server transmission / reception unit 12 and passes the received content to the user terminal transmission / reception unit 14.
  • the user terminal transmission / reception unit 14 receives the content from the request distribution unit 11, and transmits the received content to the user terminal 4 (step S315).
  • the user terminal 4 receives the content from the user terminal transmission / reception unit 14 and outputs the received content (step S311).
  • step S313 if it is determined in step S313 that the content is not stored as a cache (“No” in step S313), the cache server 2 transmits the received request for the content to the cache server transmission / reception unit 12.
  • the cache server transmission / reception unit 12 receives a request from the cache server 2 and passes the received request to the request distribution unit 11.
  • the request distribution unit 11 receives a request from the cache server transmission / reception unit 12 and passes the received request to the origin server transmission / reception unit 13.
  • the origin server transmission / reception unit 13 receives the request from the request distribution unit 11, and transmits the received request to the origin server 3 (step S316).
  • the origin server 3 receives the content request from the origin server transmission / reception unit 13 and reads the content corresponding to the received request from the storage unit included in the own device (origin server 3).
  • the origin server 3 transmits the read content to the origin server transmission / reception unit 13.
  • the origin server transmission / reception unit 13 receives content from the origin server 3, and passes the received content to the request distribution unit 11.
  • the request distribution unit 11 receives content from the origin server transmission / reception unit 13 and passes the received content to the cache server transmission / reception unit 12.
  • the cache server transmission / reception unit 12 receives the content from the request distribution unit 11, and transmits the received content to the cache server 2 (step S317).
  • the cache server 2 receives the content from the cache server transmission / reception unit 12, temporarily stores the received content, and passes the stored content to the cache server transmission / reception unit 12.
  • the cache server transmission / reception unit 12 receives content from the cache server 2 and passes the received content to the request distribution unit 11.
  • the request distribution unit 11 receives content from the cache server transmission / reception unit 12 and passes the received content to the user terminal transmission / reception unit 14.
  • the user terminal transmission / reception unit 14 receives the content from the request distribution unit 11, and transmits the received content to the user terminal 4 (step S318).
  • the cache server 2 stores the temporarily accumulated content as a cache in a storage unit included in the own device (cache server 2) based on at least one of a preset condition and algorithm.
  • when the storage unit has insufficient capacity, the cache server 2 deletes an unnecessary cache from the caches in the storage unit based on at least one of a preset condition and algorithm, and then stores the content as a cache (step S319).
  • the user terminal 4 receives the content from the user terminal transmission / reception unit 14 and outputs the content (step S311).
  • when the user terminal transmission / reception unit 14 receives a content request from the user terminal, it stores, as log information of the request, the time corresponding to the request and information specifying the content corresponding to the request (step S401).
  • the local popularity measuring unit 15 acquires log information of the request from the user terminal transmission / reception unit 14 at regular time intervals or whenever the user terminal transmission / reception unit 14 receives a content request from the user terminal. Then, the local popularity measuring unit 15 derives the popularity of local content in traffic via the traffic control device 1 based on the log information of the request (step S402). The local popularity measurement unit 15 stores the derived local popularity in the local popularity storage unit 16.
  • the traffic control system S in the third embodiment is based on the local popularity information of each content obtained from a traffic log that has passed through a filter device (traffic control device 1) such as a router or a gateway. It is determined whether or not the request needs to be transferred to the cache server 2. Then, the traffic control system S estimates the load on the cache server 2 from the number of requests distributed to the cache server 2 from a filter device such as a router or gateway, the number of sessions distributed by the cache server 2 and the like. Based on the load of the cache server 2, the traffic control system S specifies a margin for the cache server 2 to accept the request, and determines whether to distribute the content request to the cache server 2 based on the specified result.
  • a filter device such as a router or a gateway
  • the traffic control system S accurately discriminates long tail contents with low access frequency that exist in large quantities other than the popular contents to be cached, and does not distribute these long tail contents to the cache server 2. Thereby, the traffic control system S can reduce the load of the cache server 2 and the cache controller.
  • FIG. 10 is a block diagram showing a configuration of the traffic control system S in the first modification of the third embodiment.
  • the traffic control system S includes a traffic control device 1, a cache server 2, an origin server 3, a user terminal 4, and a router 6.
  • the traffic control device 1, the cache server 2, the origin server 3, the user terminal 4, and the router 6 are connected to be communicable via a network 5.
  • the traffic control system S according to the first modification of the third embodiment differs from the traffic control system S according to the third embodiment in that a router transmission / reception unit 17 is provided instead of the origin server transmission / reception unit 13 and the user terminal transmission / reception unit 14, and in that a router 6 is further provided.
  • the router 6 relays data of the traffic control device 1, the origin server 3, and the user terminal 4. In this embodiment, it is described as “router”, but the router 6 may be a switch.
  • the router transmission / reception unit 17 includes the functions of the origin server transmission / reception unit 13 and the user terminal transmission / reception unit 14 in the third embodiment. Data transmitted from the traffic control device 1 to the origin server 3 or the user terminal 4 passes through the router transmission / reception unit 17 and the router 6.
  • the traffic control system S in the first modification of the third embodiment has the same effects as the traffic control system S in the third embodiment.
  • An example of the effect of the present invention is that the load on the cache server and the cache controller can be reduced.
  • each component in each embodiment of the present invention can be realized not only in hardware but also by a computer and a program.
  • the program is provided by being recorded on a computer-readable recording medium such as a magnetic disk or a semiconductor memory, and is read by the computer when the computer is started up.
  • the read program causes the computer to function as a component in each of the embodiments described above by controlling the operation of the computer.
  • a traffic control device comprising: a transfer necessity determination unit that specifies the degree of cache effect obtained when a content request is transferred to a cache server that temporarily caches the content, based on the popularity of the content specified from the number of times the request is received; and a distribution determination unit that determines, based on the degree of the cache effect, whether to transfer the content request to the cache server or to an origin server that stores the content.
  • The traffic control device further comprises a cache server load estimating unit for identifying the load of the cache server, and the distribution determination unit determines, based on the popularity and the load on the cache server, whether to transfer a request for content to the cache server or to the origin server.
  • (Appendix 4) The traffic control device according to appendix 2 or 3, wherein a probability that a request for content having a predetermined popularity is transferred to the cache server or the origin server is set in advance, and the distribution determination unit determines, according to the probability, whether to transfer the request to the cache server or the origin server when the load of the cache server shows a specific value.
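  • The probabilistic rule in the appendix above can be sketched as follows. This is an illustrative, assumption-laden example rather than text from the patent: the parameter names, the 0.5 probability, and the load value that triggers probabilistic splitting are all placeholders.

```python
import random

def route_with_probability(popularity, load, *, popularity_threshold=10,
                           load_limit=0.8, cache_probability=0.5):
    """Decide 'cache' or 'origin' for one request (illustrative sketch).

    When the estimated cache-server load reaches a specific value
    (load_limit), requests for sufficiently popular content are forwarded
    to the cache server only with the preset probability, so the cache
    server is not pushed past its capacity all at once.
    """
    if popularity < popularity_threshold:
        return "origin"   # long-tail content goes to the origin server
    if load >= load_limit:
        return "cache" if random.random() < cache_probability else "origin"
    return "cache"
```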
  • the traffic control device according to any one of appendices 1 to 4,
  • wherein the transfer necessity determination unit classifies the request into a class based on the popularity, which is specified based on a relationship between the frequency of receiving requests for the content and a threshold, and specifies information indicating the unique cache effect of each class.
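  • The class-based variant described in the appendix above could look like the following sketch. The class boundaries and the cache-effect labels are assumptions made here for illustration; the patent only requires that popularity be compared with thresholds and that each class carry information indicating its cache effect.

```python
# Hypothetical class boundaries (requests observed in a measurement window)
# paired with the cache-effect information attached to each class.
POPULARITY_CLASSES = [
    (100, "high",      "large cache effect"),
    (10,  "medium",    "moderate cache effect"),
    (0,   "long_tail", "little or no cache effect"),
]

def classify_request(request_frequency):
    """Map a content item's request frequency to (class name, cache effect)."""
    for threshold, class_name, cache_effect in POPULARITY_CLASSES:
        if request_frequency >= threshold:
            return class_name, cache_effect
    return "long_tail", "little or no cache effect"   # safety fallback
```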
  • The cache server load estimation unit receives information indicating at least one of the number of sessions distributed by the cache server and the throughput of transmission/reception data of the cache server, and estimates the load of the cache server based on at least one of the number of sessions and the throughput.
  • the traffic control device according to any one of appendices 2 to 4,
  • wherein, for a content request that the distribution determination unit determines to transfer to the cache server, the cache server load estimation unit receives, from the distribution determination unit, information indicating the time when the determination was made and the size of the content, and estimates the load of the cache server by estimating, based on that information, at least one of the number of sessions distributed by the cache server and the throughput of transmission/reception data of the cache server.
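  • The load estimation from decision time and content size described in the appendix above can be sketched as follows. The per-session delivery bitrate is an assumption introduced here so that a session length can be inferred from the content size; the patent itself does not fix such a value.

```python
import time

class CacheLoadEstimatorSketch:
    """Estimate cache-server load from forwarded-request records (illustrative)."""

    def __init__(self, bitrate_bps=5_000_000):
        self.bitrate_bps = bitrate_bps   # assumed per-session delivery rate
        self.records = []                # (decision_time, content_size_bytes)

    def record_forwarded_request(self, decision_time, content_size_bytes):
        # Called for each request the distribution determination unit
        # decided to transfer to the cache server.
        self.records.append((decision_time, content_size_bytes))

    def estimate(self, now=None):
        """Return (estimated active sessions, estimated throughput in bit/s)."""
        now = time.time() if now is None else now
        active = 0
        for decision_time, size_bytes in self.records:
            expected_duration = (size_bytes * 8) / self.bitrate_bps
            if decision_time <= now < decision_time + expected_duration:
                active += 1
        return active, active * self.bitrate_bps
```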
  • A traffic control system comprising: a cache server that temporarily caches content; an origin server that stores the content; a terminal that transmits a request for the content; and a traffic control device, wherein the traffic control device includes: a communication unit that receives the request for the content from the terminal; a transfer necessity determination unit that specifies a degree of cache effect obtained when the request for the content received by the communication unit is transferred to the cache server, based on a popularity of the content specified from the number of times the request is received; and a distribution determination unit that determines, based on the degree of cache effect, whether to transfer the request for the content to the cache server or to the origin server.
  • The traffic control system further includes a cache server load estimating unit for identifying the load of the cache server, and the distribution determination unit determines, based on the popularity and the load on the cache server, whether to transfer a request for content to the cache server or to the origin server.
  • (Appendix 11) The traffic control system according to appendix 9 or 10, wherein a probability that a request for content having a predetermined popularity will be transferred to the cache server or the origin server is set in advance, and, when the load of the cache server shows a specific value, the distribution determination unit determines, according to the probability, whether to transfer the request to the cache server or the origin server.
  • the traffic control system according to any one of appendices 8 to 11,
  • wherein the transfer necessity determination unit classifies the request into a class based on the popularity, which is specified based on a relationship between the frequency of receiving requests for the content and a threshold, and specifies information indicating the unique cache effect of each class.
  • the traffic control system according to any one of appendices 9 to 11,
  • wherein the cache server load estimation unit receives information indicating at least one of the number of sessions distributed by the cache server and the throughput of transmission/reception data of the cache server, and estimates the load of the cache server based on at least one of the number of sessions and the throughput.
  • the traffic control system according to any one of appendices 9 to 11,
  • wherein, for a content request that the distribution determination unit determines to transfer to the cache server, the cache server load estimation unit receives, from the distribution determination unit, information indicating the time when the determination was made and the size of the content, and estimates the load of the cache server by estimating, based on that information, at least one of the number of sessions distributed by the cache server and the throughput of transmission/reception data of the cache server.
  • (Appendix 17) The traffic control method according to appendix 16, wherein, for each load of the cache server, a threshold value of the popularity at which a request for content is transferred to the cache server is specified, and whether to transfer the request to the cache server or the origin server is determined based on a comparison between the threshold value and the popularity.
  • (Appendix 18) The traffic control method according to appendix 16 or 17, wherein a probability that a request for content having a predetermined popularity will be transferred to the cache server or the origin server is set in advance, and, when the load on the cache server shows a specific value, whether to transfer the request to the cache server or the origin server is determined according to the probability.
  • The traffic control method according to any one of appendices 16 to 18, comprising: for a content request determined to be transferred to the cache server, estimating the load of the cache server by estimating, based on information indicating the time when the determination was made and the size of the content, at least one of the number of sessions distributed by the cache server and the throughput of transmission/reception data of the cache server.
  • A traffic control program that causes a computer to execute: a process of specifying a degree of cache effect obtained when a content request is transferred to a cache server that temporarily caches the content, based on a popularity of the content specified from the number of times the request is received; and a process of determining, based on the degree of cache effect, whether to transfer the content request to the cache server or to an origin server that stores the content.
  • The traffic control program according to appendix 22, which further causes the computer to execute a process of identifying the load of the cache server, wherein, in the determining process, whether to transfer a request for content to the cache server or the origin server is determined based on the popularity and the load on the cache server.
  • The traffic control program according to any one of appendices 23 to 25, wherein, in the process of identifying the load of the cache server, for a content request determined to be transferred to the cache server, the load of the cache server is estimated by estimating, based on information indicating the time when the determination was made and the size of the content, at least one of the number of sessions distributed by the cache server and the throughput of transmission/reception data of the cache server.
  • The traffic control device according to the present invention can be applied as a network device or a filter device that distributes traffic, such as a router, a gateway, or a load balancer.
  • the traffic control device according to the present invention can be applied as selection software that is incorporated into a cache server or a cache management server and assigns different cache policies and algorithms.

Abstract

The present invention efficiently reduces the load on a cache server and on a cache controller. A traffic control device is provided with a transfer necessity determination unit for identifying the degree of cache effect obtained when a content request is transferred to a cache server that temporarily caches the content, based on the popularity of the content identified from the number of requests received, and a distribution determination unit for determining, based on the degree of cache effect, whether to transfer a content request to the cache server or to an origin server that stores the content.
PCT/JP2013/000883 2012-02-28 2013-02-18 Dispositif de commande de trafic, système de commande de trafic, procédé de contrôle du trafic, et programme de commande de trafic WO2013128840A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-040850 2012-02-28
JP2012040850 2012-02-28

Publications (1)

Publication Number Publication Date
WO2013128840A1 true WO2013128840A1 (fr) 2013-09-06

Family

ID=49082056

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/000883 WO2013128840A1 (fr) 2012-02-28 2013-02-18 Dispositif de commande de trafic, système de commande de trafic, procédé de contrôle du trafic, et programme de commande de trafic

Country Status (1)

Country Link
WO (1) WO2013128840A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001202330A * 1999-11-09 2001-07-27 Matsushita Electric Ind Co Ltd Cluster server device
JP2006309383A * 2005-04-27 2006-11-09 Hitachi Ltd Computer system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HIROYOSHI MIWA: "Performance Evaluation of a Load Balancing Routing Algorithm for Clustered Multiple Cache Servers", NTT R & D, vol. 50, no. 9, 10 September 2001 (2001-09-10), pages 712 - 720 *
NORIFUMI NISHIKAWA: "A Proposal for Caching Strategy Based on WWW Traffic Characteristics", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 41, no. 9, 15 September 2000 (2000-09-15), pages 2625 - 2637 *

Similar Documents

Publication Publication Date Title
US11997111B1 (en) Attribute-controlled malware detection
US11863581B1 (en) Subscription-based malware detection
  • JP6560351B2 (ja) System and method for deploying a virtual serving gateway for mobility management
US10521348B2 (en) Managing resources using resource expiration data
  • JP5189974B2 (ja) Load control device and method therefor
  • CN105282215B (zh) Reputation-based strategy for forwarding and responding to interests over a content centric network
  • KR101379864B1 (ko) Request routing using network computing components
US10708377B2 (en) Communication control device, communication control method, and non-transitory computer readable medium
US9712412B2 (en) Aggregating status to be used for selecting a content delivery network
  • KR20160019361A (ko) Probabilistic low-rate transmission technique without authentication in a content centric network
US10404603B2 (en) System and method of providing increased data optimization based on traffic priority on connection
  • JP4410963B2 (ja) Content dynamic mirroring system
  • CN111432231B (zh) Content scheduling method for an edge network, home gateway, system, and server
  • JP2006146951A (ja) Content dynamic mirroring system
  • JP6886874B2 (ja) Edge device, data processing system, data transmission method, and program
  • JP7097427B2 (ja) Data processing system and data processing method
  • WO2022057131A1 (fr) Data congestion processing method and apparatus, computer device, and storage medium
US10691700B1 (en) Table replica allocation in a replicated storage system
  • WO2009064126A2 (fr) Method and apparatus for server load balancing
WO2013128840A1 (fr) Dispositif de commande de trafic, système de commande de trafic, procédé de contrôle du trafic, et programme de commande de trafic
  • CN107211189A (zh) Method and apparatus for video transmission
  • JP2014157459A (ja) Cache device, content distribution system, and content distribution method
US11960407B1 (en) Cache purging in a distributed networked system
  • JP6850618B2 (ja) Relay device and relay method
  • WO2015200829A1 (fr) Obtaining balanced caching freshness of content in a network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13754641

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13754641

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP