WO2013185547A1 - Service method for a cache server, cache server, and system - Google Patents

Service method for a cache server, cache server, and system

Info

Publication number
WO2013185547A1
Authority
WO
WIPO (PCT)
Prior art keywords
request
data
point
cache server
request information
Prior art date
Application number
PCT/CN2013/076680
Other languages
English (en)
French (fr)
Inventor
于文晓
张锦辉
杨友庆
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2013185547A1
Priority to US14/564,703 (published as US20150095447A1)


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/56 - Provisioning of proxy services
    • H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5683 - Storage of data provided by user terminals, i.e. reverse caching
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 - Server components or server architectures
    • H04N 21/222 - Secondary servers, e.g. proxy server, cable television head-end
    • H04N 21/2225 - Local VOD servers
    • H04N 21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23103 - Content storage operation using load balancing strategies, e.g. by placing or distributing content on different disks, different memories or different servers
    • H04N 21/23106 - Content storage operation involving caching operations
    • H04N 21/23116 - Content storage operation involving data replication, e.g. over plural servers
    • H04N 21/237 - Communication with additional data server
    • H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N 21/2393 - Interfacing the upstream path of the transmission network, involving handling client requests

Definitions

  • The present invention relates to the field of communications, and in particular to a service method for a cache server, a cache server, and a system.
  • To relieve network pressure, reduce traffic costs, and serve users better, operators usually deploy cache servers at the network edge (near the user side). A cache server caches popular content and serves users nearby. If the content a user requests is already cached on the cache server, it does not need to be fetched from the source server again, which reduces upstream traffic and relieves network pressure. If the requested content is not cached on the cache server, it must still be fetched from the source server; the traffic remains heavy, upstream traffic is not reduced, and network pressure is not relieved.
  • In one aspect, a service method for a cache server includes: receiving first request information sent by multiple user equipments, the first request information indicating the data each of the multiple user equipments requires and a request point for that data; if it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments is the same and that data is not cached on the cache server, selecting one request point from the request points that fall within a preset window; and sending second request information to the source server, the second request information indicating the uncached data and the selected request point.
  • Optionally, the preset window is a fixed preset window or a dynamically changing preset window.
  • Optionally, the fixed preset window is a window spanning a fixed amount of time or a fixed number of bytes.
  • Optionally, the dynamically changing preset window is a window whose time span changes dynamically according to the user state and the upstream network state, or whose byte span changes dynamically according to the upstream network state and the user state.
  • Optionally, within each preset window, the time difference between different request points for the same uncached data is less than or equal to the time spanned by the preset window.
  • Optionally, within each preset window, the byte difference between different request points for the same uncached data is less than or equal to the number of bytes spanned by the preset window.
  • Optionally, selecting one request point from the request points falling within each preset window includes: selecting, among the request points falling within each preset window, the request point closest to the start of the preset window.
  • Optionally, after the second request information is sent to the source server, the method further includes: receiving the uncached data sent by the source server starting from the location corresponding to the request point; and, according to the request points indicated by the first request information sent by the multiple user equipments, sending the data to each user equipment starting from the location corresponding to its request point.
  • Optionally, the not-yet-received uncached data sent by the source server from the request point is received, and reception of uncached data that has already been received is stopped; the method then further includes splicing the received uncached data and caching the spliced data.
  • Optionally, before the spliced data is cached, the method further includes: if the spliced uncached data is incomplete, sending third request information to the source server, the third request information indicating the uncached data and the starting point of the data; and receiving the data sent by the source server from that starting point.
  • Optionally, after a request point is selected within each preset window, the method further includes: if the uncached data sent by the source server is received and a random access point contained in the data is obtained, updating the request point according to the random access point.
  • In another aspect, a cache server is provided, including:
  • a first receiving unit, configured to receive first request information sent by multiple user equipments, the first request information indicating the data each of the multiple user equipments requires and a request point for that data;
  • a selecting unit, configured to: if it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments received by the first receiving unit is the same and that data is not cached on the cache server, select one request point from the request points falling within each preset window; and
  • a first sending unit, configured to send second request information to the source server, the second request information indicating the uncached data and the request point selected by the selecting unit.
  • Optionally, the selecting unit is specifically configured to select, among the request points falling within each preset window, the request point closest to the start of the preset window.
  • Optionally, the cache server further includes a second receiving unit and a second sending unit, where: the second receiving unit is configured to receive the uncached data sent by the source server starting from the location corresponding to the request point; and the second sending unit is configured to send the data received by the second receiving unit to each user equipment starting from the location corresponding to the request point indicated in the first request information that the first receiving unit received from the multiple user equipments.
  • Optionally, the second receiving unit is specifically configured to receive the not-yet-received uncached data sent by the source server from the request point, and to stop receiving uncached data that has already been received.
  • Optionally, the cache server further includes a splicing unit and a cache unit: the splicing unit is configured to splice the uncached data received by the second receiving unit; the cache unit is configured to cache the data spliced by the splicing unit.
  • Optionally, the cache server further includes a processing unit, configured to: if the uncached data spliced by the splicing unit is incomplete, cause the first sending unit to send third request information to the source server, the third request information indicating the uncached data and the starting point of the data; the second receiving unit is further configured to receive the data sent by the source server from that starting point.
  • Optionally, the processing unit is further configured to: if the second receiving unit receives the uncached data sent by the source server and a random access point contained in the data is obtained, update the request point according to the random access point.
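  • For orientation only, the sketch below shows one way the units above could map onto code; the class, method, and attribute names, the window-selection policy, and the source.request interface are illustrative assumptions, not the structure defined by this disclosure.

```python
from collections import defaultdict

class CacheServer:
    """Minimal sketch mirroring the units described above: receive user
    requests (first receiving unit), pick one request point per preset
    window for uncached data (selecting unit), and forward one second
    request per selected point to the source server (first sending unit)."""

    def __init__(self, source, window=6.0):
        self.source = source               # handle to the source (origin) server (assumed interface)
        self.window = window               # preset window size (seconds or bytes)
        self.cache = {}                    # data_id -> locally cached data
        self.pending = defaultdict(list)   # data_id -> request points awaiting selection

    def on_first_request(self, data_id, request_point):
        """First receiving unit: a user equipment asks for data_id from request_point."""
        if data_id in self.cache:
            return self.cache[data_id]     # already cached: serve locally, no upstream request
        self.pending[data_id].append(request_point)
        return None

    def flush(self):
        """Selecting unit + first sending unit: send one second request
        per preset window of pending request points for each uncached item."""
        for data_id, points in self.pending.items():
            last_selected = None
            for p in sorted(points):
                if last_selected is None or p - last_selected > self.window:
                    self.source.request(data_id, p)   # second request (assumed API)
                    last_selected = p
        self.pending.clear()
```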
  • In another aspect, a system is provided, including a source server and at least one cache server as described above; the source server is configured to receive the second request information sent by the cache server, where the second request information indicates data not cached in the cache server and a request point for that data, and to send the uncached data to the cache server starting from the location corresponding to the request point.
  • With the above method, cache server, and system, the cache server receives first request information sent by multiple user equipments, each piece of first request information indicating the data required by each of the user equipments and a request point for that data; if it is determined that at least two of the user equipments require the same data and that data is not cached on the cache server, one request point is selected from the request points falling within each preset window, and second request information indicating the uncached data and the selected request point is sent to the source server. In this way, the preset window lets the cache server avoid repeated requests for the same data with nearby request points: because request points within one preset window are close together, they can be treated as a request for a single request point, so selecting one request point per window and sending one request to the source server reduces the bandwidth consumed on the upstream network between the cache server and the source server, lowering upstream traffic and relieving network pressure.
  • FIG. 1 is a schematic flowchart of a service method of a cache server according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of another service method of a cache server according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of data containing random access points as received by a cache server according to an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a cache server according to an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of another cache server according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of still another cache server according to an embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of a system according to an embodiment of the present invention.
  • As shown in FIG. 1, a service method of a cache server includes:
  • S101: The cache server receives first request information sent by multiple user equipments, where the first request information indicates the data required by each user equipment and a request point for the data.
  • For example, the cache server may be a Cache server. Suppose the cache server receives first request information sent by user equipments A, B, C, D, and E, each piece of first request information indicating the video data requested by that user equipment and the request point of that video data: the first request information sent by user equipment A indicates the video data requested by A and its request point, the first request information sent by user equipment B indicates the video data requested by B and its request point, and likewise for user equipments C, D, and E.
  • The request point indicates the position from which the user equipment needs to start viewing the video data. Suppose the video data requested by user equipments A, B, and C is the same, for example movie M, but requested at different request points, while user equipments D and E request video data other than M; the video data requested by D and E, and the corresponding request points, may be the same or different.
  • Further, if user equipments A, B, and C do not request the video file M from its starting point, i.e. do not watch it from the beginning, the request information they send contains fields indicating the different request points at which A, B, and C request the video data.
  • The request point may be the time offset, relative to the start of the whole video file, from which the user equipment requests to watch, or the specific byte position within the whole video file. For example, in an HTTP (Hypertext Transfer Protocol) request message, a parameter such as start=x (or begin=x) in the request line indicates the request point of the request, where x may denote a time, e.g. x=32 meaning viewing starts at the 32nd second, or a byte count, e.g. 1204 meaning viewing starts at the 1204th byte.
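  • As an illustration of how such a request point might be read from a request, the snippet below parses the start parameter of a request URL; the URL shape follows the "file-abc?start=..." examples in this description, and everything else is an assumption.

```python
from urllib.parse import urlparse, parse_qs

def parse_request_point(url):
    """Extract the request point from a user request URL.

    Assumes the player signals its starting position with a 'start'
    query parameter (seconds or a byte offset), as in
    http://xyz.com/file-abc?start=58&... ; a 'begin' parameter would
    be handled the same way.
    """
    query = parse_qs(urlparse(url).query)
    value = query.get("start", query.get("begin", ["0"]))[0]
    return float(value)

# parse_request_point("http://xyz.com/file-abc?start=58") -> 58.0
```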
  • S102: If it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments that sent first request information is the same and that data is not cached on the cache server, a request point is selected from the request points falling within each preset window. It should be noted that if the first request information of the multiple user equipments received by the cache server indicates data already cached on the cache server, that data can be sent to each user equipment starting from the request point it requested. If one or more user equipments request data not cached on the cache server, the cache server requests that data from the source server, receives it, and then sends it to the corresponding user equipments.
  • If at least two of the multiple user equipments request the same uncached data and every received request point were requested and forwarded individually, the cache server would occupy considerable upstream network traffic. A preset window is therefore used to choose among the identical or different request points required by the at least two user equipments: according to the preset window, one request point is selected from the request points for the same data indicated by the first request information of the at least two user equipments, which reduces repeated transmission and the consumption of upstream network bandwidth.
  • For example, the size of the preset window may be set by time, say 6 seconds. Suppose the cache server simultaneously receives first request information from user equipments A, B, and C requesting the same file M, which is not cached on the cache server; denote the requested file as "file-abc". Taking HTTP request messages as an example, the requests of user equipments A, B, and C are as follows:
  • User equipment A: http://xyz.com/file-abc?start=32&...
  • User equipment B: http://xyz.com/file-abc?start=58&...
  • User equipment C: http://xyz.com/file-abc?start=60&...
  • The start field here is in seconds; other time units, such as minutes, may also serve as the unit of the start field, and any time length can be used as the unit, for example one counting unit per 5 seconds. In this example the unit of the start field is the second.
  • From the request information of user equipments A, B, and C, the difference between the request points of A and B is 26 seconds (58 - 32 = 26), larger than the 6-second preset window, so the request points of A and B are not in one preset window; the difference between the request points of B and C is 2 seconds (60 - 58 = 2), smaller than the preset window, so the request points of B and C fall within one preset window.
  • The cache server then selects one request point from the request point of user equipment B and the request point of user equipment C within that preset window. Optionally, the cache server may ignore the request point of user equipment C and select the request point of user equipment B, which is closest to the start of the preset window.
  • The preset windows may be set in advance within a piece of data according to its time or byte positions; for example, for a 360-second video, a preset window may be set every 6 seconds starting from the 0th second. A preset window may also be set according to the position of a request point in the received first request information. In the example above, the first preset window may start at the 32nd second with a size of 6 seconds, so the difference between the request points of user equipments A and B is greater than 6 seconds and they do not fall into the same preset window; the second preset window may start at the 58th second with a size of 6 seconds, so the difference between the request points of user equipments B and C is less than 6 seconds and both fall into the same preset window.
  • Further, when one request point is selected among the request points falling in the same preset window, the request point of the user equipment closest to the start of the preset window may be chosen. For pre-set windows, if a window starts at the 240th second and ends at the 246th second, the request point closest to the 240th second is selected. When the preset window is determined from a received request point, for example from the request point of user equipment B, the request points of both B and C fall into that window; since the window starts at B's request point, B's request point is the one closest to the start of the preset window and is the one selected. Selecting the request point closest to the start of the preset window covers the data required by the request points of the other user equipments falling into the window, so that the request point the cache server sends to the source server for that window covers all the content required by the user equipments whose request points fall into the preset window.
  • The preset window containing user equipments B and C is only an example. If, at the same time or within a predetermined time, the cache server receives first request information from other user equipments and the request points indicated in their first request information fall within some preset window, a request point can be selected in the same way and the other request points ignored. For example, if user equipments D and E request data N at the same time and data N is not cached on the cache server, the cache server determines, from the request points of D and E and the preset window (for instance a window anchored at D's request point), whether to select one of the two request points and ignore the other: if the difference between the request points of D and E is smaller than the preset window, one request point is selected from the request point of D and the request point of E, for example the request point of the user equipment closest to the start of the preset window.
  • The predetermined time may be the estimated time from a user equipment issuing its request until the user sees the video, or a shorter time.
  • The size of the preset window may also be set in bytes, for example 2048 bytes. If the request point of user equipment A is the 1050th byte, that of user equipment B the 1090th byte, and that of user equipment C the 2000th byte, then the difference between the request points of A and B is 40 bytes, between A and C is 950 bytes, and between B and C is 910 bytes, all smaller than the 2048-byte preset window, so user equipments A, B, and C are in the same preset window; the request point of user equipment A, which is closest to the start of the preset window, may be selected and the request points of user equipments B and C ignored. Selecting one request point among those falling in the same preset window may mean selecting the request point of the user equipment closest to the start of the preset window.
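  • A minimal sketch of the selection rule described above, assuming request-anchored windows (a window opens at the earliest pending request point) and request points expressed in one unit, seconds or bytes:

```python
def select_request_points(points, window):
    """Pick one request point per preset window.

    `points` are the request points of user equipments asking for the
    same uncached data; `window` is the preset window size in the same
    unit. A point more than `window` away from the last selected point
    opens a new window; points inside the current window are ignored,
    so the selected point is always the one closest to the window start.
    """
    selected = []
    for p in sorted(points):
        if not selected or p - selected[-1] > window:
            selected.append(p)    # opens a new preset window
    return selected

# Time-based example from above: select_request_points([32, 58, 60], 6)        -> [32, 58]
# Byte-based example from above: select_request_points([1050, 1090, 2000], 2048) -> [1050]
```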
  • It is worth pointing out that the preset window size may be fixed or dynamically adjusted.
  • Factors affecting the window size may include the network conditions upstream of the cache server, such as the upstream packet-loss rate and upstream network delay.
  • Factors affecting the window size may also include the user's network conditions, such as the user's service bandwidth and user-side network delay, as well as the user's experience expectations.
  • The relationship between the preset window size and each influencing factor can be qualitatively captured as follows: the worse the upstream network (the higher the upstream packet-loss rate and the larger the delay), the smaller the preset window; the worse the user's network for a given user service bandwidth, the smaller the window; and the higher the user's experience expectation, the smaller the window.
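  • One proportionality consistent with those statements is

    $$ w \;\propto\; \frac{B_{\mathrm{user}}}{RTT_{\mathrm{up}} \cdot PLR_{\mathrm{up}} \cdot RTT_{\mathrm{user}} \cdot E} $$

    where w is the preset window size, B_user is the user's service bandwidth, RTT_up and PLR_up are the upstream network delay and packet-loss rate, RTT_user is the user-side delay, and E is the user experience expectation; the exact form used in the source does not survive in this text, so this relation should be read as an illustrative reconstruction rather than the original formula.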
  • The preset window size may be made dynamically variable according to network conditions, or set to a fixed value optimized through repeated experiments.
  • S103: The cache server sends second request information to the source server, where the second request information indicates the uncached data and the selected request point.
  • For example, the cache server sends the source server second request information indicating the selected request points: after receiving the requests of user equipments A, B, and C for the same uncached file, denoted "file-abc", and selecting the request points of user equipments A and B, the cache server sends two pieces of second request information to the source server, one indicating the file "file-abc" and the request point of user equipment A, the other indicating the file "file-abc" and the request point of user equipment B.
  • In this way, the cache server can send the data received from the source server, such as video data and audio data, to the user equipments within the same preset window, each from the location corresponding to its own request point, so that upstream network bandwidth consumption is reduced while users' viewing needs are met.
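  • A hedged sketch of that downstream step, assuming byte-offset request points and that `data` is the content the source server returned starting from the selected request point of one preset window:

```python
def serve_window(data, selected_point, user_points):
    """Serve every user equipment whose request point fell in one preset window.

    `selected_point` is the request point the cache server actually sent
    upstream (the earliest in the window); each user equipment receives
    the slice of `data` beginning at its own request point, so nobody
    gets content from before the position it asked for.
    """
    responses = {}
    for user, point in user_points.items():
        offset = max(0, point - selected_point)   # offset of this user's point within `data`
        responses[user] = data[offset:]
    return responses

# serve_window(b"...", 1050, {"A": 1050, "B": 1090, "C": 2000}) gives A everything,
# B the data from byte offset 40 onward, and C the data from byte offset 950 onward.
```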
  • With the above service method of a cache server, the cache server receives, at the same time or within a predetermined time, first request information sent by multiple user equipments, each piece indicating the data required by one of the user equipments and a request point for that data; if at least two of the user equipments request the same data and that data is not cached on the cache server, one request point is selected from the request points of those user equipments falling within each preset window, and second request information indicating the data and the selected request point is sent to the source server. Because request points within one preset window are close together, they can be treated as a request for a single request point, so the preset window lets the cache server avoid repeated requests for the same data with nearby request points; selecting one request point per window and sending one request to the source server reduces the bandwidth consumed on the upstream network between the cache server and the source server, lowering upstream traffic and relieving network pressure.
  • Taking a Cache server as the cache server and video data as the data, by way of example and without limitation, another service method of the cache server, as shown in FIG. 2, includes:
  • S201: The cache server receives multiple pieces of first request information sent by multiple user equipments, each piece indicating the video data required by one of the user equipments and a request point for that video data.
  • It should be noted that if the video data indicated by received first request information is already cached on the cache server, the corresponding video data is sent to the requesting user equipments. If the video data indicated by at least two of the received pieces of first request information is not cached on the cache server, the uncached video data may be the same video data, different video data, or a mixture of both. If the at least two uncached items include both identical and different video data, the cache server may process the identical uncached video data one item at a time according to the request points in the corresponding first request information, selecting the next identical item after finishing one; or it may simultaneously select several groups of video data not cached on the cache server and process them separately, where within each group the uncached video data is the same and the request points may be identical or different.
  • For example, different uncached video data may correspond to requests from multiple users for multiple videos: user equipments A, B, and C request a first movie; user equipments D, E, and F request a second movie; user equipment G requests a third movie; and user equipment H requests a fourth movie.
  • The cache server can forward to the source server the request information of user equipment G for the third movie and the first request information of user equipment H for the fourth movie directly. For the requests of user equipments A, B, and C for the first movie and of user equipments D, E, and F for the second movie, a request point must first be selected through S203 before second request information indicating the selected request point is sent to the source server. If the uncached video data items are different, S202 is performed; if they are the same, S203 is performed.
  • S202: The cache server sends second request information to the source server, where the second request information indicates each item of video data and the request point of each item.
  • It should be noted that if the video data requested in the first request information sent by the multiple user equipments differs and none of it is cached on the cache server, no request point needs to be selected; the cache server may send the source server second request information indicating each item of video data and its request point.
  • S203: The cache server selects a request point according to the video data not cached on the cache server and the request points.
  • It should be noted that the preset window may be set by time, for example 6 seconds, or measured in bytes, for example 1 megabyte, or set using both criteria.
  • The preset window may be given an initial value, for example a default of 6 seconds or 1 megabyte. How to judge whether multiple request points for the same data fall within one preset window and how to select one request point from a preset window have been described in detail in the embodiment above and are not repeated here.
  • If the video data required by user equipments is not cached on the cache server and every request were forwarded to the source server regardless of whether the required video data or request points coincide, considerable upstream bandwidth would be consumed; the preset window is therefore used to select one request point located in each preset window, where the request points located in a preset window may be multiple request points for the same requested data, multiple request points for different requested data, or both.
  • It is worth pointing out that before the cache server receives the video data indicated in the second request information, the request point is the request point indicated in the first request information sent by the user equipment; when the cache server receives the video data, it obtains the random access points from the video data and can then update the position of the indicated request point according to the random access points.
  • In general, after video data is compression-encoded, it is encapsulated in a certain format before being transmitted over the network.
  • Common encapsulation formats for Internet video include mp4, flv, and f4v; such formats are often referred to as containers.
  • The container summarizes all the information about the encoded video data it encapsulates, such as the audio and video codecs, the image resolution, the video duration, and the locations of the random access points, to support operations during playback such as dragging (seeking), replay, and fast forward.
  • This summary information is usually placed at the beginning of the whole video file; whether the file is a complete video or only a partial clip, it contains this information, otherwise the player cannot play it.
  • The cache server can therefore obtain the random access point information as soon as it has received a small portion of the video data. For example, if a user equipment requested the video data a moment earlier, the cache server may have received only a small portion of it and not yet finished receiving and caching it, but it has already obtained the random access point information of this piece of video data; the cache server then first adjusts the request points requested by the user equipments according to the positions of the random access points, and then selects among the adjusted request points within the preset window.
  • For example, as shown in FIG. 3, the positions of the random access points of video data 20 are denoted A', B', and C', and at some moment user equipments A, B, and C issue three request points for the file, denoted request point A, request point B, and request point C.
  • Suppose the request points of user equipments A, B, and C correspond to the 42nd, 46th, and 50th second respectively, the current preset window size is 6 seconds, and the random access points A', B', and C' correspond to the 41.333rd, 45.583rd, and 51.583rd second respectively.
  • Before adjustment, the difference between the request points of user equipments A and B is smaller than the preset window, so they lie in one preset window, while the difference between the request points of A and C is larger than the window, so the request point of C lies in another preset window; the three requests thus end up in two different windows, and after selection the cache server would send the source server two requests, one indicating the data required by user equipment A and one indicating the data required by user equipment C.
  • From the positions of the random access points it can be seen that request point B and request point C are in the same GOP (Group of Pictures).
  • A GOP is the video data between two adjacent random access points, including the earlier random access point and excluding the later one.
  • Although the cache server would be requesting data at different positions within the same GOP from the source server, the source server usually starts sending data from the random access point B' of that GOP, so that user equipments B and C can start playing immediately from B' once they receive the data.
  • A random access point is a point from which the video data can be played immediately. Although the viewing device can place the drag (seek) bar at any position, not every position allows immediate playback of the video data; playback always starts from the random access point near the request point indicated by the drag bar.
  • Therefore, since the request points of user equipments B and C lie within one GOP, they can be reduced to a single request point; that is, a single request message from the cache server to the source server with request point B' completes the requests of user equipments B and C for the video data.
  • Likewise, the position of request point A can be adjusted to A'. After the three request points A, B, and C are adjusted in this way, there remain two request points, starting at A' and B'.
  • It is worth pointing out that some servers may instead start sending data from the GOP following the request point; this embodiment only takes sending from the preceding GOP as an example and is not limited thereto.
  • The cache server then selects among the adjusted request points according to whether they fall within one preset window. Since the difference between random access point A' and random access point B' is 4.25 seconds, smaller than the 6-second preset window, they are in the same preset window; the cache server therefore sends only one piece of second request information to the source server.
  • Optionally, the request information forwarded to the source server is for request point A, the one closest to the start of the preset window; the specific position used may be the request point A indicated in the first request information of user equipment A, or the adjusted request point A'. This reduces the upstream bandwidth occupied and also relieves the cache server of the performance overhead of possibly having to splice multiple video clips.
  • It is worth pointing out that once the cache server has obtained the random access point information of the video data requested by user equipments, for the handling of subsequent user requests, while the requested video is still not cached, the cache server may first adjust the users' request points to the positions of the random access points and then select among the adjusted request points by the preset window; or it may first select among the users' request points by the preset window, then adjust the selected request points to the positions of the random access points, and finally select among the adjusted request points by the preset window again.
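  • The adjustment just described can be sketched as follows: move each request point back to the nearest preceding random access point (the GOP start the source server would actually send from), then run the preset-window selection on the adjusted points; the use of bisect and the function names are illustrative choices.

```python
import bisect

def snap_to_random_access(points, access_points):
    """Replace each request point with the closest preceding random access
    point, mirroring a source server that sends from the start of the GOP
    containing the request point. `access_points` must be sorted."""
    adjusted = []
    for p in points:
        i = bisect.bisect_right(access_points, p) - 1
        adjusted.append(access_points[max(i, 0)])
    return adjusted

# With random access points at 41.333, 45.583 and 51.583 s and request points
# at 42, 46 and 50 s, the adjusted points are 41.333, 45.583 and 45.583 s;
# window selection with a 6-second window then keeps only 41.333 s (A'),
# matching the single second request described above.
```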
  • It should be noted that, based on the request points within the current clip and the video data information in the container header, the cache server can convert these positions into positions within the entire video file, so that subsequent requests can still be handled according to this embodiment.
  • S204: The cache server sends second request information to the source server, where the second request information indicates the uncached video data and the selected request points.
  • The cache server may send the source server more than one piece of second request information: the request point selected within each preset window corresponds to one piece of second request information, and the cache server may send the pieces of second request information indicating the selected request points separately, so that the source server sends the video data corresponding to each request point to the cache server starting from the location corresponding to that request point.
  • S205: The cache server receives the uncached video data sent by the source server starting from the locations corresponding to the request points indicated in the second request information.
  • For example, suppose the source server sends the video data to the cache server starting from the locations corresponding to the 130th, the 330th, and the 5690th second respectively.
  • Since the cache server can receive the video data from the three request points simultaneously, by the time it is receiving the data sent from the 130th second onward it may already have partially received the content from the 330th second onward, so the cache server does not receive the already-received content again: it actively disconnects from the source server and terminates the repeated reception of data from the location corresponding to the 330th second.
  • S206: The cache server splices the received uncached video data.
  • Because the cache server stops receiving uncached video data it has already received, i.e. it does not receive the video data repeatedly, it needs to splice the separately received segments of video data into one complete piece of video data or one video segment.
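  • A minimal sketch of this splicing step, assuming each received piece is a (start_offset, bytes) pair in byte offsets; adjacent or overlapping pieces are merged, already-received overlap is dropped rather than stored again, and a result that does not begin at offset 0 signals the incomplete case that triggers the third request for the starting point:

```python
def splice(pieces):
    """Merge separately received segments of the same uncached data.

    `pieces` is a non-empty list of (start_offset, data_bytes) tuples.
    Returns the start offset and bytes of the earliest contiguous run,
    plus a flag telling whether that run begins at the starting point
    (offset 0).
    """
    pieces = sorted(pieces, key=lambda piece: piece[0])
    start, buf = pieces[0][0], bytearray(pieces[0][1])
    for offset, chunk in pieces[1:]:
        end = start + len(buf)
        if offset > end:            # gap: this segment cannot be spliced yet
            break
        overlap = end - offset      # bytes already received are not appended again
        buf.extend(chunk[overlap:])
    return start, bytes(buf), start == 0

# Pieces received from two request points are merged when they touch or overlap;
# if nothing was received from offset 0, the returned flag is False and the cache
# server issues the third request for the starting point of the data.
```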
  • After the cache server finishes splicing the video data, S210 can be executed. In addition, if a complete video is obtained after splicing, S209 is executed; if the spliced video is incomplete, S207 is executed.
  • S207: The cache server sends third request information to the source server, where the third request information indicates the uncached video data and the starting point of that video data.
  • For example, if the cache server has only received and spliced the video data from the 300th second to the end, it sends the source server third request information indicating the video data and its starting point, the starting point being the position of the 0th second of the video data, so that the source server sends the video data from the starting point to the cache server. The starting point can be regarded as a request point at a special location: when the data required by a user equipment starts from the beginning, the request point for that location is the starting point.
  • S208: The cache server receives the video data sent by the source server from the starting point. It should be noted that after receiving the video data sent from the starting point, the cache server may splice it with the previously received, not-yet-spliced segments of video data to obtain one complete piece of video data.
  • S209: The cache server caches the spliced video data.
  • S210: The cache server sends video data to each user equipment according to the request point indicated in the first request information sent by that user equipment, starting from the location corresponding to that request point.
  • For example, the cache server sends user equipment A the video data starting from A's request point, sends user equipment B the video data starting from B's request point, and sends user equipment C the video data starting from C's request point; the video data may also be sent to each user equipment starting from the adjusted random access point.
  • With the above method, the cache server receives first request information sent by multiple user equipments, each piece indicating the data required by a user equipment and a request point for that data; if at least two user equipments require the same data and it is not cached on the cache server, one request point is selected from the request points falling within each preset window, and second request information indicating the data and the selected request point is sent to the source server. In this way, the preset window lets the cache server avoid repeated requests for the same data with nearby request points: because request points within one preset window are close together, they can be treated as a request for a single request point, so selecting one request point per window and sending one request to the source server reduces the bandwidth consumed on the upstream network between the cache server and the source server, lowering upstream traffic and relieving network pressure.
  • An embodiment of the present invention provides a cache server 30 which, as shown in FIG. 4, includes a first receiving unit 301, a selecting unit 302, and a first sending unit 303, where:
  • the first receiving unit 301 is configured to receive first request information sent by multiple user equipments, where the first request information indicates the data required by each user equipment and a request point for the data required by each user equipment;
  • the selecting unit 302 is configured to: if it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments received by the first receiving unit 301 is the same and that data is not cached on the cache server, select a request point from the request points falling within each preset window;
  • optionally, the selecting unit 302 selects, among the request points falling within each preset window, whether they are multiple identical request points, multiple different request points, or a mixture of both, the request point closest to the start of the preset window;
  • the first sending unit 303 is configured to send second request information to the source server, where the second request information indicates the uncached data and the request point selected by the selecting unit 302.
  • Optionally, the first sending unit 303 is further configured to: if the request information of at least one user equipment indicates the same uncached data at different request points and those request points lie in different preset windows, send the source server second request information indicating the uncached data and the request point selected for each window.
  • Optionally, the cache server 30 further includes a second receiving unit 304 and a second sending unit 305, where:
  • the second receiving unit 304 is configured to receive the uncached data sent by the source server 40 starting from the location corresponding to the request point; optionally, the second receiving unit 304 receives the not-yet-received uncached data sent by the source server 40 from the locations corresponding to the different request points and stops receiving uncached data that has already been received, i.e. it does not receive the uncached data repeatedly;
  • the second sending unit 305 is configured to send the data received by the second receiving unit 304 to each user equipment starting from the location corresponding to the request point indicated in the first request information that the first receiving unit 301 received from the multiple user equipments.
  • Optionally, the cache server 30 further includes a splicing unit 306, a cache unit 307, and a processing unit 308, where:
  • the splicing unit 306 is configured to splice the uncached data received by the second receiving unit 304;
  • the processing unit 308 is configured to, if the uncached data spliced by the splicing unit 306 is incomplete, cause the first sending unit 303 to send third request information to the source server 40, the third request information indicating the uncached data and the starting point of the data;
  • the second receiving unit 304 is further configured to receive the data sent by the source server 40 from the starting point, so that the splicing unit 306 splices the data received by the second receiving unit 304 with the previously spliced incomplete data;
  • the cache unit 307 is configured to cache the data spliced by the splicing unit 306;
  • the processing unit 308 may also be configured to, when the second receiving unit 304 receives the uncached data sent by the source server 40, obtain the random access point included in the data and update the request point according to the random access point.
  • The cache server 30 described above corresponds to the foregoing method embodiments and may be used to perform the steps of those embodiments; for its application in specific steps, reference may be made to the foregoing method embodiments, and details are not repeated here.
  • With the cache server 30 provided by the embodiment of the present invention, the cache server 30 receives first request information sent by at least two user equipments, each piece of first request information indicating the data required by a user equipment and a request point for that data; if the data required by the user equipments is the same and is not cached on the cache server, a request point is selected from the request points falling within each preset window, and second request information indicating the data and the selected request point is sent to the source server. In this way, through the preset window, the cache server 30 can avoid repeated requests for the same data with nearby request points.
  • The system provided by the embodiment of the present invention, as shown in FIG. 7, includes one or more cache servers 30 and a source server 40, where:
  • the cache server 30 may be the cache server 30 described in any of FIG. 4 to FIG. 6;
  • the source server 40 is configured to receive the second request information sent by the cache server 30, where the second request information indicates data not cached in the cache server and a request point for that data, and to send the uncached data to the cache server 30 starting from the location corresponding to the request point.
  • The cache server 30 and the source server 40 correspond to the foregoing method embodiments and may be used to perform the steps of those embodiments; for their application in specific steps, reference may be made to the foregoing method embodiments, and the specific structure of the cache server 30 is the same as that of the cache server provided in the foregoing embodiments, so details are not repeated here.
  • In this system, the cache server 30 receives first request information sent by at least two user equipments, where each piece of first request information indicates the data required by a user equipment and a request point for that data; if the data required by the user equipments is the same and is not cached on the cache server, a request point is selected from the request points falling within each preset window, and second request information indicating the data and the selected request point is sent to the source server 40. In this way, through the preset window, the cache server 30 can avoid repeated requests for the same data with nearby request points.
  • A person of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments may be implemented by a program instructing relevant hardware; when executed, the program performs the steps of the foregoing method embodiments; and the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A service method for a cache server, relating to the field of communications and capable of reducing the bandwidth consumed on the upstream network and relieving network pressure. The method includes: receiving first request information sent by multiple user equipments, the first request information indicating the data each of the multiple user equipments requires and a request point for that data; if it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments is the same and that data is not cached on the cache server, selecting one request point from the request points falling within a preset window; and sending second request information to the source server, the second request information indicating the uncached data and the selected request point.

Description

Service Method for a Cache Server, Cache Server, and System
Technical Field
The present invention relates to the field of communications, and in particular to a service method for a cache server, a cache server, and a system.
Background Art
In recent years, Internet video has developed rapidly and its user base has grown dramatically. Online video has gradually become an important channel through which people obtain films, news, and other digital content. Because video is a composite medium combining images, sound, and text, the rapid growth of Internet video has caused the amount of data in networks to grow explosively, which places enormous traffic pressure on networks and forces operators to keep expanding network bandwidth so that all kinds of services can be carried out and operated smoothly.
To relieve network pressure, reduce traffic costs, and serve users better, operators usually deploy cache servers at the network edge (near the user side). A cache server can cache popular content and serve users nearby. If the content a user requests is already cached on the cache server, it does not need to be fetched from the source server again, which reduces upstream traffic and relieves network pressure. If the requested content is not cached on the cache server, it must still be fetched from the source server; the traffic remains heavy, the traffic occupying the upstream network cannot be reduced, and network pressure cannot be relieved.
Summary of the Invention
In one aspect, a service method for a cache server includes:
receiving first request information sent by multiple user equipments, the first request information indicating the data each of the multiple user equipments requires and a request point for that data;
if it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments is the same and that data is not cached on the cache server, selecting one request point from the request points falling within a preset window; and
sending second request information to the source server, the second request information indicating the uncached data and the selected request point.
Optionally, the preset window is a fixed preset window or a dynamically changing preset window. Optionally, the fixed preset window spans a fixed amount of time or a fixed number of bytes. Optionally, the dynamically changing preset window is a window whose time span changes dynamically according to the user state and the upstream network state, or whose byte span changes dynamically according to the upstream network state and the user state.
Optionally, within each preset window, the time difference between different request points for the same uncached data is less than or equal to the time spanned by the preset window.
Optionally, within each preset window, the byte difference between different request points for the same uncached data is less than or equal to the number of bytes spanned by the preset window.
Optionally, selecting one request point from the request points falling within each preset window includes: selecting, among the request points falling within each preset window, the request point closest to the start of the preset window.
Optionally, after sending the second request information to the source server, the method further includes: receiving the uncached data sent by the source server starting from the location corresponding to the request point; and, according to the request points indicated by the received first request information sent by the multiple user equipments, sending the data to each user equipment starting from the location corresponding to its request point.
Optionally, the not-yet-received uncached data sent by the source server from the request point is received, and reception of uncached data that has already been received is stopped.
Optionally, after receiving the not-yet-received uncached data sent by the source server from the request point and stopping reception of already-received uncached data, the method further includes: splicing the received uncached data; and caching the spliced data.
Optionally, before caching the spliced data, the method further includes: if the spliced uncached data is incomplete, sending third request information to the source server, the third request information indicating the uncached data and the starting point of the data; and receiving the data sent by the source server from that starting point.
Optionally, after selecting a request point from the request points falling within each preset window, the method further includes: if the uncached data sent by the source server is received and a random access point contained in the data is obtained, updating the request point according to the random access point.
In another aspect, a cache server is provided, including:
a first receiving unit, configured to receive first request information sent by multiple user equipments, the first request information indicating the data each of the multiple user equipments requires and a request point for that data;
a selecting unit, configured to: if it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments received by the first receiving unit is the same and that data is not cached on the cache server, select one request point from the request points falling within each preset window; and
a first sending unit, configured to send second request information to the source server, the second request information indicating the uncached data and the request point selected by the selecting unit.
Optionally, the selecting unit is specifically configured to select, among the request points falling within each preset window, the request point closest to the start of the preset window.
Optionally, the cache server further includes a second receiving unit and a second sending unit, where: the second receiving unit is configured to receive the uncached data sent by the source server starting from the location corresponding to the request point; and the second sending unit is configured to send the data received by the second receiving unit to each user equipment starting from the location corresponding to the request point indicated in the first request information received by the first receiving unit from the multiple user equipments.
Optionally, the second receiving unit is specifically configured to receive the not-yet-received uncached data sent by the source server from the request point, and to stop receiving uncached data that has already been received.
Optionally, the cache server further includes a splicing unit and a cache unit, where: the splicing unit is configured to splice the uncached data received by the second receiving unit; and the cache unit is configured to cache the data spliced by the splicing unit.
Optionally, the cache server further includes a processing unit, where: the processing unit is configured to, if the uncached data spliced by the splicing unit is incomplete, cause the first sending unit to send third request information to the source server, the third request information indicating the uncached data and the starting point of the data; and the second receiving unit is further configured to receive the data sent by the source server from that starting point.
Optionally, the processing unit is further configured to: if the second receiving unit receives the uncached data sent by the source server and a random access point contained in the data is obtained, update the request point according to the random access point.
In another aspect, a system is provided, including a source server and at least one cache server as described above; the source server is configured to receive the second request sent by the cache server, where the second request information indicates data not cached in the cache server and a request point for that data, and to send the uncached data to the cache server starting from the location corresponding to the request point.
With the service method for a cache server, the cache server, and the system above, the cache server receives first request information sent by multiple user equipments, each piece indicating the data required by each of the user equipments and a request point for that data; if it is determined that the data required by at least two of the user equipments is the same and that data is not cached on the cache server, one request point is selected from the request points falling within each preset window, and second request information indicating the uncached data and the selected request point is sent to the source server. In this way, the cache server can use the preset window to avoid repeated requests for the same data with nearby request points: because request points within one preset window are close together, they can be treated as a request for a single request point, so selecting one request point per window and sending one request to the source server reduces the bandwidth consumed on the upstream network between this cache server and the source server, lowering upstream traffic and relieving network pressure.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a service method of a cache server according to an embodiment of the present invention; FIG. 2 is a schematic flowchart of another service method of a cache server according to an embodiment of the present invention; FIG. 3 is a schematic diagram of data containing random access points as received by a cache server according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a cache server according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another cache server according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of still another cache server according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a system according to an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Evidently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in FIG. 1, a service method of a cache server includes:
S101: The cache server receives first request information sent by multiple user equipments, where the first request information indicates the data required by each user equipment and a request point for the data.
For example, the cache server may be a Cache server. Suppose the cache server receives first request information sent by user equipments A, B, C, D, and E, each piece indicating the video data requested by that user equipment and the request point of that video data: the first request information sent by user equipment A indicates the video data requested by A and its request point, the first request information sent by user equipment B indicates the video data requested by B and its request point, and likewise for user equipments C, D, and E. The request point indicates the position from which the user equipment needs to start viewing the video data. Suppose the video data requested by user equipments A, B, and C is the same, for example movie M, but requested at different request points, while the video data requested by user equipments D and E is other video data; the video data requested by D and E and the corresponding request points may be the same or different.
Further, if user equipments A, B, and C do not request to view the whole video file M from its starting point, i.e. do not watch the file from the beginning, the request information they send contains fields indicating the different request points at which A, B, and C request the video data. The request point may be the time offset, relative to the start of the whole video file, from which the user equipment requests to watch, or the specific byte position of the requested bytes within the whole video file. For example, in an HTTP (Hypertext Transfer Protocol) request message, a parameter such as start=x (or begin=x) in the request line indicates the request point of the request, where x may denote a time, e.g. x=32 meaning that viewing starts at the 32nd second, or a byte count, e.g. 1204 meaning that viewing starts at the 1204th byte.
S102: If it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments that sent first request information is the same and that data is not cached on the cache server, a request point is selected from the request points falling within each preset window. It should be noted that if the first request information of the multiple user equipments received by the cache server indicates data already cached on the cache server, that data can be sent to each user equipment starting from the request point it requested. If one or more user equipments request data not cached on the cache server, the cache server requests that data from the source server, receives it, and then sends it to the corresponding user equipments.
If at least two of the multiple user equipments request the same uncached data and every received request point were requested and forwarded individually, considerable upstream network traffic would be occupied; a preset window can therefore be used to choose among the identical or different request points required by the at least two user equipments, for example by selecting, according to the preset window, one request point from the request points for the same data indicated by the first request information of the at least two user equipments, thereby reducing repeated transmission and the consumption of upstream network bandwidth.
For example, the size of the preset window may be set by time, say 6 seconds. Suppose the cache server simultaneously receives first request information from user equipments A, B, and C requesting the same file M not cached on the cache server, the requested file being denoted "file-abc". Taking HTTP request messages as an example, the requests of user equipments A, B, and C are as follows:
User equipment A: http://xyz.com/file-abc?start=32&...
User equipment B: http://xyz.com/file-abc?start=58&...
User equipment C: http://xyz.com/file-abc?start=60&...
The start field here is in seconds; other time units, such as minutes, may also serve as the unit of the start field, and any time length can be used as the unit, for example one counting unit per 5 seconds. In this example the unit of the start field is the second.
From the request information of user equipments A, B, and C, the difference between the request points of A and B is 26 seconds (startA - startB = 58 - 32 = 26 seconds). Since the preset window is 6 seconds, this difference is larger than the preset window, so the request points of A and B are not in one preset window. The difference between the request points of B and C is 2 seconds (startC - startB = 60 - 58 = 2 seconds), smaller than the preset window; that is, the request points of B and C lie within one preset window, and the cache server selects one request point from the request points of B and C within that window. Optionally, the cache server may ignore the request point of user equipment C and select the request point of user equipment B, which is closest to the start of the preset window.
The preset windows may be set in advance within a piece of data according to its time or byte positions; for example, for a 360-second video, a preset window may be set every 6 seconds starting from the 0th second. A preset window may also be set according to the position of a request point in the received first request information. In the example above, the first preset window may start at the 32nd second with a size of 6 seconds, so the difference between the request points of user equipments A and B is greater than 6 seconds and they do not fall into the same preset window; the second preset window may start at the 58th second with a size of 6 seconds, so the difference between the request points of user equipments B and C is less than 6 seconds and both fall into the same preset window. Further, when one request point is selected among the request points falling in the same preset window, the request point of the user equipment closest to the start of the preset window may be chosen. For pre-set windows, if a window starts at the 240th second and ends at the 246th second, the request point closest to the 240th second is selected. When the preset window is determined from the request point of received first request information, for example from the request point of user equipment B, the request points of both B and C fall into that window; since the window starts from the position of B's request point, B's request point is the one closest to the window start, and selecting it suffices. Selecting the request point closest to the start of the preset window covers the data required by the request points of the other user equipments falling into the window, so that the request point the cache server sends to the source server for that window covers all the content required by the user equipments whose request points fall into the preset window.
It should be noted that the preset window containing user equipments B and C is only an example. If the cache server receives, at the same moment or within a predetermined time, first request information from other user equipments, and the request points indicated in the first request information of several other user equipments fall within some preset window, a request point can be selected in the same way and the other request points ignored. For example, if user equipments D and E request data N at the same time and data N is not cached on the cache server, it must be determined from the request points of D and E and the preset window whether to select one of the two request points and ignore the other; if the difference between the request points of D and E is smaller than the preset window, one request point is selected from the request point of D and the request point of E, for example the request point of the user equipment closest to the start of the preset window. The predetermined time may be the estimated time from a user equipment issuing its request until the user sees the video, or a shorter time.
Further, the size of the current preset window may also be set in bytes, for example 2048 bytes. If the request point of user equipment A is the 1050th byte, that of user equipment B the 1090th byte, and that of user equipment C the 2000th byte, the differences between the request points are all smaller than the 2048-byte preset window, so user equipments A, B, and C are in the same preset window; the request point of user equipment A, closest to the start of the preset window, may be selected and the request points of user equipments B and C ignored. Selecting one request point among those falling in the same preset window may mean selecting the request point of the user equipment closest to the start of the preset window.
It is worth pointing out that the preset window size may be fixed or dynamically adjusted. Factors affecting the window size may include the network conditions upstream of the cache server, such as the upstream packet-loss rate and upstream network delay, and may also include the user's network conditions, such as the user's service bandwidth and user network delay, as well as the user's experience expectations. The relationship between the preset window size and the influencing factors can be expressed qualitatively by a relation of the form

$$ w \;\propto\; \frac{B_{\mathrm{user}}}{RTT_{\mathrm{up}} \cdot PLR_{\mathrm{up}} \cdot RTT_{\mathrm{user}} \cdot E} $$

where w is the preset window size, B_user is the service bandwidth of the user equipment, RTT_user is the user delay, RTT_up is the upstream network delay, PLR_up is the upstream packet-loss rate, and E is the user experience expectation. The worse the upstream network, i.e. the higher the upstream packet-loss rate and the larger the delay, the smaller the preset window; the worse the user's network, i.e. the larger the delay for a given user service bandwidth, the smaller the preset window; and the higher the user's experience expectation, the smaller the preset window.
The preset window size may be made dynamically variable according to network conditions, or set to a fixed value optimized through repeated experiments.
S103: The cache server sends second request information to the source server, where the second request information indicates the uncached data and the selected request point.
For example, the cache server sends the source server second request information indicating the selected request points: after receiving the requests of user equipments A, B, and C for the same uncached file, denoted "file-abc", and selecting the request points of user equipments A and B, the cache server sends two pieces of second request information to the source server, one indicating the file "file-abc" and the request point of user equipment A, the other indicating the file "file-abc" and the request point of user equipment B.
In this way, the cache server can send the data received from the source server, such as video data and audio data, to the user equipments within the same preset window, each from the location corresponding to its own request point, so that upstream network bandwidth consumption is reduced while users' viewing needs are met.
With the above service method of a cache server, the cache server receives, at the same time or within a predetermined time, first request information sent by multiple user equipments, each piece indicating the data required by one of the user equipments and a request point for that data; if at least two of the user equipments request the same data and that data is not cached on the cache server, one request point is selected from the request points of those user equipments falling within each preset window, and second request information indicating the data and the selected request point is sent to the source server. Because request points within one preset window are close together, they can be treated as a request for a single request point, so the preset window lets the cache server avoid repeated requests for the same data with nearby request points; selecting one request point per window and sending one request to the source server reduces the bandwidth consumed on the upstream network between this cache server and the source server, lowering upstream traffic and relieving network pressure.
The following description takes the cache server being a Cache server and the data being video data as an example, without imposing any limitation. As shown in FIG. 2, another service method of a cache server includes:
S201. The cache server receives multiple pieces of first request information sent by multiple user equipments respectively, where each piece of first request information indicates the video data needed by one of the multiple user equipments and the request point for that video data.
It should be noted that if the video data indicated by the first request information received by the cache server is video data already cached on the cache server, the corresponding video data is sent to each requesting user equipment. If the video data indicated by at least two of the received pieces of first request information is video data not cached on the cache server, the uncached video data may all be the same video data, may all be different video data, or may include both identical and different video data.
If the at least two pieces of uncached data include both identical and different video data, the cache server may select, one at a time, identical video data not cached on the cache server and process it according to the request points in the corresponding first request information, and after finishing one piece of identical video data, select the next piece of identical video data for processing; it may also select multiple groups of video data not cached on the cache server and process them simultaneously, where within each of these groups the uncached video data is the same while the request points may be the same or different.
For example, the different uncached video data may correspond to requests from multiple users for multiple pieces of video data: user equipment A, user equipment B, and user equipment C request a first movie; user equipment D, user equipment E, and user equipment F request a second movie; user equipment G requests a third movie; and user equipment H requests a fourth movie. In this case the cache server may send the source server the request information of user equipment G for the third movie and the first request information of user equipment H for the fourth movie. For requests from multiple user equipments, such as the requests of user equipments A, B, and C for the first movie and the requests of user equipments D, E, and F for the second movie, a request point needs to be selected through S203 before sending the source server second request information indicating the selected request point.
If the uncached video data is different video data, S202 is performed; if the uncached video data is the same video data, S203 is performed.
S202. The cache server sends second request information to the source server, where the second request information indicates each piece of video data and the request point of each piece of video data.
It should be noted that if the video data requested in the pieces of first request information sent by the multiple user equipments and received by the cache server is not the same, and none of this video data is cached on the cache server, there is no need to select a request point; the cache server may send the source server second request information for each piece of video data and its request point.
S203. The cache server selects one request point according to the video data not cached on the cache server and the request points.
It should be noted that the preset window may be set in terms of time, for example 6 seconds, or measured in bytes, for example 1 megabyte, or set using both criteria at the same time. The preset window may be given an initial value, for example a default of 6 seconds or 1 MB. The method for determining whether multiple request points requesting the same data are within one preset window and for selecting one request point from a preset window has been described in detail in the above embodiment and is not repeated here.
It should also be noted that if the video data needed by the user equipments is not cached on the cache server, forwarding requests to the source server regardless of whether the needed video data is the same and whether the request points are the same would cause considerable bandwidth consumption on the upstream network. Therefore the preset window may be used to select one request point falling within each preset window, where the request points falling within each preset window may be multiple request points for the same requested data, multiple request points for different requested data, or a combination of request points for the same requested data and request points for different requested data.
It is worth pointing out that before the cache server receives the video data indicated in the second request information, the request point is the request point indicated in the first request information sent by the user equipment; when the cache server receives the video data, it obtains the random access points from the video data and may then update the position of the indicated request point according to the random access points.
Generally, after video data is compression-encoded, it is encapsulated in a certain format before being transmitted over the network. Common encapsulation formats for Internet video include mp4, flv, f4v, and so on; mp4, flv, f4v, and the like are usually called containers. The container aggregates all the information about the encoded video data it encapsulates, such as the audio and video encoding schemes, the image resolution, the video duration, and the positions of the random access points, to support various operations during playback, such as seeking, replay, and fast forward. This aggregated information is usually placed at the beginning of the whole video file; whether it is a complete video or a partial video segment, this information is present, otherwise the player cannot play it.
Therefore, the cache server can obtain the random access point information as soon as it has received a small portion of the video data. For example, if a user equipment requested this video data a moment earlier, the cache server may have received only a small portion of the video data and may not yet have finished receiving and caching it, but at this point the cache server has already obtained the random access point information of this piece of video data. The cache server will first adjust the request points requested by the user equipments according to the positions of the random access points, and then perform the preset-window selection on the adjusted request points.
For example, as shown in FIG. 3, the positions of the random access points of video data 20 are denoted A', B', and C', and at a certain moment user equipment A, user equipment B, and user equipment C have three request points for this file, denoted request point A, request point B, and request point C respectively. Suppose the time points corresponding to the three request points of user equipment A, user equipment B, and user equipment C are second 42, second 46, and second 50, the current preset window size is 6 seconds, and the time points corresponding to random access points A', B', and C' are second 41.333, second 45.583, and second 51.583 respectively. Before adjustment, since the difference between the request point of user equipment A and the request point of user equipment B is smaller than the preset window size, they are within the same window, while the difference between the request point of user equipment A and the request point of user equipment C is larger than the window. Therefore the request points of user equipment A and user equipment B are in one preset window and the request point of user equipment C is in another preset window; these three requests end up in two different windows, and after selection the cache server would send the source server two requests, one indicating the data needed by user equipment A and one indicating the data needed by user equipment C.
From the positions of the random access points it can be found that request point B and request point C are in the same GOP (Group of Pictures). A GOP is the video data between two adjacent random access points, including the earlier random access point and excluding the later one. In fact, although the cache server requests data at different positions within the same GOP from the source server, the source server usually starts delivering data from the random access point B' of that GOP, so that after user equipment B and user equipment C receive the data, playback can start immediately from B'. It should be noted that a random access point is a point from which the video data can be played immediately; although the viewing device can place the seek bar at any position, not every position allows the video data to be played immediately, and the video always starts playing from a random access point near the request point indicated by the seek bar. Therefore, since the request points of user equipment B and user equipment C above are within one GOP, they can be reduced to one request point; in other words, the cache server can satisfy the requests of user equipment B and user equipment C for this video data by sending the source server a single piece of request information with the request point at B'. Similarly, the position of request point A can be adjusted to A'. In this way, after adjusting the three request points A, B, and C, they become two request points starting at A' and B'.
It is worth pointing out that some servers may also start delivering data from the GOP following the request point; this embodiment takes delivery starting from the preceding GOP only as an example and does not impose any limitation.
Then the cache server performs selection on the adjusted requests according to whether they fall within the preset window. Since the difference between random access point A' and random access point B' is 4.25 seconds, which is smaller than the preset window size of 6 seconds, they are within the same preset window. In this case the cache server sends only one piece of second request information to the source server. Optionally, it may forward to the source server the request information of request point A, which is closest to the starting position of the preset window; the specific position of this request point may be the request point A indicated in the first request information of user equipment A, or the adjusted request point A' obtained from the request point A indicated in the first request information of user equipment A. This reduces the upstream bandwidth usage and also reduces the performance overhead of the cache server possibly having to splice multiple video segments.
It is worth pointing out that after the cache server has obtained the random access point information of the video data requested by the user equipments, when processing subsequent user equipment requests for which the requested video is still not cached, the cache server may first adjust the users' request points according to the positions of the random access points and then perform the preset-window selection on the adjusted request points; or it may first perform the preset-window selection on the users' request points, then adjust the selected request points according to the positions of the random access points, and finally perform the preset-window selection again on the adjusted request points.
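A sketch of the adjustment described above follows: each request point is first snapped back to the nearest preceding random access point (the GOP start), and the preset-window selection is then applied to the adjusted points. It reuses the select_request_points helper from the earlier sketch; treating the first random access point as a fallback for request points that precede it is an assumption made only for illustration.

import bisect

def snap_to_random_access_points(request_points, random_access_points):
    """Replace each request point by the nearest preceding random access point (GOP start)."""
    raps = sorted(random_access_points)
    adjusted = []
    for point in request_points:
        idx = bisect.bisect_right(raps, point) - 1
        adjusted.append(raps[idx] if idx >= 0 else raps[0])  # fall back to the first RAP
    return adjusted

# Example from the text: request points at 42 s, 46 s, 50 s; RAPs at 41.333 s, 45.583 s, 51.583 s
adjusted = snap_to_random_access_points([42, 46, 50], [41.333, 45.583, 51.583])
print(adjusted)                            # -> [41.333, 45.583, 45.583]
# select_request_points is the window-selection sketch shown earlier
print(select_request_points(adjusted, 6))  # -> [41.333]  (A' and B' share one preset window)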
Further, sometimes the positions of the random access points that the cache server obtains from the video header are not their positions in the whole video file but their positions in the current video segment. In this case, the cache server can convert them into their positions in the whole video file according to the request point of the current segment and the video data information in the container header. In this way, subsequent requests can still be handled according to this embodiment.
S204. The cache server sends second request information to the source server, where the second request information indicates the uncached video data and the selected request point.
It should be noted that the cache server may send multiple pieces of second request information to the source server; for example, each request point selected in each preset window corresponds to one piece of second request information, and the cache server may send the source server the pieces of second request information indicating these selected request points, so that the source server sends the cache server the video data corresponding to these request points starting from the positions corresponding to them.
S205. The cache server receives the uncached video data sent by the source server starting from the positions corresponding to the request points indicated in the second request information.
For example, if the request points indicated in the second request information sent by the cache server to the source server are second 130, second 330, and second 5690, the source server sends video data to the cache server starting from the positions corresponding to second 130, second 330, and second 5690 respectively. It should be noted that since the cache server can receive the video data starting from these three request points simultaneously, by the time the data sent from second 130 reaches second 330, part of the content after second 330 has already been received, so the cache server does not receive already-received content again: when the cache server has finished receiving the data from the position corresponding to second 130 up to the position corresponding to second 330, it actively disconnects from the source server and stops the duplicate reception of the data after the position corresponding to second 330.
S206. The cache server splices the received uncached video data.
It should be noted that because the cache server stops receiving uncached video data that has already been received, i.e. the cache server does not receive the video data repeatedly, the cache server needs to splice the separately received segments of this video data into one complete piece of video data or one piece of video segment data.
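As a rough sketch of this splicing step, the snippet below merges separately received byte ranges of one object into a contiguous piece and flags any gap, which corresponds to the incomplete-splice case handled in S207 below. The data structures and error handling are illustrative assumptions, not a prescribed implementation.

def splice_segments(segments):
    """Merge received (offset, data) segments of one object into one contiguous byte string.

    segments: list of (start_offset, bytes) pieces in the order they were received.
    Returns (spliced_bytes, start_offset): the merged data and where it begins in the object.
    Overlapping bytes are only kept once, since they were only received once upstream.
    """
    pieces = sorted(segments, key=lambda s: s[0])
    start = pieces[0][0]
    merged = bytearray()
    end = start
    for offset, data in pieces:
        if offset > end:
            raise ValueError(f"gap between byte {end} and byte {offset}: splice is incomplete")
        merged.extend(data[end - offset:])  # drop the already-covered prefix, if any
        end = max(end, offset + len(data))
    return bytes(merged), start

# Example: two pieces received from request points 0 and 4 of a 10-byte object
data, start = splice_segments([(4, b"efghij"), (0, b"abcd")])
print(start, data)  # -> 0 b'abcdefghij'

If a gap is detected, the behaviour of S207 below would apply: the cache server requests the missing part, for example from the starting point, before caching.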
After splicing the video data, the cache server may perform S210. In addition, if a complete piece of video is obtained after splicing, S209 is performed; if the spliced video is incomplete, S207 is performed.
S207. The cache server sends third request information to the source server, where the third request information indicates the uncached video data and the starting point of the uncached video data.
For example, if the cache server has received and spliced only the video data from second 300 to the end, the cache server sends the source server third request information indicating this video data and the starting point. The starting point is the position at second 0 of this video data, so that the source server sends the video data to the cache server from the starting point; the starting point can be regarded as a request point at a special position, i.e. the request point at the very beginning of the data needed by the user equipment is the starting point.
S208. The cache server receives the video data sent by the source server starting from the starting point.
It should be noted that after the cache server receives the video data sent by the source server starting from the starting point, it can use this received piece of video data, which begins at the starting point, to splice the previously received, incompletely spliced video data with it, so as to obtain one complete piece of video data.
S209. The cache server caches the spliced video data.
S210. The cache server sends, according to the request points indicated in the first request information sent by the multiple user equipments, video data to each user equipment starting from the position corresponding to the request point indicated in that user equipment's first request information.
For example, the cache server sends user equipment A the video data starting from user equipment A's request point A, sends user equipment B the video data starting from user equipment B's request point B, and sends user equipment C the video data starting from user equipment C's request point C. Further, the video data may also be sent to the user equipments starting from the adjusted random access points.
In the service method of a cache server provided by this embodiment of the present invention, the cache server receives first request information sent by multiple user equipments, each piece of first request information indicating the data needed by a user equipment and the request point of the needed data; if it determines that the data needed by the user equipments is the same and is not cached on the cache server, it selects one request point from the request points falling within each preset window, and sends the source server second request information indicating that data and the selected request point. In this way, the cache server can use the preset window to avoid duplicate requests for the same data with nearby request points; since the request points within one preset window are close in position, they can be treated as requests for the same request point, so selecting one request point within the preset window and sending a request to the source server can reduce the bandwidth consumption on the upstream network between this cache server and the source server, thereby lowering upstream network traffic and relieving network pressure.
A cache server 30, as shown in FIG. 4, includes a first receiving unit 301, a selecting unit 302, and a first sending unit 303, where:
The first receiving unit 301 is configured to receive first request information sent by multiple user equipments, where the first request information indicates the data needed by each user equipment and the request point of the data needed by each user equipment.
The selecting unit 302 is configured to, if it is determined that the first request information sent by at least two of the multiple user equipments received by the first receiving unit 301 indicates the same data and that same data is not cached on the cache server, select one request point from the request points falling within each preset window.
For example, the selecting unit 302 selects, from multiple identical request points, multiple different request points, or a mixture of different and identical request points falling within each preset window, the one request point closest to the starting position of the preset window.
The first sending unit 303 is configured to send second request information to the source server, where the second request information indicates the uncached data and the request point selected by the selecting unit 302.
Further, the first sending unit 303 is also configured to, if the request information of at least one user equipment indicates the same piece of uncached data and different request points of the uncached data, and the request points are in different preset windows, send the source server second request information indicating each piece of uncached data and the corresponding request points of that data.
Further, as shown in FIG. 5, the cache server 30 also includes a second receiving unit 304 and a second sending unit 305, where:
The second receiving unit 304 is configured to receive the uncached data sent by the source server 40 starting from the positions corresponding to the request points.
Further, the second receiving unit 304 receives the not-yet-received uncached data sent by the source server 40 from the positions corresponding to the different request points, and stops receiving uncached data that has already been received, i.e. it does not receive the uncached data repeatedly.
The second sending unit 305 is configured to send the data received by the second receiving unit 304 to the user equipments, starting from the positions corresponding to the request points indicated in the first request information sent by the multiple user equipments and received by the first receiving unit 301.
Further, as shown in FIG. 6, the cache server 30 also includes a splicing unit 306, a caching unit 307, and a processing unit 308, where:
The splicing unit 306 is configured to splice the uncached data received by the second receiving unit 304.
It should be noted that before the caching unit 307 caches the uncached data, the processing unit 308 is configured to, if the uncached data spliced by the splicing unit 306 is incomplete, cause the first sending unit 303 to send third request information to the source server 40, where the third request information indicates the uncached data and the starting point of the data. The second receiving unit 304 is also configured to receive the data sent by the source server 40 from the starting point, so that the splicing unit 306 can then splice the data received by the second receiving unit 304 with the previously spliced, incomplete data.
The caching unit 307 is configured to cache the data spliced by the splicing unit 306.
Further, the processing unit 308 may also be configured to, for the uncached data sent by the source server 40 and received by the second receiving unit 304, obtain the random access points contained in the data and update the request points according to the random access points.
The above cache server 30 corresponds to the above method embodiments and can be used in the steps of the above method embodiments; for its application in the specific steps, reference may be made to the above method embodiments, which is not repeated here.
With the cache server 30 provided by this embodiment of the present invention, the cache server 30 receives first request information sent by at least two user equipments, each piece of first request information indicating the data needed by a user equipment and the request point of the data; if it determines that the data needed by the user equipments is the same and is not cached on the cache server, it selects one request point from the request points falling within each preset window, and sends the source server second request information indicating that data and the selected request point. In this way, the cache server 30 can use the preset window to avoid duplicate requests for the same data with nearby request points; since the request points within one preset window are close in position, they can be treated as requests for the same request point, so selecting one request point within the preset window and sending a request to the source server can reduce the bandwidth consumption on the upstream network between this cache server 30 and the source server, thereby lowering upstream network traffic and relieving network pressure.
A system provided by an embodiment of the present invention, as shown in FIG. 7, includes one or more cache servers 30 and a source server 40, where:
The cache server 30 may be the cache server 30 described in at least one of FIGS. 4-6.
The source server 40 is configured to receive the second request sent by the cache server 30, where the second request information indicates the data not cached on the cache server and the request point of the data, and to send the uncached data to the cache server 30 starting from the position corresponding to the request point.
It should be noted that the above cache server 30 and source server 40 correspond to the above method embodiments and can be used in the steps of the above method embodiments; for their application in the specific steps, reference may be made to the above method embodiments. The specific structure of the cache server 30 is the same as that of the cache server provided in the above embodiments and is not repeated here.
In the system provided by this embodiment of the present invention, the cache server 30 receives first request information sent by at least two user equipments, each piece of first request information indicating the data needed by a user equipment and the request point of the data; if it determines that the data needed by the user equipments is the same and is not cached on the cache server, it selects one request point from the request points falling within each preset window, and sends the source server 40 second request information indicating that data and the selected request point. In this way, the cache server 30 can use the preset window to avoid duplicate requests for the same data with nearby request points; since the request points within one preset window are close in position, they can be treated as requests for the same request point, so selecting one request point within the preset window and sending a request to the source server 40 can reduce the bandwidth consumption on the upstream network between this cache server 30 and the source server 40, thereby lowering upstream network traffic and relieving network pressure.
A person of ordinary skill in the art can understand that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments; the aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
The above are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims

1. A service method of a cache server, comprising:
receiving first request information sent by multiple user equipments, wherein the first request information indicates the data needed by each of the multiple user equipments and the request point of the data each needs;
if it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments is the same and the same data is not cached on the cache server, selecting one request point from the request points falling within a preset window; and
sending second request information to a source server, wherein the second request information indicates the uncached data and the selected request point.
2. The service method according to claim 1, wherein the preset window is a preset fixed window or a preset dynamically varying window.
3. The service method according to claim 2, wherein the preset fixed window is a window occupying a fixed amount of time or a fixed number of bytes.
4. The service method according to claim 2, wherein the preset dynamically varying window is a window whose occupied time varies dynamically according to the user state and the upstream network state, or a window whose occupied bytes vary dynamically according to the upstream network state and the user state.
5. The service method according to claim 3, wherein falling within each preset window comprises:
the time difference between different request points requesting the same piece of the uncached data being smaller than or equal to the time occupied by the preset window.
6. The service method according to claim 4, wherein falling within each preset window comprises:
the byte difference between different request points requesting the same piece of the uncached data being smaller than or equal to the number of bytes occupied by the preset window.
7. The service method according to any one of claims 1-6, wherein selecting one request point from the request points falling within each preset window comprises:
selecting, from the request points falling within each preset window, the request point closest to the starting position of the preset window.
8. The service method according to any one of claims 1-7, wherein after sending the second request information to the source server, the method further comprises:
receiving the uncached data sent by the source server starting from the position corresponding to the request point; and
sending the data to the user equipments, according to the request points indicated in the received first request information sent by the multiple user equipments, starting from the positions corresponding to the request points.
9. The service method according to claim 8, comprising:
receiving the not-yet-received uncached data sent by the source server from the request point, and stopping receiving the already-received uncached data.
10. The service method according to claim 9, wherein after receiving the not-yet-received uncached data sent by the source server from the request point and stopping receiving the already-received uncached data, the method further comprises:
splicing the received uncached data; and
caching the spliced data.
11. The service method according to claim 10, wherein before caching the spliced data, the method further comprises:
if the spliced uncached data is incomplete, sending third request information to the source server, wherein the third request information indicates the uncached data and the starting point of the data; and
receiving the data sent by the source server from the starting point.
12. The service method according to any one of claims 1 to 11, wherein after selecting one request point from the request points falling within each preset window, the method further comprises:
if the uncached data sent by the source server is received and the random access points contained in the data are obtained, updating the request point according to the random access points.
13. A cache server, comprising:
a first receiving unit, configured to receive first request information sent by multiple user equipments, wherein the first request information indicates the data needed by each of the multiple user equipments and the request point of the data each needs;
a selecting unit, configured to: if it is determined that the data indicated by the first request information sent by at least two of the multiple user equipments received by the first receiving unit is the same and the same data is not cached on the cache server, select one request point from the request points falling within each preset window; and
a first sending unit, configured to send second request information to a source server, wherein the second request information indicates the uncached data and the request point selected by the selecting unit.
14. The cache server according to claim 13, wherein the selecting unit is specifically configured to select, from the request points falling within each preset window, the request point closest to the starting position of the preset window.
15. The cache server according to claim 13, further comprising a second receiving unit and a second sending unit, wherein:
the second receiving unit is configured to receive the uncached data sent by the source server starting from the position corresponding to the request point; and
the second sending unit is configured to send the data received by the second receiving unit to the user equipments, according to the request points indicated in the first request information sent by the multiple user equipments and received by the first receiving unit, starting from the positions corresponding to the request points.
16. The cache server according to claim 15, wherein
the second receiving unit is specifically configured to receive the not-yet-received uncached data sent by the source server from the request point, and stop receiving the already-received uncached data.
17. The cache server according to claim 16, further comprising a splicing unit and a caching unit, wherein:
the splicing unit is configured to splice the uncached data received by the second receiving unit; and
the caching unit is configured to cache the data spliced by the splicing unit.
18. The cache server according to claim 17, further comprising a processing unit, wherein:
the processing unit is configured to: if the uncached data spliced by the splicing unit is incomplete, cause the first sending unit to send third request information to the source server, wherein the third request information indicates the uncached data and the starting point of the data; and
the second receiving unit is further configured to receive the data sent by the source server from the starting point.
19. The cache server according to any one of claims 13 to 18, wherein the processing unit is further configured to: if the second receiving unit receives the uncached data sent by the source server and the random access points contained in the data are obtained, update the request point according to the random access points.
20. A system, comprising a source server and at least one cache server, wherein:
the cache server is the cache server according to any one of claims 13-19; and
the source server is configured to receive the second request sent by the cache server, wherein the second request information indicates the data, indicated in the first request information received by the cache server, that is not cached on the cache server, and the request point of that data, and to send the uncached data to the cache server starting from the position corresponding to the request point.
PCT/CN2013/076680 WO2013185547A1 (zh) 2012-06-15 2013-06-04 Service method of a cache server, cache server, and system

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/564,703 US20150095447A1 (en) 2012-06-15 2014-12-09 Serving method of cache server, cache server, and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210199126.7A CN103516731B (zh) 2012-06-15 2012-06-15 Service method of a cache server, cache server, and system
CN201210199126.7 2012-06-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/564,703 Continuation US20150095447A1 (en) 2012-06-15 2014-12-09 Serving method of cache server, cache server, and system

Publications (1)

Publication Number Publication Date
WO2013185547A1 true WO2013185547A1 (zh) 2013-12-19

Family

ID=49757503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/076680 WO2013185547A1 (zh) 2013-06-04 2013-12-19 Service method of a cache server, cache server, and system

Country Status (3)

Country Link
US (1) US20150095447A1 (zh)
CN (1) CN103516731B (zh)
WO (1) WO2013185547A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105025305A (zh) * 2014-04-22 2015-11-04 中兴通讯股份有限公司 Method and device for requesting and sending IPTV picture files
CN104572860B (zh) * 2014-12-17 2018-01-26 北京皮尔布莱尼软件有限公司 Data processing method and system
CN106201561B (zh) * 2015-04-30 2019-08-23 阿里巴巴集团控股有限公司 Method and device for upgrading a distributed cache cluster
CN107623729B (zh) * 2017-09-08 2021-01-15 华为技术有限公司 Caching method and device, and cache service system
CN110113306B (zh) * 2019-03-29 2022-05-24 华为技术有限公司 Data distribution method and network device
CN113905258B (zh) * 2021-09-08 2023-11-03 鹏城实验室 Video playback method, network device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6868452B1 (en) * 1999-08-06 2005-03-15 Wisconsin Alumni Research Foundation Method for caching of media files to reduce delivery cost
CN102075562A (zh) * 2010-12-03 2011-05-25 华为技术有限公司 Collaborative caching method and device
CN102196298A (zh) * 2011-05-19 2011-09-21 广东星海数字家庭产业技术研究院有限公司 Distributed video-on-demand system and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101039357A (zh) * 2006-03-17 2007-09-19 陈晓月 Method for browsing existing websites with a mobile phone
US9432433B2 (en) * 2006-06-09 2016-08-30 Qualcomm Incorporated Enhanced block-request streaming system using signaling or block creation
US8355433B2 (en) * 2009-08-18 2013-01-15 Netflix, Inc. Encoding video streams for adaptive video streaming
CN101998682A (zh) * 2009-08-27 2011-03-30 中兴通讯股份有限公司 Apparatus and method for a personal network device to obtain service content, and related apparatus

Also Published As

Publication number Publication date
CN103516731A (zh) 2014-01-15
CN103516731B (zh) 2017-04-19
US20150095447A1 (en) 2015-04-02

Similar Documents

Publication Publication Date Title
CN111586479B (zh) Machine-implemented method performed by a client device, and readable medium
US8661098B2 (en) Live media delivery over a packet-based computer network
TWI470983B (zh) Method and apparatus for updating a hypertext transfer protocol content description
US9356985B2 (en) Streaming video to cellular phones
EP3120520B1 (en) Media streaming
CN108063769B (zh) Content service implementation method and apparatus, and content delivery network node
WO2013185547A1 (zh) Service method of a cache server, cache server, and system
US20150271231A1 (en) Transport accelerator implementing enhanced signaling
CN113141522B (zh) Resource transmission method and apparatus, computer device, and storage medium
WO2018233539A1 (zh) Video processing method, computer storage medium, and device
KR101472032B1 (ko) Processing method for representation switching in HTTP streaming
KR20120021246A (ko) Method for transmitting and receiving a media information file for HTTP streaming
WO2017063574A1 (zh) Adaptive streaming media transmission method and apparatus
WO2022056072A1 (en) Presenting media items on a playing device
EP2538629A1 (en) Content delivering method
US11882168B2 (en) Methods, systems, and media for delivering manifestless streaming media content
KR101888982B1 (ko) Method for providing a content caching service for adaptive content provision, and local caching device therefor
WO2017114393A1 (zh) HTTP streaming media transmission method and apparatus
WO2023275969A1 (ja) Data relay device, distribution system, data relay method, and computer-readable medium
KR101971595B1 (ko) Method for providing a content caching service for adaptive content provision, and local caching device therefor
KR20200018890A (ko) Wireless streaming method
WO2018150594A1 (ja) Terminal device, video distribution device, video distribution system, and video distribution method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13803600

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13803600

Country of ref document: EP

Kind code of ref document: A1