WO2022233335A1 - Live broadcast data processing method, apparatus, device and medium - Google Patents

Live broadcast data processing method, apparatus, device and medium

Info

Publication number
WO2022233335A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
online live
live broadcast
data
identifier
Prior art date
Application number
PCT/CN2022/091482
Other languages
English (en)
French (fr)
Inventor
樊博超
Original Assignee
北京字节跳动网络技术有限公司
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2022233335A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed
    • H04N 21/23: Processing of content or additional data; elementary server operations; server middleware
    • H04N 21/231: Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23106: Content storage operation involving caching operations
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting

Definitions

  • the present disclosure relates to the technical field of data processing, and in particular, to a method, apparatus, device and medium for processing live broadcast data.
  • distributing live broadcast room data and obtaining live broadcast room data is a core function, which is usually handled by multiple basic servers.
  • a cache can be set up between the basic servers and the data storage source, but when the number of live broadcast visits increases, not only does the response become slow due to long data delays, but multiple basic servers may also update the cache at the same time, causing the cache to be updated continuously and leading to serious problems such as cache avalanches.
  • the present disclosure provides a method, apparatus, device and medium for processing live broadcast data.
  • Embodiments of the present disclosure provide a method for processing live broadcast data, the method comprising:
  • the online live broadcast data is read from the data cache by the first server, and the online live broadcast data is distributed.
  • Embodiments of the present disclosure provide a method for processing live broadcast data, including:
  • the online live broadcast data is read from the data cache, and the online live broadcast data is distributed.
  • Embodiments of the present disclosure provide a method for processing live broadcast data, including:
  • the online live broadcast data is written into a data cache for the first server to read the online live broadcast data from the data cache and distribute the online live broadcast data.
  • Embodiments of the present disclosure also provide a live data processing apparatus, the apparatus comprising:
  • An identification module used for sending the online live studio identification to the identification cache through the first server
  • a data writing module configured to obtain the online live broadcast room identification from the identification cache through the second server, obtain online live broadcast data from the data storage according to the online live broadcast room identification, and write the online live broadcast data into data cache;
  • a data reading module configured to read the online live broadcast data from the data cache through the first server, and distribute the online live broadcast data.
  • Embodiments of the present disclosure also provide a live data processing apparatus, the apparatus comprising:
  • the sending module is configured to send the online live room identifier to the identifier cache, so that the second server obtains the online live room identifier from the identifier cache, obtains the online live broadcast data from the data storage according to the online live room identifier, and writes the online live broadcast data into the data cache;
  • a distribution module configured to read the online live broadcast data from the data cache, and distribute the online live broadcast data.
  • Embodiments of the present disclosure also provide a live broadcast data processing apparatus, including:
  • a first obtaining module configured to obtain an online live room identification from the identification cache, and the online live room identification is sent to the identification cache by the first server;
  • a second acquisition module configured to acquire online live broadcast data from the data storage according to the online live broadcast room identifier
  • a writing module configured to write the online live broadcast data into a data cache, for the first server to read the online live broadcast data from the data cache, and distribute the online live broadcast data.
  • An embodiment of the present disclosure further provides an electronic device, the electronic device includes: a processor; a memory for storing instructions executable by the processor; the processor for reading the memory from the memory The instructions can be executed, and the instructions can be executed to implement the live broadcast data processing method provided by the embodiments of the present disclosure.
  • An embodiment of the present disclosure further provides a computer-readable storage medium, where the storage medium stores a computer program, and the computer program is used to execute the live broadcast data processing method provided by the embodiment of the present disclosure.
  • Embodiments of the present disclosure also provide a computer program product, including computer programs/instructions, which, when executed by a processor, implement the live broadcast data processing method provided by the embodiments of the present disclosure.
  • FIG. 1 is a schematic diagram of live broadcast data processing in the related art
  • 2a is a schematic flowchart of a method for processing live broadcast data according to an embodiment of the present disclosure
  • 2b is a schematic flowchart of a method for processing live broadcast data according to an embodiment of the present disclosure
  • 2c is a schematic flowchart of a method for processing live broadcast data according to an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of another method for processing live broadcast data according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of live data processing according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a live data processing apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • the term “including” and variations thereof are open-ended inclusions, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a schematic diagram of live broadcast data processing in the related art.
  • the online live broadcast room relies on three levels of storage, namely the live broadcast server's own local cache, a unified cache, and the data storage (database, DB).
  • the data storage is the data storage source, and a cache can be set between the live server and the data storage source.
  • the local cache of each live server expires periodically, after which data is retrieved again from the unified cache.
  • the cache itself will also set an expiration time.
  • if the live server finds that the data in the cache has also expired, it obtains all the online live room data from the data storage and writes it into the cache.
  • the problem with this solution is that the live server corresponding to the online live rooms carries a huge amount of traffic and therefore needs many instances to provide the service; as the number of instances grows, it puts enormous pressure on the downstream cache. When the access volume is large, the data storage itself also responds slowly and the refresh takes a very long time. In addition, because of the large number of instances, multiple instances may return to the source at the same time.
  • embodiments of the present disclosure provide a method for processing live broadcast data, which will be described below with reference to specific embodiments.
  • FIG. 2a is a schematic flowchart of a method for processing live broadcast data provided by an embodiment of the present disclosure.
  • the method can be executed by a live broadcast data processing apparatus, wherein the apparatus can be implemented by software and/or hardware, and can generally be integrated in an electronic device.
  • the method includes:
  • Step 101 Send the online live studio identifier to the identifier cache through the first server.
  • the first server may be a server for acquiring live broadcast data and sending an online live broadcast room identifier, and the number of the first servers may be multiple, which is not particularly limited.
  • the online live broadcast room identifier is used to represent the live broadcast room that has been opened and is being broadcast live, and the identifier can be represented in the form of numbers and/or letters, which is not limited in particular.
  • the ID cache refers to the external cache used to store the ID of the online live studio.
  • the first server may detect the opening and closing of the live broadcast room, and send the identifier of the currently opened online live broadcast room to the identifier cache for storage.
  • the method for processing live broadcast data may further include: performing, by the first server, an update operation on the identification of the online live broadcast room in the identification cache, where the update operation includes an insert operation and/or a delete operation.
  • with the opening and closing of live rooms, the first server can update the online live room identifiers stored in the identifier cache: it inserts the identifier of a newly opened live room and deletes the identifier of a live room that has been closed.
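  • As an illustrative, non-limiting sketch (not part of the claimed method), the following Python fragment shows one way the first server's maintenance of the identifier cache could look; the class names and the in-memory set standing in for an external cache such as Redis are assumptions made for the example.

```python
# Minimal sketch of the first server maintaining the identifier cache.
# The in-memory set stands in for an external cache (e.g. Redis); all
# names here are illustrative assumptions, not part of the disclosure.

class IdentifierCache:
    """External cache holding the identifiers of online live rooms."""

    def __init__(self):
        self._ids = set()

    def insert(self, room_id: str) -> None:
        self._ids.add(room_id)          # a live room has just opened

    def delete(self, room_id: str) -> None:
        self._ids.discard(room_id)      # a live room has just closed

    def all_ids(self) -> set:
        return set(self._ids)


class FirstServer:
    """Detects live rooms opening and closing and updates the identifier cache."""

    def __init__(self, id_cache: IdentifierCache):
        self.id_cache = id_cache

    def on_room_opened(self, room_id: str) -> None:
        self.id_cache.insert(room_id)

    def on_room_closed(self, room_id: str) -> None:
        self.id_cache.delete(room_id)
```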
  • Step 102 Obtain the online live broadcast room identifier from the identification cache through the second server, acquire online live broadcast data from the data storage according to the online live broadcast room identifier, and write the online live broadcast data into the data cache.
  • the second server may be a server newly added in the embodiment of the disclosure for updating live broadcast data
  • the data cache may be an external cache used for storing live broadcast data, which is different from the above-mentioned identification cache.
  • the above identifier cache and data cache can be implemented using a Redis database or another centralized in-memory cache database such as Memcached; these are only examples, and other databases are also applicable.
  • the second server may obtain all current online live room identifiers from the identifier cache at a fixed time interval, access the data storage, obtain and package the corresponding online live broadcast data according to those identifiers, and then write the online live broadcast data into the data cache for later use.
  • the above-mentioned fixed time interval can be set according to the actual situation, for example, the fixed time interval can be 1.5 seconds.
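  • As an illustrative sketch of the periodic refresh performed by the second server (the dict-based data storage and data cache and the helper names are assumptions; the 1.5-second interval follows the example above):

```python
import time

REFRESH_INTERVAL_S = 1.5  # example interval from the description above

def refresh_once(id_cache, data_store: dict, data_cache: dict) -> None:
    """Read all online room identifiers, fetch and package their data, write the data cache."""
    online_ids = id_cache.all_ids()     # e.g. the IdentifierCache sketch above
    packaged = {room_id: data_store.get(room_id) for room_id in online_ids}
    data_cache.clear()
    data_cache.update(packaged)         # overwrite with the freshly packaged live data

def refresh_loop(id_cache, data_store: dict, data_cache: dict) -> None:
    """Second server's write path: refresh the data cache at a fixed interval."""
    while True:
        refresh_once(id_cache, data_store, data_cache)
        time.sleep(REFRESH_INTERVAL_S)
```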
  • Step 103 Read the online live broadcast data from the data cache through the first server, and distribute the online live broadcast data.
  • the first server may read the online live broadcast data from the data cache, and distribute the online live broadcast data to each client, so that the user can watch the live broadcast.
  • the online live broadcast data can be read according to the set time interval, that is, the first server can obtain the online live broadcast data regularly, and the set time interval is not limited, for example, the set time interval can be 1 second.
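  • For the read side, a correspondingly minimal sketch of the first server's periodic read-and-distribute loop (the distribute callback and the dict-based data cache are assumptions; the 1-second interval follows the example above):

```python
import time

READ_INTERVAL_S = 1.0  # example interval from the description above

def serve_loop(data_cache: dict, distribute) -> None:
    """First server's read path: read the cached live data and hand it to a distribution callback."""
    while True:
        snapshot = dict(data_cache)     # read the full set of online live data
        distribute(snapshot)            # e.g. push to connected clients
        time.sleep(READ_INTERVAL_S)
```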
  • the first server sends the online live room identifier to the identifier cache; the second server obtains the online live room identifier from the identifier cache, obtains the online live broadcast data from the data storage according to the online live room identifier, and writes the online live broadcast data into the data cache; the first server then reads the online live broadcast data from the data cache and distributes the online live broadcast data.
  • the online live broadcast data is written and read by two separate servers, which realizes read/write separation, reduces data delay, relieves the pressure placed on the cache by growing access volume, and avoids cache avalanches, thereby improving the capacity and stability of live broadcast data processing.
  • the identification cache includes at least two cache segments
  • the method for processing live broadcast data may further include: comparing the number of online live room identifiers in each cache shard with preset thresholds, and adjusting the number of cache shards according to the comparison result, where each cache shard stores a part of the multiple online live room identifiers.
  • the cache shards can be used to store some online live studio identifiers, and multiple cache shards form an identifier cache.
  • the number of cache shards is not limited and can be set according to the actual situation.
  • the number of online live room identifiers stored in each cache shard can be the same and can change with the real-time total number of online live room identifiers; for example, when the total number of online live room identifiers is 1,000 and 10 cache shards are set, each cache shard evenly stores 100 online live room identifiers.
  • after the first server sends the online live room identifiers to the identifier cache and they are stored in the cache shards, the number of online live room identifiers in each cache shard can be compared with the preset thresholds, and the number of cache shards is increased or decreased according to the comparison result to accommodate the live room traffic.
  • dynamically adjusting the number of cache shards according to the comparison result includes: when the comparison result is that the number of online live room identifiers in each cache shard is greater than a first threshold among the preset thresholds, increasing the number of cache shards; when the comparison result is that the number of online live room identifiers in each cache shard is less than a second threshold among the preset thresholds, reducing the number of cache shards.
  • the preset thresholds may include the maximum and the minimum number of online live room identifiers a cache shard should store; the first threshold is the maximum, the second threshold is the minimum, and the first threshold is greater than the second threshold.
  • when the first threshold is exceeded, access may slow down; when the count falls below the second threshold, the shards are overly redundant and the downstream second server has to refresh more shards, reducing efficiency.
  • when the comparison result is that the number of online live room identifiers in each cache shard is greater than the first threshold among the preset thresholds, the number of cache shards is increased, so that each cache shard stores fewer online live room identifiers and processing speed is improved.
  • when the comparison result is that the number of online live room identifiers in each cache shard is less than the second threshold among the preset thresholds, the number of cache shards is reduced, so that each remaining cache shard stores more online live room identifiers, fewer shards need to be refreshed, and efficiency is improved.
  • the specific amount by which the number of cache shards is increased or decreased can be set according to the actual situation, so that the number of online live room identifiers in each cache shard lies between the second threshold and the first threshold.
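  • One possible reading of this threshold-based adjustment is sketched below; the concrete threshold values, the stable hash used to route identifiers to shards, and the rebalancing target are illustrative assumptions rather than values given in the disclosure.

```python
import zlib

FIRST_THRESHOLD = 200    # assumed maximum identifiers a shard should hold
SECOND_THRESHOLD = 50    # assumed minimum identifiers a shard should hold

def shard_of(room_id: str, shard_count: int) -> int:
    """Route an identifier to a shard with a hash that is stable across processes."""
    return zlib.crc32(room_id.encode("utf-8")) % shard_count

def adjust_shard_count(total_ids: int, shard_count: int) -> int:
    """Grow or shrink the shard count so each shard stays between the thresholds."""
    per_shard = total_ids / shard_count
    if per_shard > FIRST_THRESHOLD or per_shard < SECOND_THRESHOLD:
        target = (FIRST_THRESHOLD + SECOND_THRESHOLD) // 2   # aim between the thresholds
        shard_count = max(1, -(-total_ids // target))        # ceiling division
    return shard_count

# Example: 1,000 identifiers spread over 10 shards is 100 per shard, within range.
assert adjust_shard_count(1000, 10) == 10
```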
  • a sharding scheme that dynamically expands or shrinks the identifier cache can be introduced, and the number of cache shards can be dynamically adjusted as the number of online live room identifiers changes; that is, the identifier cache can be dynamically expanded or shrunk according to the current number of live rooms, ensuring the access efficiency and stability of the identifier cache.
  • the second server includes at least two refresh execution modules
  • obtaining the online live room identifiers from the identifier cache through the second server may include: the at least two refresh execution modules in the second server respectively obtain the online live room identifiers from the cache shards in the identifier cache corresponding to each refresh execution module, where each refresh execution module corresponds to a preset number of cache shards.
  • the refresh execution module is a specific function module used for data update in the second server
  • the second server may be composed of a refresh scheduling module and a plurality of refresh execution modules.
  • Each refresh execution module can correspond to a preset number of cache shards, and the preset number can be set according to the actual situation, for example, one refresh execution module corresponds to 10 cache shards.
  • the refresh scheduling module in the second server may schedule each refresh execution module periodically, and each refresh execution module obtains the online live room identifiers from its corresponding cache shards in the identifier cache.
  • the method for processing live data may further include: adjusting the number of refresh execution modules according to the adjustment result of the number of cache segments, wherein the number of refresh execution modules is proportional to the number of cache segments.
  • since the number of cache shards can be dynamically increased or decreased, the number of refresh execution modules is adjusted along with any change in the number of cache shards, that is, the number of refresh execution modules is proportional to the number of cache shards.
  • the advantage of this arrangement is that, through the distributed deployment and dynamic adjustment of the refresh execution modules, the lengthening of refresh time caused by changes in the number of cache shards can be avoided, and the refresh time of live room data can be kept within a certain range.
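  • A minimal sketch of keeping the refresh execution modules proportional to the cache shards and assigning shards to them (the one-worker-per-10-shards ratio follows the example above; the function names and assignment policy are assumptions):

```python
import math

SHARDS_PER_WORKER = 10   # example ratio: one refresh execution module per 10 shards

def required_workers(shard_count: int) -> int:
    """Number of refresh execution modules, proportional to the shard count."""
    return max(1, math.ceil(shard_count / SHARDS_PER_WORKER))

def assign_shards(shard_count: int, worker_count: int) -> dict:
    """Map each refresh execution module to the cache shards it must refresh."""
    assignment = {w: [] for w in range(worker_count)}
    for shard in range(shard_count):
        assignment[shard % worker_count].append(shard)
    return assignment

# Example: 37 shards need 4 workers, each refreshing 9 or 10 shards.
plan = assign_shards(37, required_workers(37))
```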
  • FIG. 2b is a schematic flowchart of another live broadcast data processing method provided by an embodiment of the present disclosure.
  • the live broadcast data processing method can be applied to the first server, and the live broadcast data processing method includes:
  • the first server may be a server for acquiring live broadcast data and sending an online live broadcast room identifier, and the number of the first servers may be multiple, which is not particularly limited.
  • the online live broadcast room identifier is used to represent the live broadcast room that has been opened and is being broadcast live, and the identifier can be represented in the form of numbers and/or letters, which is not limited in particular.
  • the ID cache refers to the external cache used to store the ID of the online live studio.
  • the first server may detect the opening and closing of the live broadcast room, and send the identifier of the currently opened online live broadcast room to the identifier cache for storage.
  • the above-mentioned live broadcast data processing method may further include:
  • An update operation is performed on the identifier of the online live room in the identifier cache, wherein the update operation includes an insert operation and/or a delete operation. Further, with the opening and closing of live rooms, the first server can update the online live room identifiers stored in the identifier cache: it inserts the identifier of a newly opened live room and deletes the identifier of a live room that has been closed.
  • the second server may be a server newly added in the embodiment of the present disclosure for updating live broadcast data
  • the data cache may be an external cache for storing live broadcast data, which is different from the above-mentioned identification cache.
  • the above identifier cache and data cache can be implemented using a Redis database or another centralized in-memory cache database such as Memcached; these are only examples, and other databases are also applicable.
  • the second server may obtain all current online live room identifiers from the identifier cache at a fixed time interval, access the data storage, obtain and package the corresponding online live broadcast data according to those identifiers, and then write the online live broadcast data into the data cache for later use.
  • the above-mentioned fixed time interval can be set according to the actual situation, for example, the fixed time interval can be 1.5 seconds.
  • the first server may read the online live broadcast data from the data cache, and distribute the online live broadcast data to each client, so that the user can watch the live broadcast.
  • the online live broadcast data can be read according to the set time interval, that is, the first server can obtain the online live broadcast data regularly, and the set time interval is not limited, for example, the set time interval can be 1 second.
  • the first server sends the online live room identifier to the identifier cache; the second server obtains the online live room identifier from the identifier cache, obtains the online live broadcast data from the data storage according to the online live room identifier, and writes the online live broadcast data into the data cache; the first server then reads the online live broadcast data from the data cache and distributes the online live broadcast data.
  • the online live broadcast data is written and read by two separate servers, which realizes read/write separation, reduces data delay, relieves the pressure placed on the cache by growing access volume, and avoids cache avalanches, thereby improving the capacity and stability of live broadcast data processing.
  • the identification cache includes at least two cache segments
  • the method for processing live broadcast data may further include: comparing the number of online live room identifiers in each cache shard with preset thresholds, and adjusting the number of cache shards according to the comparison result, where each cache shard stores a part of the multiple online live room identifiers.
  • the cache shards can be used to store some online live studio identifiers, and multiple cache shards form an identifier cache.
  • the number of cache shards is not limited and can be set according to the actual situation.
  • the number of online live room identifiers stored in each cache shard can be the same and can change with the real-time total number of online live room identifiers; for example, when the total number of online live room identifiers is 1,000 and 10 cache shards are set, each cache shard evenly stores 100 online live room identifiers.
  • after the first server sends the online live room identifiers to the identifier cache and they are stored in the cache shards, the number of online live room identifiers in each cache shard can be compared with the preset thresholds, and the number of cache shards is increased or decreased according to the comparison result to accommodate the live room traffic.
  • adjusting the number of cache shards according to the comparison result includes: when the comparison result is that the number of online live room identifiers in each cache shard is greater than a first threshold among the preset thresholds, increasing the number of cache shards; when the comparison result is that the number of online live room identifiers in each cache shard is less than a second threshold among the preset thresholds, reducing the number of cache shards.
  • the preset thresholds may include the maximum and the minimum number of online live room identifiers a cache shard should store; the first threshold is the maximum, the second threshold is the minimum, and the first threshold is greater than the second threshold.
  • when the first threshold is exceeded, access may slow down; when the count falls below the second threshold, the shards are overly redundant and the downstream second server has to refresh more shards, reducing efficiency.
  • when the comparison result is that the number of online live room identifiers in each cache shard is greater than the first threshold among the preset thresholds, the number of cache shards is increased, so that each cache shard stores fewer online live room identifiers and processing speed is improved.
  • when the comparison result is that the number of online live room identifiers in each cache shard is less than the second threshold among the preset thresholds, the number of cache shards is reduced, so that each remaining cache shard stores more online live room identifiers, fewer shards need to be refreshed, and efficiency is improved.
  • the specific amount by which the number of cache shards is increased or decreased can be set according to the actual situation, so that the number of online live room identifiers in each cache shard lies between the second threshold and the first threshold.
  • a sharding scheme that dynamically expands or shrinks the identifier cache can be introduced, and the number of cache shards can be dynamically adjusted as the number of online live room identifiers changes; that is, the identifier cache can be dynamically expanded or shrunk according to the current number of live rooms, ensuring the access efficiency and stability of the identifier cache.
  • the online live broadcast data is written and read by two separate servers, which realizes read/write separation, reduces data delay, relieves the pressure placed on the cache by growing access volume, and avoids cache avalanches, thereby improving the capacity and stability of live broadcast data processing.
  • FIG. 2c is a schematic flowchart of another method for processing live broadcast data provided by an embodiment of the present disclosure.
  • the method for processing live broadcast data can be applied to a second server, and the method for processing live broadcast data includes:
  • the first server may be a server for acquiring live broadcast data and sending an online live broadcast room identifier, and the number of the first servers may be multiple, which is not particularly limited.
  • the online live broadcast room identifier is used to represent the live broadcast room that has been opened and is being broadcast live, and the identifier can be represented in the form of numbers and/or letters, which is not limited in particular.
  • the ID cache refers to the external cache used to store the ID of the online live studio.
  • the first server may detect the opening and closing of the live broadcast room, and send the identifier of the currently opened online live broadcast room to the identifier cache for storage.
  • the second server may be a server newly added in the embodiment of the present disclosure for updating live broadcast data
  • the data cache may be an external cache for storing live broadcast data, which is different from the above-mentioned identification cache.
  • the above identifier cache and data cache can be implemented using a Redis database or another centralized in-memory cache database such as Memcached; these are only examples, and other databases are also applicable.
  • the second server may obtain all current online live room identifiers from the identifier cache at a fixed time interval, access the data storage, obtain and package the corresponding online live broadcast data according to those identifiers, and then write the online live broadcast data into the data cache for later use.
  • the above-mentioned fixed time interval can be set according to the actual situation, for example, the fixed time interval can be 1.5 seconds.
  • the identification cache includes at least two cache fragments
  • the method further includes: obtaining, by at least two refresh execution modules respectively, the online live room identifiers from the cache shards in the identifier cache corresponding to each refresh execution module, where each refresh execution module corresponds to a preset number of cache shards.
  • FIG. 3 is a schematic flowchart of another live broadcast data processing method provided by an embodiment of the present disclosure. On the basis of the foregoing embodiment, this embodiment further optimizes the foregoing live broadcast data processing method. As shown in Figure 3, the method includes:
  • Step 201 Send the online live studio identifier to the identifier cache through the first server.
  • the method for processing live broadcast data may further include: performing, by the first server, an update operation on the identifier of the online live broadcast room in the identifier cache, where the update operation includes an insert operation and/or a delete operation.
  • steps 202 to 205 may be performed, or steps 204 to 205 may be directly performed, that is, steps 202 to 203 are optional steps.
  • Step 202 Compare the number of online live room identifiers in each cache segment in the identifier cache with a preset threshold, and adjust the number of cache segments according to the comparison result.
  • the identification cache includes at least two cache segments, and each cache segment stores a part of multiple online live room identifiers.
  • dynamically adjusting the number of cache shards according to the comparison result includes: when the comparison result is that the number of online live room identifiers in each cache shard is greater than a first threshold among the preset thresholds, increasing the number of cache shards; when the comparison result is that the number of online live room identifiers in each cache shard is less than a second threshold among the preset thresholds, reducing the number of cache shards.
  • Step 203 Adjust the number of refresh execution modules according to the result of adjusting the number of cache fragments.
  • the second server includes at least two refresh execution modules, and the number of refresh execution modules is proportional to the number of cache fragments.
  • Step 204 Obtain the online live broadcast room identifier from the identification cache through the second server, acquire the online live broadcast data from the data storage according to the online live broadcast room identifier, and write the online live broadcast data into the data cache.
  • obtaining the online live room identifiers from the identifier cache through the second server may include: the at least two refresh execution modules in the second server respectively obtain the online live room identifiers from the cache shards in the identifier cache corresponding to each refresh execution module, where each refresh execution module corresponds to a preset number of cache shards.
  • Step 205 Read the online live broadcast data from the data cache through the first server, and distribute the online live broadcast data.
  • FIG. 4 is a schematic diagram of a live broadcast data processing provided by an embodiment of the present disclosure.
  • the first server 11 is equivalent to the live broadcast server in FIG. 1 .
  • a second server 12 is added for updating live broadcast data.
  • FIG. 4 is different from FIG. 1 in that the cache is composed of an identification cache 13 and a data cache 14 .
  • the second server 12 may be composed of a refresh scheduling module 21 and multiple refresh execution modules 22, and the identification cache 13 may also be composed of multiple cache segments (not shown in the figure).
  • the number of cache fragments can be dynamically adjusted according to the change in the number of online live studio identifiers, and then the number of refresh execution modules 22 can be adjusted accordingly, so as to cope with and ensure the stability of the refresh time.
  • the first server 11 may insert or delete the online live room identifier in the identifier cache 13 , and the online live room identifier is maintained in the identifier cache 13 .
  • the second server 12 periodically obtains the identifiers of all current online live studios from the identifier cache 13 , and accesses the data storage 15 to package the data of all the online live studios, and writes them into the data cache 14 after the packaging is completed.
  • the first server 11 periodically obtains the full amount of online live studio data from the data cache 14 and provides services to the outside world.
  • the first server 11 and the second server 12 are independent of each other when performing specific functions, which realizes logical separation of read and write, and improves the capacity and stability of the overall system.
  • a sharding scheme (also called a bucketing scheme) that dynamically expands or shrinks capacity is introduced for the identifier cache, and the number of shards or buckets is customized according to the current scale of the live broadcast service.
  • the first server 11 stores the online live broadcast room identifiers in the cache shards of the specified range, and adds or removes the online live broadcast room identifiers in each cache shard as live rooms open and close.
  • the refresh scheduling module 21 in the second server 12 can schedule each refresh execution module 22 periodically; the refresh execution modules 22 obtain the online live room identifiers from their corresponding cache shards in the identifier cache 13 and synchronize the full set of online live room identifiers.
  • the number of the refresh execution modules 22 can be dynamically increased or decreased, and the refresh scheduling module 21 can perceive the increase or decrease of the refresh execution modules 22 through service discovery.
  • in this way, the refresh duration growing in step with the change in the number of cache shards can be handled: it is only necessary to increase or decrease the number of refresh execution modules 22, and the refresh duration of the online live rooms is kept within a certain range.
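  • A sketch of how the refresh scheduling module could perceive refresh execution modules through a service-discovery-style registry and re-assign shards each cycle; the registry class and re-assignment policy are assumptions about one possible arrangement, not the disclosed implementation.

```python
class WorkerRegistry:
    """Stands in for a service-discovery component that lists live refresh workers."""

    def __init__(self):
        self._workers = set()

    def register(self, worker_id: str) -> None:
        self._workers.add(worker_id)

    def deregister(self, worker_id: str) -> None:
        self._workers.discard(worker_id)

    def live_workers(self) -> list:
        return sorted(self._workers)


def schedule_cycle(registry: WorkerRegistry, shard_count: int) -> dict:
    """Re-discover the live workers and hand each one the shards it should refresh."""
    workers = registry.live_workers()
    if not workers:
        return {}
    plan = {w: [] for w in workers}
    for shard in range(shard_count):
        plan[workers[shard % len(workers)]].append(shard)
    return plan
```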
  • This scheme reduces the update and read pressure on the cache through the read/write-separated update scheme and reduces the systemic risk caused by excessive cache load; through the dynamically expandable or shrinkable sharding scheme for the identifier cache and the distributed refresh scheme, the overall architecture becomes dynamically scalable, can cope with rapid business growth, avoids the refresh time growing linearly with the number of online live rooms, reduces the refresh delay, and compresses the update delay of massive live data to the second level.
  • the online live room identifiers are sent to the identifier cache through the first server; the number of online live room identifiers in each cache shard is compared with the preset thresholds, the number of cache shards is adjusted according to the comparison result, and the number of refresh execution modules is adjusted according to the result of adjusting the number of cache shards; the second server obtains the online live room identifiers from the identifier cache, obtains the online live broadcast data from the data storage according to the online live room identifiers, and writes the online live broadcast data into the data cache; the first server reads the online live broadcast data from the data cache and distributes the online live broadcast data.
  • the online live broadcast data is written and read by two separate servers, which realizes read/write separation, reduces data delay, relieves the pressure placed on the cache by growing access volume, and avoids cache avalanches, thereby improving the capacity and stability of live broadcast data processing; in addition, according to the current number of online live rooms, the identifier cache and the refresh execution modules in the server can be dynamically expanded or reduced, enhancing scalability and ensuring access efficiency and the stability of the identifier cache.
  • FIG. 5 is a schematic structural diagram of a live broadcast data processing apparatus according to an embodiment of the present disclosure.
  • the apparatus may be implemented by software and/or hardware, and may generally be integrated in an electronic device.
  • the device includes:
  • the identification module 301 is used to send the online live room identification to the identification cache through the first server;
  • a data writing module 302 is configured to obtain the online live broadcast room identifier from the identification cache through the second server, obtain online live broadcast data from a data storage according to the online live broadcast room identifier, and write the online live broadcast data into the data cache;
  • a data reading module 303 configured to read the online live broadcast data from the data cache through the first server, and distribute the online live broadcast data.
  • the device also includes an identification update module for:
  • An update operation is performed on the identifier of the online live studio in the identifier cache by the first server, wherein the update operation includes an insert operation and/or a delete operation.
  • the online live broadcast data is read at a set time interval.
  • the identification cache includes at least two cache fragments
  • the device further includes a first adjustment module for:
  • the number of online live room identifiers in each cache shard is compared with preset thresholds, and the number of cache shards is adjusted according to the comparison result, wherein each cache shard stores a part of the multiple online live room identifiers.
  • the first adjustment module is specifically used for:
  • when the comparison result is that the number of online live room identifiers in each cache shard is greater than the first threshold among the preset thresholds, the number of cache shards is increased;
  • when the comparison result is that the number of online live room identifiers in each cache shard is less than the second threshold among the preset thresholds, the number of cache shards is reduced.
  • the second server includes at least two refresh execution modules, and the data writing module 302 is specifically configured to:
  • the online live room identifiers are obtained from the cache shards in the identifier cache corresponding to each refresh execution module, respectively, and each refresh execution module corresponds to a preset number of cache shards.
  • the device further includes a second adjustment module for:
  • the number of the refresh execution modules is adjusted according to the adjustment result of the number of the cache fragments, wherein the number of the refresh execution modules is proportional to the number of the cache fragments.
  • the live broadcast data processing apparatus provided by the embodiment of the present disclosure can execute the live broadcast data processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • Embodiments of the present disclosure also provide a schematic structural diagram of a live broadcast data processing device, where the live broadcast data processing device includes:
  • the sending module is configured to send the online live room identifier to the identifier cache, so that the second server obtains the online live room identifier from the identifier cache, obtains the online live broadcast data from the data storage according to the online live room identifier, and writes the online live broadcast data into the data cache;
  • a distribution module configured to read the online live broadcast data from the data cache, and distribute the online live broadcast data.
  • the apparatus is further configured to: perform an update operation on the online live studio identifier in the identifier cache, where the update operation includes an insert operation and/or a delete operation.
  • the identification cache includes at least two cache fragments, and the device is further configured to:
  • the number of online live room identifiers in each cache shard is compared with preset thresholds, and the number of cache shards is adjusted according to the comparison result, wherein each cache shard stores a part of the multiple online live room identifiers.
  • when the device is used to adjust the number of cache shards according to the comparison result, it is specifically configured to:
  • when the comparison result is that the number of online live room identifiers in each cache shard is greater than the first threshold among the preset thresholds, the number of cache shards is increased;
  • when the comparison result is that the number of online live room identifiers in each cache shard is less than the second threshold among the preset thresholds, the number of cache shards is reduced.
  • the live broadcast data processing apparatus provided by the embodiment of the present disclosure can execute the live broadcast data processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • Embodiments of the present disclosure also provide a schematic structural diagram of a live broadcast data processing device, where the live broadcast data processing device includes:
  • a first obtaining module configured to obtain an online live room identification from the identification cache, and the online live room identification is sent to the identification cache by the first server;
  • a second acquisition module configured to acquire online live broadcast data from the data storage according to the online live broadcast room identifier
  • a writing module configured to write the online live broadcast data into a data cache, for the first server to read the online live broadcast data from the data cache, and distribute the online live broadcast data.
  • the identification cache includes at least two cache segments
  • the device is further configured to: obtain, through at least two refresh execution modules respectively, the online live room identifiers from the cache shards in the identifier cache corresponding to each refresh execution module, where each refresh execution module corresponds to a preset number of cache shards.
  • the live broadcast data processing apparatus provided by the embodiment of the present disclosure can execute the live broadcast data processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • An embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored in the storage medium, and the computer program is used to implement the live broadcast data processing method provided by any embodiment of the present disclosure when executed.
  • An embodiment of the present disclosure also provides a computer program product, including a computer program/instruction, which, when executed by a processor, implements the live broadcast data processing method provided by any embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring specifically to FIG. 6 below, it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure.
  • the electronic device 400 in the embodiment of the present disclosure may include, but is not limited to, such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal ( For example, mobile terminals such as car navigation terminals) and the like, and stationary terminals such as digital TVs, desktop computers, and the like.
  • the electronic device shown in FIG. 6 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400.
  • the processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404 .
  • the following devices may be connected to the I/O interface 405: an input device 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 6 shows the electronic device 400 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 409, or from the storage device 408, or from the ROM 402.
  • the processing device 401 When the computer program is executed by the processing device 401, the above-mentioned functions defined in the live broadcast data processing method of the embodiment of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the client and server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can interconnect with digital data communication (e.g., a communication network) in any form or medium.
  • Examples of communication networks include local area networks (“LAN”), wide area networks (“WAN”), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: sends the online live room identifier to the identifier cache through the first server; obtains the online live room identifier from the identifier cache through the second server, obtains the online live broadcast data from the data storage according to the online live room identifier, and writes the online live broadcast data into the data cache; and reads the online live broadcast data from the data cache through the first server and distributes the online live broadcast data.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or can be implemented in a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in software or in hardware, and in some cases the name of a unit does not constitute a limitation on the unit itself.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logical Devices (CPLDs) and more.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • the present disclosure provides a method for processing live broadcast data, including: sending the online live broadcast room identifier to the identifier cache through the first server; obtaining the online live broadcast room identifier from the identifier cache through the second server, obtaining online live broadcast data from the data storage according to the online live broadcast room identifier, and writing the online live broadcast data into the data cache; and reading the online live broadcast data from the data cache through the first server and distributing the online live broadcast data.
  • the method further includes:
  • an update operation is performed by the first server on the online live broadcast room identifier in the identifier cache, wherein the update operation includes an insert operation and/or a delete operation.
  • the online live broadcast data is read according to a set time interval.
  • the identifier cache includes at least two cache shards, and the method further includes:
  • the number of the online live broadcast room identifiers in each cache shard is compared with preset thresholds, and the number of cache shards is adjusted according to the comparison result, wherein each cache shard stores a part of the plurality of online live broadcast room identifiers.
  • dynamically adjusting the number of cache shards according to the comparison result includes:
  • when the comparison result is that the number of online live broadcast room identifiers in each cache shard is greater than a first threshold of the preset thresholds, the number of cache shards is increased;
  • when the comparison result is that the number of online live broadcast room identifiers in each cache shard is less than a second threshold of the preset thresholds, the number of cache shards is reduced.
  • the second server includes at least two refresh execution modules, and obtaining the online live broadcast room identifier from the identifier cache through the second server includes:
  • the online live broadcast room identifiers are obtained, through the at least two refresh execution modules of the second server, from the cache shards in the identifier cache corresponding to the respective refresh execution modules, wherein each refresh execution module corresponds to a preset number of cache shards.
  • the live broadcast data processing method provided by the present disclosure further includes:
  • the number of refresh execution modules is adjusted according to the result of adjusting the number of cache shards, wherein the number of refresh execution modules is proportional to the number of cache shards.
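As a minimal sketch of the first-server side of the method summarized above (assuming Redis-backed identifier and data caches; any shared cache such as Memcached would work equally), the code below maintains online live room identifiers in the identifier cache as rooms open and close, and periodically reads the packed online live data from the data cache for distribution. The host names, key names, shard count and read interval are illustrative assumptions, and the packed data is keyed per identifier shard so that the refresh modules sketched further below do not overwrite one another:

```python
import json
import time

import redis  # assumed Redis-backed caches (redis-py)

ID_CACHE = redis.Redis(host="id-cache", port=6379, decode_responses=True)
DATA_CACHE = redis.Redis(host="data-cache", port=6379, decode_responses=True)

NUM_SHARDS = 16  # illustrative; the disclosure adjusts this number dynamically


def shard_key(room_id: int) -> str:
    """Map an online live room identifier to one cache shard of the identifier cache."""
    return f"online_rooms:shard:{room_id % NUM_SHARDS}"


def on_room_opened(room_id: int) -> None:
    """First server: insert the identifier of a newly opened live room (insert operation)."""
    ID_CACHE.sadd(shard_key(room_id), room_id)


def on_room_closed(room_id: int) -> None:
    """First server: delete the identifier of a closed live room (delete operation)."""
    ID_CACHE.srem(shard_key(room_id), room_id)


def distribute(rooms: dict) -> None:
    """Placeholder for pushing online live data to downstream clients."""


def serve_forever(read_interval: float = 1.0) -> None:
    """First server: periodically read the packed online live data from the data cache
    (kept fresh by the second server) and distribute it."""
    local_copy: dict = {}
    while True:
        blobs = DATA_CACHE.mget([f"online_live_data:shard:{i}" for i in range(NUM_SHARDS)])
        merged = {}
        for blob in blobs:
            if blob:
                merged.update(json.loads(blob))
        if merged:
            local_copy = merged  # refresh the server's local copy
        distribute(local_copy)
        time.sleep(read_interval)
```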
  • the present disclosure provides a method for processing live broadcast data, including: sending the online live broadcast room identifier to the identifier cache, so that the second server obtains the online live broadcast room identifier from the identifier cache, obtains online live broadcast data from the data storage according to the online live broadcast room identifier, and writes the online live broadcast data into the data cache; and reading the online live broadcast data from the data cache and distributing the online live broadcast data.
  • the live broadcast data processing method provided by the present disclosure further includes:
  • an update operation is performed on the online live broadcast room identifier in the identifier cache, wherein the update operation includes an insert operation and/or a delete operation.
  • the identifier cache includes at least two cache shards, and the live broadcast data processing method provided by the present disclosure further includes:
  • the number of the online live broadcast room identifiers in each cache shard is compared with preset thresholds, and the number of cache shards is adjusted according to the comparison result, wherein each cache shard stores a part of the plurality of online live broadcast room identifiers.
  • adjusting the number of cache shards according to the comparison result includes:
  • when the comparison result is that the number of online live broadcast room identifiers in each cache shard is greater than a first threshold of the preset thresholds, the number of cache shards is increased;
  • when the comparison result is that the number of online live broadcast room identifiers in each cache shard is less than a second threshold of the preset thresholds, the number of cache shards is reduced.
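A sketch of the threshold comparison just described, continuing the Redis-based example above (reusing the ID_CACHE handle). The detailed description also keeps a full backup set of all online live room identifiers from which shards can be rebuilt; the backup key name and the threshold values below are assumptions:

```python
def adjust_shard_count(num_shards: int,
                       first_threshold: int = 5000,
                       second_threshold: int = 500) -> int:
    """Compare the identifier count in each cache shard with the preset thresholds and
    return the adjusted shard count (unchanged when counts fall between the thresholds)."""
    counts = [ID_CACHE.scard(f"online_rooms:shard:{i}") for i in range(num_shards)]
    if counts and all(c > first_threshold for c in counts):
        return num_shards * 2            # every shard is too full: increase the shard count
    if counts and all(c < second_threshold for c in counts):
        return max(1, num_shards // 2)   # every shard is too sparse: reduce the shard count
    return num_shards


def rebuild_shards(old_num_shards: int, new_num_shards: int) -> None:
    """Redistribute the full backup set of online live room identifiers into the new shards."""
    all_ids = ID_CACHE.smembers("online_rooms:all")  # assumed full backup set
    pipe = ID_CACHE.pipeline()
    for i in range(max(old_num_shards, new_num_shards)):
        pipe.delete(f"online_rooms:shard:{i}")
    for room_id in all_ids:
        pipe.sadd(f"online_rooms:shard:{int(room_id) % new_num_shards}", room_id)
    pipe.execute()
```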
  • the present disclosure provides a method for processing live broadcast data, including: obtaining the online live broadcast room identifier from the identifier cache, the online live broadcast room identifier being sent to the identifier cache by the first server; obtaining online live broadcast data from the data storage according to the online live broadcast room identifier; and writing the online live broadcast data into a data cache, for the first server to read the online live broadcast data from the data cache and distribute the online live broadcast data.
  • the identifier cache includes at least two cache shards, and the method further includes:
  • the online live broadcast room identifiers are obtained, through at least two refresh execution modules, from the cache shards in the identifier cache corresponding to the respective refresh execution modules, wherein each refresh execution module corresponds to a preset number of cache shards.
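A sketch of one refresh execution module on the second server, again reusing the ID_CACHE and DATA_CACHE handles from the first sketch. Here fetch_rooms_from_db stands in for the real query against the data storage and is a hypothetical stub, and the 1.5-second interval follows the example used in the detailed description:

```python
import json
import time


def fetch_rooms_from_db(room_ids) -> dict:
    """Hypothetical stub for the data-storage query; replace with the real access layer."""
    return {room_id: {} for room_id in room_ids}


def refresh_once(assigned_shards: list[int]) -> None:
    """One refresh execution module: read identifiers from its assigned cache shards, fetch
    the corresponding online live data from the data storage, pack it, and write it to the
    data cache (keyed per shard so that parallel executors do not overwrite one another)."""
    for shard in assigned_shards:
        room_ids = ID_CACHE.smembers(f"online_rooms:shard:{shard}")
        rooms = fetch_rooms_from_db(room_ids)
        DATA_CACHE.set(f"online_live_data:shard:{shard}", json.dumps(rooms))


def refresh_loop(assigned_shards: list[int], interval: float = 1.5) -> None:
    """Run the refresh cycle at a fixed interval."""
    while True:
        refresh_once(assigned_shards)
        time.sleep(interval)
```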
  • the present disclosure provides a live data processing apparatus, including:
  • an identification module configured to send the online live broadcast room identifier to the identifier cache through the first server;
  • a data writing module configured to obtain the online live broadcast room identifier from the identifier cache through the second server, obtain online live broadcast data from the data storage according to the online live broadcast room identifier, and write the online live broadcast data into the data cache;
  • a data reading module configured to read the online live broadcast data from the data cache through the first server, and distribute the online live broadcast data.
  • the apparatus further includes an identifier update module configured to:
  • perform an update operation, through the first server, on the online live broadcast room identifier in the identifier cache, wherein the update operation includes an insert operation and/or a delete operation.
  • the online live broadcast data is read according to a set time interval.
  • the identifier cache includes at least two cache shards, and the apparatus further includes a first adjustment module configured to:
  • compare the number of the online live broadcast room identifiers in each cache shard with preset thresholds, and adjust the number of cache shards according to the comparison result, wherein each cache shard stores a part of the plurality of online live broadcast room identifiers.
  • the first adjustment module is specifically configured to:
  • when the comparison result is that the number of online live broadcast room identifiers in each cache shard is greater than a first threshold of the preset thresholds, increase the number of cache shards;
  • when the comparison result is that the number of online live broadcast room identifiers in each cache shard is less than a second threshold of the preset thresholds, reduce the number of cache shards.
  • the second server includes at least two refresh execution modules, and the data writing module is specifically configured to:
  • obtain the online live broadcast room identifiers, through the at least two refresh execution modules of the second server, from the cache shards in the identifier cache corresponding to the respective refresh execution modules, wherein each refresh execution module corresponds to a preset number of cache shards.
  • the apparatus further includes a second adjustment module configured to:
  • adjust the number of refresh execution modules according to the result of adjusting the number of cache shards, wherein the number of refresh execution modules is proportional to the number of cache shards.
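A sketch of how a refresh scheduling module might keep the number of refresh execution modules proportional to the number of cache shards, reusing refresh_loop from the sketch above; the thread-per-executor model and the shards-per-executor value are assumptions:

```python
import math
import threading

SHARDS_PER_EXECUTOR = 10  # assumed preset number of cache shards per refresh execution module


def start_refresh_executors(num_shards: int) -> list:
    """Start one refresh execution module (a thread here) per group of cache shards, so the
    number of executors grows and shrinks with the number of shards."""
    num_executors = math.ceil(num_shards / SHARDS_PER_EXECUTOR)
    threads = []
    for e in range(num_executors):
        shards = list(range(e * SHARDS_PER_EXECUTOR,
                            min((e + 1) * SHARDS_PER_EXECUTOR, num_shards)))
        thread = threading.Thread(target=refresh_loop, args=(shards,), daemon=True)
        thread.start()
        threads.append(thread)
    return threads
```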
  • the present disclosure provides a live data processing apparatus, including:
  • the sending module is configured to send the online live broadcast room identifier to the identifier cache, so that the second server obtains the online live broadcast room identifier from the identifier cache, obtains online live broadcast data from the data storage according to the online live broadcast room identifier, and writes the online live broadcast data into the data cache;
  • a distribution module configured to read the online live broadcast data from the data cache, and distribute the online live broadcast data.
  • the apparatus is further configured to: perform an update operation on the online live broadcast room identifier in the identifier cache, wherein the update operation includes an insert operation and/or a delete operation.
  • the identifier cache includes at least two cache shards, and the apparatus is further configured to:
  • compare the number of the online live broadcast room identifiers in each cache shard with preset thresholds, and adjust the number of cache shards according to the comparison result, wherein each cache shard stores a part of the plurality of online live broadcast room identifiers.
  • when used to adjust the number of cache shards according to the comparison result, the apparatus is specifically configured to:
  • when the comparison result is that the number of online live broadcast room identifiers in each cache shard is greater than a first threshold of the preset thresholds, increase the number of cache shards;
  • when the comparison result is that the number of online live broadcast room identifiers in each cache shard is less than a second threshold of the preset thresholds, reduce the number of cache shards.
  • Embodiments of the present disclosure also provide a live broadcast data processing apparatus, the live broadcast data processing apparatus including:
  • a first obtaining module configured to obtain the online live broadcast room identifier from the identifier cache, the online live broadcast room identifier being sent to the identifier cache by the first server;
  • a second obtaining module configured to obtain online live broadcast data from the data storage according to the online live broadcast room identifier;
  • a writing module configured to write the online live broadcast data into a data cache, for the first server to read the online live broadcast data from the data cache and distribute the online live broadcast data.
  • the identifier cache includes at least two cache shards, and the apparatus is further configured to: obtain the online live broadcast room identifiers, through at least two refresh execution modules, from the cache shards in the identifier cache corresponding to the respective refresh execution modules, wherein each refresh execution module corresponds to a preset number of cache shards.
  • the present disclosure provides an electronic device, including:
  • a processor;
  • a memory for storing instructions executable by the processor;
  • the processor being configured to read the executable instructions from the memory and execute the instructions to implement any of the live broadcast data processing methods provided in the present disclosure.
  • the present disclosure provides a computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program is used to execute any of the live broadcast data processing methods provided by the present disclosure.
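The fragments above can be tied together by a supervision loop that periodically re-checks the per-shard counts, rebuilds the identifier-cache shards when the count changes, and restarts the refresh execution modules so that their number tracks the number of shards. This is only an illustration under the assumptions already stated, not the disclosed implementation:

```python
import time


def supervise(initial_shards: int, check_interval: float = 60.0) -> None:
    """Periodically adjust the shard count and keep refresh executors in proportion."""
    num_shards = initial_shards
    start_refresh_executors(num_shards)
    while True:
        new_count = adjust_shard_count(num_shards)
        if new_count != num_shards:
            rebuild_shards(num_shards, new_count)
            num_shards = new_count
            # naive restart; production code would drain the old executor threads first
            start_refresh_executors(num_shards)
        time.sleep(check_interval)
```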

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present disclosure provide a live broadcast data processing method, apparatus, device and medium, the method including: sending an online live broadcast room identifier to an identifier cache through a first server; obtaining the online live broadcast room identifier from the identifier cache through a second server, obtaining online live broadcast data from a data storage according to the online live broadcast room identifier, and writing the online live broadcast data into a data cache; and reading the online live broadcast data from the data cache through the first server and distributing the online live broadcast data.

Description

一种直播数据处理方法、装置、设备及介质
相关申请的交叉引用
本申请要求于2021年5月7日提交的,申请名称为“一种直播数据处理方法、装置、设备及介质”的、中国专利申请号为“202110496834.6”的优先权,该中国专利申请的全部内容通过引用结合在本申请中。
技术领域
本公开涉及数据处理技术领域,尤其涉及一种直播数据处理方法、装置、设备及介质。
背景技术
随着互联网技术的快速发展,观看直播成为人们生活中重要的一种娱乐方式。
目前在直播行业中,分发在线直播间数据和获取在线直播间数据是一个核心功能,通常由多个基础服务器完成,该基础服务器与数据存储源之间可以设置缓存,但是当直播访问量增大时,不仅存在因数据延时较长造成的响应慢,而且多个基础服务器同时更新缓存,可能会导致缓存持续更新,造成缓存雪崩等严重问题。
技术解决方案
为了解决上述技术问题或者至少部分地解决上述技术问题,本公开提供了一种直播数据处理方法、装置、设备及介质。
本公开实施例提供了一种直播数据处理方法,所述方法包括:
通过第一服务器发送在线直播间标识至标识缓存中;
通过第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
通过所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
本公开实施例提供了一种直播数据处理方法,包括:
发送在线直播间标识至标识缓存中,使第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
本公开实施例提供了一种直播数据处理方法,包括:
从标识缓存中获取在线直播间标识,所述在线直播间标识由第一服务器发送至所述标识缓存中;
根据所述在线直播间标识从数据存储器中获取在线直播数据;
将所述在线直播数据写入数据缓存中,供所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
本公开实施例还提供了一种直播数据处理装置,所述装置包括:
标识模块,用于通过第一服务器发送在线直播间标识至标识缓存中;
数据写入模块,用于通过第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
数据读取模块,用于通过所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
本公开实施例还提供了一种直播数据处理装置,所述装置包括:
发送模块,用于发送在线直播间标识至标识缓存中,使第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
分发模块,用于从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
本公开实施例还提供了一种直播数据处理装置,包括:
第一获取模块,用于从标识缓存中获取在线直播间标识,所述在线直播间标识由第一服务器发送至所述标识缓存中;
第二获取模块,用于根据所述在线直播间标识从数据存储器中获取在线直播数据;
写入模块,用于将所述在线直播数据写入数据缓存中,供所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
本公开实施例还提供了一种电子设备,所述电子设备包括:处理器;用于存储所述处理器可执行指令的存储器;所述处理器,用于从所述存储器中读取所述可执行指令,并执行所述指令以实现如本公开实施例提供的直播数据处理方法。
本公开实施例还提供了一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序用于执行如本公开实施例提供的直播数据处理方法。
本公开实施例还提供了一种计算机程序产品,包括计算机程序/指令,该计算机程序/指令被处理器执行时实现如本公开实施例提供的直播数据处理方法。
附图说明
结合附图并参考以下具体实施方式,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,原件和元素不一定按照比例绘制。
图1为相关技术中直播数据处理的示意图;
图2a为本公开实施例提供的一种直播数据处理方法的流程示意图;
图2b为本公开实施例提供的一种直播数据处理方法的流程示意图;
图2c为本公开实施例提供的一种直播数据处理方法的流程示意图;
图3为本公开实施例提供的另一种直播数据处理方法的流程示意图;
图4为本公开实施例提供的一种直播数据处理的示意图;
图5为本公开实施例提供的一种直播数据处理装置的结构示意图;
图6为本公开实施例提供的一种电子设备的结构示意图。
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。 其他术语的相关定义将在下文描述中给出。
需要注意,本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
目前在直播行业中,分发在线直播间数据和获取在线直播间数据是一个核心功能,通常由多个基础服务器完成,直接影响了直播推荐、直播连麦、进房、送礼等直播相关的功能。服务器作为直播的基础服务组件,在高峰时期每秒请求次数(Query Per Second,QPS)达到百万数量级,压力巨大。图1为相关技术中直播数据处理的示意图,如图1所示,在线直播间依赖三级存储,即直播服务器自身的本地缓存(local Cache)、统一的缓存以及数据存储器(Date Base,DB),数据存储器即为数据存储源,直播服务器与数据存储源之间可以设置缓存。
每个直播服务器的本地缓存会定期过期,过期后可以从缓存中获取数据。当然缓存自身也会设置过期时间,当直播服务器发现缓存中的数据也过期时,会从数据存储器获取此时所有的在线直播间数据,并将其写入缓存中。这种方案的问题在于,由于在线直播房间对应的直播服务器有着巨大的访问量,因此该直播服务器需要很多实例来提供服务,当实例数量上涨时,对下游的缓存造成巨大压力。当访问量较多时,数据存储器自身的响应时间也会变慢,刷新时间很长。另外由于实例众多可能存在多个实例同时回源,当请求较多时,各个实例会争夺更新,使得实例读取失败,进而引发缓存雪崩等严重问题。为了解决上述问题,本公开实施例提供了一种直播数据处理方法,下面结合具体的实施例对该方法进行介绍。
图2a为本公开实施例提供的一种直播数据处理方法的流程示意图,该方法可以由直播数据处理装置执行,其中该装置可以采用软件和/或硬件实现,一般可集成在电子设备中。如图2a所示,该方法包括:
步骤101、通过第一服务器发送在线直播间标识至标识缓存中。
其中,第一服务器可以为用于获取直播数据以及发送在线直播间标识的服务器,第一服务器的数量可以为多个,具体不限。在线直播间标识用于表征已经开启的正在进行直播的直播间,该标识可以采用数字和/或字母形式进行表示,具体不限。标识缓存是指用于存储在线直播间标识的外部缓存。
本公开实施例中,第一服务器可以检测直播间的开启和关闭,并发送当前开启的在线直播间标识至标识缓存中进行存储。可选的,直播数据处理方法还可以包括:通过第一服务器对标识缓存中的在线直播间标识执行更新操作,其中,更新操作包括插入操作和/或删除操作。进一步的,第一服务器可以随着直播间的开启和关闭,可以更新标识缓存中的存储的在线直播间标识,将最新开启的直播间的在线直播间标识插入进去并且删除已经关闭的直播间的在线直播间标识。
步骤102、通过第二服务器从标识缓存中获取在线直播间标识,根据在线直播间标识从数据存储器中获取在线直播数据,并将在线直播数据写入数据缓存中。
其中,第二服务器可以为本公开实施例中新增的用于更新直播数据的服务器,数据缓存可以为用于存储直播数据的外部缓存,与上述标识缓存不同。上述标识缓存和数据缓存具体可以采用Redis数据库或其他中心缓存的内存性数据库 Memcached等实现,仅为示例,其他数据库也可适用。
本公开实施例中,第二服务器可以按照固定时间间隔从标识缓存中获取当前全部的在线直播间标识,并访问数据存储器,根据在线直播间标识获取并打包对应的在线直播数据,之后将打包之后的在线直播数据写入数据缓存中,以备后用。上述固定时间间隔可以根据实际情况设置,例如固定时间间隔可以为1.5秒。
步骤103、通过第一服务器从数据缓存中读取在线直播数据,并分发在线直播数据。
具体的,第一服务器可以从数据缓存中读取在线直播数据,并将在线直播数据分发至各客户端,使用户可以观看直播。在线直播数据可以按照设定时间间隔进行读取,也即第一服务器可以定时获取在线直播数据,设定时间间隔不限,例如设定时间间隔可以为1秒。
本公开实施例提供的直播数据处理方案,通过第一服务器发送在线直播间标识至标识缓存中;通过第二服务器从标识缓存中获取在线直播间标识,根据在线直播间标识从数据存储器中获取在线直播数据,并将在线直播数据写入数据缓存中;通过第一服务器从数据缓存中读取在线直播数据,并分发在线直播数据。采用上述技术方案,分别通过两个服务器进行在线直播数据的写入和读取,实现了读写分离,降低了数据延时,减少缓存因访问量增大造成的压力,避免造成缓存雪崩的问题,进而提升了直播数据处理的容量和稳定性。
在一些实施例中,标识缓存中包括至少两个缓存分片,直播数据处理方法还可以包括:将每个缓存分片中的在线直播间标识的数量与预设阈值进行对比,根据对比结果调整缓存分片的数量,其中,每个缓存分片中存储多个在线直播间标识中的一部分。
其中,缓存分片可以用于存储部分在线直播间标识,多个缓存分片组成了标识缓存,缓存分片的数量不限,可以根据实际情况设置。每个缓存分片中存储的在线直播间标识的数量可以相同,并且可以根据在线直播间标识实时总数量变化,例如当在线直播间标识的总数量为1000个,设置了10个缓存分片时,每个缓存分片可以均匀存储100个在线直播间标识。
具体的,第一服务器将在线直播间标识发送至标识缓存,存储在各缓存分片之后,可以将每个缓存分片中的在线直播间标识的数量与预设阈值进行对比,根据对比结果增加或减少缓存分片的数量,以适应直播间的访问量。
可选的,根据对比结果动态调整缓存分片的数量,包括:当对比结果为每个缓存分片中的在线直播间标识的数量大于预设阈值中的第一阈值,则增加缓存分片的数量;当对比结果为每个缓存分片中的在线直播间标识的数量小于预设阈值中的第二阈值,则减少缓存分片的数量。预设阈值可以包括缓存分片中最多存储和最少存储的在线直播间标识的数量,第一阈值为最多存储的数量,第二阈值为最少存储的数量,第一阈值大于第二阈值。当每个缓存分片中在线直播间标识的数量超过第一阈值可能会拖慢访问速度;当不满足第二阈值,可能会过于冗余,对于下游第二服务器来说增加刷新的分片数量,降低效率。
本公开实施例中,当对比结果为每个缓存分片中的在线直播间标识的数量大于预设阈值中的第一阈值,则增加缓存分片的数量,以使每个缓存分片中存储的在线直播间标识的数量降低,提升处理速度。而当对比结果为每个缓存分片中的在线直播间标识的数量小于预设阈值中的第二阈值,则减少缓存分片的数量,以使每个缓存分片中存储的在线直播间标识的数量增大,减少刷新的分片数量,提升效率。可以理解的是,上述缓存分片的数量增加或减少的具体值可以根据实际 情况设置,使每个缓存分片中的在线直播间标识的数量在上述第二阈值和第一阈值之间即可。
上述方案中,针对标识缓存可以引入动态扩容或缩容的分片方案,根据在线直播间标识的数量变化可以动态调整缓存分片的数量,也即根据当前在线直播间的数量可以对标识缓存进行动态扩容或缩容,以保证访问效率以及标识缓存的稳定性。
在一些实施例中,第二服务器包括至少两个刷新执行模块,通过第二服务器从标识缓存中获取在线直播间标识,可以包括:通过第二服务器中的至少两个刷新执行模块,分别从标识缓存中与各刷新执行模块对应的缓存分片中获取在线直播间标识,每个刷新执行模块对应预设数量的缓存分片。
其中,刷新执行模块是第二服务器中用于进行数据更新的具体功能模块,第二服务器可以由刷新调度模块和多个刷新执行模块构成。每个刷新执行模块可以对应预设数量的缓存分片,预设数量可以根据实际情况设置,例如一个刷新执行模块对应10个缓存分片。第二服务器中的刷新调度模块可以定时调度各个刷新执行模块,分别从标识缓存中与各缓存执行模块对应的缓存分片中获取其中的在线直播间标识。
在一些实施例中,直播数据处理方法还可以包括:根据缓存分片的数量调整结果,调整刷新执行模块的数量,其中,刷新执行模块的数量与缓存分片的数量成正比。
由于缓存分片的数量可以动态增加或减少,随着缓存分片的数量的调整,刷新执行模块的数量也会随之调整,以实现应对,也即刷新执行模块的数量与缓存分片的数量成正比。这样设置的好处在于,通过对刷新执行模块的分布式设置和动态调整,可以解决随着缓存分片的数量变化导致的刷新时长延长的问题,可以将直播间数据的刷新时长保持在一定范围内。
图2b为本公开实施例提供的另一种直播数据处理方法的流程示意图,该直播数据处理方法可适用于第一服务器,该直播数据处理方法包括:
21、发送在线直播间标识至标识缓存中,使第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
可选地,第一服务器可以为用于获取直播数据以及发送在线直播间标识的服务器,第一服务器的数量可以为多个,具体不限。在线直播间标识用于表征已经开启的正在进行直播的直播间,该标识可以采用数字和/或字母形式进行表示,具体不限。标识缓存是指用于存储在线直播间标识的外部缓存。
本公开实施例中,第一服务器可以检测直播间的开启和关闭,并发送当前开启的在线直播间标识至标识缓存中进行存储。
可选地,上述直播数据处理方法还可以包括:
对标识缓存中的在线直播间标识执行更新操作,其中,更新操作包括插入操作和/或删除操作。进一步的,第一服务器可以随着直播间的开启和关闭,可以更新标识缓存中的存储的在线直播间标识,将最新开启的直播间的在线直播间标识插入进去并且删除已经关闭的直播间的在线直播间标识。
第二服务器可以为本公开实施例中新增的用于更新直播数据的服务器,数据缓存可以为用于存储直播数据的外部缓存,与上述标识缓存不同。上述标识缓存和数据缓存具体可以采用Redis数据库或其他中心缓存的内存性数据库Memcached等实现,仅为示例,其他数据库也可适用。
本公开实施例中,第二服务器可以按照固定时间间隔从标识缓存中获取当前全部的在线直播间标识,并访问数据存储器,根据在线直播间标识获取并打包对应的在线直播数据,之后将打包之后的在线直播数据写入数据缓存中,以备后用。上述固定时间间隔可以根据实际情况设置,例如固定时间间隔可以为1.5秒。
22、从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
可选地,第一服务器可以从数据缓存中读取在线直播数据,并将在线直播数据分发至各客户端,使用户可以观看直播。在线直播数据可以按照设定时间间隔进行读取,也即第一服务器可以定时获取在线直播数据,设定时间间隔不限,例如设定时间间隔可以为1秒。
本公开实施例提供的直播数据处理方案,通过第一服务器发送在线直播间标识至标识缓存中;通过第二服务器从标识缓存中获取在线直播间标识,根据在线直播间标识从数据存储器中获取在线直播数据,并将在线直播数据写入数据缓存中;通过第一服务器从数据缓存中读取在线直播数据,并分发在线直播数据。采用上述技术方案,分别通过两个服务器进行在线直播数据的写入和读取,实现了读写分离,降低了数据延时,减少缓存因访问量增大造成的压力,避免造成缓存雪崩的问题,进而提升了直播数据处理的容量和稳定性。
在一些实施例中,标识缓存中包括至少两个缓存分片,直播数据处理方法还可以包括:将每个缓存分片中的在线直播间标识的数量与预设阈值进行对比,根据对比结果调整缓存分片的数量,其中,每个缓存分片中存储多个在线直播间标识中的一部分。
其中,缓存分片可以用于存储部分在线直播间标识,多个缓存分片组成了标识缓存,缓存分片的数量不限,可以根据实际情况设置。每个缓存分片中存储的在线直播间标识的数量可以相同,并且可以根据在线直播间标识实时总数量变化,例如当在线直播间标识的总数量为1000个,设置了10个缓存分片时,每个缓存分片可以均匀存储100个在线直播间标识。
具体的,第一服务器将在线直播间标识发送至标识缓存,存储在各缓存分片之后,可以将每个缓存分片中的在线直播间标识的数量与预设阈值进行对比,根据对比结果增加或减少缓存分片的数量,以适应直播间的访问量。
可选的,根据对比结果调整缓存分片的数量,包括:当对比结果为每个缓存分片中的在线直播间标识的数量大于预设阈值中的第一阈值,则增加缓存分片的数量;当对比结果为每个缓存分片中的在线直播间标识的数量小于预设阈值中的第二阈值,则减少缓存分片的数量。预设阈值可以包括缓存分片中最多存储和最少存储的在线直播间标识的数量,第一阈值为最多存储的数量,第二阈值为最少存储的数量,第一阈值大于第二阈值。当每个缓存分片中在线直播间标识的数量超过第一阈值可能会拖慢访问速度;当不满足第二阈值,可能会过于冗余,对于下游第二服务器来说增加刷新的分片数量,降低效率。
本公开实施例中,当对比结果为每个缓存分片中的在线直播间标识的数量大于预设阈值中的第一阈值,则增加缓存分片的数量,以使每个缓存分片中存储的在线直播间标识的数量降低,提升处理速度。而当对比结果为每个缓存分片中的在线直播间标识的数量小于预设阈值中的第二阈值,则减少缓存分片的数量,以使每个缓存分片中存储的在线直播间标识的数量增大,减少刷新的分片数量,提升效率。可以理解的是,上述缓存分片的数量增加或减少的具体值可以根据实际情况设置,使每个缓存分片中的在线直播间标识的数量在上述第二阈值和第一阈值之间即可。
上述方案中,针对标识缓存可以引入动态扩容或缩容的分片方案,根据在线直播间标识的数量变化可以动态调整缓存分片的数量,也即根据当前在线直播间的数量可以对标识缓存进行动态扩容或缩容,以保证访问效率以及标识缓存的稳定性。
采用上述技术方案,分别通过两个服务器进行在线直播数据的写入和读取,实现了读写分离,降低了数据延时,减少缓存因访问量增大造成的压力,避免造成缓存雪崩的问题,进而提升了直播数据处理的容量和稳定性。
图2b对应的其他实施细节可参见前述内容,此处不再赘述。
图2c为本公开实施例提供的另一种直播数据处理方法的流程示意图,该直播数据处理方法可适用于第二服务器,该直播数据处理方法包括:
01、从标识缓存中获取在线直播间标识,所述在线直播间标识由第一服务器发送至所述标识缓存中;
02、根据所述在线直播间标识从数据存储器中获取在线直播数据;
03、将所述在线直播数据写入数据缓存中,供所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
可选地,第一服务器可以为用于获取直播数据以及发送在线直播间标识的服务器,第一服务器的数量可以为多个,具体不限。在线直播间标识用于表征已经开启的正在进行直播的直播间,该标识可以采用数字和/或字母形式进行表示,具体不限。标识缓存是指用于存储在线直播间标识的外部缓存。
本公开实施例中,第一服务器可以检测直播间的开启和关闭,并发送当前开启的在线直播间标识至标识缓存中进行存储。
第二服务器可以为本公开实施例中新增的用于更新直播数据的服务器,数据缓存可以为用于存储直播数据的外部缓存,与上述标识缓存不同。上述标识缓存和数据缓存具体可以采用Redis数据库或其他中心缓存的内存性数据库Memcached等实现,仅为示例,其他数据库也可适用。
本公开实施例中,第二服务器可以按照固定时间间隔从标识缓存中获取当前全部的在线直播间标识,并访问数据存储器,根据在线直播间标识获取并打包对应的在线直播数据,之后将打包之后的在线直播数据写入数据缓存中,以备后用。上述固定时间间隔可以根据实际情况设置,例如固定时间间隔可以为1.5秒。
可选地,所述标识缓存中包括至少两个缓存分片,所述方法还包括:通过至少两个刷新执行模块,分别从所述标识缓存中与各所述刷新执行模块对应的缓存分片中获取所述在线直播间标识,每个所述刷新执行模块对应预设数量的所述缓存分片。
图2c对应的其他实施细节可参见前述内容,此处不再赘述。
图3为本公开实施例提供的另一种直播数据处理方法的流程示意图,本实施例在上述实施例的基础上,进一步优化了上述直播数据处理方法。如图3所示,该方法包括:
步骤201、通过第一服务器发送在线直播间标识至标识缓存中。
可选的,直播数据处理方法还可以包括:通过第一服务器对标识缓存中的在线直播间标识执行更新操作,其中,更新操作包括插入操作和/或删除。
步骤201之后,可以执行步骤202-步骤205,或者直接执行步骤204-步骤205,也即步骤202-步骤203为可选的步骤。
步骤202、将标识缓存中每个缓存分片中的在线直播间标识的数量与预设阈值进行对比,根据对比结果调整缓存分片的数量。
其中,标识缓存中包括至少两个缓存分片,每个缓存分片中存储多个在线直播间标识中的一部分。
可选的,根据对比结果动态调整缓存分片的数量,包括:当对比结果为每个缓存分片中的在线直播间标识的数量大于预设阈值中的第一阈值,则增加缓存分片的数量;当对比结果为每个缓存分片中的在线直播间标识的数量小于预设阈值中的第二阈值,则减少缓存分片的数量。
步骤203、根据缓存分片的数量调整结果,调整刷新执行模块的数量。
第二服务器包括至少两个刷新执行模块,刷新执行模块的数量与缓存分片的数量成正比。
步骤204、通过第二服务器从标识缓存中获取在线直播间标识,根据在线直播间标识从数据存储器中获取在线直播数据,并将在线直播数据写入数据缓存中。
可选的,当第二服务器包括至少两个刷新执行模块,通过第二服务器从标识缓存中获取在线直播间标识,可以包括:通过第二服务器中的至少两个刷新执行模块,分别从标识缓存中与各刷新执行模块对应的缓存分片中获取在线直播间标识,每个刷新执行模块对应预设数量的缓存分片。
步骤205、通过第一服务器从数据缓存中读取在线直播数据,并分发在线直播数据。
接下来通过一个具体的示例对本公开实施例中的直播数据处理方法进行进一步说明。示例性的,图4为本公开实施例提供的一种直播数据处理的示意图,如图4所示,相较于图1,第一服务器11相当于图1中的直播服务器,图4中新增了第二服务器12,用于更新直播数据。并且图4区别于图1,缓存由标识缓存13和数据缓存14两个组成。参见图4,第二服务器12可以由刷新调度模块21和多个刷新执行模块22构成,标识缓存13也可以由多个缓存分片组成(图中未示出)。根据在线直播间标识的数量变化可以动态调整缓存分片的数量,进而刷新执行模块22的数量也会随之调整,以实现应对,保证刷新时间的稳定。
当直播间开关播时,第一服务器11可以对标识缓存13中的在线直播间标识进行插入或删除,在线直播间标识被维护在标识缓存13中。第二服务器12定期从标识缓存13获取当前全部的在线直播间标识,并访问数据存储器15打包全部在线直播间的数据,打包完毕后写入数据缓存14。第一服务器11定期从数据缓存14全量获取在线直播间数据并对外提供服务。第一服务器11和第二服务器12在执行具体功能时相互独立,实现读写逻辑分离,提高了整体系统的容量和稳定性。
随着在线直播间越来越多,将所有在线直播间标识放在一个数据结构内无法满足要求,会拖慢访问速率,导致失败率上涨,也会影响整个缓存集群的稳定性。因此本方案中针对标识缓存引入动态扩容或缩容的分片方案(也称分桶方案),根据直播业务当前的发展规模定制分片数量或桶(bucket)数量。在线直播间开播时,第一服务器11将在线直播间标识存储至指定范围的缓存分片中,当房间开关播时,第一服务器11对各缓存分片中的在线直播间标识进行加入或移除。
此外本方案中还可以在维护一份全量在线直播间标识,作为备份数据存在,不会被第二服务器12访问到。当目前在线直播间数量大大增加,每一个缓存分片内维护的在线直播间标识的数量超过阈值时,可以通过增加缓存分片的数量,由额外的功能模块(脚本)通过全量在线直播间标识快速重建缓存分片。当在线直播间数量回落时也可以缩小缓存分片的数量,通过同样方法重构缓存分片,以减少第二服务器12刷新的缓存分片的总量,提高效率。
第二服务器12中的刷新调度模块21可以定时调度各个刷新执行模块22,分 别从标识缓存13中与各缓存执行模块22对应的缓存分片中获取其中的在线直播间标识,并且同步更新全量在线直播间标识。刷新执行模块22的数量可以动态的增加或减少,刷新调度模块21可以通过服务发现来实现对刷新执行模块22增减的感知。通过刷新执行模块22的分布式的实现,可以应对随着缓存分片数量变化导致刷新市场同步延长的问题,只需要增加或减少刷新执行模块22的数量就可以应对,并可以将在线直播间刷新的时长保持在一定范围内。
本方案中将在线直播间数据的维护和更新交给新增的一个服务器,可以定期更新数据,而对外提供的众多服务器只负责提供服务和定期从缓存中读取数据。这样的设计使得读写逻辑分离,使得对缓存的更新压力不会随访问量和服务器数量的增大而增大,从而提高了整体系统的容量和稳定性,避免出现缓存雪崩。本方案通过读写分离的更新方案,减少缓存的更新和获取压力,减少因为缓存负载过高带来的系统性风险;通过对标识缓存进行动态扩容或缩容的分片方案,以及分布式的刷新数据方案,使得整体架构具备动态可伸缩性,可以应对业务的较快增长,并且避免刷新时间随在线直播间数量线性增长的问题,减少刷新延时,将海量直播数据的更新延时压缩至秒级别。
本公开实施例提供的直播数据处理方案,通过第一服务器发送在线直播间标识至标识缓存中;将标识缓存中每个缓存分片中的在线直播间标识的数量与预设阈值进行对比,根据对比结果调整缓存分片的数量,根据缓存分片的数量调整结果,调整刷新执行模块的数量;通过第二服务器从标识缓存中获取在线直播间标识,根据在线直播间标识从数据存储器中获取在线直播数据,并将在线直播数据写入数据缓存中;通过第一服务器从数据缓存中读取在线直播数据,并分发在线直播数据。采用上述技术方案,分别通过两个服务器进行在线直播数据的写入和读取,实现了读写分离,降低了数据延时,减少缓存因访问量增大造成的压力,避免造成缓存雪崩的问题,进而提升了直播数据处理的容量和稳定性;并且根据当前在线直播间的数量可以对标识缓存以及服务器中刷新执行模块进行动态扩容或缩容,以增强了可扩展性,保证访问效率以及标识缓存的稳定性。
图5为本公开实施例提供的一种直播数据处理装置的结构示意图,该装置可由软件和/或硬件实现,一般可集成在电子设备中。如图5所示,该装置包括:
标识模块301,用于通过第一服务器发送在线直播间标识至标识缓存中;
数据写入模块302,用于通过第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
数据读取模块303,用于通过所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
可选的,所述装置还包括标识更新模块,用于:
通过所述第一服务器对所述标识缓存中的在线直播间标识执行更新操作,其中,所述更新操作包括插入操作和/或删除操作。
可选的,所述在线直播数据按照设定时间间隔进行读取。
可选的,所述标识缓存中包括至少两个缓存分片,所述装置还包括第一调整模块,用于:
将每个所述缓存分片中的所述在线直播间标识的数量与预设阈值进行对比,根据对比结果调整所述缓存分片的数量,其中,每个所述缓存分片中存储多个所述在线直播间标识中的一部分。
可选的,所述第一调整模块具体用于:
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量大于所述预设阈值中的第一阈值,则增加所述缓存分片的数量;
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量小于所述预设阈值中的第二阈值,则减少所述缓存分片的数量。
可选的,所述第二服务器包括至少两个刷新执行模块,所述数据写入模块302具体用于:
通过所述第二服务器中的所述至少两个刷新执行模块,分别从所述标识缓存中与各所述刷新执行模块对应的缓存分片中获取所述在线直播间标识,每个所述刷新执行模块对应预设数量的所述缓存分片。
可选的,所述装置还包括第二调整模块,用于:
根据所述缓存分片的数量调整结果,调整所述刷新执行模块的数量,其中,所述刷新执行模块的数量与所述缓存分片的数量成正比。
本公开实施例所提供的直播数据处理装置可执行本公开任意实施例所提供的直播数据处理方法,具备执行方法相应的功能模块和有益效果。
本公开实施例还提供了一种直播数据处理装置的结构示意图,该直播数据处理装置包括:
发送模块,用于发送在线直播间标识至标识缓存中,使第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
分发模块,用于从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
可选地,所述装置还用于:对所述标识缓存中的在线直播间标识执行更新操作,其中,所述更新操作包括插入操作和/或删除操作。
可选地,所述标识缓存中包括至少两个缓存分片,所述装置还用于:
将每个所述缓存分片中的所述在线直播间标识的数量与预设阈值进行对比,根据对比结果调整所述缓存分片的数量,其中,每个所述缓存分片中存储多个所述在线直播间标识中的一部分。
可选地,所述装置在用于根据对比结果调整所述缓存分片的数量时,具体用于:
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量大于所述预设阈值中的第一阈值,则增加所述缓存分片的数量;
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量小于所述预设阈值中的第二阈值,则减少所述缓存分片的数量。
本公开实施例所提供的直播数据处理装置可执行本公开任意实施例所提供的直播数据处理方法,具备执行方法相应的功能模块和有益效果。
本公开实施例还提供了一种直播数据处理装置的结构示意图,该直播数据处理装置包括:
第一获取模块,用于从标识缓存中获取在线直播间标识,所述在线直播间标识由第一服务器发送至所述标识缓存中;
第二获取模块,用于根据所述在线直播间标识从数据存储器中获取在线直播数据;
写入模块,用于将所述在线直播数据写入数据缓存中,供所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
可选地,所述标识缓存中包括至少两个缓存分片,所述装置还用于:通过至少两个刷新执行模块,分别从所述标识缓存中与各所述刷新执行模块对应的缓存分片中获取所述在线直播间标识,每个所述刷新执行模块对应预设数量的所述缓存分片。
本公开实施例所提供的直播数据处理装置可执行本公开任意实施例所提供的直播数据处理方法,具备执行方法相应的功能模块和有益效果。
本公开实施例还提供了一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序用于执行时实现本公开任意实施例所提供的直播数据处理方法。
本公开实施例还提供了一种计算机程序产品,包括计算机程序/指令,该计算机程序/指令被处理器执行时实现本公开任意实施例所提供的直播数据处理方法。
图6为本公开实施例提供的一种电子设备的结构示意图。下面具体参考图6,其示出了适于用来实现本公开实施例中的电子设备400的结构示意图。本公开实施例中的电子设备400可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图6示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图6所示,电子设备400可以包括处理装置(例如中央处理器、图形处理器等)401,其可以根据存储在只读存储器(ROM)402中的程序或者从存储装置408加载到随机访问存储器(RAM)403中的程序而执行各种适当的动作和处理。在RAM 403中,还存储有电子设备400操作所需的各种程序和数据。处理装置401、ROM 402以及RAM 403通过总线404彼此相连。输入/输出(I/O)接口405也连接至总线404。
通常,以下装置可以连接至I/O接口405:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置406;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置407;包括例如磁带、硬盘等的存储装置408;以及通信装置409。通信装置409可以允许电子设备400与其他设备进行无线或有线通信以交换数据。虽然图6示出了具有各种装置的电子设备400,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置409从网络上被下载和安装,或者从存储装置408被安装,或者从ROM 402被安装。在该计算机程序被处理装置401执行时,执行本公开实施例的直播数据处理方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光 纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:通过第一服务器发送在线直播间标识至标识缓存中;通过第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;通过所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通 过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,本公开提供了一种直播数据处理方法,包括:
通过第一服务器发送在线直播间标识至标识缓存中;
通过第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
通过所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
根据本公开的一个或多个实施例,本公开提供的直播数据处理方法中,所述方法还包括:
通过所述第一服务器对所述标识缓存中的在线直播间标识执行更新操作,其中,所述更新操作包括插入操作和/或删除操作。
根据本公开的一个或多个实施例,本公开提供的直播数据处理方法中,所述在线直播数据按照设定时间间隔进行读取。
根据本公开的一个或多个实施例,本公开提供的直播数据处理方法中,所述标识缓存中包括至少两个缓存分片,所述方法还包括:
将每个所述缓存分片中的所述在线直播间标识的数量与预设阈值进行对比,根据对比结果调整所述缓存分片的数量,其中,每个所述缓存分片中存储多个所述在线直播间标识中的一部分。
根据本公开的一个或多个实施例,本公开提供的直播数据处理方法中,所述根据对比结果动态调整所述缓存分片的数量,包括:
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量大于所述预设阈值中的第一阈值,则增加所述缓存分片的数量;
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量小于所述预设阈值中的第二阈值,则减少所述缓存分片的数量。
根据本公开的一个或多个实施例,本公开提供的直播数据处理方法中,所述第二服务器包括至少两个刷新执行模块,通过第二服务器从所述标识缓存中获取所述在线直播间标识,包括:
通过所述第二服务器中的所述至少两个刷新执行模块,分别从所述标识缓存 中与各所述刷新执行模块对应的缓存分片中获取所述在线直播间标识,每个所述刷新执行模块对应预设数量的所述缓存分片。
根据本公开的一个或多个实施例,本公开提供的直播数据处理方法中,还包括:
根据所述缓存分片的数量调整结果,调整所述刷新执行模块的数量,其中,所述刷新执行模块的数量与所述缓存分片的数量成正比。
根据本公开的一个或多个实施例,本公开提供了一种直播数据处理方法,包括:
发送在线直播间标识至标识缓存中,使第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
根据本公开的一个或多个实施例,本公开提供的直播数据处理方法中,还包括:
对所述标识缓存中的在线直播间标识执行更新操作,其中,所述更新操作包括插入操作和/或删除操作。
根据本公开的一个或多个实施例,所述标识缓存中包括至少两个缓存分片,本公开提供的直播数据处理方法包括:
将每个所述缓存分片中的所述在线直播间标识的数量与预设阈值进行对比,根据对比结果调整所述缓存分片的数量,其中,每个所述缓存分片中存储多个所述在线直播间标识中的一部分。
根据本公开的一个或多个实施例,根据对比结果调整所述缓存分片的数量,包括:
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量大于所述预设阈值中的第一阈值,则增加所述缓存分片的数量;
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量小于所述预设阈值中的第二阈值,则减少所述缓存分片的数量。
根据本公开的一个或多个实施例,本公开提供了一种直播数据处理方法,包括:
从标识缓存中获取在线直播间标识,所述在线直播间标识由第一服务器发送至所述标识缓存中;
根据所述在线直播间标识从数据存储器中获取在线直播数据;
将所述在线直播数据写入数据缓存中,供所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
根据本公开的一个或多个实施例,所述标识缓存中包括至少两个缓存分片,所述方法还包括:
通过至少两个刷新执行模块,分别从所述标识缓存中与各所述刷新执行模块对应的缓存分片中获取所述在线直播间标识,每个所述刷新执行模块对应预设数量的所述缓存分片。
根据本公开的一个或多个实施例,本公开提供了一种直播数据处理装置,包括:
标识模块,用于通过第一服务器发送在线直播间标识至标识缓存中;
数据写入模块,用于通过第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在 线直播数据写入数据缓存中;
数据读取模块,用于通过所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
根据本公开的一个或多个实施例,本公开提供的直播数据处理装置中,所述装置还包括标识更新模块,用于:
通过所述第一服务器对所述标识缓存中的在线直播间标识执行更新操作,其中,所述更新操作包括插入操作和/或删除操作。
根据本公开的一个或多个实施例,本公开提供的直播数据处理装置中,所述在线直播数据按照设定时间间隔进行读取。
根据本公开的一个或多个实施例,本公开提供的直播数据处理装置中,所述标识缓存中包括至少两个缓存分片,所述装置还包括第一调整模块,用于:
将每个所述缓存分片中的所述在线直播间标识的数量与预设阈值进行对比,根据对比结果调整所述缓存分片的数量,其中,每个所述缓存分片中存储多个所述在线直播间标识中的一部分。
根据本公开的一个或多个实施例,本公开提供的直播数据处理装置中,所述第一调整模块具体用于:
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量大于所述预设阈值中的第一阈值,则增加所述缓存分片的数量;
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量小于所述预设阈值中的第二阈值,则减少所述缓存分片的数量。
根据本公开的一个或多个实施例,本公开提供的直播数据处理装置中,所述第二服务器包括至少两个刷新执行模块,所述数据写入模块具体用于:
通过所述第二服务器中的所述至少两个刷新执行模块,分别从所述标识缓存中与各所述刷新执行模块对应的缓存分片中获取所述在线直播间标识,每个所述刷新执行模块对应预设数量的所述缓存分片。
根据本公开的一个或多个实施例,本公开提供的直播数据处理装置中,所述装置还包括第二调整模块,用于:
根据所述缓存分片的数量调整结果,调整所述刷新执行模块的数量,其中,所述刷新执行模块的数量与所述缓存分片的数量成正比。
根据本公开的一个或多个实施例,本公开提供了一种直播数据处理装置,包括:
发送模块,用于发送在线直播间标识至标识缓存中,使第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
分发模块,用于从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
可选地,所述装置还用于:对所述标识缓存中的在线直播间标识执行更新操作,其中,所述更新操作包括插入操作和/或删除操作。
根据本公开的一个或多个实施例,本公开提供的直播数据处理装置中,所述标识缓存中包括至少两个缓存分片,所述装置还用于:
将每个所述缓存分片中的所述在线直播间标识的数量与预设阈值进行对比,根据对比结果调整所述缓存分片的数量,其中,每个所述缓存分片中存储多个所述在线直播间标识中的一部分。
根据本公开的一个或多个实施例,本公开提供的直播数据处理装置中,所述 装置在用于根据对比结果调整所述缓存分片的数量时,具体用于:
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量大于所述预设阈值中的第一阈值,则增加所述缓存分片的数量;
当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量小于所述预设阈值中的第二阈值,则减少所述缓存分片的数量。
本公开实施例还提供了一种直播数据处理装置的结构示意图,该直播数据处理装置包括:
第一获取模块,用于从标识缓存中获取在线直播间标识,所述在线直播间标识由第一服务器发送至所述标识缓存中;
第二获取模块,用于根据所述在线直播间标识从数据存储器中获取在线直播数据;
写入模块,用于将所述在线直播数据写入数据缓存中,供所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
根据本公开的一个或多个实施例,本公开提供的直播数据处理装置中,所述标识缓存中包括至少两个缓存分片,所述装置还用于:通过至少两个刷新执行模块,分别从所述标识缓存中与各所述刷新执行模块对应的缓存分片中获取所述在线直播间标识,每个所述刷新执行模块对应预设数量的所述缓存分片。
根据本公开的一个或多个实施例,本公开提供了一种电子设备,包括:
处理器;
用于存储所述处理器可执行指令的存储器;
所述处理器,用于从所述存储器中读取所述可执行指令,并执行所述指令以实现如本公开提供的任一所述的直播数据处理方法。
根据本公开的一个或多个实施例,本公开提供了一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序用于执行如本公开提供的任一所述的直播数据处理方法。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (19)

  1. 一种直播数据处理方法,包括:
    通过第一服务器发送在线直播间标识至标识缓存中;
    通过第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
    通过所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
  2. 根据权利要求1所述的方法,其中,所述方法还包括:
    通过所述第一服务器对所述标识缓存中的在线直播间标识执行更新操作,其中,所述更新操作包括插入操作和/或删除操作。
  3. 根据权利要求1所述的方法,其中,所述在线直播数据按照设定时间间隔进行读取。
  4. 根据权利要求1所述的方法,其中,所述标识缓存中包括至少两个缓存分片,所述方法还包括:
    将每个所述缓存分片中的所述在线直播间标识的数量与预设阈值进行对比,根据对比结果调整所述缓存分片的数量,其中,每个所述缓存分片中存储多个所述在线直播间标识中的一部分。
  5. 根据权利要求4所述的方法,其中,所述根据对比结果动态调整所述缓存分片的数量,包括:
    当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量大于所述预设阈值中的第一阈值,则增加所述缓存分片的数量;
    当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量小于所述预设阈值中的第二阈值,则减少所述缓存分片的数量。
  6. 根据权利要求5所述的方法,其中,所述第二服务器包括至少两个刷新执行模块,通过第二服务器从所述标识缓存中获取所述在线直播间标识,包括:
    通过所述第二服务器中的所述至少两个刷新执行模块,分别从所述标识缓存中与各所述刷新执行模块对应的缓存分片中获取所述在线直播间标识,每个所述刷新执行模块对应预设数量的所述缓存分片。
  7. 根据权利要求6所述的方法,其中,还包括:
    根据所述缓存分片的数量调整结果,调整所述刷新执行模块的数量,其中,所述刷新执行模块的数量与所述缓存分片的数量成正比。
  8. 一种直播数据处理方法,包括:
    发送在线直播间标识至标识缓存中,使第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
    从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
  9. 根据权利要求8所述的方法,其中,所述方法还包括:
    对所述标识缓存中的在线直播间标识执行更新操作,其中,所述更新操作包括插入操作和/或删除操作。
  10. 根据权利要求8所述的方法,其中,所述标识缓存中包括至少两个缓存分片,所述方法还包括:
    将每个所述缓存分片中的所述在线直播间标识的数量与预设阈值进行对比,根据对比结果调整所述缓存分片的数量,其中,每个所述缓存分片中存储多个所 述在线直播间标识中的一部分。
  11. 根据权利要求10所述的方法,其中,所述根据对比结果调整所述缓存分片的数量,包括:
    当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量大于所述预设阈值中的第一阈值,则增加所述缓存分片的数量;
    当所述对比结果为每个所述缓存分片中的所述在线直播间标识的数量小于所述预设阈值中的第二阈值,则减少所述缓存分片的数量。
  12. 一种直播数据处理方法,包括:
    从标识缓存中获取在线直播间标识,所述在线直播间标识由第一服务器发送至所述标识缓存中;
    根据所述在线直播间标识从数据存储器中获取在线直播数据;
    将所述在线直播数据写入数据缓存中,供所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
  13. 根据权利要求12所述的方法,其中,所述标识缓存中包括至少两个缓存分片,所述方法还包括:
    通过至少两个刷新执行模块,分别从所述标识缓存中与各所述刷新执行模块对应的缓存分片中获取所述在线直播间标识,每个所述刷新执行模块对应预设数量的所述缓存分片。
  14. 一种直播数据处理装置,包括:
    标识模块,用于通过第一服务器发送在线直播间标识至标识缓存中;
    数据写入模块,用于通过第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
    数据读取模块,用于通过所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
  15. 一种直播数据处理装置,包括:
    发送模块,用于发送在线直播间标识至标识缓存中,使第二服务器从所述标识缓存中获取所述在线直播间标识,根据所述在线直播间标识从数据存储器中获取在线直播数据,并将所述在线直播数据写入数据缓存中;
    分发模块,用于从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
  16. 一种直播数据处理装置,包括:
    第一获取模块,用于从标识缓存中获取在线直播间标识,所述在线直播间标识由第一服务器发送至所述标识缓存中;
    第二获取模块,用于根据所述在线直播间标识从数据存储器中获取在线直播数据;
    写入模块,用于将所述在线直播数据写入数据缓存中,供所述第一服务器从所述数据缓存中读取所述在线直播数据,并分发所述在线直播数据。
  17. 一种电子设备,所述电子设备包括:
    处理器;
    用于存储所述处理器可执行指令的存储器;
    所述处理器,用于从所述存储器中读取所述可执行指令,并执行所述指令以实现上述权利要求1-7中任一项,或权利要求8-11中任一项,或权利要求12-13中任一项所述的直播数据处理方法。
  18. 一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序用于执行上述权利要求1-7中任一项,或权利要求8-11中任一项,或权利要求12-13中任一项所述的直播数据处理方法。
  19. 一种计算机程序产品,包括计算机程序/指令,该计算机程序/指令被处理器执行时实现上述权利要求1-7中任一项,或权利要求8-11中任一项,或权利要求12-13中任一项所述的直播数据处理方法。
PCT/CN2022/091482 2021-05-07 2022-05-07 一种直播数据处理方法、装置、设备及介质 WO2022233335A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110496834.6A CN115314718B (zh) 2021-05-07 2021-05-07 一种直播数据处理方法、装置、设备及介质
CN202110496834.6 2021-05-07

Publications (1)

Publication Number Publication Date
WO2022233335A1 true WO2022233335A1 (zh) 2022-11-10

Family

ID=83854112

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091482 WO2022233335A1 (zh) 2021-05-07 2022-05-07 一种直播数据处理方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN115314718B (zh)
WO (1) WO2022233335A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118474402A (zh) * 2024-04-18 2024-08-09 广州灵狮科技有限公司 一种视频直播数据处理方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108235051A (zh) * 2017-12-29 2018-06-29 福建中金在线信息科技有限公司 直播系统及直播数据的存储和获取方法
CN110958462A (zh) * 2019-11-28 2020-04-03 广州市百果园信息技术有限公司 直播活动页面显示方法、装置、存储介质及直播系统
CN111159233A (zh) * 2019-12-18 2020-05-15 金蝶软件(中国)有限公司 分布式缓存方法、系统、计算机设备以及存储介质
CN111464615A (zh) * 2020-03-30 2020-07-28 北京达佳互联信息技术有限公司 请求处理方法、装置、服务器及存储介质
CN112256733A (zh) * 2020-10-19 2021-01-22 北京字节跳动网络技术有限公司 数据缓存方法、装置、电子设备及计算机可读存储介质

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040003101A1 (en) * 2002-06-26 2004-01-01 Roth David J. Caching control for streaming media
US9009406B2 (en) * 2010-12-10 2015-04-14 International Business Machines Corporation Determining server write activity levels to use to adjust write cache size
US8838902B2 (en) * 2012-10-15 2014-09-16 International Business Machines Corporation Cache layer optimizations for virtualized environments
CN104469433B (zh) * 2013-09-13 2018-09-07 深圳市腾讯计算机系统有限公司 一种视频直播回看方法及装置
US9648125B2 (en) * 2013-10-04 2017-05-09 Akamai Technologies, Inc. Systems and methods for caching content with notification-based invalidation
US10154110B2 (en) * 2014-04-22 2018-12-11 Qwilt, Inc. System and methods thereof for delivery of popular content using a multimedia broadcast multicast service
US10142436B2 (en) * 2015-11-19 2018-11-27 Microsoft Technology Licensing, Llc Enhanced mode control of cached data
CN108628765B (zh) * 2018-04-13 2021-03-23 新华三技术有限公司 开源分布式存储软件Ceph中Cache实现方法和装置
CN110633296A (zh) * 2018-05-31 2019-12-31 北京京东尚科信息技术有限公司 数据查询方法、装置、介质及电子设备
CN112312145B (zh) * 2019-07-31 2023-04-18 上海幻电信息科技有限公司 接入服务器、突发流量的缓存方法、系统、计算机设备及可读存储介质
CN112565870B (zh) * 2019-09-26 2021-09-14 北京字节跳动网络技术有限公司 内容的缓存和读取方法、客户端及存储介质
CN111506603B (zh) * 2020-04-23 2024-03-26 上海达梦数据库有限公司 数据处理方法、装置、设备及存储介质


Also Published As

Publication number Publication date
CN115314718B (zh) 2023-07-14
CN115314718A (zh) 2022-11-08

Similar Documents

Publication Publication Date Title
US10872064B2 (en) Utilizing version vectors across server and client changes to determine device usage by type, app, and time of day
CN110147398B (zh) 一种数据处理方法、装置、介质和电子设备
US11038962B2 (en) Methods and systems for processing data requests
US8874700B2 (en) Optimizing storage of data files
US11494314B2 (en) Caching system for eventually consistent services
CN110909521A (zh) 在线文档信息的同步处理方法、装置及电子设备
CN111163336B (zh) 视频资源推送方法、装置、电子设备及计算机可读介质
US20220256226A1 (en) Video data processing method, electronic device and computer-readable medium
WO2022233335A1 (zh) 一种直播数据处理方法、装置、设备及介质
CN110704000A (zh) 数据处理方法、装置、电子设备及存储介质
CN116158069A (zh) 可配置的基于访问的缓存策略控制
CN114398372A (zh) 一种数据缓存方法和装置
KR20160102683A (ko) 클라우드 스트리밍 서비스를 위한 프락시 서버, 이를 이용한 클라우드 스트리밍 시스템 및 클라우드 스트리밍 서비스 제공 방법
US20220309044A1 (en) Schema based data buffering and processing on a client device
CN113407916A (zh) 信息处理方法、装置、终端和存储介质
CN112181733A (zh) 一种服务请求的处理方法、装置、设备及存储介质
WO2023226757A1 (zh) 视频缓存方法、装置、设备及存储介质
WO2022228411A1 (zh) 视频预热方法、装置、设备和存储介质
CN110727694A (zh) 数据处理方法、装置、电子设备及存储介质
CN113824675B (zh) 管理登录态的方法和装置
CN115658171A (zh) 一种轻量级解决java分布式应用配置动态刷新的方法及系统
CN113242446A (zh) 视频帧的缓存方法、转发方法、通信服务器及程序产品
WO2022206474A1 (zh) 数据获取方法, 装置, 电子设备及计算机可读存储介质
CN111580890A (zh) 用于处理特征的方法、装置、电子设备和计算机可读介质
WO2020172586A1 (en) Adaptive retrieval of objects from remote storage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22798675

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/02/2024)

122 Ep: pct application non-entry in european phase

Ref document number: 22798675

Country of ref document: EP

Kind code of ref document: A1