CN110971709A - Data processing method, computer device and storage medium - Google Patents


Info

Publication number
CN110971709A
CN110971709A (application CN201911329555.XA; granted as CN110971709B)
Authority
CN
China
Prior art keywords
service node
server
information
service
node information
Prior art date
Legal status
Granted
Application number
CN201911329555.XA
Other languages
Chinese (zh)
Other versions
CN110971709B (en
Inventor
杨勇
Current Assignee
Shenzhen Onething Technology Co Ltd
Original Assignee
Shenzhen Onething Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Onething Technology Co Ltd filed Critical Shenzhen Onething Technology Co Ltd
Priority to CN201911329555.XA priority Critical patent/CN110971709B/en
Publication of CN110971709A publication Critical patent/CN110971709A/en
Application granted granted Critical
Publication of CN110971709B publication Critical patent/CN110971709B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

The invention provides a data processing method comprising: sending a data stream request and the currently connected service node information to a scheduling server; receiving a resource list fed back by the scheduling server according to the data stream request; judging whether the resource list contains service node information or OSS server information; starting a first fallback mechanism when the resource list contains no service node information but does contain OSS server information; and starting a second fallback mechanism when the resource list contains neither service node information nor OSS server information. The invention also provides another data processing method, a computer device and a storage medium. When the popularity ("heat") of a data stream drops to the point that all service nodes exit the service, the invention can start a fallback mechanism to improve resource utilization.

Description

Data processing method, computer device and storage medium
Technical Field
The present invention relates to the field of computer network technologies, and in particular, to a data processing method, a computer device, and a storage medium.
Background
When a playing end requests a data stream from the scheduling server, the scheduling server allocates sharing nodes to serve the stream according to its popularity ("heat"): the number of sharing nodes grows as the heat rises and shrinks as the heat falls.
When the heat drops extremely low, the scheduling server no longer allocates any sharing node for service, that is, the sharing nodes are completely retired. The playing end, however, still occupies the resources of the Push server and the OSS server in order to keep the quality of service stable, which wastes those resources.
Therefore, a data processing scheme is needed that starts a fallback mechanism when the heat of a data stream drops to the point that the sharing nodes completely exit the service, so as to improve resource utilization.
Disclosure of Invention
The main purpose of the present invention is to provide a data processing method, a computer device and a storage medium that improve resource utilization when the heat of a data stream drops to the point that the sharing nodes completely exit the service.
In order to achieve the above object, a first aspect of the present invention provides a data processing method applied in a playing end, where the method includes:
sending a data stream request and the currently connected service node information to a scheduling server;
receiving a resource list fed back by the scheduling server according to the data stream request;
judging whether the resource list has service node information or OSS server information;
when the resource list does not have service node information but has OSS server information, starting a first fallback mechanism;
when there is no service node information and no OSS server information in the resource list, a second fallback mechanism is initiated.
According to an alternative embodiment of the invention, the initiating the first fallback mechanism comprises:
and connecting the OSS server through the SDK according to the OSS server information, acquiring the data stream provided by the OSS server, and disconnecting the currently connected service node.
According to an alternative embodiment of the invention, said initiating the second fallback mechanism comprises:
and connecting the CDN server through the SDK according to the CDN server information, acquiring the data stream provided by the CDN server, and disconnecting the currently connected service node.
According to an alternative embodiment of the invention, the method further comprises:
when the resource list has service node information, comparing the service node information with the service node information connected currently to determine a newly added service node and a service node to be released;
and connecting the newly added service node, and disconnecting the service node to be released after the newly added service node is successfully connected.
In order to achieve the above object, a second aspect of the present invention provides a data processing method applied in a scheduling server, the method including:
receiving a data stream request from a playing end and the service node information currently connected to the playing end;
calculating the heat of the data stream;
comparing the heat of the data stream to a first threshold and a second threshold, wherein the first threshold is greater than the second threshold;
when the heat degree of the data stream is smaller than the first threshold value but larger than the second threshold value, allocating an OSS server, generating a resource list according to the OSS server information, and sending the resource list to the playing end, so that the playing end starts a first fallback mechanism;
and when the heat degree of the data stream is smaller than the second threshold value, allocating no service node and no OSS server, and feeding back an empty resource list to the playing end, so that the playing end starts a second fallback mechanism.
According to an alternative embodiment of the invention, the method further comprises:
when the heat degree of the data flow is larger than the first threshold value, allocating a service node;
and updating the currently connected service node information according to the allocated service node information, generating a resource list according to the updated service node information, and sending the resource list to the playing end.
According to an optional embodiment of the present invention, the updating the currently connected service node information according to the allocated service node information and generating the resource list according to the updated service node information includes:
determining the service node information to be added and the service node information to be deleted according to the allocated service node information and the currently connected service node information;
deleting, from the currently connected service node information, the service node information to be deleted;
judging whether the number of remaining currently connected service nodes is greater than a preset number threshold;
when the number of remaining currently connected service nodes is greater than the preset number threshold, generating the resource list from the remaining currently connected service nodes;
and when the number of remaining currently connected service nodes is not greater than the preset number threshold, adding the service node information to be added to the remaining currently connected service node information and generating the resource list based on the combined service node information.
According to an optional embodiment of the invention, the calculating the heat of the data stream comprises:
acquiring a target node providing the data stream service;
calculating the number of clients connected with the target node;
and determining the number of the clients as the heat degree of the data stream.
To achieve the above object, a third aspect of the present invention provides a computer device including a memory and a processor, the memory storing a data processing program executable on the processor, the data processing program implementing the data processing method when executed by the processor.
To achieve the above object, a fourth aspect of the present invention provides a computer-readable storage medium storing a data processing program executable by one or more processors to implement the data processing method.
According to the data processing method, computer device and storage medium provided by the embodiments of the invention, through data interaction between the playing end and the scheduling server, when the heat of a data stream drops to the point that the service nodes exit the service, the SDK is adjusted so that the service falls back to the single-source stage and releases Push server resources, improving the utilization rate of the Push server, or falls back to the CDN stage and releases OSS server resources, improving the utilization rate of the OSS server. Moreover, the service nodes connected to the playing end are released when the service falls back to the single-source or CDN stage, which improves the utilization rate of the service nodes.
Drawings
FIG. 1 is a data flow diagram for a multi-source service phase;
FIG. 2 is a flowchart illustrating a data processing method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a data processing method according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a data processing method according to a third embodiment of the present invention;
FIG. 5 is a functional block diagram of a data processing apparatus according to a fourth embodiment of the present invention;
FIG. 6 is a functional block diagram of a data processing apparatus according to a fifth embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device according to a sixth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second" in the description and claims of the present application and the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In addition, the technical solutions of the various embodiments may be combined with each other, provided that such a combination can be realized by a person skilled in the art; when the combined solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present invention.
As shown in fig. 1, in this embodiment, the live content delivery system may include a Content Delivery Network (CDN), an Object Storage Service (OSS) server, a Push server, a plurality of sharing nodes, and the like. The customer's content delivery network comprises a plurality of CDN servers in which the customer's source data is stored. The source data may be audio/video files or files in other formats.

The OSS server may pull a data stream from a CDN server and provide it to a playing end or to other OSS servers. The Push server may obtain a data stream from the OSS server, slice it into a plurality of sliced data streams, and provide the sliced streams to a playing end or to a sharing node. The Push server may use a P2P mode to transmit data between the sharing nodes and the playing ends.

A sharing node may obtain sliced data streams from the Push server and provide them to a playing end. A sharing node may be any device that provides storage and data transmission, such as a "playing cloud" home device, and is typically deployed in a household; sharing nodes enable P2P transmission between playing-end devices, so compared with CDN and OSS servers they are more widely distributed and have lower data transmission cost. The sharing nodes and the Push server are collectively called service nodes.

The playing ends and the service nodes are connected many-to-many: one playing end may connect to several service nodes to obtain data streams simultaneously, and one service node may serve several playing ends at the same time.
Example one
Fig. 2 is a flowchart illustrating a data processing method according to a first embodiment of the invention.
The data processing method specifically comprises the following steps, and the sequence of the steps in the flowchart can be changed and some steps can be omitted according to different requirements.
S11, sending the data stream request and the currently connected service node information to the scheduling server.
In an initial state, the playing end first sends registration information to the scheduling server in the live broadcast system to establish communication with it.
The playing end then periodically sends a data stream request to the scheduling server to request the data stream it is playing, and at the same time sends the currently connected service node information and the connection state of each service node, so that the scheduling server can evaluate whether the currently connected service nodes need to be updated.
The currently connected service node information refers to information on the service nodes to which the playing end has successfully connected, and may include, but is not limited to: node address information and data indicators that characterize the connection state, such as connection stability, number of connections and connection quality.
S12, receiving the resource list fed back by the scheduling server according to the data stream request.
The scheduling server calculates the heat of the data stream according to the data stream request, decides according to that heat whether to allocate service nodes and whether to allocate an OSS server for the data stream, and finally returns a resource list to the playing end.
And S13, judging whether the resource list has service node information or OSS server information.
After receiving the resource list, the playing end parses the information in it. The resource list may include one or both of the following: service node (sharing node and Push server) information, and OSS server information.
S14, when there is no service node information but there is OSS server information in the resource list, a first fallback mechanism is initiated.
For service stability, a live broadcast system generally has three service stages: the CDN stage, the single-source stage and the multi-source stage. The CDN stage pulls data directly from a standard CDN server to speed up the start of playback at the playing end. The single-source stage pulls data from an OSS server of the live broadcast system. The multi-source stage pulls data from the Push server and the sharing nodes. The playing end switches among the three service stages through the SDK according to different state indicators.
When the playing end determines that the resource list contains no service node information, it further judges whether OSS server information exists. If OSS server information is present, it starts the first fallback mechanism and switches to another service stage.
In an optional embodiment of the present invention, the initiating the first fallback mechanism may comprise:
and connecting the OSS server through the SDK according to the OSS server information, acquiring the data stream provided by the OSS server, and disconnecting the current service node.
In this alternative embodiment, when the playing end determines that there is no service node information but there is OSS server information, it starts the fallback mechanism so that the service of the data stream falls back to the single-source stage, i.e., data is pulled from the OSS server. Falling back to the single-source stage releases Push server resources. Moreover, because the Push server uses a private encoding scheme while the OSS server stores unsliced data, disconnecting from the Push server and obtaining data from the OSS server saves the performance overhead of decoding the SDK's private protocol and the extra data traffic it entails, which improves the data processing efficiency of the playing end and saves its resources.
S15, when there is no serving node information and no OSS server information in the resource list, a second fallback mechanism is initiated.
When the playing end determines that the resource list contains neither service node information nor OSS server information, it starts the second fallback mechanism and switches to another service stage.
In an optional embodiment of the invention, the initiating the second fallback mechanism comprises:
and connecting the CDN server through the SDK according to the CDN server information, acquiring the data stream provided by the CDN server, and disconnecting the currently connected service node.
In this optional embodiment, when the playing end determines that there is neither service node information nor OSS server information, it starts the fallback mechanism so that the service of the data stream falls back to the CDN stage, i.e., data is pulled from the CDN server. Falling back to the CDN stage releases OSS server resources, so the limited OSS servers can serve hotter data streams; this maintains the amplification ratio of the OSS servers and improves their resource utilization.
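The stage-selection logic of steps S13 to S15 can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the `ResourceList` class, the function name and the stage labels are assumptions introduced for clarity.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResourceList:
    # Service nodes cover both sharing nodes and Push servers (assumed model).
    service_nodes: List[str] = field(default_factory=list)
    oss_servers: List[str] = field(default_factory=list)

def choose_stage(resources: ResourceList) -> str:
    """Decide which service stage the SDK should switch to."""
    if resources.service_nodes:
        return "multi-source"    # service node info present: keep multi-source stage
    if resources.oss_servers:
        return "single-source"   # first fallback: pull data from the OSS server
    return "cdn"                 # second fallback: pull data from the CDN server
```

For example, an empty resource list yields `"cdn"`, matching the second fallback mechanism of step S15.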
In an optional embodiment of the invention, the method further comprises:
when the resource list has service node information, comparing the service node information with the service node information connected currently to determine a newly added service node and a service node to be released;
and connecting the newly added service node, and disconnecting the service node to be released after the newly added service node is successfully connected.
In this optional embodiment, a newly added service node is a service node that is in the resource list but not among the service nodes currently connected to the playing end. A service node to be released is a service node that is currently connected to the playing end but not in the resource list.
Illustratively, assume the resource list fed back by the scheduling server includes service node A, service node B, service node C, service node D and service node E, while the playing end is currently connected to service node C, service node D, service node E and service node F. Then the newly added service nodes are service node A and service node B, and the service node to be released is service node F. The playing end connects to service node A and service node B, and disconnects from service node F after service node A and service node B are successfully connected.
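The node-comparison step above reduces to a set difference in each direction. A brief sketch, with the function name chosen for illustration:

```python
def diff_nodes(resource_list, connected):
    """Split nodes into (newly added, to be released) per the comparison above."""
    new_nodes = [n for n in resource_list if n not in connected]   # in list, not connected
    to_release = [n for n in connected if n not in resource_list]  # connected, not in list
    return new_nodes, to_release

# The worked example from the text:
new_nodes, to_release = diff_nodes(["A", "B", "C", "D", "E"], ["C", "D", "E", "F"])
# new_nodes == ["A", "B"], to_release == ["F"]
```

The playing end would then connect `new_nodes` first and only disconnect `to_release` after those connections succeed, so playback is never left without a source.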
In summary, in the data processing method described in this embodiment, the playing end receives the resource list fed back by the scheduling server and adopts different fallback mechanisms according to which information the list contains. When the heat of the data stream drops to the point that the service nodes exit the service, the SDK is adjusted so that the service falls back to the single-source stage and releases Push server resources, improving Push server utilization, or falls back to the CDN stage and releases OSS server resources, improving OSS server utilization. Moreover, the service nodes connected to the playing end are released on fallback to the single-source or CDN stage, which improves the utilization rate of the service nodes.
Example two
Fig. 3 is a flowchart illustrating a data processing method according to a second embodiment of the invention.
The data processing method can be applied to a scheduling server. The data processing method specifically comprises the following steps, and the sequence of the steps in the flowchart can be changed and some steps can be omitted according to different requirements.
S21, receiving the data stream request of the playing end and the service node information currently connected to the playing end.
The data stream request may carry identification information of the data stream.
The scheduling server stores node address information of a plurality of service nodes (shared nodes and Push servers), server address information of the OSS server, and server address information of the CDN server in advance. The scheduling server also records the data types of the data streams stored by the service nodes and the OSS server in advance.
When the scheduling server receives a data stream request from a playing end, it obtains the identification information of the data stream from the request, determines the data type required by the playing end according to the identification information, queries which nodes or servers currently store the data stream, and feeds back a resource list to the playing end, so that the playing end can request the data stream from a service node or server in the resource list.
And S22, calculating the heat degree of the data stream.
When receiving a data stream request sent by a playing end, the scheduling server also receives currently connected service node information sent by the playing end, and calculates the heat of the data stream according to the data stream request and the currently connected service node information so as to decide whether to provide a service node for service.
In an optional embodiment of the invention, the calculating the heat of the data stream comprises:
acquiring a target node providing the data stream service;
calculating the number of clients connected with the target node;
and determining the number of the clients as the heat degree of the data stream.
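The heat computation above can be sketched as follows. The data model (a mapping from target node to its connected clients) is an assumption for illustration, as is counting distinct clients across target nodes; the text only specifies that the heat is the number of connected clients.

```python
def stream_heat(clients_per_node):
    """clients_per_node: mapping of target-node id -> set of connected client ids.
    The heat of the data stream is taken as the number of distinct clients
    connected to the nodes serving that stream (assumed interpretation)."""
    connected = set()
    for clients in clients_per_node.values():
        connected |= clients   # union so a client counted once across nodes
    return len(connected)

heat = stream_heat({"node1": {"c1", "c2"}, "node2": {"c2", "c3"}})
# three distinct clients -> heat == 3
```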
S23, comparing the heat degree of the data stream with a first threshold value and a second threshold value, wherein the first threshold value is larger than the second threshold value.
The scheduling server stores a first threshold and a second threshold in advance, where the first threshold and the second threshold may be determined by a user according to an empirical value, for example, the first threshold may be 32, the second threshold may be 5, and the like.
S24, when the heat of the data stream is smaller than the first threshold but larger than the second threshold, allocating an OSS server, generating a resource list according to the OSS server information, and sending the resource list to the playing end, so that the playing end starts a first fallback mechanism.
When the calculated heat of the data stream is between the first threshold and the second threshold, the heat of the data stream is low and few playing ends are currently playing it. Only an OSS server is allocated and the resource list is generated from the OSS server information, without allocating sharing nodes or a Push server, so that the playing end starts the first fallback mechanism according to the resource list and the service of the data stream falls back to the single-source stage.
Falling back to the single-source stage releases Push server resources; and because the Push server uses a private encoding scheme, disconnecting the playing end from the Push server saves the performance overhead of decoding the SDK's private protocol and the extra data traffic it entails, which improves the data processing efficiency of the playing end and saves its resources.
S25, when the heat of the data stream is smaller than the second threshold, allocating no service node and no OSS server, and feeding back an empty resource list to the playing end, so that the playing end starts the second fallback mechanism.
When the heat of the data stream is smaller than the second threshold, very few playing ends are currently playing it, and no sharing node, Push server or OSS server needs to be allocated for service. An empty resource list is therefore generated, and the playing end starts the second fallback mechanism so that the service of the data stream falls back to the CDN stage, i.e., data is pulled from the CDN server.
Falling back to the CDN stage releases OSS server resources, so the limited OSS servers can serve hotter data streams; this maintains the amplification ratio of the OSS servers and improves their resource utilization.
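The scheduler's threshold comparison of steps S23 to S25 can be sketched as below, using the example thresholds from the text (first threshold 32, second threshold 5). How the boundary cases (heat exactly equal to a threshold) are handled is not specified in the text; this sketch picks one convention, and the return labels are illustrative names.

```python
FIRST_THRESHOLD = 32   # example value from the text
SECOND_THRESHOLD = 5   # example value from the text

def allocate_for(heat):
    """Decide what the scheduler allocates for a stream of the given heat."""
    if heat > FIRST_THRESHOLD:
        return "service-nodes"   # hot stream: allocate sharing nodes / Push servers
    if heat > SECOND_THRESHOLD:
        return "oss-only"        # S24: OSS server only -> first fallback at the player
    return "empty-list"          # S25: empty resource list -> second fallback (CDN)
```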
In an optional embodiment of the invention, the method further comprises:
when the heat degree of the data flow is larger than the first threshold value, allocating a service node;
and updating the currently connected service node information according to the allocated service node information, generating a resource list according to the updated service node information, and sending the resource list to the playing end.
In this optional embodiment, when the calculated heat of the data stream is greater than the first threshold, it indicates that the heat of the data stream is higher, and currently, there are more playing ends playing the data stream, and more sharing nodes or Push servers may be provided for service.
In an optional embodiment of the present invention, the updating the currently connected service node information according to the allocated service node information and generating a resource list according to the updated service node information includes:
determining service node information needing to be added and service node information needing to be deleted according to the distributed service node information and the currently connected service node information;
deleting the service node information to be deleted in the service node information currently connected;
judging whether the number of the deleted service nodes which are currently connected is larger than a preset number threshold value or not;
when the number of the deleted currently connected service nodes is larger than the preset number threshold, generating a resource list according to the number of the deleted currently connected service nodes;
and when the number of the deleted currently connected service nodes is not larger than the preset number threshold, adding the newly added service node information into the deleted currently connected service node information and generating a resource list based on the added service node information.
In this optional embodiment, a newly added service node is a service node that is among the service nodes allocated by the scheduling server but not among the service nodes currently connected to the playing end. A service node to be deleted is a service node that is currently connected to the playing end but not among the service nodes allocated by the scheduling server.
Illustratively, assume the service nodes allocated by the scheduling server include service node A, service node B, service node C, service node D and service node E, while the playing end is currently connected to service node C, service node D, service node E and service node F. Then the service nodes to be added are service node A and service node B, and the service node to be deleted is service node F. The playing end connects to service node A and service node B, and disconnects from service node F after they are successfully connected. After deleting service node F from the currently connected set, the scheduling server judges whether the number of remaining currently connected service nodes (service node C, service node D and service node E) is greater than the preset number threshold.
Assuming that the preset number threshold is 2, the number of the deleted currently connected service nodes (service node C, service node D, service node E) is greater than the preset number threshold 2, and the scheduling server generates a resource list according to the service node C, the service node D, and the service node E.
Assuming instead that the preset number threshold is 4, the number of the deleted currently connected service nodes (service node C, service node D, and service node E) is 3, which is smaller than the threshold 4, so the scheduling server generates a resource list from service node A, service node B, service node C, service node D, and service node E; for example, the resource list may include service node A, service node C, service node D, and service node E, or service node B, service node C, service node D, and service node E.
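The branching described in the steps and examples above can be sketched as follows. This is an illustrative reading only: the function name and the list representation are assumptions, and the merge step tops the list up to the threshold, which is one plausible interpretation of the threshold-4 example.

```python
def update_resource_list(assigned, connected, threshold):
    """Sketch of the scheduler's resource-list update (illustrative names).

    'assigned' are the service nodes allocated by the scheduling server;
    'connected' are the nodes currently connected to the playing end.
    """
    # Newly added nodes: allocated but not yet connected.
    to_add = [n for n in assigned if n not in connected]
    # Nodes kept after deleting the to-be-deleted nodes (connected but not allocated).
    remaining = [n for n in connected if n in assigned]
    if len(remaining) > threshold:
        # Enough nodes remain: build the resource list from them directly.
        return remaining
    # Otherwise merge newly added nodes in, here topping up to the threshold
    # (an assumed reading of the examples with thresholds 2 and 4).
    return remaining + to_add[:max(0, threshold - len(remaining))]
```

With the example above (allocated A through E, connected C through F), a threshold of 2 yields service nodes C, D, and E, while a threshold of 4 additionally pulls in one newly added node.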
In summary, in the data processing method described in this embodiment, the scheduling server receives the data stream request sent by the playing end together with the playing end's currently connected service node information, decides whether to allocate service nodes to provide the data stream service by calculating the heat of the data stream, and feeds back a resource list for the playing end to play the data stream from. When the heat of the data stream drops to the point that the service nodes quit the service, the playing end adjusts the SDK so that the service falls back to the single-source stage and releases Push server resources, improving the utilization rate of the Push server resources, or so that the service falls back to the CDN stage and releases OSS server resources, improving the utilization rate of the OSS server resources. The service nodes connected to the playing end are also released when the service falls back to the single-source stage or the CDN stage, so the utilization rate of the service nodes is improved as well.
EXAMPLE three
Fig. 4 is a flowchart illustrating a data processing method according to a third embodiment of the invention.
The data processing method can be applied to a scheduling server. The data processing method specifically comprises the following steps, and the sequence of the steps in the flowchart can be changed and some steps can be omitted according to different requirements.
S31, the playing end sends a data stream request and the currently connected service node information to the scheduling server.
S32, the scheduling server receives the data stream request of the playing end and the currently connected service node information of the playing end, calculates the heat of the data stream, and generates a resource list according to the heat of the data stream and the currently connected service node information.
S33, the scheduling server feeds back the resource list to the playing end.
And S34, the playing end receives the resource list fed back by the scheduling server and judges whether the resource list has service node information or OSS server information.
S35, when there is no service node information or no OSS server information in the resource list, a fallback mechanism is started.
When the resource list does not have service node information but has OSS server information, the playing end connects to the OSS server through the SDK according to the OSS server information, acquires the data stream provided by the OSS server, and disconnects from the currently connected service nodes.
In summary, in the data processing method described in this embodiment, through data interaction between the playing end and the scheduling server, when the heat of the data stream drops to the point that the service nodes quit the service, the SDK is adjusted so that the service falls back to the single-source stage and releases Push server resources, improving the utilization rate of the Push server resources, or so that the service falls back to the CDN stage and releases OSS server resources, improving the utilization rate of the OSS server resources. The service nodes connected to the playing end are also released when the service falls back to the single-source stage or the CDN stage, so the utilization rate of the service nodes is improved as well.
EXAMPLE four
Fig. 5 is a schematic diagram of functional modules of a data processing apparatus according to a fourth embodiment of the present invention.
In some embodiments, the data processing apparatus 40 runs in a computer device. The data processing apparatus 40 may include a plurality of functional modules composed of program code. The program code of each module in the data processing apparatus 40 may be stored in a memory of the computer device and executed by at least one processor to perform the processing of data (described in detail in fig. 2).
In this embodiment, the data processing apparatus 40 may be divided into a plurality of functional modules according to the functions it performs. The functional modules may include: a request sending module 401, a list receiving module 402, an information determining module 403, a first fallback module 404, a second fallback module 405, and a node updating module 406. A module referred to herein is a series of computer program segments that can be executed by at least one processor, can perform a fixed function, and is stored in the memory. The functions of the modules will be described in detail in the following embodiments.
The request sending module 401 is configured to send a data stream request and information of a currently connected service node to a scheduling server.
In the live broadcast system, in an initial state, a playing end firstly sends registration information to a scheduling server in the live broadcast system so as to realize communication with the scheduling server.
The playing end periodically sends a data stream request to the scheduling server to request the data stream to be played, and simultaneously sends the currently connected service node information and the connection state of each service node, so that the scheduling server can evaluate whether the currently connected service nodes need to be updated.
The currently connected service node information refers to information about the service nodes to which the playing end has successfully connected. The service node information may include, but is not limited to, the node address information of each service node and data indicators representing the connection status, such as connection stability, connection count, and connection quality.
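As an illustration only, the per-node status reported by the playing end might be modelled as below; every field name here is an assumption, since the patent lists the indicators without fixing a concrete format.

```python
from dataclasses import dataclass

@dataclass
class ServiceNodeInfo:
    """Illustrative shape of the per-node status a playing end could report."""
    node_address: str            # address of the service node
    connection_stability: float  # e.g. fraction of recent heartbeats answered
    connection_count: int        # number of active connections
    connection_quality: float    # e.g. a measured throughput score
```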
The list receiving module 402 is configured to receive a resource list fed back by the scheduling server according to the data stream request.
The scheduling server calculates the heat of the data stream according to the data stream request, decides according to the heat whether to allocate service nodes and whether to allocate an OSS server for the data stream, and finally returns a resource list to the playing end.
The information determining module 403 is configured to determine whether the resource list includes service node information or OSS server information.
And after receiving the resource list, the playing end analyzes the information in the resource list. The resource list may include one or more of the following in combination: service node (including shared node and Push server) information, OSS server information.
The first fallback module 404 is configured to start a first fallback mechanism when there is no serving node information but there is OSS server information in the resource list.
The live broadcast system generally has three service stages for service stability: CDN stages, single-source stages and multi-source stages. The CDN stage is a stage of directly pulling data from a standard CDN server in order to increase the playback speed of the playback end. The single source phase is a phase of pulling data from an OSS server of a live system. The multi-source stage is a stage of pulling data from a Push server and a shared node. The playing end switches three service stages according to different state indexes through the SDK.
When the playing end judges that there is no service node information in the resource list, it further judges whether there is OSS server information. When OSS server information is present, the first fallback mechanism is started and the service switches to another service stage.
In an optional embodiment of the present invention, the first fallback module 404 may initiate the first fallback mechanism by:
and connecting the OSS server through the SDK according to the OSS server information, acquiring the data stream provided by the OSS server, and disconnecting from the currently connected service nodes.
In this optional embodiment, when the playing end determines that there is no service node information but there is OSS server information, a fallback mechanism is initiated so that the service of the data stream falls back to the single-source phase, i.e., data is pulled from the OSS server. Falling back to the single-source stage releases Push server resources. Because the Push server uses a private encoding mode while the OSS server stores unfragmented data, disconnecting from the Push server and acquiring data from the OSS server reduces the performance overhead of SDK private-protocol decoding and the extra data traffic overhead, thereby improving the data processing efficiency of the playing end and saving its resources.
The second fallback module 405 is configured to start a second fallback mechanism when there is no service node information and no OSS server information in the resource list.
When the playing end judges that the resource list contains neither service node information nor OSS server information, the second fallback mechanism is started and the service switches to another service stage.
In an optional embodiment of the present invention, the second fallback module 405 may initiate the second fallback mechanism by:
and connecting the CDN server through the SDK according to the CDN server information, acquiring the data stream provided by the CDN server, and disconnecting from the currently connected service nodes.
In this optional embodiment, when the playing end determines that there is neither service node information nor OSS server information, a fallback mechanism is started so that the service of the data stream falls back to the CDN stage, that is, data is pulled from the CDN server. Falling back to the CDN stage releases OSS server resources, allowing the limited OSS servers to serve hotter data streams; this maintains the amplification ratio of the OSS servers and improves their resource utilization rate.
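The checks performed by the two fallback modules 404 and 405 can be condensed into a single dispatch; the dictionary keys and the stage labels below are illustrative assumptions, not fields defined by the patent.

```python
def choose_service_stage(resource_list):
    """Pick the service stage from the resource list contents (illustrative keys)."""
    if resource_list.get("service_nodes"):
        return "multi-source"   # shared nodes / Push servers are available
    if resource_list.get("oss_server"):
        return "single-source"  # first fallback: pull data from the OSS server
    return "cdn"                # second fallback: pull data from the CDN server
```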
The node updating module 406 is configured to, when there is service node information in the resource list, compare the service node information with the currently connected service node information to determine a newly added service node and a service node to be released; and connecting the newly added service node, and disconnecting the service node to be released after the newly added service node is successfully connected.
In this optional embodiment, the newly added service node refers to a service node that is in the resource list but not among the service nodes currently connected to the playing end. The service node to be released refers to a service node that is among the currently connected service nodes but not in the resource list.
Illustratively, assume that the resource list fed back by the scheduling server includes service node A, service node B, service node C, service node D, and service node E, and that the service nodes currently connected to the playing end include service node C, service node D, service node E, and service node F. The newly added service nodes are then service node A and service node B, and the service node to be released is service node F. The playing end connects to service node A and service node B, and disconnects from service node F after service node A and service node B are successfully connected.
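The comparison performed by the node updating module 406 amounts to two order-preserving set differences; a minimal sketch under that assumption:

```python
def diff_nodes(resource_list_nodes, connected_nodes):
    """Split nodes into (newly added, to release), keeping the input order."""
    # In the resource list but not yet connected: connect to these.
    newly_added = [n for n in resource_list_nodes if n not in connected_nodes]
    # Connected but no longer in the resource list: release these afterwards.
    to_release = [n for n in connected_nodes if n not in resource_list_nodes]
    return newly_added, to_release
```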
In summary, in the data processing apparatus described in this embodiment, the playing end receives the resource list fed back by the scheduling server and, when it determines that the resource list lacks service node information, adopts different fallback mechanisms according to what other information is present. When the heat of the data stream drops to the point that the service nodes quit the service, the SDK is adjusted so that the service falls back to the single-source stage and releases Push server resources, improving the utilization rate of the Push server resources, or so that the service falls back to the CDN stage and releases OSS server resources, improving the utilization rate of the OSS server resources. The service nodes connected to the playing end are also released when the service falls back to the single-source stage or the CDN stage, so the utilization rate of the service nodes is improved as well.
EXAMPLE five
Fig. 6 is a schematic diagram of functional modules of a data processing apparatus according to a fifth embodiment of the present invention.
In some embodiments, the data processing apparatus 50 runs in a computer device. The data processing apparatus 50 may include a plurality of functional modules composed of program code. The program code of each module in the data processing apparatus 50 may be stored in a memory of the computer device and executed by at least one processor to perform the processing of data (described in detail in fig. 3).
In this embodiment, the data processing apparatus 50 may be divided into a plurality of functional modules according to the functions it performs. The functional modules may include: a request receiving module 501, a heat degree calculating module 502, a threshold value comparing module 503, a first generating module 504, a second generating module 505, and a third generating module 506. A module referred to herein is a series of computer program segments that can be executed by at least one processor, can perform a fixed function, and is stored in the memory. The functions of the modules will be described in detail in the following embodiments.
The request receiving module 501 is configured to receive a data stream request of a playing end and service node information currently connected to the playing end.
The data flow request may carry identification information of the data flow.
The scheduling server stores node address information of a plurality of service nodes (shared nodes and Push servers), server address information of the OSS server, and server address information of the CDN server in advance. The scheduling server also records the data types of the data streams stored by the service nodes and the OSS server in advance.
When the scheduling server receives a data stream request from a playing end, it obtains the identification information of the data stream from the request, determines the data type required by the playing end according to the identification information, queries the nodes or servers on which the data stream is currently stored, and feeds back a resource list to the playing end, so that the playing end can request the data stream from the service nodes or servers in the resource list.
The heat degree calculating module 502 is configured to calculate a heat degree of the data stream.
When receiving a data stream request sent by a playing end, the scheduling server also receives currently connected service node information sent by the playing end, and calculates the heat of the data stream according to the data stream request and the currently connected service node information so as to decide whether to provide a service node for service.
In an optional embodiment of the present invention, the calculating the heat of the data stream by the heat calculating module 502 includes:
acquiring a target node providing the data stream service;
calculating the number of clients connected with the target node;
and determining the number of the clients as the heat degree of the data stream.
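The three steps above reduce to counting the clients connected to the stream's target node; a sketch that assumes a mapping from nodes to their connected clients:

```python
def stream_heat(clients_by_node, target_node):
    """Heat of a data stream = number of clients connected to its target node."""
    # An unknown node serves no clients, so its stream has heat 0.
    return len(clients_by_node.get(target_node, ()))
```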
The threshold comparison module 503 is configured to compare the heat of the data stream with a first threshold and a second threshold, where the first threshold is greater than the second threshold.
The scheduling server stores a first threshold and a second threshold in advance, where the first threshold and the second threshold may be determined by a user according to an empirical value, for example, the first threshold may be 32, the second threshold may be 5, and the like.
The first generating module 504 is configured to, when the heat of the data stream is smaller than the first threshold but larger than the second threshold, allocate an OSS server, generate a resource list according to the OSS server information, and send the resource list to the play end, so that the play end starts a first fallback mechanism.
When the calculated heat of the data stream is between the first threshold and the second threshold, the heat of the data stream is low and fewer playing ends are currently playing it. The scheduling server therefore allocates only an OSS server and generates a resource list from the OSS server information, without allocating shared nodes or Push servers, so that the playing end starts the first fallback mechanism according to the resource list and the service of the data stream falls back to the single-source stage.
Falling back to the single-source stage releases Push server resources. Because the Push server uses a private encoding mode, disconnecting the playing end from the Push server reduces the performance overhead of SDK private-protocol decoding and the extra data traffic overhead, thereby improving the data processing efficiency of the playing end and saving its resources.
The second generating module 505 is configured to, when the heat degree of the data stream is smaller than the second threshold, not allocate a service node and an OSS server, and feed back an empty resource list to the playing end, so that the playing end starts a second fallback mechanism.
When the heat of the data stream is smaller than the second threshold, it indicates that no playing end is playing the data stream at present, and no shared node, Push server, or OSS server needs to be allocated for service; an empty resource list is therefore generated, and the playing end starts the second fallback mechanism to fall back the service of the data stream to the CDN stage, that is, to pull data from the CDN server.
Falling back to the CDN stage releases OSS server resources, allowing the limited OSS servers to serve hotter data streams; this maintains the amplification ratio of the OSS servers and improves their resource utilization rate.
The third generating module 506 is configured to allocate a service node when the heat degree of the data flow is greater than the first threshold; and updating the currently connected service node information according to the distributed service node information, generating a resource list according to the updated service node information, and sending the resource list to the playing end.
In this optional embodiment, when the calculated heat of the data stream is greater than the first threshold, it indicates that the heat of the data stream is higher, and currently, there are more playing ends playing the data stream, and more sharing nodes or Push servers may be provided for service.
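Putting the three branches handled by modules 504 through 506 together, the scheduler's decision can be sketched as follows; the default threshold values 32 and 5 come from the example given in this embodiment, and the returned labels are illustrative assumptions.

```python
def allocation_decision(heat, first_threshold=32, second_threshold=5):
    """Decide what to allocate for a stream of the given heat (illustrative labels)."""
    if heat > first_threshold:
        return "service-nodes"  # high heat: shared nodes / Push servers (multi-source)
    if heat > second_threshold:
        return "oss-server"     # low heat: OSS server only (single-source fallback)
    return "empty-list"         # negligible heat: playing end falls back to the CDN
```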
In an optional embodiment of the present invention, the updating the currently connected service node information according to the allocated service node information and generating a resource list according to the updated service node information includes:
determining service node information needing to be added and service node information needing to be deleted according to the distributed service node information and the currently connected service node information;
deleting the service node information to be deleted in the service node information currently connected;
judging whether the number of the deleted currently connected service nodes is greater than a preset number threshold;
when the number of the deleted currently connected service nodes is greater than the preset number threshold, generating a resource list according to the deleted currently connected service node information;
and when the number of the deleted currently connected service nodes is not greater than the preset number threshold, adding the newly added service node information into the deleted currently connected service node information and generating a resource list based on the merged service node information.
In this optional embodiment, the newly added service node refers to a service node that is among the service nodes allocated by the scheduling server but not among the service nodes currently connected to the playing end. The service node to be deleted refers to a service node that is among the service nodes currently connected to the playing end but not among the service nodes allocated by the scheduling server.
Illustratively, assume that the service nodes allocated by the scheduling server include service node A, service node B, service node C, service node D, and service node E, and that the service nodes currently connected to the playing end include service node C, service node D, service node E, and service node F. The newly added service nodes are then service node A and service node B, and the service node to be deleted is service node F. The playing end connects to service node A and service node B, and disconnects from service node F after service node A and service node B are successfully connected. The scheduling server deletes service node F from the currently connected service node information and then judges whether the number of the deleted currently connected service nodes (service node C, service node D, and service node E) is greater than the preset number threshold.
Assuming that the preset number threshold is 2, the number of the deleted currently connected service nodes (service node C, service node D, and service node E) is 3, which is greater than the threshold 2, so the scheduling server generates a resource list from service node C, service node D, and service node E.
Assuming instead that the preset number threshold is 4, the number of the deleted currently connected service nodes (service node C, service node D, and service node E) is 3, which is smaller than the threshold 4, so the scheduling server generates a resource list from service node A, service node B, service node C, service node D, and service node E; for example, the resource list may include service node A, service node C, service node D, and service node E, or service node B, service node C, service node D, and service node E.
In summary, in the data processing apparatus described in this embodiment, the scheduling server receives the data stream request sent by the playing end together with the playing end's currently connected service node information, decides whether to allocate service nodes to provide the data stream service by calculating the heat of the data stream, and feeds back a resource list for the playing end to play the data stream from. When the heat of the data stream drops to the point that the service nodes quit the service, the playing end adjusts the SDK so that the service falls back to the single-source stage and releases Push server resources, improving the utilization rate of the Push server resources, or so that the service falls back to the CDN stage and releases OSS server resources, improving the utilization rate of the OSS server resources. The service nodes connected to the playing end are also released when the service falls back to the single-source stage or the CDN stage, so the utilization rate of the service nodes is improved as well.
EXAMPLE six
Fig. 7 is a schematic diagram of an internal structure of a computer device according to a sixth embodiment of the present invention.
In this embodiment, the computer device 6 may include a memory 61, a processor 62, a bus 63, and a transceiver 64. The computer device 6 is configured to implement the functions of the data processing method according to the first embodiment, or to implement the functions of the data processing method according to the second embodiment.
The memory 61 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 61 may in some embodiments be an internal storage unit of the computer device 6, for example a hard disk of the computer device 6. The memory 61 may also be an external storage device of the computer device 6 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the computer device 6. Further, the memory 61 may also include both an internal storage unit of the computer device 6 and an external storage device. The memory 61 may be used not only to store an application program and various types of data installed in the computer device 6, such as a downloaded program of the data processing apparatus 40 and each module, or a downloaded program of the data processing apparatus 50 and each module, but also to temporarily store data that has been output or is to be output.
The processor 62 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip in some embodiments, and is used for executing a downloaded program stored in the memory 61 or Processing data.
The bus 63 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus.
Further, the computer device 6 may further include a network interface, which may optionally include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), and is generally used to establish a communication connection between the computer device 6 and other computer devices.
Optionally, the computer device 6 may further comprise a user interface, which may comprise a Display (Display), an input unit, such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch screen, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying messages processed in the computer device and for displaying a visualized user interface.
Fig. 7 shows only the computer device 6 with the components 61-64, it being understood by those skilled in the art that the configuration shown in fig. 7 does not constitute a limitation of the computer device 6, and may be either a bus-type configuration or a star-shaped configuration, and that the computer device 6 may also comprise fewer or more components than shown, or may combine certain components, or a different arrangement of components. Other electronic products, now existing or hereafter developed, that may be adapted to the present invention, are also included within the scope of the present invention and are hereby incorporated by reference.
In the above embodiments, all or part may be implemented by an application program, hardware, firmware, or any combination thereof. When implemented using an application program, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that a computer can store, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of an application program functional unit.
The integrated unit, if implemented in the form of an application functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in the form of a computer application program product, stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing downloaded programs, such as a usb disk, a hard disk, a Read-only memory (ROM), a magnetic disk, or an optical disk.
It should be noted that the numbering of the embodiments of the present invention above is for description only and does not indicate the relative merits of the embodiments.
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope; all equivalent structural or process transformations made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of the present invention.

Claims (10)

1. A data processing method applied to a playing end, characterized in that the method comprises the following steps:
sending a data stream request and currently connected service node information to a scheduling server;
receiving a resource list fed back by the scheduling server according to the data stream request;
judging whether the resource list has service node information or OSS server information;
when the resource list does not have service node information but has OSS server information, starting a first fallback mechanism;
when there is no service node information and no OSS server information in the resource list, a second fallback mechanism is initiated.
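The three-way branch of claims 1 to 3 can be sketched as follows (an illustrative Python sketch; the function name `handle_resource_list` and the dictionary keys `service_nodes` and `oss_server` are assumptions for illustration, not terms from the patent):

```python
def handle_resource_list(resource_list):
    """Decide the playing end's path for a resource list fed back by
    the scheduling server (sketch of claims 1-3)."""
    if resource_list.get("service_nodes"):
        return "use_service_nodes"   # normal path: connect the listed service nodes
    if resource_list.get("oss_server"):
        return "first_fallback"      # claim 2: fetch the stream from the OSS server via the SDK
    return "second_fallback"         # claim 3: fetch the stream from the CDN server via the SDK
```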
2. The data processing method of claim 1, wherein initiating the first fallback mechanism comprises:
connecting to the OSS server through the SDK according to the OSS server information, acquiring the data stream provided by the OSS server, and disconnecting from the currently connected service node.
3. The data processing method of claim 1, wherein initiating the second fallback mechanism comprises:
connecting to the CDN server through the SDK according to the CDN server information, acquiring the data stream provided by the CDN server, and disconnecting from the currently connected service node.
4. A data processing method according to any one of claims 1 to 3, characterized in that the method further comprises:
when the resource list contains service node information, comparing that service node information with the currently connected service node information to determine the newly added service nodes and the service nodes to be released;
connecting to the newly added service nodes, and disconnecting from the service nodes to be released after the newly added service nodes have been successfully connected.
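The comparison step of claim 4 amounts to a set difference between the fed-back node list and the currently connected nodes. A minimal sketch (`diff_nodes` is a hypothetical name, not from the patent):

```python
def diff_nodes(current_nodes, listed_nodes):
    """Split the node sets of claim 4 into newly added service nodes
    and service nodes to be released."""
    current, listed = set(current_nodes), set(listed_nodes)
    newly_added = listed - current   # in the resource list but not yet connected
    to_release = current - listed    # connected but no longer in the resource list
    return newly_added, to_release
```

Connecting the newly added nodes before disconnecting the nodes to be released is a make-before-break pattern that avoids interrupting the stream during the switch.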
5. A data processing method applied to a scheduling server, characterized in that the method comprises the following steps:
receiving a data stream request from a playing end and the service node information currently connected to the playing end;
calculating the heat of the data stream;
comparing the heat of the data stream to a first threshold and a second threshold, wherein the first threshold is greater than the second threshold;
when the heat of the data stream is smaller than the first threshold but larger than the second threshold, allocating an OSS server, generating a resource list according to the OSS server information, and sending the resource list to the playing end, so that the playing end initiates the first fallback mechanism;
when the heat of the data stream is smaller than the second threshold, allocating neither a service node nor an OSS server, and feeding back an empty resource list to the playing end, so that the playing end initiates the second fallback mechanism.
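The two-threshold decision of claims 5 and 6 can be sketched as follows (illustrative Python; the claims do not specify behavior at exact threshold equality, so the comparisons below are one possible reading):

```python
def allocate_resources(heat, first_threshold, second_threshold):
    """Scheduling-server allocation sketch for claims 5 and 6.
    Requires first_threshold > second_threshold, as claim 5 states."""
    assert first_threshold > second_threshold
    if heat > first_threshold:
        return "service_nodes"   # claim 6: hot stream, allocate service nodes
    if heat > second_threshold:
        return "oss_server"      # claim 5: warm stream, player takes the first fallback
    return "empty_list"          # claim 5: cold stream, player takes the second fallback
```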
6. The data processing method of claim 5, wherein the method further comprises:
when the heat of the data stream is larger than the first threshold, allocating a service node;
updating the currently connected service node information according to the allocated service node information, generating a resource list according to the updated service node information, and sending the resource list to the playing end.
7. The data processing method of claim 6, wherein the updating the currently connected service node information according to the allocated service node information and generating a resource list according to the updated service node information comprises:
determining service node information to be added and service node information to be deleted according to the allocated service node information and the currently connected service node information;
deleting the service node information to be deleted from the currently connected service node information;
judging whether the number of currently connected service nodes remaining after the deletion is greater than a preset number threshold;
when the number of currently connected service nodes remaining after the deletion is greater than the preset number threshold, generating a resource list from those remaining service nodes;
when the number of currently connected service nodes remaining after the deletion is not greater than the preset number threshold, adding the newly added service node information to the remaining currently connected service node information and generating a resource list based on the combined service node information.
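The prune-then-top-up procedure of claim 7 can be sketched as follows (illustrative Python; `build_resource_list` and `max_nodes` are assumed names, not terms from the patent):

```python
def build_resource_list(current_nodes, allocated_nodes, max_nodes):
    """Claim 7 sketch: delete nodes that were not re-allocated, then add
    newly allocated nodes only if the pruned list is at or below the
    preset number threshold."""
    allocated = set(allocated_nodes)
    kept = [n for n in current_nodes if n in allocated]  # deletion step
    if len(kept) > max_nodes:
        return kept                                      # already enough nodes
    to_add = sorted(allocated - set(current_nodes))      # newly added node info
    return kept + to_add
```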
8. The data processing method of any of claims 5 to 7, wherein the calculating the heat of the data stream comprises:
acquiring a target node providing the data stream service;
calculating the number of clients connected to the target node;
determining the number of clients as the heat of the data stream.
9. A computer device comprising a memory and a processor, wherein the memory stores a program of a data processing method executable on the processor, and the program, when executed by the processor, implements the data processing method of any one of claims 1 to 4 or the data processing method of any one of claims 5 to 8.
10. A computer-readable storage medium storing a program of a data processing method, the program being executable by one or more processors to implement the data processing method of any one of claims 1 to 4 or the data processing method of any one of claims 5 to 8.
CN201911329555.XA 2019-12-20 2019-12-20 Data processing method, computer device and storage medium Active CN110971709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911329555.XA CN110971709B (en) 2019-12-20 2019-12-20 Data processing method, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN110971709A true CN110971709A (en) 2020-04-07
CN110971709B CN110971709B (en) 2022-08-16

Family

ID=70035683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911329555.XA Active CN110971709B (en) 2019-12-20 2019-12-20 Data processing method, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN110971709B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115686869A (en) * 2022-12-29 2023-02-03 Hangzhou Maituo Big Data Service Co., Ltd. Resource processing method, system, electronic device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159580A (en) * 2007-11-29 2008-04-09 China Telecom Corporation Limited Content P2P method and system in content distribution network
WO2015104583A1 (en) * 2014-01-08 2015-07-16 Telefonaktiebolaget L M Ericsson (Publ) Method, node and distributed system for configuring a network of CDN caching nodes
CN106230942A (en) * 2016-08-01 2016-12-14 China United Network Communications Group Co., Ltd. Method and system of time source access
CN107426302A (en) * 2017-06-26 2017-12-01 Tencent Technology (Shenzhen) Co., Ltd. Access scheduling method, apparatus, system, terminal, server and storage medium
CN107846454A (en) * 2017-10-25 2018-03-27 Baofeng Group Co., Ltd. Resource scheduling method, apparatus and CDN system
CN109787983A (en) * 2019-01-24 2019-05-21 Beijing Baidu Netcom Science and Technology Co., Ltd. Live stream slicing method, apparatus and system
CN110035128A (en) * 2019-04-23 2019-07-19 Shenzhen Onething Technology Co., Ltd. Live streaming scheduling method, apparatus, live broadcast system and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chang Liang: "Best practices for combining CDN with OSS: building an application architecture separating dynamic and static content", InfoQ *
Li Zishu et al.: "A survey of mobile edge computing", Telecommunications Science *

Also Published As

Publication number Publication date
CN110971709B (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN107483627B (en) File distribution method, file download method, distribution server, client and system
US9578389B2 (en) Method of targeted ad insertion using HTTP live streaming protocol
CN107645561B (en) Picture preview method of cloud mobile phone
US20140165119A1 (en) Offline download method, multimedia file download method and system thereof
CN111355971B (en) Live streaming transmission method and device, CDN server and computer readable medium
CN109379448B (en) File distributed deployment method and device, electronic equipment and storage medium
TW201246103A (en) Category information transmission method, system and apparatus
EP3499846A1 (en) File distribution method, file download method, distribution server, client, and system
CN110995866B (en) Node scheduling method, node scheduling device, scheduling server and storage medium
CN111212294B (en) Method and device for updating state of live broadcast room and readable storage medium
CN112437329B (en) Method, device and equipment for playing video and readable storage medium
CN111131505A (en) Data transmission method, equipment, system, device and medium based on P2P network
CN109600683A (en) VOD method and apparatus, and related device
CN111478781B (en) Message broadcasting method and device
CN110971709B (en) Data processing method, computer device and storage medium
CN110290009B (en) Data scheduling method and device and computer readable storage medium
CN110677464A (en) Edge node device, content distribution system, method, computer device, and medium
CN110798495B (en) Method and server for end-to-end message push in cluster architecture mode
CN114222086A (en) Method, system, medium and electronic device for scheduling audio and video code stream
US20160357875A1 (en) Techniques for promoting and viewing social content written by nearby people
CN110798358B (en) Distributed service identification method and device, computer readable medium and electronic equipment
CN110430290B (en) Resource address updating method, computer device and storage medium
CN110798492B (en) Data storage method and device and data processing system
CN110233791B (en) Data deduplication method and device
WO2011160549A1 (en) Electronic program guide system and file downloading method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant