CN117376590A - View rendering method, view rendering device, electronic device, storage medium and program product - Google Patents

View rendering method, view rendering device, electronic device, storage medium and program product

Info

Publication number
CN117376590A
CN117376590A (application CN202210773104.0A)
Authority
CN
China
Prior art keywords
link-mic
server
data
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210773104.0A
Other languages
Chinese (zh)
Inventor
廉金涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210773104.0A priority Critical patent/CN117376590A/en
Priority to PCT/CN2023/104589 priority patent/WO2024002334A1/en
Publication of CN117376590A publication Critical patent/CN117376590A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Abstract

The disclosure relates to a view rendering method, apparatus, electronic device, storage medium and program product, which can reduce the number of invalid requests for link-mic associated data triggered when the current live room is not in a link-mic state, and can thereby reduce network pressure. The method comprises the following steps: receiving a target message sent by a server, wherein the target message indicates that the current live room has started a link-mic session and carries the link-mic associated data; and, when the merged video stream of the link-mic session is pulled, rendering the link-mic view according to the link-mic associated data.

Description

View rendering method, view rendering device, electronic device, storage medium and program product
Technical Field
The present disclosure relates to the field of live broadcast technologies, and in particular to a view rendering method, a view rendering apparatus, an electronic device, a storage medium, and a program product.
Background
Link-mic PK ("lianmai" PK) is a co-hosting play mode between two anchors in live streaming. In the existing link-mic PK flow, after the streams of the two anchors are merged, the viewer device pulls the merged video stream, which contains a frame of supplemental enhancement information (Supplemental Enhancement Information, SEI). The viewer device then judges from the SEI whether the current merge is a link-mic PK merge; if so, it requests from the server the link-mic PK associated data used to render the link-mic PK view, and renders the link-mic PK view after the data returned by the server is obtained.
However, although the viewer device pulls the merged video stream and the SEI indicates that the current merge is a link-mic PK merge, this cannot guarantee that the current live room is actually in a link-mic PK state. For example, the anchor may have failed to start the link-mic PK, or may have already ended it, by the time the viewer device pulls the merged stream. In such cases the request for link-mic PK associated data sent to the server is invalid, and the server does not return the data.
As a result, when the current live room is not in a link-mic PK state, the viewer device still pulls the merged video stream at regular intervals, and every time the SEI in the pulled stream indicates a link-mic PK merge it requests the link-mic PK associated data from the server once more; each of these requests is invalid. Each viewer device may therefore trigger invalid requests for link-mic PK associated data many times, resulting in considerable network pressure.
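For clarity, this background flow can be sketched as client-side logic. The sketch below is a minimal illustration only; the type names and fields (MergedStream, is_pk_merge, fetchPKData) are assumptions made for this example and are not taken from the disclosure or any real SDK.

```swift
import Foundation

// Minimal sketch of the legacy, SEI-driven flow (hypothetical names).
struct MergedStream {
    let videoFrames: [Data]
    let sei: [String: Any]   // one frame of Supplemental Enhancement Information
}

final class LegacyPKRenderer {
    // Called every time the player pulls a merged video stream.
    func onMergedStreamPulled(_ stream: MergedStream) {
        // The SEI only says "this merged stream is a link-mic PK merge";
        // it cannot say whether the live room is still in a PK.
        guard stream.sei["is_pk_merge"] as? Bool == true else { return }

        // Request the PK associated data from the server on every such pull.
        // If the PK failed to start or has already ended, this request is
        // invalid: the server returns nothing and the call was wasted.
        fetchPKData { data in
            guard let data else { return }   // invalid request: no data returned
            self.renderPKView(with: data)
        }
    }

    func fetchPKData(completion: @escaping ([String: Any]?) -> Void) { /* HTTP call */ }
    func renderPKView(with data: [String: Any]) { /* load the PK view */ }
}
```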
Disclosure of Invention
To solve or at least partially solve the above technical problems, the present disclosure provides a view rendering method, apparatus, electronic device, storage medium, and program product.
In a first aspect of the embodiments of the present disclosure, there is provided a view rendering method, including: receiving a target message sent by a server, wherein the target message indicates that the current live room has started a link-mic session, the target message carries the link-mic associated data, and the link-mic associated data indicates the current state; and, when the merged video stream of the link-mic session is pulled, rendering the link-mic view according to the link-mic associated data.
Optionally, before the rendering of the link-mic view according to the link-mic associated data when the merged video stream of the link-mic session is pulled, the method further includes: acquiring the link-mic associated data from the target message; and storing the link-mic associated data.
Optionally, before the rendering of the link-mic view according to the link-mic associated data when the merged video stream of the link-mic session is pulled, the method further includes: saving the target message; and the rendering of the link-mic view according to the link-mic associated data when the merged video stream of the link-mic session is pulled includes: when the merged video stream of the link-mic session is pulled, acquiring the link-mic associated data from the target message and rendering the link-mic view according to the link-mic associated data.
In a second aspect of the embodiments of the present disclosure, there is provided a view rendering method, the method including: receiving live room information upon entering a live room; requesting the link-mic associated data from a server when the live room information indicates that the live room is in a link-mic state, wherein the link-mic associated data indicates the current state; storing the link-mic associated data returned by the server; and, when the merged video stream of the link-mic session is pulled, rendering the link-mic view according to the link-mic associated data.
Optionally, the method further comprises: receiving a first merged video stream from the server upon entering the live room; requesting the link-mic associated data from the server when a first SEI in the first merged video stream indicates that the first merged video stream is a link-mic merge; and rendering the link-mic view according to the link-mic associated data returned by the server.
Optionally, the requesting of the link-mic associated data from the server includes: requesting the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is less than a threshold number of times; and refraining from requesting the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is greater than or equal to the threshold number of times.
In a third aspect of the embodiments of the present disclosure, there is provided a view rendering method, the method including: receiving a first merged video stream from a server, wherein a first SEI in the first merged video stream indicates that the first merged video stream is a link-mic merge; requesting the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is less than a threshold number of times, wherein the link-mic associated data indicates the current state; receiving the link-mic associated data returned by the server; and rendering the link-mic view according to the link-mic associated data.
Optionally, after the receiving of the first merged video stream from the server, the method further comprises: refraining from requesting the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is greater than or equal to the threshold number of times.
In a fourth aspect of the embodiments of the present disclosure, there is provided a view rendering apparatus, the apparatus including a receiving module and a rendering module; the receiving module is configured to receive a target message sent by the server, wherein the target message indicates that the current live room has started a link-mic session, the target message carries the link-mic associated data, and the link-mic associated data indicates the current state; and the rendering module is configured to render the link-mic view according to the link-mic associated data in the target message received by the receiving module when the merged video stream of the link-mic session is pulled.
Optionally, the view rendering apparatus further comprises an acquisition module and a storage module; the acquisition module is configured to acquire the link-mic associated data from the target message before the link-mic view is rendered according to the link-mic associated data upon pulling the merged video stream of the link-mic session; and the storage module is configured to store the link-mic associated data acquired by the acquisition module.
Optionally, the view rendering apparatus further comprises a storage module; the storage module is configured to save the target message before the link-mic view is rendered according to the link-mic associated data upon pulling the merged video stream of the link-mic session; and the rendering module is specifically configured to, when the merged video stream of the link-mic session is pulled, acquire the link-mic associated data from the target message saved by the storage module and render the link-mic view according to the link-mic associated data.
In a fifth aspect of the embodiments of the present disclosure, there is provided a view rendering apparatus, the apparatus including a receiving module, a requesting module, a saving module and a rendering module; the receiving module is configured to receive live room information upon entering a live room; the requesting module is configured to request the link-mic associated data from the server when the live room information received by the receiving module indicates that the live room is in a link-mic state, wherein the link-mic associated data indicates the current state; the saving module is configured to save the link-mic associated data returned by the server; and the rendering module is configured to render the link-mic view according to the link-mic associated data saved by the saving module when the merged video stream of the link-mic session is pulled.
Optionally, the receiving module is further configured to receive the first merged video stream from the server upon entering the live room; the requesting module is configured to request the link-mic associated data from the server when the first SEI in the first merged video stream indicates that the first merged video stream is a link-mic merge; and the rendering module is further configured to render the link-mic view according to the link-mic associated data returned by the server.
Optionally, the requesting module is specifically configured to request the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is less than a threshold number of times; and to refrain from requesting the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is greater than or equal to the threshold number of times.
In a sixth aspect of the embodiments of the present disclosure, there is provided a view rendering apparatus, the apparatus including a receiving module, a requesting module and a rendering module; the receiving module is configured to receive a first merged video stream from the server, wherein a first SEI in the first merged video stream indicates that the first merged video stream is a link-mic merge; the requesting module is configured to request the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is less than a threshold number of times, wherein the link-mic associated data indicates the current state; the receiving module is further configured to receive the link-mic associated data returned by the server; and the rendering module is configured to render the link-mic view according to the link-mic associated data received by the receiving module.
Optionally, the requesting module is further configured to, after the first merged video stream is received from the server, refrain from requesting the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is greater than or equal to the threshold number of times.
A seventh aspect of the embodiments of the present disclosure provides an electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the view rendering method according to the first, second or third aspect.
An eighth aspect of the embodiments of the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the view rendering method according to the first, second or third aspect.
A ninth aspect of the embodiments of the present disclosure provides a computer program product comprising a computer program which, when run on a processor, implements the view rendering method according to the first, second or third aspect.
A tenth aspect of the embodiments of the present disclosure provides a chip comprising a processor and a communication interface coupled to the processor, the processor being configured to execute program instructions to implement the view rendering method according to the first, second or third aspect.
Compared with the prior art, the first aspect provided by the embodiments of the present disclosure has the following advantages: after the target message sent by the server is received, the link-mic view is rendered, when the merged video stream of the link-mic session is pulled, according to the link-mic associated data carried in the target message. In the embodiments of the present disclosure, link-mic view rendering is therefore performed from data already carried in the target message, rather than by first receiving the merged stream, verifying from its SEI that it is a link-mic merge, and only then requesting the link-mic associated data from the server. The viewer device thus does not repeatedly trigger invalid requests for link-mic associated data when the current live room is not in a link-mic state, which reduces network pressure.
Compared with the prior art, the second aspect provided by the embodiments of the present disclosure has the following advantages: live room information is received upon entering a live room; the link-mic associated data is requested from the server when the live room information indicates that the live room is in a link-mic state; the link-mic associated data returned by the server is stored; and, when the merged video stream of the link-mic session is pulled, the link-mic view is rendered according to the link-mic associated data. In the embodiments of the present disclosure, whether the live room is in a link-mic state is thus determined from the live room information received on entry; if so, link-mic view rendering is performed from the data the server has already returned, rather than requesting the data only after the merged stream is received and its SEI is verified to indicate a link-mic merge. The viewer device thus does not repeatedly trigger invalid requests for link-mic associated data when the current live room is not in a link-mic state, which reduces network pressure.
Compared with the prior art, the third aspect provided by the embodiments of the present disclosure has the following advantages: a first merged video stream is received from the server, a first SEI in which indicates that the first merged video stream is a link-mic merge; the link-mic associated data is requested from the server when the data returned by the server has not been received and the number of previous requests is less than a threshold number of times; the link-mic associated data returned by the server is received; and the link-mic view is rendered according to it. In the embodiments of the present disclosure, the number of requests for link-mic associated data triggered by the SEI indication in the merged stream is thus limited: the data is requested only if the number of previous requests does not exceed the threshold, so the total number of requests a viewer device makes to the server does not exceed the threshold. Invalid requests for link-mic associated data therefore cannot be triggered without limit when the current live room is not in a link-mic state, which reduces the number of invalid requests and, to a certain extent, the network pressure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a first flow diagram of a view rendering method provided by an embodiment of the present disclosure;
Fig. 2 is a second flow diagram of a view rendering method provided by an embodiment of the present disclosure;
Fig. 3 is a third flow diagram of a view rendering method provided by an embodiment of the present disclosure;
Fig. 4 is a first block diagram of a view rendering apparatus provided by an embodiment of the present disclosure;
Fig. 5 is a second block diagram of a view rendering apparatus provided by an embodiment of the present disclosure;
Fig. 6 is a third block diagram of a view rendering apparatus provided by an embodiment of the present disclosure;
Fig. 7 is a block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, where appropriate, such that embodiments of the disclosure may be practiced in sequences other than those illustrated and described herein, and that the objects identified by "first," "second," etc. are generally of the same type and are not limited to the number of objects, e.g., the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
In the embodiments of the present disclosure, starting a link-mic session means starting a link-mic PK, and the link-mic associated data is the link-mic PK associated data.
Rendering the PK view: after the link-mic PK merged stream and the link-mic PK associated data are obtained, the PK view is loaded based on the video frames in the link-mic PK merged stream and on the link-mic PK associated data.
From a technical-implementation point of view, the link-mic PK flow is as follows: after anchor A invites anchor B to a link-mic PK and anchor B agrees, the streams of the two anchors are merged. After the merge, the server sends a target message announcing the start of the link-mic PK to the anchor devices and to all viewer devices in the live room, and the anchor device calls the interface for starting the link-mic PK, after which the link-mic PK begins. Also after the merge, the viewer device pulls the merged video stream, which contains a frame of SEI; it judges from the SEI whether the merged stream is a link-mic PK merge (i.e., whether the current live room is in a link-mic PK), and if so requests the interface for pulling the PK associated data (i.e., requests the link-mic PK associated data from the server) in order to render the PK view. In the prior art, the link-mic PK flow can therefore be summarized as a play mode that starts after the anchor streams are merged; link-mic PK can be understood as a play mode built on top of the merged stream.
After the merge, the server sends the target message announcing the start of the link-mic PK to all viewer devices in the live room, and the viewer devices pull the merged video stream. However, the delay of the target message is about 1 second while the delay of pulling the merged stream is about 6-8 seconds, so a viewer device receives the target message before it pulls the merged stream.
The prior proposal is that the link-mic PK associated data is pulled whenever the merged video stream pulled by the viewer device is a link-mic PK merge. However, a review of the technical implementation of link-mic PK shows that "the viewer device pulled a link-mic PK merge" and "the current live room is in a link-mic PK" are not the same concept: there are situations in which the viewer pulls a link-mic PK merge while the anchor is not in a link-mic PK at that moment, and the interface request is then invalid. The situations in which the viewer pulls a link-mic PK merge while the anchor is not in a link-mic PK, making the interface request invalid, include:
1. When the anchors' merge succeeds but the link-mic PK fails to start, every time the viewer device pulls a merged video stream that is a link-mic PK merge it requests the interface for pulling the link-mic PK associated data once, so each viewer device issues multiple invalid interface requests.
2. When the anchor has ended the link-mic PK but the SEI of the merged video stream pulled by a viewer device that has just entered the live room still indicates that the current merged stream is a link-mic PK merge, that device requests the interface for pulling the link-mic PK associated data once every time it pulls such a merged stream, so each viewer device that has just entered the live room issues multiple invalid interface requests.
Whether the viewer device has been in the live room all along when the anchor starts the link-mic PK, or enters a live room that is already in a link-mic PK midway, after pulling the link-mic PK merged stream the viewer device first requests the interface for pulling the link-mic PK associated data and only then renders the link-mic PK view with that data. The interval from pulling the link-mic PK merged stream to rendering the link-mic PK view is about 0.5 seconds, so the user has to wait for the duration of one interface request before seeing the rendered link-mic PK view, which makes for a poor visual experience.
The electronic device (i.e., the viewer device) in the embodiments of the present disclosure may be a mobile electronic device or a non-mobile electronic device. The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook or a personal digital assistant (personal digital assistant, PDA), etc.; the non-mobile electronic device may be a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present disclosure are not particularly limited.
The execution body of the view rendering method provided in the embodiment of the present disclosure may be the above-mentioned electronic device (including mobile electronic device and non-mobile electronic device), or may be a functional module and/or a functional entity capable of implementing the view rendering method in the electronic device, which may be specifically determined according to actual use requirements, and the embodiment of the present disclosure is not limited.
The view rendering method provided by the embodiment of the present disclosure is described in detail below through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present disclosure provides a view rendering method, which may include steps 101 to 102 described below.
101. Receive a target message sent by the server.
The target message indicates that the current live room has started a link-mic PK, and the target message carries the link-mic PK associated data.
The target message may be an instant messaging message or another type of message, which is not limited herein.
102. When the merged video stream of the link-mic PK is pulled, render the link-mic PK view according to the link-mic PK associated data.
The SEI in the merged video stream of the link-mic PK indicates that the merged stream is a link-mic PK merge.
The link-mic PK associated data indicates the current PK state; that is, the link-mic PK associated data includes the data corresponding to the PK state (PK state data for short). The specific content of the link-mic PK associated data may be determined according to actual use requirements, and the embodiments of the present disclosure are not limited in this respect.
Illustratively, when the current PK state is the in-PK state, the link-mic PK associated data includes: 1. the start time of the current link-mic PK and the total PK duration set by the anchor; from these, the elapsed PK time is determined using the current time, the remaining PK time is determined from the elapsed time and the configured total duration, the remaining time and the PK countdown are displayed, the link-mic PK state on the terminal is set to in-PK, and timing information and countdowns for other items are derived from the start time and the total duration, as actually required and without limitation; 2. the audience contributions of both anchors in the link-mic PK, for display; 3. the unique identifier of the current link-mic PK session (i.e., pk_id).
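As an illustration of how the countdown in item 1 can be derived on the viewer device, the sketch below computes the elapsed and remaining PK time from the start time and the configured total duration. The field names (startTime, totalDuration) are assumptions made for this example; the disclosure does not prescribe this code.

```swift
import Foundation

// Sketch: deriving the PK countdown from the associated data (assumed fields).
struct PKAssociatedData {
    let pkId: String                  // unique identifier of the current PK session
    let startTime: Date               // start time of the link-mic PK
    let totalDuration: TimeInterval   // total PK duration configured by the anchor
}

func remainingPKTime(for data: PKAssociatedData, now: Date = Date()) -> TimeInterval {
    // elapsed = now - start; remaining = total - elapsed, clamped at zero
    let elapsed = now.timeIntervalSince(data.startTime)
    return max(data.totalDuration - elapsed, 0)
}
```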
Illustratively, when the current PK state is the PK-ended state, the link-mic PK associated data includes: 1. the start time of the current link-mic PK and the PK duration set by the anchor; from these, the elapsed PK time is determined using the current time, the remaining time is determined from the elapsed time and the configured PK duration, the remaining time and the PK countdown are displayed, the link-mic PK state on the terminal is set accordingly, and timing information and countdowns for other items are derived from the start time and the PK duration, as actually required and without limitation; 2. the audience contributions of both anchors in the link-mic PK, for display; 3. the unique identifier of the current link-mic PK session (i.e., pk_id); 4. if the current link-mic PK has ended, the PK results of both anchors, which are acquired to display the animation of the link-mic PK result.
Optionally, the link-mic PK associated data may further indicate the audience interaction state; that is, the link-mic PK associated data may include audience interaction state data.
Illustratively, the link-mic PK associated data may further include the PK scores of both anchors in the link-mic PK (i.e., audience interaction data that can indicate the audience interaction state), used to display the health bars.
The link-mic PK associated data may further include other related data, which may be determined according to the actual situation and is not limited herein.
It can be understood that, for the viewers in the live room, when the anchors' merge succeeds and the play mode starts successfully, the link-mic PK state is stored in the data structure of the live room, and the target message announcing the start of the link-mic PK is issued to all viewer devices in the live room; the target message carries the link-mic PK associated data that the interface request (i.e., the request to the interface for pulling the link-mic PK associated data) would otherwise return. Therefore, after the merged video stream is pulled, the link-mic PK view is rendered based on the link-mic PK associated data carried in the target message. In this way the moment the merged video stream is pulled is bound to the moment the link-mic PK view is rendered, giving the user as good a visual experience as possible, and the link-mic PK associated data no longer needs to be obtained through an interface request. Interface requests are thereby reduced; in particular, the queries per second (QPS) of the interface requests can drop significantly during peak periods or in high-traffic live rooms, so invalid interface requests are not repeatedly triggered when a link-mic PK merge is pulled while the current live room is not in a link-mic PK state.
Optionally, before the step 102, the view rendering method provided by the embodiment of the present disclosure may further include steps 103 to 104 described below.
103. Acquire the link-mic PK associated data from the target message.
104. Store the link-mic PK associated data.
The link-mic PK associated data exists in the target message in the form of a data body A. Steps 103 to 104 can be understood as transforming data body A in the target message into another data body B that is more convenient for the viewer device, and caching the new, transformed data body B on the viewer device. When a link-mic PK merge is pulled, data body B is consumed to render the link-mic PK view.
In the embodiments of the present disclosure, because the payload issued in the target message is deeply nested, the useful data (the link-mic PK associated data) has to be taken out layer by layer; after conversion it sits in a flat data body (data body B) that is convenient to use. Because the target message also contains data irrelevant to view rendering, only the link-mic PK associated data is stored, which saves device memory. The data formats of the link-mic PK associated data are also converted (for example, int64_t is converted to NSInteger, which is commonly used in the Objective-C language), i.e., into formats that the viewer device handles more conveniently and more commonly, which facilitates the subsequent rendering of the link-mic PK view.
In the embodiments of the present disclosure, the link-mic PK associated data in the target message is converted and cached in a data format that is more convenient and more common for the viewer device to process; when the viewer device pulls a link-mic PK merge, the cached link-mic PK associated data is consumed to render the link-mic PK view, which improves the efficiency of rendering the link-mic PK view.
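The conversion from the nested payload (data body A) to a flat, client-friendly data body B might look like the following sketch. The nested key paths and the DataBodyB type are assumptions made for illustration; the disclosure only specifies that the useful fields are unwrapped layer by layer, irrelevant fields are dropped, and numeric formats are converted to ones the client handles natively.

```swift
// Sketch: flattening the deeply nested target-message payload (data body A)
// into a flat cacheable struct (data body B). Key paths are hypothetical.
struct DataBodyB {
    let pkId: String
    let startTimestamp: Int        // e.g. converted from a 64-bit server integer
    let totalDurationSeconds: Int
}

func flatten(targetMessagePayload a: [String: Any]) -> DataBodyB? {
    // Walk down the nested layers to reach the useful PK fields only;
    // everything unrelated to view rendering is simply not copied.
    guard let pkInfo = (a["battle"] as? [String: Any])?["pk_info"] as? [String: Any],
          let pkId = pkInfo["pk_id"] as? String,
          let start = pkInfo["start_time"] as? Int64,
          let duration = pkInfo["duration"] as? Int64 else { return nil }

    return DataBodyB(pkId: pkId,
                     startTimestamp: Int(start),          // int64 -> platform Int
                     totalDurationSeconds: Int(duration))
}
```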
Optionally, before step 102, the view rendering method provided by the embodiments of the present disclosure may further include step 105 described below, and step 102 may be specifically implemented by steps 102a to 102b described below.
105. Save the target message.
102a. When the merged video stream of the link-mic PK is pulled, acquire the link-mic PK associated data from the target message.
102b. Render the link-mic PK view according to the link-mic PK associated data.
It can be appreciated that when the target message is received it is saved; when the merged video stream of the link-mic PK is pulled, the link-mic PK associated data is obtained from the target message and the link-mic PK view is rendered according to it. In this way, the link-mic PK view can be rendered based on the link-mic PK associated data carried in the target message, as in the sketch below.
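Putting steps 101 to 105 together, a viewer-side handler for this first method could be sketched as follows, reusing the DataBodyB type and flatten function from the sketch above. The message and stream callbacks are illustrative assumptions, not the actual interfaces of the disclosure.

```swift
// Sketch of the first method: cache the data from the target message,
// render only when the PK merged stream is actually pulled (names assumed).
final class MessageDrivenPKRenderer {
    private var cachedPKData: DataBodyB?   // data body B from the target message

    // Steps 101/103/104: the target message announces PK start and carries the data.
    func onTargetMessage(_ payload: [String: Any]) {
        cachedPKData = flatten(targetMessagePayload: payload)
    }

    // Step 102: render from the cache when the merged stream's SEI marks a PK merge.
    func onMergedStreamPulled(seiMarksPKMerge: Bool) {
        guard seiMarksPKMerge, let data = cachedPKData else { return }
        renderPKView(with: data)   // no interface request is needed at this point
    }

    func renderPKView(with data: DataBodyB) { /* load the PK view from data body B */ }
}
```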
As shown in fig. 2, an embodiment of the present disclosure provides a view rendering method, which may include steps 201 to 204 described below.
201. Receive live room information upon entering a live room.
202. Request the link-mic PK associated data from the server when the live room information indicates that the live room is in a link-mic PK state.
203. Save the link-mic PK associated data returned by the server.
204. When the merged video stream of the link-mic PK is pulled, render the link-mic PK view according to the link-mic PK associated data.
The SEI in the merged video stream of the link-mic PK indicates that the merged stream is a link-mic PK merge.
It can be appreciated that, in the embodiments of the present disclosure, for a viewer entering the live room midway, if the live room information received upon entering arrives earlier than the pulled merged video stream of the link-mic PK, the interface request is triggered when the live room information indicates that the live room is in a link-mic PK state; at that moment the interface for pulling the link-mic PK associated data is requested (i.e., the link-mic PK associated data is requested).
It can be appreciated that if the live room information indicates that the live room is not in a link-mic PK state, the interface for pulling the link-mic PK associated data is not requested.
In the embodiments of the present disclosure, because the order in which the live room information received on entering the room and the pulled merged video stream of the link-mic PK arrive is not fixed, for the case where the live room information arrives earlier than the pulled merged stream (about 75% of cases according to statistics), the trigger for the interface request is set to the live room information indicating that the live room is in a link-mic PK state. Then, when the merged video stream of the link-mic PK is pulled, the link-mic PK view is rendered according to the link-mic PK associated data already returned by the server, so rendering of the PK view is bound to the moment the merged stream is pulled. This gives the user as good a visual experience as possible, and the interface request no longer needs to be triggered by the moment the merged stream is pulled, which reduces the number of invalid interface requests when a link-mic PK merge is pulled while the current live room is not in a link-mic PK state; in particular, the number of invalid interface requests can drop significantly during peak periods or in high-traffic live rooms, which reduces network pressure.
In the embodiments of the present disclosure, the viewer device's interface request is moved up front, and the PK view is rendered immediately when the merged video stream is pulled, so the user's visual experience is optimized. A sketch of this flow is given below.
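The following is a hedged sketch of this second method: the request is moved forward to the moment the live room information arrives, and rendering is again bound to the pull of the merged stream. The RoomInfo shape and the server call are assumptions for illustration, and DataBodyB is the type from the earlier sketch.

```swift
// Sketch of the second method: request the PK data up front when the room
// info (received on entering the room) says the room is in a link-mic PK.
struct RoomInfo { let isInLinkMicPK: Bool }   // assumed shape of the room info

final class RoomInfoDrivenPKRenderer {
    private var cachedPKData: DataBodyB?

    // Steps 201/202/203: triggered by the room info, not by the merged stream.
    func onEnterRoom(info: RoomInfo) {
        guard info.isInLinkMicPK else { return }   // no request if not in a PK
        requestPKData { [weak self] data in self?.cachedPKData = data }
    }

    // Step 204: the view is rendered as soon as the PK merged stream is pulled.
    func onMergedStreamPulled(seiMarksPKMerge: Bool) {
        guard seiMarksPKMerge, let data = cachedPKData else { return }
        renderPKView(with: data)
    }

    func requestPKData(completion: @escaping (DataBodyB?) -> Void) { /* HTTP call */ }
    func renderPKView(with data: DataBodyB) { /* load the PK view */ }
}
```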
Optionally, the view rendering method provided in the embodiments of the present disclosure further includes steps 205 to 207 described below.
205. Receive a first merged video stream from the server upon entering the live room.
206. Request the link-mic PK associated data from the server when the first SEI in the first merged video stream indicates that the first merged video stream is a link-mic PK merge.
207. Render the link-mic PK view according to the link-mic PK associated data returned by the server.
In the embodiments of the present disclosure, for the case where the live room information received on entering the room arrives later than the pulled merged video stream of the link-mic PK (about 25% of cases according to statistics), the trigger for the interface request is still set to the moment the merged video stream of the link-mic PK is pulled. This ensures that, even when the live room information arrives later than the merged stream, the link-mic PK associated data can still be obtained to render the link-mic PK view as long as a link-mic PK merge is pulled and the current live room is in a link-mic PK state.
Alternatively, step 206 may be specifically implemented by steps 206a to 206b described below.
206a. Request the link-mic PK associated data from the server when the link-mic PK associated data returned by the server has not been received and the number of times the link-mic PK associated data has been requested from the server is less than a threshold number of times.
206b. Refrain from requesting the link-mic PK associated data from the server when the link-mic PK associated data returned by the server has not been received and the number of times the link-mic PK associated data has been requested from the server is greater than or equal to the threshold number of times.
The threshold number of times may be determined according to the actual situation and is not limited herein. For example, the threshold may be 2.
The first merged video stream may be the merged video stream that is pulled for the first time whose SEI indicates a link-mic PK merge, or any pulled merged video stream whose SEI indicates a link-mic PK merge, which may be determined according to the actual situation and is not limited herein.
In the embodiments of the present disclosure, for the case where the live room information arrives later than the pulled merged video stream of the link-mic PK (about 25% of cases), although the trigger for the interface request is still the moment the merged stream of the link-mic PK is pulled, the number of requests for link-mic PK associated data triggered by the merged stream is limited: the request is allowed only if the number of previous requests does not exceed the threshold, so the total number of requests a viewer device makes to the server does not exceed the threshold. Invalid requests for link-mic PK associated data therefore cannot be triggered without limit when the current live room is not in a link-mic PK state, which reduces the number of invalid interface requests and, to a certain extent, the network pressure.
In the embodiments of the present disclosure, the moment the link-mic PK merged stream is pulled is bound to the moment the link-mic PK view is rendered, giving the user as good a visual experience as possible; the client's request to the PK data interface no longer depends only on the SEI of the merged stream, and the concepts of the merged stream and the PK play mode are decoupled.
As shown in fig. 3, an embodiment of the present disclosure provides a view rendering method, which may include steps 301 to 304 described below.
301. Receive a first merged video stream from the server.
The first SEI in the first merged video stream indicates that the first merged video stream is a link-mic PK merge.
302. Request the link-mic PK associated data from the server when the link-mic PK associated data returned by the server has not been received and the number of times the link-mic PK associated data has been requested from the server is less than a threshold number of times.
303. Receive the link-mic PK associated data returned by the server.
304. Render the link-mic PK view according to the link-mic PK associated data.
The threshold number of times may be determined according to the actual situation and is not limited herein. For example, the threshold may be 2.
The first merged video stream may be the merged video stream that is pulled for the first time whose SEI indicates a link-mic PK merge, or any pulled merged video stream whose SEI indicates a link-mic PK merge, which may be determined according to the actual situation and is not limited herein.
Optionally, after step 301, the view rendering method provided in the embodiments of the present disclosure further includes step 305 described below.
305. Refrain from requesting the link-mic PK associated data from the server when the link-mic PK associated data returned by the server has not been received and the number of times the link-mic PK associated data has been requested from the server is greater than or equal to the threshold number of times.
In the embodiments of the present disclosure, without distinguishing whether the viewer device has been in the live room all along or entered midway, the trigger for the interface request is still the moment the merged video stream of the link-mic PK is pulled, but the number of requests for link-mic PK associated data triggered by the merged stream is limited: the request is allowed only if the number of previous requests does not exceed the threshold, so the total number of requests a viewer device makes to the server does not exceed the threshold. Invalid requests for link-mic PK associated data therefore cannot be triggered without limit when the current live room is not in a link-mic PK state, which reduces the number of invalid interface requests and, to a certain extent, the network pressure.
By limiting the number of requests to the interface for pulling the link-mic PK associated data, the interface is requested while the threshold has not been exceeded and, if the response contains the link-mic PK associated data, the link-mic PK view is rendered from it; once the threshold has been exceeded, the interface is no longer requested. This frequency-control approach ensures that the viewer device can still render the link-mic PK view normally when it pulls a link-mic PK merge and the current live room is in a link-mic PK, while effectively reducing the number of invalid interface requests when it pulls a link-mic PK merge but the current live room is not in a link-mic PK.
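The frequency control described above can be sketched as a simple counter around the interface request. The threshold of 2 and the member names are illustrative assumptions, and DataBodyB is the type from the earlier sketch.

```swift
// Sketch of the third method: cap how many times the SEI-triggered
// request for PK associated data may be issued (threshold assumed to be 2).
final class ThrottledPKDataRequester {
    private let maxRequests = 2
    private var requestCount = 0
    private var receivedPKData: DataBodyB?

    // Steps 301/302/305: called whenever a pulled merged stream's SEI marks a PK merge.
    func onPKMergePulled() {
        guard receivedPKData == nil else { return }        // data already received
        guard requestCount < maxRequests else { return }   // step 305: stop requesting
        requestCount += 1                                   // step 302: request once more
        requestPKData { [weak self] data in
            guard let self, let data else { return }        // invalid request: no data returned
            self.receivedPKData = data                      // step 303
            self.renderPKView(with: data)                   // step 304
        }
    }

    func requestPKData(completion: @escaping (DataBodyB?) -> Void) { /* HTTP call */ }
    func renderPKView(with data: DataBodyB) { /* load the PK view */ }
}
```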
Fig. 4 is a block diagram of a view rendering apparatus provided by an embodiment of the present disclosure. As shown in fig. 4, the apparatus includes: a receiving module 401 and a rendering module 402. The receiving module 401 is configured to receive a target message sent by the server, wherein the target message indicates that the current live room has started a link-mic PK and carries the link-mic PK associated data. The rendering module 402 is configured to render the link-mic PK view according to the link-mic PK associated data in the target message received by the receiving module 401 when the merged video stream of the link-mic PK is pulled.
Optionally, the view rendering apparatus further comprises an acquisition module and a storage module. The acquisition module is configured to acquire the link-mic PK associated data from the target message before the link-mic PK view is rendered according to the link-mic PK associated data upon pulling the merged video stream of the link-mic PK. The storage module is configured to store the link-mic PK associated data acquired by the acquisition module.
Optionally, the view rendering apparatus further comprises a storage module. The storage module is configured to save the target message before the link-mic PK view is rendered according to the link-mic PK associated data upon pulling the merged video stream of the link-mic PK. The rendering module is specifically configured to, when the merged video stream of the link-mic PK is pulled, acquire the link-mic PK associated data from the target message saved by the storage module and render the link-mic PK view according to it.
In the embodiment of the present disclosure, each module may implement the view rendering method provided in the embodiment of the method, and may achieve the same technical effects, so that repetition is avoided, and details are not repeated here.
Fig. 5 is a block diagram illustrating a view rendering apparatus according to an embodiment of the present disclosure, and as shown in fig. 5, includes: a receiving module 501, a requesting module 502, a saving module 503, and a rendering module 504; the receiving module 501 is configured to receive live room information when entering a live room; the request module 502 is configured to request, when the live broadcast room information received by the receiving module 501 indicates that the live broadcast room is in a link PK state, the link PK related data from a server; the saving module 503 is configured to save the data related to the link wheat PK returned from the server; the rendering module 504 is configured to render the link PK view according to the link PK related data stored by the storing module 503 when the merged video stream to the link PK is pulled.
Optionally, the receiving module 501 is further configured to receive a first merged video stream from the server when the live room is entered; the request module 502 is configured to request the link-mic PK associated data from the server when the first SEI in the first merged video stream indicates that the first merged video stream is a link-mic PK merged stream; and the rendering module 504 is further configured to render the link-mic PK view according to the link-mic PK associated data returned by the server.
Optionally, the request module 502 is specifically configured to request the link-mic PK associated data from the server when the link-mic PK associated data returned by the server has not been received and the number of times the data has been requested from the server is less than a times threshold, and to refrain from requesting the link-mic PK associated data from the server when the data has not been received and the number of requests is greater than or equal to the times threshold.
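The Fig. 5 flow, including the optional request cap, can be sketched as follows; RoomInfo, PkApi, and the three-attempt default are illustrative assumptions rather than values taken from the disclosure.

// Illustrative sketch of the Fig. 5 apparatus: room info drives the request, the reply is saved,
// and rendering happens once the merged link-mic PK stream is pulled.
data class RoomInfo(val isInLinkMicPk: Boolean)         // assumed shape of the live room information
data class PkData(val leftScore: Int, val rightScore: Int)
interface PkApi { suspend fun fetchPkData(): PkData? }  // assumed server interface

class RoomEntryPkHandler(
    private val api: PkApi,
    private val renderPkView: (PkData) -> Unit,
    private val maxAttempts: Int = 3                    // the optional "times threshold"
) {
    private var savedPkData: PkData? = null             // saving module
    private var attempts = 0

    // Receiving and request modules: request the PK data only if the room is already in a link-mic PK.
    suspend fun onEnterRoom(info: RoomInfo) {
        while (info.isInLinkMicPk && savedPkData == null && attempts < maxAttempts) {
            attempts++
            savedPkData = api.fetchPkData()             // the saving module stores the server's reply
        }
    }

    // Rendering module: use the saved data when the merged link-mic PK stream is pulled.
    fun onMergedPkStreamPulled() {
        savedPkData?.let(renderPkView)
    }
}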
In the embodiments of the present disclosure, each module may implement the view rendering method provided in the method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
Fig. 6 is a block diagram of a view rendering apparatus according to an embodiment of the present disclosure. As shown in Fig. 6, the apparatus includes a receiving module 601, a request module 602, and a rendering module 603. The receiving module 601 is configured to receive a first merged video stream from a server, where a first SEI in the first merged video stream indicates that the first merged video stream is a link-mic PK merged stream; the request module 602 is configured to request the link-mic PK associated data from the server when the link-mic PK associated data returned by the server has not been received and the number of times the data has been requested from the server is less than a times threshold; the receiving module 601 is further configured to receive the link-mic PK associated data returned by the server; and the rendering module 603 is configured to render the link-mic PK view according to the link-mic PK associated data received by the receiving module 601.
Optionally, the request module 602 is further configured to, after the first merged video stream is received from the server, refrain from requesting the link-mic PK associated data from the server when the link-mic PK associated data returned by the server has not been received and the number of times the data has been requested from the server is greater than or equal to the times threshold.
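A short sketch of the Fig. 6 trigger is given below; the decoded SEI shape (SeiInfo with an isPkMerge flag) is an assumption, since the disclosure only states that the first SEI marks the stream as a link-mic PK merge.

// Illustrative sketch of the Fig. 6 apparatus: the SEI of the first merged stream decides
// whether the client still needs to request the link-mic PK associated data.
data class SeiInfo(val isPkMerge: Boolean)          // assumed minimal shape of the decoded SEI

class SeiPkTrigger(
    private val hasPkData: () -> Boolean,           // true once the PK associated data has been received
    private val requestPkDataWithCap: () -> Unit    // retry-capped request, e.g. as sketched earlier
) {
    // Called after the first merged video stream's SEI has been parsed.
    fun onFirstMergedStreamSei(sei: SeiInfo) {
        if (sei.isPkMerge && !hasPkData()) {
            requestPkDataWithCap()
        }
    }
}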
In the embodiments of the present disclosure, each module may implement the view rendering method provided in the method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
Fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure; it exemplarily illustrates an electronic device capable of implementing any of the view rendering methods in the embodiments of the present disclosure and should not be construed as specifically limiting those embodiments.
As shown in fig. 7, the electronic device 700 may include a processor 701 (e.g., a central processing unit or a graphics processor), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. Various programs and data required for the operation of the electronic device 700 are also stored in the RAM 703. The processor 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; an output device 707 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage device 708 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 7 shows the electronic device 700 with various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 709, or installed from the storage device 708, or installed from the ROM 702. When executed by the processor 701, the computer program performs the functions defined in any of the view rendering methods provided by the embodiments of the present disclosure.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a target message sent by a server, where the target message indicates that a live room has started a link-mic PK and carries the link-mic PK associated data; and render the link-mic PK view according to the link-mic PK associated data when the merged video stream of the link-mic PK is pulled.
Alternatively, the computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive live room information when a live room is entered; request the link-mic PK associated data from a server when the live room information indicates that the live room is in a link-mic PK state; save the link-mic PK associated data returned by the server; and render the link-mic PK view according to the link-mic PK associated data when the merged video stream of the link-mic PK is pulled.
Alternatively, the computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a first merged video stream from a server, where a first SEI in the first merged video stream indicates that the first merged video stream is a link-mic PK merged stream; request the link-mic PK associated data from the server when the link-mic PK associated data returned by the server has not been received and the number of times the data has been requested from the server is less than a times threshold; receive the link-mic PK associated data returned by the server; and render the link-mic PK view according to the link-mic PK associated data.
In the embodiments of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a computer-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium would include one or more wire-based electrical connections, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. Persons skilled in the art will appreciate that the scope of disclosure involved herein is not limited to technical solutions formed by the specific combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (13)

1. A method of view rendering, the method comprising:
receiving a target message sent by a server, wherein the target message is used to indicate that a current live room has started link-mic co-streaming, and the target message carries link-mic associated data;
and rendering a link-mic view according to the link-mic associated data when a link-mic merged video stream is pulled.
2. The method according to claim 1, wherein, when the link-mic merged video stream is pulled, before the rendering of the link-mic view according to the link-mic associated data, the method further comprises:
acquiring the link-mic associated data from the target message;
storing the link-mic associated data;
or,
the method further comprises, before the rendering of the link-mic view according to the link-mic associated data when the link-mic merged video stream is pulled:
saving the target message;
and the rendering of the link-mic view according to the link-mic associated data when the link-mic merged video stream is pulled comprises:
acquiring the link-mic associated data from the target message when the link-mic merged video stream is pulled, and rendering the link-mic view according to the link-mic associated data.
3. A method of view rendering, the method comprising:
receiving live room information when a live room is entered;
requesting link-mic associated data from a server when the live room information indicates that the live room is in a link-mic state, wherein the link-mic associated data is used to indicate the current state;
saving the link-mic associated data returned by the server;
and rendering a link-mic view according to the link-mic associated data when a link-mic merged video stream is pulled.
4. A method according to claim 3, characterized in that the method further comprises:
receiving a first merged video stream from the server when the live room is entered;
requesting the link-mic associated data from the server when first supplemental enhancement information (SEI) in the first merged video stream indicates that the first merged video stream is a link-mic merged stream;
and rendering the link-mic view according to the link-mic associated data returned by the server.
5. The method according to claim 4, wherein the requesting of the link-mic associated data from the server comprises:
requesting the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is less than a times threshold;
and canceling the requesting of the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is greater than or equal to the times threshold.
6. A method of view rendering, the method comprising:
receiving a first merged video stream from a server, wherein first supplemental enhancement information (SEI) in the first merged video stream indicates that the first merged video stream is a link-mic merged video stream;
requesting link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is less than a times threshold, wherein the link-mic associated data is used to indicate the current state;
receiving the link-mic associated data returned by the server;
and rendering a link-mic view according to the link-mic associated data.
7. The method according to claim 6, wherein, after the receiving of the first merged video stream from the server, the method further comprises:
canceling the requesting of the link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is greater than or equal to the times threshold.
8. A view rendering device, comprising: a receiving module and a rendering module;
the receiving module is configured to receive a target message sent by a server, wherein the target message is used to indicate that a current live room has started link-mic co-streaming, the target message carries link-mic associated data, and the link-mic associated data is used to indicate the current state;
and the rendering module is configured to render a link-mic view according to the link-mic associated data in the target message received by the receiving module when a link-mic merged video stream is pulled.
9. A view rendering device, comprising: a receiving module, a request module, a saving module, and a rendering module;
the receiving module is configured to receive live room information when a live room is entered;
the request module is configured to request link-mic associated data from a server when the live room information received by the receiving module indicates that the live room is in a link-mic state, wherein the link-mic associated data is used to indicate the current state;
the saving module is configured to save the link-mic associated data returned by the server;
and the rendering module is configured to render a link-mic view according to the link-mic associated data saved by the saving module when a link-mic merged video stream is pulled.
10. A view rendering device, comprising: a receiving module, a request module, and a rendering module;
the receiving module is configured to receive a first merged video stream from a server, wherein first supplemental enhancement information (SEI) in the first merged video stream indicates that the first merged video stream is a link-mic merged stream;
the request module is configured to request link-mic associated data from the server when the link-mic associated data returned by the server has not been received and the number of times the link-mic associated data has been requested from the server is less than a times threshold, wherein the link-mic associated data is used to indicate the current state;
the receiving module is further configured to receive the link-mic associated data returned by the server;
and the rendering module is configured to render a link-mic view according to the link-mic associated data received by the receiving module.
11. An electronic device, comprising a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to perform the view rendering method of any one of claims 1 to 7 when the computer program is invoked.
12. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the view rendering method of any one of claims 1 to 7.
13. A computer program product having stored thereon a computer program which, when executed by a processor, implements the view rendering method of any one of claims 1 to 7.
CN202210773104.0A 2022-06-30 2022-06-30 View rendering method, view rendering device, electronic device, storage medium and program product Pending CN117376590A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210773104.0A CN117376590A (en) 2022-06-30 2022-06-30 View rendering method, view rendering device, electronic device, storage medium and program product
PCT/CN2023/104589 WO2024002334A1 (en) 2022-06-30 2023-06-30 View rendering method and apparatus, electronic device, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210773104.0A CN117376590A (en) 2022-06-30 2022-06-30 View rendering method, view rendering device, electronic device, storage medium and program product

Publications (1)

Publication Number Publication Date
CN117376590A true CN117376590A (en) 2024-01-09

Family

ID=89383367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210773104.0A Pending CN117376590A (en) 2022-06-30 2022-06-30 View rendering method, view rendering device, electronic device, storage medium and program product

Country Status (2)

Country Link
CN (1) CN117376590A (en)
WO (1) WO2024002334A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106658205B (en) * 2016-11-22 2020-09-04 广州华多网络科技有限公司 Live broadcast room video stream synthesis control method and device and terminal equipment
CN107027048A (en) * 2017-05-17 2017-08-08 广州市千钧网络科技有限公司 A kind of live even wheat and the method and device of information displaying
CN108965932B (en) * 2017-05-17 2021-05-28 武汉斗鱼网络科技有限公司 Continuous wheat window display method and device
CN110392311B (en) * 2018-04-18 2021-11-09 武汉斗鱼网络科技有限公司 Connecting wheat display method, storage medium, connecting wheat server, client and system
CN113271470B (en) * 2021-05-17 2023-05-23 广州繁星互娱信息科技有限公司 Live broadcast wheat connecting method, device, terminal, server and storage medium

Also Published As

Publication number Publication date
WO2024002334A1 (en) 2024-01-04

Similar Documents

Publication Publication Date Title
JP2011003198A (en) Method, server, and client used in client-server distributed system
CN111629251B (en) Video playing method and device, storage medium and electronic equipment
CN112351222B (en) Image special effect processing method and device, electronic equipment and computer readable storage medium
US20220392026A1 (en) Video transmission method, electronic device and computer readable medium
CN113225483B (en) Image fusion method and device, electronic equipment and storage medium
US20240045641A1 (en) Screen sharing display method and apparatus, device, and storage medium
WO2022237744A1 (en) Method and apparatus for presenting video, and device and medium
CN112969075A (en) Frame supplementing method and device in live broadcast process and computing equipment
CN114699767A (en) Game data processing method, device, medium and electronic equipment
CN113259729B (en) Data switching method, server, system and storage medium
WO2023221941A1 (en) Image processing method and apparatus, device, and storage medium
US20220272283A1 (en) Image special effect processing method, apparatus, and electronic device, and computer-readable storage medium
WO2024001802A1 (en) Image processing method and apparatus, and electronic device and storage medium
CN111147885B (en) Live broadcast room interaction method and device, readable medium and electronic equipment
CN117376590A (en) View rendering method, view rendering device, electronic device, storage medium and program product
WO2023134509A1 (en) Video stream pushing method and apparatus, and terminal device and storage medium
CN114584808B (en) Video stream acquisition method, device, system, equipment and medium
CN114187169A (en) Method, device and equipment for generating video special effect package and storage medium
CN112346682A (en) Image special effect processing method and device, electronic equipment and computer readable storage medium
CN113055977B (en) Method and device for scanning wireless hotspots
WO2023197897A1 (en) Method and apparatus for processing live-streaming audio and video stream, and device and medium
CN113115074B (en) Video jamming processing method and device
CN111382378B (en) Resource loading method and device, mobile terminal and storage medium
CN111258670B (en) Method and device for managing component data, electronic equipment and storage medium
WO2022171039A1 (en) Video pre-loading method and apparatus, and device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination