CN113840170B - Method and device for co-streaming live broadcast - Google Patents

Method and device for co-streaming live broadcast

Info

Publication number
CN113840170B
Authority
CN
China
Prior art keywords
texture data
co-streaming
local
coordinate system
Prior art date
Legal status
Active
Application number
CN202010581753.1A
Other languages
Chinese (zh)
Other versions
CN113840170A (en)
Inventor
郑伟
Current Assignee
Wuhan Douyu Network Technology Co Ltd
Original Assignee
Wuhan Douyu Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Douyu Network Technology Co Ltd
Priority to CN202010581753.1A
Publication of CN113840170A
Application granted
Publication of CN113840170B
Status: Active
Anticipated expiration

Classifications

    • H04N21/44004 Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
    • H04L65/80 Network arrangements, protocols or services for supporting real-time applications in data packet communication; responding to QoS
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/440227 Reformatting operations of video signals for household redistribution, storage or real-time display by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H04N21/440263 Reformatting operations of video signals by altering the spatial resolution, e.g. for displaying on a connected PDA
    • Y02D30/70 Reducing energy consumption in wireless communication networks

Abstract

The invention relates to the technical field of Internet live broadcasting, and in particular to a method and a device for co-streaming live broadcast, applied to an anchor client comprising an image processor. The method comprises the following steps: receiving co-streaming video data sent by the co-streaming client, and locally collecting local video data; performing texture processing on the co-streaming video data and the local video data respectively by means of the image processor, correspondingly generating original co-streaming texture data and local texture data; synthesizing the original co-streaming texture data and the local texture data to generate a local co-streaming picture, and playing the local co-streaming picture at the anchor client; copying the original co-streaming texture data and the local texture data, rendering the copied co-streaming texture data and local texture data to a buffer area of the image processor to generate a live co-streaming picture, and pushing the live co-streaming picture as a live stream. Because texture processing is performed only once, the invention reduces the amount of data to be processed, saves computing power on the anchor client, and improves processing efficiency.

Description

Method and device for co-streaming live broadcast
Technical Field
The invention relates to the technical field of Internet live broadcasting, and in particular to a method and a device for co-streaming live broadcast.
Background
During live video broadcasting, an anchor user can carry out real-time video communication with one or more users in the live room; this mode of real-time video communication is called video co-streaming (also known as mic-linking). The user who communicates with the anchor user in real time is called the co-streaming user. After the anchor user and the co-streaming user are connected successfully, a co-streaming picture is generated, which contains both the picture from the anchor user and the picture from the co-streaming user. The co-streaming picture can be played at the anchor client and pushed by the anchor client to all viewer clients in the live room.
In the prior art, during co-streaming live broadcast the anchor client collects local video data through a local image collection unit while receiving the co-streaming video data sent by the co-streaming client. It then performs two independent texture processing passes on the local video data to generate two copies of local video texture data, and likewise performs two independent texture processing passes on the co-streaming video data to generate two copies of co-streaming video texture data. One copy of the local video texture data is synthesized with one copy of the co-streaming video texture data to obtain a local co-streaming picture for local playback at the anchor client; the other two copies are synthesized to obtain a live co-streaming picture for live stream pushing. Because the same video data must undergo two independent texture processing passes, the prior art suffers from a large data processing load and low processing efficiency, which easily leads to latency and stuttering during co-streaming live broadcast.
Disclosure of Invention
In view of the above, the present invention is directed to a method and device for co-streaming live broadcast that overcome, or at least partially solve, the above problems.
According to a first aspect of the present invention, there is provided a co-streaming live broadcast method, applied to an anchor client comprising an image processor, the method comprising:
receiving co-streaming video data sent by a co-streaming client, and locally collecting local video data;
performing texture processing on the co-streaming video data and the local video data respectively by means of the image processor, correspondingly generating original co-streaming texture data and local texture data;
synthesizing the original co-streaming texture data and the local texture data to generate a local co-streaming picture, and playing the local co-streaming picture at the anchor client;
copying the original co-streaming texture data and the local texture data, rendering the copied co-streaming texture data and local texture data to a buffer area of the image processor to generate a live co-streaming picture, and pushing the live co-streaming picture as a live stream;
wherein said rendering the copied co-streaming texture data and local texture data to the buffer area of the image processor comprises:
reading the resolution of the system screen of the anchor client;
judging whether the resolution of the system screen is the same as the target resolution corresponding to the coordinate system of the image processor;
if they are the same, converting the coordinates of the copied co-streaming texture data and local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor by proportional scaling; if they are different, converting the coordinates of the copied co-streaming texture data and local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor by converting along the long side and then compensating the short side;
and rendering the co-streaming texture data and the local texture data to the buffer area according to the converted coordinates.
Preferably, the synthesizing the original co-streaming texture data and the local texture data to generate a local co-streaming picture comprises:
rendering the original local texture data by means of the image processor to obtain a first layer;
rendering the original co-streaming texture data on the first layer by means of the image processor to obtain an adjustable second layer;
and synthesizing the first layer and the second layer to obtain the local co-streaming picture.
Preferably, after the copying of the original co-streaming texture data and the local texture data, the method further comprises:
establishing a synchronization relationship between the original co-streaming texture data and the copied co-streaming texture data;
and establishing a synchronization relationship between the original local texture data and the copied local texture data.
Preferably, the converting, by proportional scaling, the coordinates of the co-streaming texture data and the local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor uses the following formulas:
Gx=Sx/SW*2.0-1.0;
Gy=Sy/SH*2.0-1.0;
Gw=Sw/SW*2.0;
Gh=Sh/SH*2.0;
wherein Gx and Gy are the abscissa and ordinate of the vertex of the co-streaming texture data or the local texture data in the coordinate system of the image processor, Gw and Gh are the width and height of the co-streaming texture data or the local texture data in the coordinate system of the image processor, Sx and Sy are the abscissa and ordinate of that vertex in the screen coordinate system, SW and SH are the width and height of the system screen in the screen coordinate system, and Sw and Sh are the width and height of the co-streaming texture data or the local texture data in the screen coordinate system.
Preferably, the converting, by converting along the long side and then compensating the short side, the coordinates of the co-streaming texture data and the local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor uses the following formulas:
Calibrate=(1/A)/(SW/SH);
Gx=Sx/SW*2.0-1.0;
Gy=Sy/SH*2.0-1.0;
Gw=Sw/SW*2.0/Calibrate;
Gh=Sh/SH*2.0;
wherein Calibrate is the calibration coefficient, A is the target resolution expressed as an aspect ratio, SW and SH are the width and height of the system screen in the screen coordinate system, Gx and Gy are the abscissa and ordinate of the vertex of the co-streaming texture data or the local texture data in the coordinate system of the image processor, Gw and Gh are the width and height of the co-streaming texture data or the local texture data in the coordinate system of the image processor, Sx and Sy are the abscissa and ordinate of that vertex in the screen coordinate system, and Sw and Sh are the width and height of the co-streaming texture data or the local texture data in the screen coordinate system.
Preferably, the target resolution is 16:9.
According to a second aspect of the present invention, there is provided a co-streaming live broadcast device, applied to an anchor client comprising an image processor, the device comprising:
an acquisition module, configured to receive co-streaming video data sent by a co-streaming client and to locally collect local video data;
a texture processing module, configured to perform texture processing on the co-streaming video data and the local video data respectively by means of the image processor, correspondingly generating original co-streaming texture data and local texture data;
a synthesis module, configured to synthesize the original co-streaming texture data and the local texture data, generate a local co-streaming picture, and play the local co-streaming picture at the anchor client;
a copy-and-render module, configured to copy the original co-streaming texture data and the local texture data, render the copied co-streaming texture data and local texture data to a buffer area of the image processor, generate a live co-streaming picture, and push the live co-streaming picture as a live stream;
wherein the copy-and-render module comprises:
a reading unit, configured to read the resolution of the system screen of the anchor client;
a judging unit, configured to judge whether the resolution of the system screen is the same as the target resolution corresponding to the coordinate system of the image processor;
a processing unit, configured to, if they are the same, convert the coordinates of the copied co-streaming texture data and local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor by proportional scaling, and, if they are different, convert the coordinates of the copied co-streaming texture data and local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor by converting along the long side and then compensating the short side;
and a rendering unit, configured to render the co-streaming texture data and the local texture data to the buffer area according to the converted coordinates.
Preferably, the synthesis module comprises:
a local texture data rendering unit, configured to render the original local texture data by means of the image processor to obtain a first layer;
a co-streaming texture data rendering unit, configured to render the original co-streaming texture data on the first layer by means of the image processor to obtain an adjustable second layer;
and a synthesis unit, configured to synthesize the first layer and the second layer to obtain the local co-streaming picture.
According to a third aspect of the present invention there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the method steps of the first aspect described above.
According to a fourth aspect of the present invention there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method steps as described in the first aspect above when the program is executed.
The co-streaming live broadcast method of the present invention is applied to an anchor client comprising an image processor. It first receives co-streaming video data sent by the co-streaming client and locally collects local video data. It then performs texture processing on the co-streaming video data and the local video data respectively by means of the image processor, correspondingly generating original co-streaming texture data and local texture data. Next, the original co-streaming texture data and the local texture data are synthesized to generate a local co-streaming picture, which is played at the anchor client. Meanwhile, the original co-streaming texture data and local texture data are copied, the copies are rendered to a buffer area of the image processor to generate a live co-streaming picture, and the live co-streaming picture is pushed as a live stream. Because texture processing is performed only once and the data used to generate the live co-streaming picture is obtained by copying, the invention reduces the amount of data to be processed, saves computing power on the anchor client, improves processing efficiency, and avoids stuttering and delay during co-streaming. Furthermore, when rendering the copied co-streaming texture data and local texture data to the buffer area of the image processor, the resolution of the system screen of the anchor client is read first, and it is then judged whether it is the same as the target resolution corresponding to the coordinate system of the image processor. If they are the same, the coordinates of the co-streaming texture data and the local texture data in the screen coordinate system of the system screen are converted into coordinates in the coordinate system of the image processor by proportional scaling; if they are different, the conversion is done by converting along the long side and then compensating the short side. Finally, the co-streaming texture data and the local texture data are rendered to the buffer area according to the converted coordinates. This rendering scheme allows live broadcast devices with different resolutions to adapt automatically to co-streaming live broadcast.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the description, and in order to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also throughout the drawings, like reference numerals are used to designate like parts. In the drawings:
FIG. 1 shows a flow chart of a co-streaming live broadcast method in an embodiment of the invention;
FIG. 2 shows a schematic diagram of an image processor coordinate system in an embodiment of the invention;
FIG. 3 shows a schematic diagram of a screen coordinate system in an embodiment of the invention;
FIG. 4 is a schematic diagram showing the relationship between original texture data and copied texture data in an embodiment of the present invention;
FIG. 5 shows a block diagram of a co-streaming live broadcast device in an embodiment of the present invention;
FIG. 6 shows a schematic structural diagram of a computer device in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
A first embodiment of the invention provides a co-streaming live broadcast method, applied to an anchor client comprising an image processor (Graphics Processing Unit, GPU). The anchor client can carry out video communication with users in the live room; the client of the communication peer is called the co-streaming client, and the other clients in the live room apart from the anchor client and the co-streaming client are called viewer clients.
As shown in FIG. 1, the co-streaming live broadcast method in the embodiment of the invention comprises the following steps:
step 101: and receiving the headset video data sent by the headset terminal, and locally collecting the local video data.
Step 102: and respectively carrying out texture processing on the continuous-wheat video data and the local video data by utilizing an image processor, and correspondingly generating original continuous-wheat texture data and local texture data.
Step 103: and synthesizing the original continuous-wheat texture data and the local texture data to generate a local continuous-wheat picture, and playing the local continuous-wheat picture at a main broadcasting end.
Step 104: copying original continuous-wheat texture data and local texture data, rendering the copied continuous-wheat texture data and local texture data to a cache area of an image processor, generating a live continuous-wheat picture, and carrying out live-wheat push on the live continuous-wheat picture.
For step 101, the anchor client is locally provided with an image collection unit, which collects the picture at the anchor client in real time to generate local video data. The co-streaming client is likewise provided with an image collection unit, which collects the picture at the co-streaming client in real time to generate co-streaming video data. The co-streaming client sends the co-streaming video data to the anchor client, and the anchor client receives it.
After obtaining the co-streaming video data and the local video data, the anchor client performs step 102. In step 102, the anchor client performs texture processing on the co-streaming video data through its internal image processor to generate co-streaming texture data, and performs texture processing on the local video data to generate local texture data. It should be noted that the co-streaming texture data and the local texture data obtained through this texture processing are the original co-streaming texture data and local texture data, which will be copied later. In addition, both the co-streaming texture data and the local texture data are two-dimensional rectangles.
After generating the original co-streaming texture data and local texture data, step 103 determines how to synthesize the original co-streaming texture data with the local texture data, comprising the following steps:
step 201: and rendering the original local texture data by using an image processor to obtain a first image layer.
Step 202: and rendering the original wheat-linked texture data on the first image layer by using an image processor to obtain an adjustable second image layer.
Step 203: and synthesizing the first layer and the second layer to obtain the local wheat connecting picture.
Specifically, the first layer corresponds to local texture data and is used for displaying a picture of a main playing end. The second layer corresponds to the wheat connecting texture data and is used for displaying pictures of the wheat connecting end. The second layer is adjustable relative to the first layer, i.e. the size of the second layer relative to the first layer is adjustable, and the position of the second layer relative to the first layer is adjustable. Further, by combining the first layer and the second layer, a local link frame can be obtained, and the local link frame is displayed and visible on the anchor side for viewing by the anchor side. By the synthesis mode, the technical effect of dynamically adjusting the video layout position of the anchor terminal is achieved. It should be noted that, the coordinates of the second layer relative to the first layer are determined by the resolution of the system screen at the anchor end, and the coordinate system corresponding to the system screen is generally referred to as a screen coordinate system (UI coordinate system).
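To make the layer relationship concrete, the following CPU-side sketch overlays a co-streaming picture onto the local picture at an adjustable position. In the patent this composition is performed by the image processor; the numpy version, the function name compose_local_costream_picture and the array layout are illustrative assumptions only.

```python
import numpy as np

def compose_local_costream_picture(local_frame, costream_frame, top_left):
    """Composite the adjustable second layer (co-streaming picture) onto the
    first layer (local picture). Frames are H x W x 3 uint8 arrays; top_left
    is the (row, col) position of the second layer within the first layer."""
    canvas = local_frame.copy()                 # first layer: local texture data
    r, c = top_left
    h, w = costream_frame.shape[:2]
    canvas[r:r + h, c:c + w] = costream_frame   # second layer drawn on top
    return canvas

# Example: a 320x180 co-streaming picture placed at the top-right of a 1280x720 local picture
local = np.zeros((720, 1280, 3), dtype=np.uint8)
costream = np.full((180, 320, 3), 255, dtype=np.uint8)
picture = compose_local_costream_picture(local, costream, (0, 1280 - 320))
```

Resizing the second layer before pasting (with any image library) would give the adjustable size described above.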
After the original co-streaming texture data and local texture data are generated, step 104 obtains new co-streaming texture data and local texture data by copying them. The copied co-streaming texture data and local texture data are then rendered to the buffer area of the image processor to generate the live co-streaming picture. The relationship between the original texture data and the copied texture data is shown in FIG. 4. The anchor client pushes the live co-streaming picture through the live broadcast server to all viewer clients.
Rendering the copied co-streaming texture data and local texture data comprises the following steps:
step 301: and reading the resolution of the system screen of the anchor side.
Step 302: and judging whether the resolution of the system screen is the same as the target resolution corresponding to the coordinate system of the image processor.
Step 303: if the coordinates of the copied wheat-connected texture data and the local texture data are the same, converting the coordinates of the copied wheat-connected texture data and the local texture data under the screen coordinate system of the system screen into the coordinates of the image processor under the coordinate system according to an equal-ratio scaling conversion mode; if the two frames are different, converting the coordinates of the copied wheat-connected texture data and the local texture data under the screen coordinate system of the system screen into the coordinates under the coordinate system of the image processor according to the mode of converting the long side and then compensating the short side.
Step 304: and rendering the wheat-connected texture data and the local texture data to a buffer area according to the converted coordinates.
Specifically, the image processor has a coordinate system, commonly referred to as the image processor coordinate system, which is a 2×2 square with its origin at the centre, as shown in FIG. 2. Whether on the anchor client, the co-streaming client or a viewer client, the system screen of the live broadcast device has a screen coordinate system, whose shape corresponds to the resolution of the system screen and whose origin is at the upper left corner, as shown in FIG. 3. The image processor coordinate system is preset to correspond to a resolution, referred to as the target resolution, which is determined by the agreed resolution of the co-streaming live broadcast. For example, if the resolution of the co-streaming live broadcast is specified to be 16:9, the target resolution is 16:9. Once the correspondence between the image processor coordinate system and the target resolution is established, the target resolution covers the entire 2×2 square of the image processor coordinate system; for example, with a target resolution of 16:9, the whole range of the image processor coordinate system corresponds to one piece of data with an aspect ratio of 16:9.
The embodiment is described in detail below, taking a target resolution of 16:9 as an example.
First, the resolution of the system screen of the anchor client is read through an interface provided by the operating system of the anchor client's live broadcast device. It is then judged whether the resolution of the system screen is 16:9. If it is, the conversion from the screen coordinate system to the image processor coordinate system is a proportional scaling. Coordinate conversion is therefore performed separately on the co-streaming texture data and the local texture data by proportional scaling, converting the coordinates of the texture data in the screen coordinate system of the system screen into coordinates in the image processor coordinate system, i.e. the texture data is scaled proportionally into the region (-1, 1), so that it can be rendered to the buffer area of the image processor. The conversion is the same for the co-streaming texture data and the local texture data, and uses the following formulas:
Gx=Sx/SW*2.0-1.0;
Gy=Sy/SH*2.0-1.0;
Gw=Sw/SW*2.0;
Gh=Sh/SH*2.0;
wherein Gx and Gy are the abscissa and ordinate of the vertex of the co-streaming texture data or the local texture data in the coordinate system of the image processor, Gw and Gh are the width and height of the co-streaming texture data or the local texture data in the coordinate system of the image processor, Sx and Sy are the abscissa and ordinate of that vertex in the screen coordinate system, SW and SH are the width and height of the system screen in the screen coordinate system, and Sw and Sh are the width and height of the co-streaming texture data or the local texture data in the screen coordinate system. Based on the principle of proportional scaling, these formulas adapt to system screens of different resolutions, have low computational complexity and high conversion speed, improve the rendering speed, and thereby keep the co-streaming live broadcast fluent.
It should be noted that the coordinate conversion is performed separately for the co-streaming texture data and for the local texture data. For example, when converting the local texture data, Gx and Gy are the abscissa and ordinate of the vertex of the local texture data in the coordinate system of the image processor, Sx and Sy are the abscissa and ordinate of that vertex in the screen coordinate system, SW and SH are the width and height of the system screen in the screen coordinate system, and Sw and Sh are the width and height of the local texture data in the screen coordinate system. Likewise, when converting the co-streaming texture data, Gx, Gy, Sx, Sy, SW, SH, Sw and Sh take the values corresponding to the co-streaming texture data, which will not be repeated here.
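The proportional-scaling conversion can be written out directly from the four formulas above. The Python sketch below is an illustration only; the function name and argument order are assumptions, not part of the patent.

```python
def screen_to_gpu_proportional(sx, sy, sw, sh, screen_w, screen_h):
    """Map a texture rectangle from the screen coordinate system
    (origin at the upper left, size screen_w x screen_h) to the image
    processor coordinate system (a 2x2 square centred on the origin),
    for the case where the screen resolution matches the target resolution.
    Returns (Gx, Gy, Gw, Gh)."""
    gx = sx / screen_w * 2.0 - 1.0
    gy = sy / screen_h * 2.0 - 1.0
    gw = sw / screen_w * 2.0
    gh = sh / screen_h * 2.0
    return gx, gy, gw, gh

# Example: a 640x360 co-streaming rectangle at (1280, 0) on a 1920x1080 screen
print(screen_to_gpu_proportional(1280, 0, 640, 360, 1920, 1080))
# -> approximately (0.333, -1.0, 0.667, 0.667)
```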
If the resolution of the system screen is judged not to be 16:9, the conversion from the screen coordinate system to the image processor coordinate system is no longer a pure proportional scaling. As noted above, the texture data is a rectangle; therefore, when the resolution of the system screen is not 16:9, coordinate conversion is performed separately on the co-streaming texture data and the local texture data by converting along the long side and then compensating the short side, converting the coordinates of the texture data in the screen coordinate system of the system screen into coordinates in the image processor coordinate system, i.e. the texture data is scaled into the region (-1, 1), so that it can be rendered to the buffer area of the image processor. The conversion is the same for the co-streaming texture data and the local texture data, and uses the following formulas:
Calibrate=(1/A)/(SW/SH);
Gx=Sx/SW*2.0-1.0;
Gy=Sy/SH*2.0-1.0;
Gw=Sw/SW*2.0/Calibrate;
Gh=Sh/SH*2.0;
wherein Calibrate is the calibration coefficient and A is the target resolution expressed as an aspect ratio, in this example 16/9. SW and SH are the width and height of the system screen in the screen coordinate system, Gx and Gy are the abscissa and ordinate of the vertex of the co-streaming texture data or the local texture data in the coordinate system of the image processor, Gw and Gh are the width and height of the co-streaming texture data or the local texture data in the coordinate system of the image processor, Sx and Sy are the abscissa and ordinate of that vertex in the screen coordinate system, and Sw and Sh are the width and height of the co-streaming texture data or the local texture data in the screen coordinate system. By adjusting the calibration coefficient according to the target resolution and then converting coordinates with it on the basis of proportional scaling, the invention adapts to system screens of different resolutions while keeping computational complexity low and conversion speed high, which improves the rendering speed and keeps the co-streaming live broadcast fluent.
As with the proportional-scaling conversion, the coordinate conversion of the co-streaming texture data and the local texture data is performed separately, and for each type of texture data the parameters Gx, Gy, Sx, Sy, SW, SH, Sw and Sh take the values corresponding to that texture data, which will not be repeated here.
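For completeness, the calibrated variant differs from the sketch above only by the Calibrate factor applied to the width. Again, the function name and parameters are assumptions used for illustration.

```python
def screen_to_gpu_calibrated(sx, sy, sw, sh, screen_w, screen_h,
                             target_aspect=16.0 / 9.0):
    """Variant for a screen whose resolution differs from the target
    resolution: convert along the long side as before, then compensate the
    short side with the calibration coefficient from the formulas above."""
    calibrate = (1.0 / target_aspect) / (screen_w / screen_h)
    gx = sx / screen_w * 2.0 - 1.0
    gy = sy / screen_h * 2.0 - 1.0
    gw = sw / screen_w * 2.0 / calibrate
    gh = sh / screen_h * 2.0
    return gx, gy, gw, gh

# Example: the same kind of rectangle converted for a 4:3 (1024x768) system screen
coords = screen_to_gpu_calibrated(0, 0, 512, 384, 1024, 768)
```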
After the coordinate conversion of the texture data is completed, the texture data is rendered to the buffer area according to the converted coordinates, generating the live co-streaming picture. Finally, the live co-streaming picture is video-encoded and encapsulated in a streaming format, pushed to the live broadcast server through the Real Time Messaging Protocol (RTMP), and distributed by the live broadcast server to the viewer clients, which obtain the live co-streaming picture through a live streaming protocol.
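The patent does not specify the encoder or the push pipeline; as a common stand-in, raw frames can be piped into ffmpeg for H.264 encoding and RTMP push. The flags, resolution, frame rate and URL below are assumptions for illustration only.

```python
import subprocess

def start_rtmp_pusher(rtmp_url, width=1280, height=720, fps=30):
    """Launch an ffmpeg process that reads raw RGB frames from stdin,
    encodes them with x264 and pushes an FLV stream over RTMP."""
    cmd = [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                      # raw frames arrive on stdin
        "-c:v", "libx264", "-preset", "veryfast",
        "-f", "flv", rtmp_url,
    ]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)

# pusher = start_rtmp_pusher("rtmp://example.com/live/stream_key")
# pusher.stdin.write(frame_bytes)  # one composed live co-streaming frame per write
```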
In addition, after the original co-streaming texture data and local texture data are copied, a synchronization relationship is established between the original co-streaming texture data and the copied co-streaming texture data, and a synchronization relationship is established between the original local texture data and the copied local texture data, so that the texture data can be multiplexed. The synchronization relationship means that if the texture data copied from the previous frame has not yet been rendered when the original texture data of a new frame arrives, the copy from the previous frame is simply dropped, and the original texture data of the new frame is copied and rendered instead. Taking the co-streaming texture data as an example, let the original co-streaming texture data be A and the copied co-streaming texture data be A'; the original co-streaming texture data of the previous frame is A(n-1), the co-streaming texture data of the new frame is A(n), the copy of the previous frame is A'(n-1), and the copy of the new frame is A'(n). If A(n) arrives while A'(n-1) has not yet been rendered to the buffer area, A'(n-1) is dropped, A(n) is copied directly to obtain A'(n), and A'(n) is rendered to the buffer area. By establishing this synchronization relationship between the original texture data and the copied texture data, the invention guarantees the timeliness and effectiveness of rendering, improves processing efficiency, and keeps the live co-streaming picture fluent.
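The drop-stale-copy behaviour amounts to a one-slot "latest copy wins" buffer per texture stream. The sketch below mirrors that behaviour in plain Python; the class name, the use of bytes as a stand-in for a GPU texture copy, and the locking scheme are all assumptions.

```python
import threading

class LatestCopySlot:
    """Holds at most one copied texture per stream; publishing a new frame
    discards any unrendered stale copy, as in the synchronization relationship."""

    def __init__(self):
        self._lock = threading.Lock()
        self._copy = None

    def publish(self, original_texture: bytes):
        with self._lock:
            # Overwrite (i.e. drop) the copy of the previous frame if it is
            # still waiting to be rendered.
            self._copy = bytes(original_texture)

    def take_for_render(self):
        with self._lock:
            copy, self._copy = self._copy, None
            return copy

# One slot for the co-streaming texture stream and one for the local stream.
costream_slot, local_slot = LatestCopySlot(), LatestCopySlot()
```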
Based on the same inventive concept, a second embodiment of the present invention further provides a co-streaming live broadcast device, applied to an anchor client comprising an image processor. As shown in FIG. 5, the device comprises:
an acquisition module 21, configured to receive co-streaming video data sent by the co-streaming client and to locally collect local video data;
a texture processing module 22, configured to perform texture processing on the co-streaming video data and the local video data respectively by means of the image processor, correspondingly generating original co-streaming texture data and local texture data;
a synthesis module 23, configured to synthesize the original co-streaming texture data and the local texture data, generate a local co-streaming picture, and play the local co-streaming picture at the anchor client;
a copy-and-render module 24, configured to copy the original co-streaming texture data and the local texture data, render the copied co-streaming texture data and local texture data to a buffer area of the image processor, generate a live co-streaming picture, and push the live co-streaming picture as a live stream;
wherein the copy-and-render module 24 comprises:
a reading unit, configured to read the resolution of the system screen of the anchor client;
a judging unit, configured to judge whether the resolution of the system screen is the same as the target resolution corresponding to the coordinate system of the image processor;
a processing unit, configured to, if they are the same, convert the coordinates of the copied co-streaming texture data and local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor by proportional scaling, and, if they are different, convert the coordinates of the copied co-streaming texture data and local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor by converting along the long side and then compensating the short side;
and a rendering unit, configured to render the co-streaming texture data and the local texture data to the buffer area according to the converted coordinates.
Preferably, the synthesis module 23 comprises:
a local texture data rendering unit, configured to render the original local texture data by means of the image processor to obtain a first layer;
a co-streaming texture data rendering unit, configured to render the original co-streaming texture data on the first layer by means of the image processor to obtain an adjustable second layer;
and a synthesis unit, configured to synthesize the first layer and the second layer to obtain the local co-streaming picture.
Preferably, the device further comprises:
a first synchronization module, configured to establish a synchronization relationship between the original co-streaming texture data and the copied co-streaming texture data;
and a second synchronization module, configured to establish a synchronization relationship between the original local texture data and the copied local texture data.
Based on the same inventive concept, a third embodiment of the present invention also provides a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the method steps described in the foregoing first embodiment.
Based on the same inventive concept, a fourth embodiment of the present invention further provides a computer device, as shown in FIG. 6. For convenience of explanation, only the parts relevant to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present invention. The computer device may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example of the computer device:
FIG. 6 is a block diagram showing the part of the structure of the computer device that is relevant to the embodiment of the present invention. Referring to FIG. 6, the computer device includes: a memory 61 and a processor 62. Those skilled in the art will appreciate that the structure shown in FIG. 6 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The constituent elements of the computer device are described in detail below with reference to FIG. 6:
the memory 61 may be used to store software programs and modules, and the processor 62 performs various functional applications and data processing by running the software programs and modules stored in the memory 61. The memory 61 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebooks, etc.), etc. In addition, the memory 61 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 62 is a control center of the computer device, and performs various functions and processes data by running or executing software programs and/or modules stored in the memory 61, and calling data stored in the memory 61. Optionally, the processor 62 may include one or more processing units; preferably, the processor 62 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications.
In an embodiment of the present invention, the processor 62 included in the computer device may have functions corresponding to the steps of any of the methods in the first embodiment.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a construction of such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in accordance with embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.

Claims (10)

1. A co-streaming live broadcast method, characterized in that it is applied to an anchor client comprising an image processor, said method comprising:
receiving co-streaming video data sent by a co-streaming client, and locally collecting local video data;
performing texture processing on the co-streaming video data and the local video data respectively by means of the image processor, correspondingly generating original co-streaming texture data and local texture data;
synthesizing the original co-streaming texture data and the local texture data to generate a local co-streaming picture, and playing the local co-streaming picture at the anchor client;
copying the original co-streaming texture data and the local texture data, rendering the copied co-streaming texture data and local texture data to a buffer area of the image processor to generate a live co-streaming picture, and pushing the live co-streaming picture as a live stream;
wherein said rendering the copied co-streaming texture data and local texture data to the buffer area of the image processor comprises:
reading the resolution of the system screen of the anchor client;
judging whether the resolution of the system screen is the same as the target resolution corresponding to the coordinate system of the image processor;
if they are the same, converting the coordinates of the copied co-streaming texture data and local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor by proportional scaling; if they are different, converting the coordinates of the copied co-streaming texture data and local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor by converting along the long side and then compensating the short side;
and rendering the co-streaming texture data and the local texture data to the buffer area according to the converted coordinates.
2. The method of claim 1, wherein synthesizing the original co-streaming texture data and the local texture data to generate the local co-streaming picture comprises:
rendering the original local texture data by using the image processor to obtain a first layer;
rendering the original co-streaming texture data on the first layer by using the image processor to obtain an adjustable second layer; and
synthesizing the first layer and the second layer to obtain the local co-streaming picture.
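As a rough illustration of the layer order in claim 2 (the local texture as the bottom layer, the co-streaming texture as an adjustable overlay drawn on top), here is a small C++ sketch; the Layer record and the normalized placement values are assumptions made for the example, not the patent's data structures.

#include <cstdio>
#include <string>
#include <vector>

// Illustrative layer record: which texture it shows and where it is drawn,
// in normalized picture coordinates (0..1).
struct Layer {
    std::string texture;  // "local" or "co-streaming"
    float x, y, w, h;     // placement; the second layer is adjustable
};

// Claim 2's order: the local texture forms the first (bottom) layer and the
// co-streaming texture is drawn over it as an adjustable second layer.
std::vector<Layer> buildLocalPicture(float overlayX, float overlayY,
                                     float overlayW, float overlayH) {
    return {
        {"local",        0.0f, 0.0f, 1.0f, 1.0f},                  // first layer, full frame
        {"co-streaming", overlayX, overlayY, overlayW, overlayH},  // second layer, adjustable
    };
}

int main() {
    for (const Layer& l : buildLocalPicture(0.6f, 0.6f, 0.35f, 0.35f))
        std::printf("draw %-12s at (%.2f, %.2f) size %.2f x %.2f\n",
                    l.texture.c_str(), l.x, l.y, l.w, l.h);
}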
3. The method of claim 1, wherein after copying the original co-streaming texture data and the local texture data, the method further comprises:
establishing a synchronization relationship between the original co-streaming texture data and the copied co-streaming texture data; and
establishing a synchronization relationship between the original local texture data and the copied local texture data.
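Claim 3 only states that a synchronization relationship is kept between each original texture and its copy; one way to read this is a per-frame update that refreshes every copy from its original, as in the hypothetical C++ sketch below (TexturePair and syncCopies are illustrative names, and the actual copy mechanism is not specified by the claim).

#include <vector>

struct GpuTexture { int id = 0; long long frameStamp = 0; };

// Hypothetical record tying an original texture to its copy.
struct TexturePair {
    GpuTexture* original;
    GpuTexture* copy;
};

// Refresh every copy from its original so the pushed picture never lags the
// local preview.
void syncCopies(std::vector<TexturePair>& pairs) {
    for (TexturePair& p : pairs) {
        p.copy->frameStamp = p.original->frameStamp;
        // ...a real implementation would also copy the pixel contents here...
    }
}

int main() {
    GpuTexture origRemote, copyRemote, origLocal, copyLocal;
    std::vector<TexturePair> pairs = {{&origRemote, &copyRemote},
                                      {&origLocal,  &copyLocal}};
    origRemote.frameStamp = origLocal.frameStamp = 42;  // a new frame arrived
    syncCopies(pairs);
}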
4. The method of claim 1, wherein converting the coordinates of the co-streaming texture data and the local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor according to the proportional scaling conversion uses the following formulas:
Gx=Sx/SW*2.0-1.0;
Gy=Sy/SH*2.0-1.0;
Gw=Sw/SW*2.0;
Gh=Sh/SH*2.0;
wherein Gx and Gy are the abscissa and ordinate of the vertices of the co-streaming texture data and the local texture data in the coordinate system of the image processor, Sx and Sy are the abscissa and ordinate of those vertices in the screen coordinate system, SW and SH are the width and height of the system screen in the screen coordinate system, and Sw and Sh are the width and height of the co-streaming texture data and the local texture data in the screen coordinate system.
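Taken at face value, the claim-4 formulas map a rectangle given in screen pixels into the image processor's coordinate system, whose axes run from -1 to 1 (the formulas match OpenGL-style normalized device coordinates, though the claim does not name a specific graphics API). A direct C++ transcription, with the struct names chosen here only for the example:

#include <cstdio>

// Screen-space rectangle of a texture (pixels) and the rectangle it maps to
// in the image processor's coordinate system.
struct ScreenRect { double Sx, Sy, Sw, Sh; };
struct GpuRect    { double Gx, Gy, Gw, Gh; };

// Proportional scaling conversion of claim 4: SW and SH are the screen width
// and height in pixels; the output axes run from -1 to 1.
GpuRect ToGpuProportional(const ScreenRect& r, double SW, double SH) {
    GpuRect g;
    g.Gx = r.Sx / SW * 2.0 - 1.0;
    g.Gy = r.Sy / SH * 2.0 - 1.0;
    g.Gw = r.Sw / SW * 2.0;
    g.Gh = r.Sh / SH * 2.0;
    return g;
}

int main() {
    // Example: a 960x540 co-streaming texture placed at (960, 0) on a 1920x1080 screen.
    GpuRect g = ToGpuProportional({960, 0, 960, 540}, 1920, 1080);
    std::printf("Gx=%.2f Gy=%.2f Gw=%.2f Gh=%.2f\n", g.Gx, g.Gy, g.Gw, g.Gh);
    // Prints Gx=0.00 Gy=-1.00 Gw=1.00 Gh=1.00: the rectangle starts at the
    // horizontal centre and one vertical edge, spanning half of each axis range.
}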
5. The method of claim 1, wherein converting the coordinates of the co-streaming texture data and the local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor by converting the long side and then compensating the short side uses the following formulas:
Calibrate=(1/A)/(SW/SH);
Gx=Sx/SW*2.0-1.0;
Gy=Sy/SH*2.0-1.0;
Gw=Sw/SW*2.0/Calibrate;
Gh=Sh/SH*2.0;
wherein Calibrate is a calibration coefficient, A is the target resolution, SW and SH are the width and height of the system screen in the screen coordinate system, Gx and Gy are the abscissa and ordinate of the vertices of the co-streaming texture data and the local texture data in the coordinate system of the image processor, Sx and Sy are the abscissa and ordinate of those vertices in the screen coordinate system, and Sw and Sh are the width and height of the co-streaming texture data and the local texture data in the screen coordinate system.
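The claim-5 variant converts along the long side and then compensates the width by the calibration coefficient. A direct C++ transcription follows; reading A as the target resolution expressed as an aspect ratio (16/9 per claim 6) is an interpretation of the claim wording, and the struct names are again only for the example.

#include <cstdio>

struct ScreenRect { double Sx, Sy, Sw, Sh; };
struct GpuRect    { double Gx, Gy, Gw, Gh; };

// "Convert the long side, then compensate the short side" conversion of claim 5.
// SW and SH are the screen width and height in pixels; A is the target
// resolution taken here as width over height (e.g. 16.0 / 9.0 for 16:9).
GpuRect ToGpuWithCalibration(const ScreenRect& r, double SW, double SH, double A) {
    const double Calibrate = (1.0 / A) / (SW / SH);  // calibration coefficient
    GpuRect g;
    g.Gx = r.Sx / SW * 2.0 - 1.0;
    g.Gy = r.Sy / SH * 2.0 - 1.0;
    g.Gw = r.Sw / SW * 2.0 / Calibrate;              // width compensated by Calibrate
    g.Gh = r.Sh / SH * 2.0;
    return g;
}

int main() {
    // Example: a 1440x1080 (4:3) screen whose content is pushed to a 16:9 target.
    GpuRect g = ToGpuWithCalibration({0, 0, 720, 540}, 1440, 1080, 16.0 / 9.0);
    std::printf("Gx=%.3f Gy=%.3f Gw=%.3f Gh=%.3f\n", g.Gx, g.Gy, g.Gw, g.Gh);
}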
6. The method of claim 1, wherein the target resolution is 16:9.
7. A live co-streaming device, characterized in that it is applied to an anchor terminal comprising an image processor, the device comprising:
an acquisition module, used for receiving co-streaming video data sent by a co-streaming terminal and locally collecting local video data;
a texture processing module, used for respectively performing texture processing on the co-streaming video data and the local video data by using the image processor, and correspondingly generating original co-streaming texture data and local texture data;
a synthesis module, used for synthesizing the original co-streaming texture data and the local texture data, generating a local co-streaming picture, and playing the local co-streaming picture at the anchor terminal;
a copying and rendering module, used for copying the original co-streaming texture data and the local texture data, rendering the copied co-streaming texture data and local texture data to a buffer of the image processor, generating a live co-streaming picture, and pushing the live co-streaming picture as a live stream;
wherein the copying and rendering module comprises:
a reading unit, used for reading the resolution of the system screen of the anchor terminal;
a judging unit, used for judging whether the resolution of the system screen is the same as a target resolution corresponding to the coordinate system of the image processor;
a processing unit, used for converting the coordinates of the copied co-streaming texture data and local texture data in the screen coordinate system of the system screen into coordinates in the coordinate system of the image processor according to a proportional scaling conversion if they are the same, and by converting the long side and then compensating the short side if they are different; and
a rendering unit, used for rendering the co-streaming texture data and the local texture data to the buffer according to the converted coordinates.
8. The apparatus of claim 7, wherein the synthesis module comprises:
a local texture data rendering unit, used for rendering the original local texture data by using the image processor to obtain a first layer;
a co-streaming texture data rendering unit, used for rendering the original co-streaming texture data on the first layer by using the image processor to obtain an adjustable second layer; and
a synthesis unit, used for synthesizing the first layer and the second layer to obtain the local co-streaming picture.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method steps of any one of claims 1-6.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method steps of any of claims 1-6 when the program is executed.
CN202010581753.1A 2020-06-23 2020-06-23 Method and device for live broadcast of wheat Active CN113840170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010581753.1A CN113840170B (en) 2020-06-23 2020-06-23 Method and device for live broadcast of wheat

Publications (2)

Publication Number Publication Date
CN113840170A CN113840170A (en) 2021-12-24
CN113840170B (en) 2023-06-16

Family

ID=78964124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010581753.1A Active CN113840170B (en) 2020-06-23 2020-06-23 Method and device for live broadcast of wheat

Country Status (1)

Country Link
CN (1) CN113840170B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023245495A1 (en) * 2022-06-22 2023-12-28 云智联网络科技(北京)有限公司 Method and apparatus for converting rendered data into video stream, and electronic device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106534954A (en) * 2016-12-19 2017-03-22 广州虎牙信息科技有限公司 Information interaction method and device based on live broadcast video streams and terminal device
WO2018121556A1 (en) * 2016-12-27 2018-07-05 北京奇虎科技有限公司 Live broadcast data processing method, apparatus, program and medium
CN108848391A (en) * 2018-06-21 2018-11-20 深圳市思迪信息技术股份有限公司 The more people Lian Mai method and devices of net cast
CN109257618A (en) * 2018-10-17 2019-01-22 北京潘达互娱科技有限公司 Company wheat interflow method, apparatus and server in a kind of live streaming
CN109302617A (en) * 2018-10-19 2019-02-01 武汉斗鱼网络科技有限公司 A kind of video of specified multielement connects wheat method, apparatus, equipment and storage medium
CN109618191A (en) * 2018-12-17 2019-04-12 广州市百果园信息技术有限公司 Live streaming connects wheat method, apparatus, computer readable storage medium and terminal
CN109688419A (en) * 2018-12-27 2019-04-26 北京潘达互娱科技有限公司 Company's wheat method, apparatus and server in a kind of live streaming
CN109729379A (en) * 2019-02-01 2019-05-07 广州虎牙信息科技有限公司 Live video connects implementation method, device, terminal and the storage medium of wheat
CN109756744A (en) * 2017-11-02 2019-05-14 腾讯科技(深圳)有限公司 Data processing method, electronic equipment and computer storage medium
CN110958464A (en) * 2019-12-11 2020-04-03 北京达佳互联信息技术有限公司 Live broadcast data processing method and device, server, terminal and storage medium
CN111050185A (en) * 2018-10-15 2020-04-21 武汉斗鱼网络科技有限公司 Live broadcast room wheat-connected video mixing method, storage medium, electronic equipment and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100128802A1 (en) * 2008-11-24 2010-05-27 Yang-Hung Shih Video processing ciucuit and related method for merging video output streams with graphical stream for transmission
CN106331850B (en) * 2016-09-18 2020-01-24 上海幻电信息科技有限公司 Browser live broadcast client, browser live broadcast system and browser live broadcast method
US10818033B2 (en) * 2018-01-18 2020-10-27 Oath Inc. Computer vision on broadcast video


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant