CN111479162B - Live data transmission method and device and computer readable storage medium


Info

Publication number
CN111479162B
Authority
CN
China
Prior art keywords
image
video
live
images
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010265313.5A
Other languages
Chinese (zh)
Other versions
CN111479162A (en)
Inventor
陈文琼 (Chen Wenqiong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Kugou Business Incubator Management Co ltd
Original Assignee
Chengdu Kugou Business Incubator Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Kugou Business Incubator Management Co ltd
Priority to CN202010265313.5A
Publication of CN111479162A
Application granted
Publication of CN111479162B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167Position within a video image, e.g. region of interest [ROI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4621Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/631Multimode Transmission, e.g. transmitting basic layers and enhancement layers of the content over different transmission paths or transmitting with different error corrections, different keys or with different transmission protocols

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The disclosure provides a live data transmission method and apparatus, and a computer-readable storage medium, belonging to the field of webcasting. The method includes: acquiring live video data comprising multiple frames of video images; segmenting each frame of video image to obtain a middle area image and a pair of edge area images located on two opposite sides of the middle area image; generating two live video streams based on the middle area image and the pair of edge area images, the two streams comprising a first live video stream, which contains the middle area image of each frame of video image together with a first identifier, and a second live video stream, which contains the pair of edge area images of each frame together with a second identifier, the first and second identifiers of the same frame of video image being associated; and sending the two live video streams. The portrait picture need not be transmitted twice, which reduces the amount of data transmitted during dual-stream live broadcasting.

Description

Live data transmission method and device and computer readable storage medium
Technical Field
The present disclosure relates to the field of network live broadcast, and in particular, to a live broadcast data transmission method and apparatus, and a computer-readable storage medium.
Background
With the development of internet technology, webcasting has absorbed and extended the advantages of the internet: video and audio data are captured on site by stand-alone signal acquisition equipment (such as cameras and microphones), uploaded to a server over the network, and distributed to websites or receiving clients for people to watch.
Viewers can watch webcasts on mobile devices such as phones and tablets, and these devices play the live picture in one of two modes: portrait or landscape. To serve both modes, live platforms currently use a dual-stream method: the anchor's streaming client pushes two live streams to the server at the same time. One pushed stream is the landscape picture, meaning the complete live picture; the other pushed stream is the portrait picture, meaning the middle region cropped out of the complete live picture. The mobile device fetches the landscape or portrait picture from the server according to its play mode.
In implementing the present disclosure, the inventors found that the related art has at least the following problem: current dual-stream live broadcasting pushes the middle-region portrait picture redundantly (it is sent both on its own and inside the landscape picture), so the amount of transmitted data is large.
Disclosure of Invention
The embodiments of the present disclosure provide a live data transmission method and apparatus and a computer-readable storage medium, which can reduce the amount of data transmitted when pushing streams in dual-stream live broadcasting. The technical solution is as follows:
In one aspect, an embodiment of the present disclosure provides a live data transmission method, the method including: acquiring live video data, the live video data comprising multiple frames of video images; segmenting each frame of video image to obtain a middle area image and a pair of edge area images respectively located on two opposite sides of the middle area image; generating two live video streams based on the middle area image and the pair of edge area images of each frame of video image, the two live video streams including a first live video stream and a second live video stream, the first live video stream including the middle area image of each frame of video image and a first identifier corresponding to the middle area image, the second live video stream including the pair of edge area images of each frame of video image and a second identifier corresponding to the pair of edge area images, and the first identifier and the second identifier corresponding to the same frame of video image being associated; and sending the two live video streams.
In some embodiments of the present disclosure, the video image is a landscape image, and segmenting each frame of video image to obtain a middle area image and a pair of edge area images includes:
segmenting the landscape image based on the aspect ratio of the landscape image to obtain a middle area image and a pair of edge area images, where the middle area image is centered on the vertical center line of the landscape image, the width of the middle area image is determined based on the aspect ratio of the landscape image, and the height of the middle area image is the same as the height of the landscape image.
In some embodiments of the present disclosure, generating two live video streams based on the middle region image and the pair of edge region images includes: encoding image data of the middle area image of each frame of the video image to obtain a plurality of first image frames; adding a first identifier at the end of the first image frame; encoding image data of the pair of edge region images of each frame of the video image to obtain a plurality of second image frames; and adding a second identifier at the end of the second image frame.
In some embodiments of the present disclosure, encoding the image data of the middle area image of each frame of the video image includes: encoding the image data of the middle area image of each frame of the video image at a first code rate; and encoding the image data of the pair of edge area images of each frame of the video image includes: encoding the image data of the pair of edge area images of each frame of the video image at a second code rate, where the first code rate is greater than the second code rate.
In another aspect, an embodiment of the present disclosure provides a live data transmission method, including: determining a play mode of a terminal, the play mode being a landscape play mode or a portrait play mode; and acquiring, based on the play mode of the terminal, a first live video stream, or the first live video stream and a second live video stream, where the first live video stream includes image data of a plurality of middle area images and first identifiers corresponding to the middle area images, the second live video stream includes image data of a plurality of pairs of edge area images and second identifiers corresponding to the edge area images, and the middle area image and the pair of edge area images whose first and second identifiers are associated are obtained by segmenting the same frame of video image.
In some embodiments of the present disclosure, after obtaining the first live video stream and the second live video stream, the method further comprises:
decoding the first live video stream to obtain image data of a middle area image and a first identifier corresponding to the middle area image;
decoding the second live video stream to obtain image data of a pair of edge area images and a second identifier corresponding to the pair of edge area images;
and combining the middle area image and the pair of edge area images whose first identifier and second identifier are associated.
In another aspect, an embodiment of the present disclosure provides a live data transmission apparatus, including: an image acquisition module configured to acquire live video data, the live video data comprising multiple frames of video images; an image processing module configured to segment each frame of video image to obtain a middle area image and a pair of edge area images respectively located on two opposite sides of the middle area image; a video generation module configured to generate two live video streams based on the middle area image and the pair of edge area images of each frame of video image, the two live video streams including a first live video stream and a second live video stream, the first live video stream including the middle area image of each frame of video image and a first identifier corresponding to the middle area image, the second live video stream including the pair of edge area images of each frame of video image and a second identifier corresponding to the pair of edge area images, and the first identifier and the second identifier corresponding to the same frame of video image being associated; and a video sending module configured to send the two live video streams.
In another aspect, an embodiment of the present disclosure provides a live data transmission apparatus, including: a mode determining module configured to determine a play mode of a terminal, the play mode being a landscape play mode or a portrait play mode; and a video acquisition module configured to acquire, based on the play mode of the terminal, a first live video stream, or the first live video stream and a second live video stream, where the first live video stream includes image data of a plurality of middle area images and first identifiers corresponding to the middle area images, the second live video stream includes image data of a plurality of pairs of edge area images and second identifiers corresponding to the edge area images, and the middle area image and the pair of edge area images whose first and second identifiers are associated are obtained by segmenting the same frame of video image.
In another aspect, an embodiment of the present disclosure provides a computer device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to perform the live data transmission method of either of the above aspects.
In another aspect, the present disclosure provides a computer-readable storage medium storing at least one instruction, the instruction being loaded and executed by a processor to implement the live data transmission method of either of the above aspects.
The beneficial effects brought by the technical solution provided by the embodiments of the present disclosure at least include the following: each frame of video image in the live video data is segmented into a middle area image and a pair of edge area images, and the image data of the middle area image and of the edge area images are then sent in two separate live video streams; because the two streams carry no duplicate image data, the amount of data transmitted during dual-stream live broadcasting is reduced.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present disclosure, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a live broadcast method provided by an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a live data transmission method according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a live data transmission method according to another embodiment of the present disclosure;
FIG. 4 is a schematic view of a landscape screen provided by an embodiment of the present disclosure;
FIG. 5 is a schematic view of a portrait screen provided by an embodiment of the present disclosure;
fig. 6 is a schematic flow chart of a live data transmission method according to another embodiment of the present disclosure;
fig. 7 is a schematic flow chart of a live data transmission method according to another embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a live data transmission apparatus provided in an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a live data transmission apparatus according to another embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a computer device according to another embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario of a live broadcast method provided by an embodiment of the present disclosure. As shown in fig. 1, the application scenario includes a first terminal 1, a server 2, and a second terminal 3, and the first terminal 1, the server 2, and the second terminal 3 may be connected through a network.
The first terminal 1 may be a terminal used by a host user, and the first terminal 1 is configured to push the live stream to the server 2, so that the server 2 sends the live stream to the second terminal 3. The first terminal 1 may include an image capturing device (e.g., a camera, etc.) and a sound capturing device (e.g., a microphone, etc.), the image capturing device is configured to capture a video, the sound capturing device is configured to capture an audio, and the first terminal 1 may synthesize the video captured by the image capturing device and the audio captured by the sound capturing device to obtain a live stream. As an example, the first terminal 1 may be a Personal Computer (PC), a mobile phone, a tablet Computer, or the like.
The server 2 may be a streaming media server, configured to receive the live stream of the first terminal 1 and send it to the second terminal 3. The server 2 may be a single server, a collection of servers, or a computing center. In some possible embodiments, the server may belong to a Content Delivery Network (CDN).
The second terminal 3 may be a terminal used by the viewer user. The second terminal 3 may present live content. As an example, the second terminal may be a PC, a mobile phone, a tablet computer, or the like.
Fig. 2 is a flowchart illustrating a live data transmission method according to an embodiment of the present disclosure, where the method may be executed by a terminal, for example, by the first terminal 1 in fig. 1. Referring to fig. 2, the live data transmission method includes the steps of:
in S201, live video data is acquired.
In an embodiment of the present disclosure, the live video data includes a plurality of frames of video images.
Illustratively, the first terminal may capture live video data through a camera.
In S202, each frame of video image is divided to obtain a middle area image and a pair of edge area images.
In the disclosed embodiment, for the same frame of video image, a pair of edge area images are located on opposite sides of a middle area image.
Illustratively, each frame of video image is a landscape image displayed by the second terminal, and the middle area image segmented from each frame of video image is a portrait image displayed by the second terminal; when playing live content in the landscape play mode, the second terminal can combine the middle area image and the pair of edge area images corresponding to one frame of video image back into one complete frame of video image.
In S203, two live video streams are generated based on the middle region image and the pair of edge region images of each frame of video image, where the two live video streams include a first live video stream and a second live video stream.
In the embodiment of the disclosure, the first live video stream includes image data of a middle area image of each frame of video image and a first identifier corresponding to the middle area image, the second live video stream includes image data of a pair of edge area images of each frame of video image and a second identifier corresponding to the pair of edge area images, and the first identifier and the second identifier corresponding to the same frame of video image are associated. When necessary, the terminal can synthesize the middle area image and the edge area image with the associated first identifier and second identifier to obtain a complete video image for playing.
In the embodiment of the present disclosure, the first identifier and the second identifier being associated may mean that they are identical; alternatively, they may merely correspond to each other, as long as the middle area image and the pair of edge area images obtained by segmenting the same frame of video image can be matched through the first identifier and the second identifier.
In S204, two live video streams are sent.
In the embodiment of the present disclosure, the first terminal sends two live video streams to the server, and this process may also be referred to as dual-stream push.
Each frame of video image in the live video data is segmented into a middle area image and a pair of edge area images, and the image data of the middle area image and of the edge area images are then sent in two separate live video streams. Because the two streams carry no duplicate image data, the amount of data transmitted during dual-stream live broadcasting is reduced.
Fig. 3 is a flowchart illustrating a live data transmission method according to another embodiment of the present disclosure, where the method may be executed by a terminal, for example, by the first terminal 1 in fig. 1. Referring to fig. 3, the live data transmission method includes the following steps:
in S301, live video data is acquired.
In an embodiment of the present disclosure, the live video data includes a plurality of frames of video images.
Illustratively, the frames of video images may contain the anchor's figure or another target object. In most cases, the anchor's figure or other target object is located in the middle region of the image during live broadcasting.
In S302, the landscape image is divided based on the aspect ratio of the landscape image to obtain a middle area image and a pair of edge area images.
In the embodiment of the present disclosure, the image to be segmented is a landscape image, that is, an image whose width is greater than its height (an aspect ratio greater than 1). The middle area image obtained after segmentation is centered on the vertical center line of the landscape image; the width of the middle area image is determined based on the aspect ratio, and the height of the middle area image is the same as the height of the landscape image. The middle area image obtained by the segmentation is a portrait image, that is, an image whose width is smaller than its height (an aspect ratio smaller than 1).
Illustratively, the step S302 may include:
taking the height of the landscape image as the height of the portrait image;
determining the width of the portrait image according to the height of the portrait image and the aspect ratio of the portrait image;
calculating the distance from each dividing line of the landscape image to the vertical center line of the landscape image according to the width of the portrait image, where the dividing lines are parallel to the vertical center line and the distance is one half of the width of the portrait image;
and segmenting the landscape image according to the distance from the dividing lines to the vertical center line, where the image between the two dividing lines is the middle area image and the images on the outer sides of the two dividing lines are the edge area images.
The segmentation of a video image is described below using a landscape image with an aspect ratio of 16:9 as an example. Webcasting generally treats a video image with an aspect ratio of 16:9 as a landscape image and a video image with an aspect ratio of 9:16 as a portrait image, where the portrait image is obtained by segmenting the landscape image, that is, the portrait image is the middle-region image of the landscape image.
As shown in fig. 4, the dot-dash line is the vertical center line 402 of the landscape image, and the broken lines 401 are the dividing lines of the landscape image; the length H along the direction in which lines 401 and 402 extend is the height of the landscape image, and the length W2 in the direction perpendicular to them is the width of the landscape image. Fig. 5 shows the portrait image; the dot-dash line 501 in fig. 5 is the vertical center line of the portrait image and coincides with the vertical center line 402 of the landscape image. The length H along the extending direction of line 501 is the height of the portrait image, and the length W1 in the direction perpendicular to it is the width of the portrait image.
To ensure that the portrait image cut from the landscape image has an aspect ratio of 9:16, that is, W1:H = 9:16, the ratio of the width W1 of the portrait image to the width W2 of the landscape image works out to 81:256. The size of the middle area image (that is, the portrait image) is thus determined from the size of the landscape video image. According to this size, the first terminal segments the middle area image symmetrically about the vertical center line of the video image, and the left and right edge regions of the video image serve as the pair of edge area images.
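As an illustrative sketch only (the patent does not prescribe an implementation), this segmentation can be written in a few lines of Python; the 1280×720 frame size, the numpy height × width × channels layout, and the integer rounding of the dividing lines are assumptions:

```python
import numpy as np

def split_landscape_frame(frame: np.ndarray):
    """Split a landscape frame (H x W2 x C) into a 9:16 middle-area image
    and a pair of edge-area images, as in S302."""
    height, full_width = frame.shape[:2]
    middle_width = round(height * 9 / 16)   # portrait width W1 from H and 9:16
    x0 = (full_width - middle_width) // 2   # left dividing line
    x1 = x0 + middle_width                  # right dividing line
    # The middle image is centered on the vertical center line
    # (off by at most one pixel after integer rounding).
    return frame[:, x0:x1], frame[:, :x0], frame[:, x1:]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)      # 16:9 landscape image
middle, left_edge, right_edge = split_landscape_frame(frame)
assert middle.shape[1] == 405                         # 720 * 9 / 16 = 405 (9:16)
```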
In S303, image data of a middle region image of each frame of video image is encoded, resulting in a plurality of first image frames.
In the embodiment of the disclosure, the first terminal encodes image data of a middle area image of each frame of video image through a video encoder to obtain a plurality of first image frames.
Illustratively, the video encoder may encode in the H.264 or H.265 encoding format.
In S304, a first identifier is added to the end of the first image frame.
In the embodiment of the present disclosure, a first image frame generally includes a frame header, a command frame, data, a check code, and a frame end. The first terminal adds Supplemental Enhancement Information (SEI) to the frame-end portion of the first image frame through the video encoder, and the SEI carries the first identifier.
Alternatively, the first identifier may be a monotonically increasing timestamp or index number; for example, if the index number of one first image frame is "000000025", the index number of the next first image frame is "000000026". Because the video images of different frames are played at different times, the timestamps or index numbers of the first image frames corresponding to different frames differ. When the second terminal needs to play the portrait picture, it can, after acquiring the first image frames, play them in sequence according to the timestamp or index number at the frame end of each first image frame.
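A simplified sketch of appending such an identifier as SEI is given below: it builds an H.264 user_data_unregistered SEI NAL unit (payload type 5) in Annex B byte-stream form. The UUID is a made-up placeholder, and the emulation-prevention bytes that a compliant encoder must insert are omitted:

```python
import uuid

# Hypothetical application-defined UUID tagging "frame identifier" payloads.
FRAME_ID_UUID = uuid.UUID("12345678-1234-5678-1234-567812345678")

def frame_id_sei(index: int) -> bytes:
    """Build a user_data_unregistered SEI NAL unit carrying a 9-digit,
    zero-padded index such as b"000000025" (emulation prevention omitted)."""
    payload = FRAME_ID_UUID.bytes + f"{index:09d}".encode("ascii")
    assert len(payload) < 255                    # one payload_size byte suffices
    nal = bytes([0x06, 0x05, len(payload)]) + payload + b"\x80"
    return b"\x00\x00\x00\x01" + nal             # Annex B start code + SEI NAL

encoded_middle_frame = b"..."                    # stand-in for encoder output
first_image_frame = encoded_middle_frame + frame_id_sei(25)
```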
In S305, image data of a pair of edge region images of each frame of video image is encoded, resulting in a plurality of second image frames.
In the embodiment of the disclosure, the first terminal encodes, by the video encoder, image data of a pair of edge region images of each frame of video image, to obtain a plurality of second image frames.
Illustratively, encoding the image data of a pair of edge region images and encoding the image data of a middle region image employ the same encoding format.
In S306, a second identifier is added to the end of the second image frame.
In the embodiment of the present disclosure, a second image frame likewise generally includes a frame header, a command frame, data, a check code, and a frame end. The first terminal adds SEI to the frame-end portion of the second image frame through the video encoder, and the SEI carries the second identifier. The first identifier and the second identifier are the same type of identification information; that is, if the first identifier is an index number, the second identifier is also an index number, and if the first identifier is a timestamp, the second identifier is also a timestamp.
Because each frame of video image is segmented into the middle area image and the two edge area images, the first identifier in the first image frame and the second identifier in the second image frame corresponding to the same frame of video image are the same, indicating that the two images come from that frame. Because different frames are played at different times, the first and second identifiers corresponding to different frames of video images differ.
For example, when the first identifier and the second identifier are both timestamps, suppose the playing time corresponding to a certain frame of video image is "2020-03-30 18:56:40"; this playing time can be converted into the timestamp "1585565800000" by a timestamp conversion tool. The timestamps of the middle area image and the pair of edge area images corresponding to that frame are then both "1585565800000", that is, the first identifier of the middle area image and the second identifier of the pair of edge area images are both "1585565800000". Because different frames of video images are played at different times, the timestamps corresponding to the middle area image and the pair of edge area images differ from frame to frame.
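For reference, the conversion in this example can be reproduced in Python, assuming the playing time is local to UTC+8 (the text does not name a time zone):

```python
from datetime import datetime, timedelta, timezone

play_time = datetime(2020, 3, 30, 18, 56, 40, tzinfo=timezone(timedelta(hours=8)))
print(int(play_time.timestamp() * 1000))   # 1585565800000
```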
When the landscape picture needs to be played, the second terminal can acquire a plurality of first image frames and second image frames, decode each first image frame and second image frame carrying the same identifier and synthesize them into one video image, and then play the video images of the different frames in sequence according to their first or second identifiers.
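As an illustrative sketch (the patent does not prescribe this logic), one way to realize the pairing is a small buffer keyed by identifier, emitting a frame pair once both halves have arrived; `synthesize_and_queue` is a hypothetical helper standing in for decoding, composition, and scheduling:

```python
pending_middle: dict[str, bytes] = {}   # identifier -> first image frame
pending_edges: dict[str, bytes] = {}    # identifier -> second image frame

def on_middle_frame(identifier: str, data: bytes) -> None:
    if identifier in pending_edges:
        synthesize_and_queue(data, pending_edges.pop(identifier), identifier)
    else:
        pending_middle[identifier] = data

def on_edge_frame(identifier: str, data: bytes) -> None:
    if identifier in pending_middle:
        synthesize_and_queue(pending_middle.pop(identifier), data, identifier)
    else:
        pending_edges[identifier] = data

def synthesize_and_queue(middle: bytes, edges: bytes, identifier: str) -> None:
    """Decode both halves, combine them into one landscape video image,
    and schedule it for playback in identifier order."""
    ...  # decoding and composition are sketched separately below
```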
Through steps S303 to S306, two live video streams can be generated based on the middle area image and the pair of edge area images of each frame of video image.
In S307, two live video streams are transmitted.
In the embodiment of the disclosure, the first terminal sends two paths of live video streams to the server.
In the embodiment of the present disclosure, encoding the image data of the middle area image of each frame of video image in step S303 includes: encoding the image data of the middle area image of each frame of video image at a first code rate.
In the embodiment of the present disclosure, encoding the image data of the pair of edge area images of each frame of video image in step S305 includes: encoding the image data of the pair of edge area images of each frame of video image at a second code rate, where the first code rate is greater than the second code rate.
Illustratively, the first terminal encodes the image data of the middle area image of each frame of the video image by a video encoder at a first code rate, wherein the first code rate may be 1200Kb/s to 1500 Kb/s.
Illustratively, the first terminal encodes, through the video encoder, image data of a pair of edge region images of each frame of video image at a second code rate, where the second code rate may be 500Kb/s to 800Kb/s, for example, the first code rate may be 1400Kb/s, the second code rate may be 600Kb/s, and the total code rate for the first terminal to push two paths of live video streams reaches 2000 Kb/s.
In the dual-stream live broadcast scheme of the related art, for video with 720P definition, the terminal encodes both the landscape image data and the portrait image data through a video encoder at a code rate of 1200 Kb/s, so the total code rate of the live video data pushed to the server can reach 2400 Kb/s. Under the same total code rate, with the live data transmission method of the embodiments of the present disclosure, the first code rate used to encode the image data of the middle area image (portrait image) is higher than the second code rate used to encode the image data of the pair of edge area images (for example, a first code rate of 1800 Kb/s and a second code rate of 600 Kb/s). The middle area image is therefore encoded at a higher code rate than in current dual-stream webcasting, which is to say its definition is higher. Because the middle region is the main area observed by human eyes, raising its definition means that, after the middle area image and the pair of left and right edge area images are combined, the definition of the whole picture perceived by the viewer is higher than that of the live video image of current dual-stream webcasting.
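The code-rate bookkeeping behind this comparison can be laid out explicitly; the following Python sketch uses only the figures quoted in the text (the 1.5× observation is simple arithmetic, not a value stated in the patent):

```python
# Figures from the text, in Kb/s.
related_art = {"landscape stream": 1200, "portrait stream": 1200}
disclosed = {"middle stream": 1400, "edge stream": 600}

print(sum(related_art.values()))   # 2400 Kb/s pushed, middle region sent twice
print(sum(disclosed.values()))     # 2000 Kb/s pushed, no duplicated image data

# At the same 2400 Kb/s total budget, the middle stream can instead take
# 2400 - 600 = 1800 Kb/s, i.e. 1.5x the 1200 Kb/s the portrait picture
# receives in the related art, hence the higher perceived definition.
```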
It should be noted that steps S303 to S304 and steps S305 to S306 may be executed in parallel or sequentially; when executed sequentially, their order is not limited.
Each frame of video image in the live video data is segmented into a middle area image and a pair of edge area images, and the image data of the middle area image and of the edge area images are then sent in two separate live video streams. Because the two streams carry no duplicate image data, the amount of data transmitted during dual-stream live broadcasting is reduced.
In addition, the image data of the middle area image is encoded at a first code rate and the image data of the edge area images at a second code rate, where the first code rate is greater than the second code rate, so that the definition of the pushed landscape and portrait pictures can be improved under a limited total code rate.
Fig. 6 is a flowchart illustrating a live data transmission method according to an embodiment of the present disclosure, where the method may be executed by a terminal, for example, the second terminal 3 in fig. 1. Referring to fig. 6, the live data transmission method includes the following steps:
in S601, a play mode of the terminal is determined, where the play mode is a landscape play mode or a portrait play mode.
In the embodiment of the present disclosure, the second terminal may determine whether it is in the landscape play mode or the portrait play mode according to a control instruction input by the user, the control instruction instructing the second terminal 3 to enter the landscape play mode or the portrait play mode.
The control instruction may be input directly through a control option, or by changing the posture of the second terminal.
The second terminal may determine its posture using built-in sensors (such as a gyroscope or an accelerometer) and thereby obtain the corresponding control instruction. For example, when the second terminal is in a landscape posture, that is, its long side is horizontal, a control instruction instructing the second terminal to enter the landscape play mode is obtained, and the play mode of the terminal is accordingly determined to be the landscape play mode.
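A minimal sketch of such a posture test follows; the comparison of accelerometer gravity components is an assumption for illustration (a real client would typically rely on the platform's orientation API):

```python
def play_mode_from_gravity(accel_x: float, accel_y: float) -> str:
    """Infer the play mode from accelerometer gravity components along the
    device's short (x) and long (y) axes. Simplified sketch."""
    # Gravity mostly along the short axis -> the long side is horizontal.
    return "landscape" if abs(accel_x) > abs(accel_y) else "portrait"

assert play_mode_from_gravity(accel_x=9.8, accel_y=0.3) == "landscape"
assert play_mode_from_gravity(accel_x=0.2, accel_y=9.7) == "portrait"
```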
In S602, based on the play mode of the terminal, a first live video stream or a first live video stream and a second live video stream are acquired.
In this embodiment of the present disclosure, a first live video stream includes image data of a plurality of middle area images and a first identifier corresponding to the middle area images, a second live video stream includes image data of a plurality of pairs of edge area images and a second identifier corresponding to the edge area images, and the middle area images and the edge area images corresponding to the associated first identifiers and second identifiers are obtained by dividing the same frame of video image. For details, refer to the first live video stream and the second live video stream in the above embodiments, and details are not repeated here.
Exemplarily, step S602 includes: in response to the second terminal being in the portrait play mode, acquiring the first live video stream from the server; and in response to the second terminal being in the landscape play mode, acquiring the first live video stream and the second live video stream from the server.
Fig. 7 is a flowchart illustrating a live data transmission method according to an embodiment of the present disclosure, where the method may be executed by a terminal, for example, the second terminal 3 in fig. 1. Referring to fig. 7, the live data transmission method includes the steps of:
in S701, a play mode of the terminal is determined, where the play mode is a landscape play mode or a portrait play mode.
See the above description of step S601.
In S702, in response to that the play mode of the terminal is the vertical screen play mode, a first live video stream is acquired from the server.
Illustratively, the method may further include: when the second terminal is in the portrait play mode, after acquiring the first live video stream from the server, decoding the first live video stream to obtain the image data of the middle area images and the first identifiers corresponding to the middle area images, and playing the middle area images in sequence according to the first identifiers.
In S703, in response to that the play mode of the terminal is the landscape play mode, a first live video stream and a second live video stream are acquired from the server.
Illustratively, the method may further include: when the second terminal is in the landscape play mode, after acquiring the first live video stream and the second live video stream from the server, decoding the first live video stream to obtain the middle area images and the first identifiers corresponding to the middle area images; decoding the second live video stream to obtain the pairs of edge area images and the second identifiers corresponding to the pairs of edge area images; and combining the middle area image and the pair of edge area images whose first and second identifiers are associated to obtain a complete frame of video image, the second terminal then playing the frames of video images in sequence.
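As a sketch of this combination step (array sizes follow the 1280×720 example above; the numpy layout is an assumption), horizontal concatenation of the decoded crops restores the complete landscape frame:

```python
import numpy as np

def combine_frame(left_edge: np.ndarray, middle: np.ndarray,
                  right_edge: np.ndarray) -> np.ndarray:
    """Recombine the decoded middle- and edge-area images of one frame;
    all three crops share the frame height, so concatenating along the
    width axis yields the complete landscape video image."""
    return np.hstack((left_edge, middle, right_edge))

left = np.zeros((720, 437, 3), dtype=np.uint8)
middle = np.zeros((720, 405, 3), dtype=np.uint8)
right = np.zeros((720, 438, 3), dtype=np.uint8)
assert combine_frame(left, middle, right).shape == (720, 1280, 3)
```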
It should be noted that the code rate during decoding is the same as the code rate during encoding, that is, in the embodiment of the present disclosure, the first live video stream is decoded at the first code rate, and the second live video stream is decoded at the second code rate. The related contents of the first code rate and the second code rate can be referred to the embodiment shown in fig. 3, and a detailed description is omitted here.
Step S702 and step S703 are performed according to the play mode of the terminal. Through steps S702 and S703, acquiring the first live video stream, or the first live video stream and the second live video stream, based on the play mode of the terminal is realized.
Fig. 8 is a schematic structural diagram of a live data transmission apparatus according to an embodiment of the present disclosure, and referring to fig. 8, the live data transmission apparatus includes: an image acquisition module 801, an image processing module 802, a video generation module 803, and a video transmission module 804.
The image obtaining module 801 is configured to obtain live video data, where the live video data includes multiple frames of video images. The image processing module 802 is configured to segment each frame of video image to obtain a middle area image and a pair of edge area images, where the pair of edge area images are located on two opposite sides of the middle area image respectively. The video generation module 803 is configured to generate two live video streams based on a middle area image and a pair of edge area images of each frame of video image, where the two live video streams include a first live video stream and a second live video stream, the first live video stream includes a middle area image of each frame of video image and a first identifier corresponding to the middle area image, the second live video stream includes a pair of edge area images of each frame of video image and a second identifier corresponding to the pair of edge area images, and the first identifier and the second identifier corresponding to the same frame of video image are associated with each other. The video sending module 804 is configured to send two live video streams.
In some embodiments of the present disclosure, the image processing module 802 in the above embodiment is further configured to segment the landscape image based on the aspect ratio of the landscape image to obtain a middle area image and a pair of edge area images, where the middle area image is centered on the vertical center line of the landscape image, the width of the middle area image is determined based on the aspect ratio of the landscape image, and the height of the middle area image is the same as the height of the landscape image.
In some embodiments of the present disclosure, as shown in fig. 8, the video generation module 803 in the above embodiments further includes: a first encoding submodule 8031, a first identification submodule 8032, a second encoding submodule 8033, and a second identification submodule 8034.
The first encoding submodule 8031 is configured to encode image data of a middle area image of each frame of video image, so as to obtain a plurality of first image frames. The first identification submodule 8032 is used to add a first identification at the end of the first image frame. The second encoding sub-module 8033 is configured to encode image data of a pair of edge area images of each frame of video image, so as to obtain a plurality of second image frames. The second identification sub-module 8034 is used to add a second identification at the end of the second image frame.
In some embodiments of the present disclosure, the first encoding sub-module 8031 is further configured to encode the image data of the middle area image of each frame of the video image at the first code rate.
In some embodiments of the present disclosure, the second encoding sub-module 8033 is further configured to encode the image data of the pair of edge area images of each frame of video image at a second code rate, where the first code rate is greater than the second code rate.
Each frame of video image in the live video data is segmented into a middle area image and a pair of edge area images, and the image data of the middle area image and of the edge area images are then sent in two separate live video streams. Because the two streams carry no duplicate image data, the amount of data transmitted during dual-stream live broadcasting is reduced.
In addition, the image data of the middle area image is encoded at a first code rate and the image data of the edge area images at a second code rate, where the first code rate is greater than the second code rate, so that the definition of the pushed landscape and portrait pictures can be improved under a limited total code rate.
Fig. 9 is a schematic structural diagram of a live data transmission apparatus according to another embodiment of the present disclosure, and referring to fig. 9, the live data transmission apparatus includes: a mode determination module 901 and a video acquisition module 902.
The mode determining module 901 is configured to determine a play mode of the terminal, where the play mode is a landscape play mode or a portrait play mode. The video obtaining module 902 is configured to obtain a first live video stream or obtain the first live video stream and a second live video stream based on a play mode of the terminal, where the first live video stream includes image data of a plurality of middle area images and a first identifier corresponding to the middle area images, the second live video stream includes image data of a plurality of pairs of edge area images and a second identifier corresponding to the edge area images, and the middle area images and the edge area images corresponding to the associated first identifiers and second identifiers are obtained by segmenting the same frame of video image.
In some embodiments of the present disclosure, the video obtaining module 902 is further configured to acquire the first live video stream from the server in response to the play mode of the terminal being the portrait play mode.
In some embodiments of the present disclosure, the video obtaining module 902 is further configured to obtain the first live video stream and the second live video stream from the server in response to that the play mode of the terminal is a landscape play mode.
Optionally, the apparatus may further include a playing module. The playing module is configured to: when the second terminal is in the portrait play mode, after the second terminal acquires the first live video stream from the server, decode the first live video stream to obtain the image data of the middle area images and the first identifiers corresponding to the middle area images, and play the middle area images in sequence according to the first identifiers. Alternatively, the playing module is configured to: when the second terminal is in the landscape play mode, after the second terminal acquires the first live video stream and the second live video stream from the server, decode the first live video stream to obtain the middle area images and the first identifiers corresponding to the middle area images; decode the second live video stream to obtain the pairs of edge area images and the second identifiers corresponding to the pairs of edge area images; and combine the middle area image and the pair of edge area images whose first and second identifiers are associated to obtain a complete frame of video image, the second terminal then playing the frames of video images in sequence.
It should be noted that when the live data transmission apparatus provided in the above embodiments transmits live data, the division into the above functional modules is only an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the live data transmission apparatus and the live data transmission method provided by the above embodiments belong to the same concept; details of the implementation process are given in the method embodiments and are not repeated here.
The embodiment of the disclosure also provides a live data transmission device, which may be a computer device. Fig. 10 is a schematic structural diagram of a computer device according to another embodiment of the present disclosure. As shown in fig. 10, the computer device 1000 includes a Central Processing Unit (CPU) 1001, a system memory 1004 including a Random Access Memory (RAM) 1002 and a Read Only Memory (ROM) 1003, and a system bus 1005 connecting the system memory 1004 and the central processing unit 1001. The computer device 1000 also includes a basic input/output system (I/O system) 1006, which facilitates the transfer of information between devices within the computer, and a mass storage device 1007, which stores an operating system 1013, application programs 1014, and other program modules 1015.
The basic input/output system 1006 includes a display 1008 for displaying information and an input device 1009, such as a mouse, keyboard, etc., for user input of information. Wherein a display 1008 and an input device 1009 are connected to the central processing unit 1001 via an input-output controller 1010 connected to the system bus 1005. The basic input/output system 1006 may also include an input/output controller 1010 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input-output controller 1010 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1007 is connected to the central processing unit 1001 through a mass storage controller (not shown) connected to the system bus 1005. The mass storage device 1007 and its associated computer-readable media provide non-volatile storage for the computer device 1000. That is, the mass storage device 1007 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 1004 and mass storage device 1007 described above may be collectively referred to as memory.
According to various embodiments of the present disclosure, the computer device 1000 may also run by connecting, through a network such as the Internet, to a remote computer on the network. That is, the computer device 1000 may be connected to the network 1012 through the network interface unit 1011 connected to the system bus 1005, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 1011.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 1001 implements the live data transmission method shown in fig. 1 to 7 by executing the one or more programs.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as a memory comprising instructions executable by a processor of a computer device to perform the live data transmission methods shown in the various embodiments of the present disclosure. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The above description is meant to be illustrative of the principles of the present disclosure and not to be taken in a limiting sense, and any modifications, equivalents, improvements and the like that are within the spirit and scope of the present disclosure are intended to be included therein.

Claims (9)

1. A live data transmission method, comprising:
acquiring live video data, wherein the live video data comprises a plurality of frames of video images, and the video images are landscape images;
segmenting each frame of video image based on the aspect ratio of the landscape image to obtain a middle area image and a pair of edge area images, wherein the middle area image is centered on the vertical center line of the landscape image, the width of the middle area image is determined based on the aspect ratio of the landscape image, the height of the middle area image is the same as the height of the landscape image, and the pair of edge area images are respectively located on two opposite sides of the middle area image;
generating two live video streams based on the middle area image and the pair of edge area images of each frame of video image, wherein the two live video streams comprise a first live video stream and a second live video stream, the first live video stream comprises image data of the middle area image of each frame of video image and a first identifier corresponding to the middle area image, the second live video stream comprises image data of the pair of edge area images of each frame of video image and a second identifier corresponding to the pair of edge area images, and the first identifier and the second identifier corresponding to the same frame of video image are associated with each other; and
sending the two live video streams.
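By way of illustration (not part of the claims), the segmentation in claim 1 can be sketched with NumPy array slicing. The 9:16 portrait target ratio, the function name, and the return layout are assumptions made for this example; the claims fix only the geometry of the middle area relative to the landscape frame:

```python
import numpy as np

def split_landscape_frame(frame: np.ndarray, portrait_ratio: float = 9 / 16):
    """Split one landscape frame (H x W x C) into a middle area image and a
    pair of edge area images, in the manner described by claim 1.

    The middle area keeps the full frame height, is centered on the vertical
    center line, and takes its width from the target aspect ratio (the 9:16
    default here is an assumed portrait target, not taken from the patent).
    """
    height, width = frame.shape[:2]
    mid_width = min(width, int(round(height * portrait_ratio)))
    left = (width - mid_width) // 2  # center the strip on the vertical center line
    middle = frame[:, left:left + mid_width]
    edge_pair = (frame[:, :left], frame[:, left + mid_width:])  # two opposite sides
    return middle, edge_pair
```

For a 1920x1080 frame, this yields a 608-pixel-wide middle strip flanked by two 656-pixel-wide edge strips, all 1080 pixels tall.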
2. The method of claim 1, wherein generating the two live video streams based on the middle area image and the pair of edge area images of each frame of video image comprises:
encoding the image data of the middle area image of each frame of video image to obtain a plurality of first image frames;
appending the first identifier at the end of each first image frame;
encoding the image data of the pair of edge area images of each frame of video image to obtain a plurality of second image frames; and
appending the second identifier at the end of each second image frame.
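A minimal byte-level sketch of claim 2's identifier step follows. The trailer layout (a 4-byte big-endian frame sequence number plus a 1-byte stream id) is an assumed format; the patent does not specify how the identifiers are encoded, only that the identifiers of regions cut from the same source frame must be associable:

```python
import struct

def append_identifier(encoded_frame: bytes, frame_seq: int, stream_id: int) -> bytes:
    """Append an identifier at the end of an encoded image frame (claim 2).

    Layout (assumed, not specified by the patent): a 4-byte big-endian frame
    sequence number followed by a 1-byte stream id. The middle and edge frames
    cut from the same source frame share the sequence number and are thereby
    associated.
    """
    return encoded_frame + struct.pack(">IB", frame_seq, stream_id)

def strip_identifier(tagged_frame: bytes):
    """Recover the payload, frame sequence number, and stream id."""
    frame_seq, stream_id = struct.unpack(">IB", tagged_frame[-5:])
    return tagged_frame[:-5], frame_seq, stream_id
```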
3. The method of claim 2, wherein encoding the image data of the middle area image of each frame of video image comprises:
encoding the image data of the middle area image of each frame of video image at a first code rate;
and wherein encoding the image data of the pair of edge area images of each frame of video image comprises:
encoding the image data of the pair of edge area images of each frame of video image at a second code rate, the first code rate being greater than the second code rate.
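To make the two code rates of claim 3 concrete, one possible setup encodes the two sub-streams at different bitrates with the ffmpeg command line. The codec choice, bitrate values, and file names are assumptions for the example; the patent requires only that the first code rate exceed the second:

```python
import subprocess

FIRST_CODE_RATE = "2500k"   # higher rate for the middle area stream (assumed value)
SECOND_CODE_RATE = "800k"   # lower rate for the edge area stream (assumed value)

def encode_stream(src: str, dst: str, bitrate: str) -> None:
    """Encode one sub-stream at the given bitrate using ffmpeg with libx264."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-b:v", bitrate, dst],
        check=True,
    )

encode_stream("middle_regions.mp4", "first_live_stream.mp4", FIRST_CODE_RATE)
encode_stream("edge_regions.mp4", "second_live_stream.mp4", SECOND_CODE_RATE)
```

Giving the middle area the higher rate concentrates bandwidth on the region every viewer sees, since portrait terminals never download the edge stream.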
4. A live data transmission method, comprising:
determining a play mode of a terminal, wherein the play mode is a landscape play mode or a portrait play mode; and
acquiring a first live video stream, or acquiring the first live video stream and a second live video stream, based on the play mode of the terminal, wherein the first live video stream comprises image data of a plurality of middle area images and first identifiers corresponding to the middle area images, the second live video stream comprises image data of a plurality of pairs of edge area images and second identifiers corresponding to the pairs of edge area images, a middle area image and a pair of edge area images whose first identifier and second identifier are associated are obtained by segmenting the same frame of video image based on its aspect ratio, the video image is a landscape image, the middle area image is centered on the vertical center line of the landscape image, the width of the middle area image is determined based on the aspect ratio of the landscape image, and the height of the middle area image is the same as the height of the landscape image.
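Claim 4's acquisition step reduces to a dispatch on the terminal's play mode: a portrait terminal needs only the middle area stream, while a landscape terminal pulls both streams so the full frame can be rebuilt. A sketch, with the stream names as illustrative placeholders:

```python
def select_streams(play_mode: str) -> list[str]:
    """Decide which live video streams the terminal acquires (claim 4).

    Stream names are placeholders, not identifiers defined by the patent.
    """
    if play_mode == "portrait":
        return ["first_live_stream"]                        # middle areas only
    if play_mode == "landscape":
        return ["first_live_stream", "second_live_stream"]  # full frame needed
    raise ValueError(f"unknown play mode: {play_mode!r}")
```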
5. The method of claim 4, wherein after acquiring the first live video stream and the second live video stream, the method further comprises:
decoding the first live video stream to obtain the image data of a middle area image and the first identifier corresponding to the middle area image;
decoding the second live video stream to obtain the image data of a pair of edge area images and the second identifier corresponding to the pair of edge area images; and
merging the middle area image and the pair of edge area images whose first identifier and second identifier are associated.
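Claim 5's merge can be sketched by buffering decoded regions keyed on their identifiers and concatenating matching regions horizontally. The dictionary-based pairing is an assumed design; the claims mandate only that associated regions be combined:

```python
import numpy as np

def merge_frame(middle: np.ndarray, edge_pair) -> np.ndarray:
    """Rebuild the full landscape frame from a middle area image and the
    pair of edge area images cut from the same source frame (claim 5)."""
    left_edge, right_edge = edge_pair
    return np.concatenate([left_edge, middle, right_edge], axis=1)

def merge_streams(middles_by_id: dict, edges_by_id: dict):
    """Yield (identifier, full frame) for every associated identifier pair."""
    for frame_id in sorted(middles_by_id.keys() & edges_by_id.keys()):
        yield frame_id, merge_frame(middles_by_id[frame_id], edges_by_id[frame_id])
```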
6. A live data transmission apparatus, comprising:
an image acquisition module, configured to acquire live video data, wherein the live video data comprises a plurality of frames of video images, and the video images are landscape images;
an image processing module, configured to segment each frame of video image based on the aspect ratio of the landscape image to obtain a middle area image and a pair of edge area images, wherein the middle area image is centered on the vertical center line of the landscape image, the width of the middle area image is determined based on the aspect ratio of the landscape image, the height of the middle area image is the same as the height of the landscape image, and the pair of edge area images are respectively located on two opposite sides of the middle area image;
a video generation module, configured to generate two live video streams based on the middle area image and the pair of edge area images of each frame of video image, wherein the two live video streams comprise a first live video stream and a second live video stream, the first live video stream comprises image data of the middle area image of each frame of video image and a first identifier corresponding to the middle area image, the second live video stream comprises image data of the pair of edge area images of each frame of video image and a second identifier corresponding to the pair of edge area images, and the first identifier and the second identifier corresponding to the same frame of video image are associated with each other; and
a video sending module, configured to send the two live video streams.
7. A live data transmission apparatus, comprising:
a mode determination module, configured to determine a play mode of a terminal, wherein the play mode is a landscape play mode or a portrait play mode; and
a video acquisition module, configured to acquire a first live video stream, or acquire the first live video stream and a second live video stream, based on the play mode of the terminal, wherein the first live video stream comprises image data of a plurality of middle area images and first identifiers corresponding to the middle area images, the second live video stream comprises image data of a plurality of pairs of edge area images and second identifiers corresponding to the pairs of edge area images, a middle area image and a pair of edge area images whose first identifier and second identifier are associated are obtained by segmenting the same frame of video image based on its aspect ratio, the video image is a landscape image, the middle area image is centered on the vertical center line of the landscape image, the width of the middle area image is determined based on the aspect ratio of the landscape image, and the height of the middle area image is the same as the height of the landscape image.
8. A computer device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the live data transmission method of any one of claims 1 to 3, or the live data transmission method of claim 4 or 5.
9. A computer-readable storage medium having stored therein at least one instruction, the at least one instruction being loaded and executed by a processor to implement the live data transmission method of any one of claims 1 to 3, or the live data transmission method of claim 4 or 5.
CN202010265313.5A 2020-04-07 2020-04-07 Live data transmission method and device and computer readable storage medium Active CN111479162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010265313.5A CN111479162B (en) 2020-04-07 2020-04-07 Live data transmission method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010265313.5A CN111479162B (en) 2020-04-07 2020-04-07 Live data transmission method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111479162A CN111479162A (en) 2020-07-31
CN111479162B 2022-05-13

Family

ID=71750128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010265313.5A Active CN111479162B (en) 2020-04-07 2020-04-07 Live data transmission method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111479162B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114527948B (en) * 2020-11-23 2024-03-12 深圳Tcl新技术有限公司 Method and device for calculating clipping region, intelligent device and storage medium
CN113286196B (en) * 2021-05-14 2023-02-17 亿咖通(湖北)技术有限公司 Vehicle-mounted video playing system and video split-screen display method and device
CN113573117A (en) * 2021-07-15 2021-10-29 广州方硅信息技术有限公司 Video live broadcast method and device and computer equipment
CN114827684B (en) * 2022-04-25 2023-06-02 青岛海尔乐信云科技有限公司 5G-based interactive video service method and system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006475A (en) * 2010-11-18 2011-04-06 无锡中星微电子有限公司 Video coding and decoding device and method
CN103339945A (en) * 2011-11-11 2013-10-02 索尼公司 Image data transmission device, image data transmission method, and image data receiving device
CN106454407A (en) * 2016-10-25 2017-02-22 广州华多网络科技有限公司 Video live broadcast method and device
CN107333119A (en) * 2017-06-09 2017-11-07 歌尔股份有限公司 The processing method and equipment of a kind of display data
CN109151342A (en) * 2018-07-19 2019-01-04 广州市迪士普音响科技有限公司 A kind of distributed video display methods and device
CN109286824A (en) * 2018-09-28 2019-01-29 武汉斗鱼网络科技有限公司 A kind of method, apparatus, equipment and the medium of the control of live streaming user side
CN109547724A (en) * 2018-12-21 2019-03-29 广州华多网络科技有限公司 A kind of processing method of video stream data, electronic equipment and storage device
CN110049326A (en) * 2019-05-28 2019-07-23 广州酷狗计算机科技有限公司 Method for video coding and device, storage medium
CN110062252A (en) * 2019-04-30 2019-07-26 广州酷狗计算机科技有限公司 Live broadcasting method, device, terminal and storage medium
CN110072121A (en) * 2018-01-23 2019-07-30 南京大学 A kind of immersion media data transmission method adapting to human eye perception situation

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7042950B2 (en) * 2001-11-14 2006-05-09 Matsushita Electric Industrial Co., Ltd. Multichannel video processing unit and method
JP4995775B2 (en) * 2008-06-30 2012-08-08 株式会社東芝 Screen transfer apparatus and method, and program for screen transfer
CN101562706B (en) * 2009-05-22 2012-04-18 杭州华三通信技术有限公司 Method for splicing images and equipment thereof
JP2011109397A (en) * 2009-11-17 2011-06-02 Sony Corp Image transmission method, image reception method, image transmission device, image reception device, and image transmission system
EP2744197A4 (en) * 2011-08-11 2015-02-18 Panasonic Corp Playback device, playback method, integrated circuit, broadcasting system, and broadcasting method
CN109126131B (en) * 2018-07-09 2022-04-12 网易(杭州)网络有限公司 Game picture display method, storage medium and terminal
CN110691259B (en) * 2019-11-08 2022-04-22 北京奇艺世纪科技有限公司 Video playing method, system, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111479162A (en) 2020-07-31

Similar Documents

Publication Publication Date Title
CN111479162B (en) Live data transmission method and device and computer readable storage medium
CN110798697B (en) Video display method, device and system and electronic equipment
WO2016150317A1 (en) Method, apparatus and system for synthesizing live video
KR100742674B1 (en) Image data delivery system, image data transmitting device thereof, and image data receiving device thereof
KR20090126176A (en) Information processing apparatus and method, and program
TW201246942A (en) Object of interest based image processing
KR20090125236A (en) Information processing device and method
CN113453046B (en) Immersive media providing method, immersive media obtaining device, immersive media equipment and storage medium
CN112423110A (en) Live video data generation method and device and live video playing method and device
SG185110A1 (en) Multiple-site drawn-image sharing apparatus, multiple-site drawn-image sharing system, method executed by multiple-site drawn-image sharing apparatus, program, and recording medium
KR100576544B1 (en) Apparatus and Method for Processing of 3D Video using MPEG-4 Object Descriptor Information
CN111343415A (en) Data transmission method and device
CN112291502B (en) Information interaction method, device and system and electronic equipment
CN111818295B (en) Image acquisition method and device
CN112073543A (en) Cloud video recording method and system and readable storage medium
CN109756744B (en) Data processing method, electronic device and computer storage medium
CN112351307A (en) Screenshot method, server, terminal equipment and computer readable storage medium
CN112954433A (en) Video processing method and device, electronic equipment and storage medium
CN113141352B (en) Multimedia data transmission method and device, computer equipment and storage medium
CN109862385B (en) Live broadcast method and device, computer readable storage medium and terminal equipment
KR101085718B1 (en) System and method for offering augmented reality using server-side distributed image processing
CN110198457B (en) Video playing method and device, system, storage medium, terminal and server thereof
CN114466224B (en) Video data encoding and decoding method and device, storage medium and electronic equipment
CN116980392A (en) Media stream processing method, device, computer equipment and storage medium
CN113014905B (en) Image frame generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220406

Address after: 4119, 41st floor, building 1, No.500, middle section of Tianfu Avenue, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610000

Applicant after: Chengdu kugou business incubator management Co.,Ltd.

Address before: No. 315, Huangpu Avenue middle, Tianhe District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU KUGOU COMPUTER TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant