CN115883922A - Video coding rendering method, device, equipment and storage medium - Google Patents

Video coding rendering method, device, equipment and storage medium

Info

Publication number
CN115883922A
CN115883922A
Authority
CN
China
Prior art keywords
video data
video
rendering
encoding
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310083576.8A
Other languages
Chinese (zh)
Inventor
李康卫
金昊
周辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Qianjun Network Technology Co ltd
Original Assignee
Guangzhou Qianjun Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Qianjun Network Technology Co ltd filed Critical Guangzhou Qianjun Network Technology Co ltd
Priority to CN202310083576.8A priority Critical patent/CN115883922A/en
Publication of CN115883922A publication Critical patent/CN115883922A/en
Pending legal-status Critical Current

Abstract

The application discloses a video coding rendering method, apparatus, device, and storage medium. Video data is acquired and compressed to generate a video data packet; the video data packet is uploaded to a server, and the server distributes it to the corresponding client; the client acquires the video data distributed to it by the server, decodes the video data acquired from the server to obtain the image data of each frame, and renders and draws the obtained image data. On the premise of ensuring quality, the video data is hardware-encoded, so that encoding efficiency and compression rate are improved, rendering efficiency is optimized, performance consumption is reduced, and smooth video playback is ensured.

Description

Video coding rendering method, device, equipment and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for video encoding and rendering.
Background
In the era of rapid development of the mobile internet, the original information transmission modes such as pictures and text can no longer meet users' requirements. With the popularization of intelligent devices and the development of information technology, the threshold for short video and live broadcast has been lowered and their use has risen rapidly, because they transmit information more effectively and better meet users' requirements. Meanwhile, an ordinary user faces several key problems in the whole process from video production to viewing: the recording phone heats up and drains its battery, the traffic generated during transmission is costly and suffers under weak networks, and playback stutters. Therefore, it is especially important to provide a high-quality video processing scheme; the client needs to provide the user with low performance consumption, smaller video files, and a clear and smooth viewing experience.
Currently, mainstream H.265 encoding on the iOS platform has no good scheme for processing B frames, and encoding and decoding efficiency is low. In the prior art, rendering code is implemented with OpenGL ES (OpenGL for Embedded Systems), a subset of the OpenGL three-dimensional graphics API designed for handheld and embedded devices. The CPU writes data into a buffer in the frame-data callback, the GPU then reads that frame data from the buffer, and before the next frame of data can be written into the buffer, the GPU must finish rendering the previous frame; that is, multithreaded, asynchronous operation is not supported, the pipeline is bloated, and stuttering easily occurs. Therefore, how to improve encoding efficiency, reduce phone power consumption, raise the compression rate, reduce the size of the video file, enhance video rendering performance, and improve fluency is a technical problem to be urgently solved by those skilled in the art.
Disclosure of Invention
Based on the above problems, the present application provides a method, an apparatus, a device and a storage medium for video coding rendering, so as to optimize rendering efficiency, reduce performance consumption and ensure fluency of video playing.
The embodiment of the application discloses the following technical scheme: a video encoding rendering method, comprising:
collecting video data;
compressing the obtained video data to generate a video data packet;
uploading the video data packets to a server, wherein the server is used for distributing the video data packets to corresponding clients;
acquiring video data distributed to a corresponding client by the server;
decoding the video data acquired from the server to obtain image data of each frame;
and rendering and drawing the obtained image data.
Optionally, the acquiring video data includes:
and starting the acquisition device, and setting the output tone of the device's output signal to obtain the video data.
Optionally, the compressing the obtained video data to generate a video data packet includes:
converting the acquired video data into video core library type image data frames;
and encoding each frame of the video core library type image data frames to determine a video data packet.
Optionally, before compressing the obtained video data to generate a video data packet, the method further includes:
establishing an encoder, and setting encoding width and height, format and encoding frame data callback;
setting encoder attribute, setting key frame interval, setting code rate and setting canceling encoding B frame;
and sending an encoding starting instruction to the encoder.
Optionally, the encoding each frame of the video core library type image data frames to determine a video data packet includes:
performing primary coding on each frame of the video core library type image data frames to obtain a primary coding result;
triggering a callback function in response to the success of the primary encoding;
determining callback data according to the preliminary coding result and the callback function;
and judging whether each frame of the callback data is a key frame, inserting a start code, a video parameter set, a sequence parameter set and a picture parameter set in response to a judgment result that the frame is a key frame, and encoding to complete the video data stream to obtain a primary encoding result.
Optionally, the compressing the obtained video data to generate a video data packet further includes:
and destroying the encoder in response to acquiring the encoding stop instruction.
Optionally, the rendering and drawing processing on the obtained image data includes:
creating a rendering layer and setting equipment parameters;
creating a rendering command encoder according to the device parameters;
and rendering and drawing the obtained image data according to the rendering command encoder.
The embodiment of the present application further provides a device for video encoding and rendering, including:
the video data acquisition module is used for acquiring video data;
the video data packet generating module is used for compressing the acquired video data to generate a video data packet;
the uploading module is used for uploading the video data packet to a server, and the server is used for distributing the video data packet to a corresponding client;
the distribution data acquisition module is used for acquiring the video data distributed to the corresponding client by the server;
the decoding processing module is used for decoding the video data acquired from the server to obtain the image data of each frame;
and the rendering processing module is used for rendering and drawing the obtained image data.
The device further comprises:
the first setting module is used for creating an encoder and setting the encoding width and height, the format and the encoding frame data callback;
the second setting module is used for setting the attribute of the encoder, setting the key frame interval, setting the code rate and setting the B frame of the cancellation encoding;
and the instruction sending module is used for sending an encoding starting instruction to the encoder.
The device further comprises:
and the encoder destroying module is used for destroying the encoder in response to the acquisition of the encoding stop instruction.
An embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the steps of a method of video encoding rendering as described above according to instructions in the program code.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of a method for video coding rendering as described above.
Compared with the prior art, the method has the following beneficial effects: video data is acquired and compressed to generate a video data packet; the video data packet is uploaded to a server, and the server distributes it to the corresponding client; the client acquires the video data distributed to it by the server, decodes the video data acquired from the server to obtain the image data of each frame, and renders and draws the obtained image data. On the premise of ensuring quality, the video data is hardware-encoded in a specially processed H.265/HEVC (High Efficiency Video Coding) format using the iOS VideoToolbox (a hardware encoding and decoding library), which improves encoding efficiency and compression rate. Traditional OpenGL ES, a 3D graphics application programming interface for handheld and embedded devices, renders on a single thread, does not support asynchrony, and is inefficient, stutter-prone, and bloated; by implementing rendering through Apple Metal, these shortcomings of OpenGL ES are completely avoided, rendering efficiency is optimized, performance consumption is reduced, and smooth video playback is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the description below are only some embodiments of the present application, and for those skilled in the art, other drawings may be obtained according to these drawings without inventive labor.
Fig. 1 is a flowchart of a method for video encoding and rendering according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an apparatus for video encoding and rendering according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The inventor has found that, in terms of encoding and decoding, more and more video sources in the prior art are hardware-encoded in the H.265 format to generate video streams, for better encoding efficiency and a higher compression rate. Therefore, in a practical application scenario, the client is required to support encoding and decoding of H.265 video streams. In the basic flow for enhancing video rendering performance, the Graphics Processing Unit (GPU) decodes the video stream data and stores it in video memory, parses it into YUV data, copies the YUV data from video memory to main memory, converts it into RGB data on the Central Processing Unit (CPU), copies the RGB data back into video memory, and performs rendering and drawing on the GPU. Currently, FFmpeg is a set of open-source computer programs that can record, convert, and stream digital audio and video. It is an excellent, mature library supported on every major platform; it supports many formats and is very simple to invoke, but the video data it encodes in H.265 format is not specially processed for various adaptive scenarios. Video data rendering is now commonly implemented with OpenGL ES. FFmpeg performs soft decoding: it uses the CPU for coding, which requires a large amount of calculation, has a low processing speed, consumes the CPU heavily, and needs separate hardware optimization.
Currently, mainstream H.265 encoding on the iOS platform has no good scheme for processing B frames, and encoding and decoding efficiency is low. OpenGL ES implements a piece of rendering code in which the CPU writes data into a buffer in the frame-data callback and the GPU then reads the written frame data from the buffer; before the next frame of data can be written into the buffer, the GPU must finish rendering the previous frame.
Therefore, the core aim of the invention is to improve encoding efficiency, reduce phone power consumption, raise the compression rate, reduce the size of the video file, enhance video rendering performance, and improve fluency. By using the iOS VideoToolbox with a specially processed H.265/HEVC format, video data is hardware-encoded on the premise of ensuring quality, improving encoding efficiency and compression rate. Traditional OpenGL ES rendering is single-threaded, does not support asynchrony, and is inefficient, stutter-prone, and bloated; rendering is instead implemented entirely through Apple Metal, avoiding these shortcomings of OpenGL ES, optimizing rendering efficiency, reducing performance consumption, and ensuring smooth video playback. Apple Metal renders advanced 3D graphics and performs data-parallel computation with the graphics processor, and the whole process from recording to playing a video can be simplified into five steps: capture, encoding, transmission, decoding, and rendering.
In implementing these steps, first, each frame of the captured image is too large to transmit directly, so each frame must be hardware-encoded and compressed with the VideoToolbox before transmission; the transmitted data is then hardware-decoded back into per-frame image data; finally, each frame of image data is rendered through Metal, a set of programming APIs for operating the GPU that has already been released in the prior art. H.265 video data comprises I frames, P frames, and B frames. Because I frames are complete and dense, they may make the video data too large during encoding, so the compression rate can be improved by reasonably controlling the I-frame interval; meanwhile, B frames are disabled, because the presence of B frames complicates timestamp synchronization and thus reduces encoding and decoding efficiency. Metal requests multiple buffers corresponding to multiple frames (one frame of image each) and then updates the buffer cache in real time according to the GPU's rendering callbacks; this design is better suited to interaction between the CPU and GPU, reduces CPU load, and supports multithreaded execution, resource control, and synchronous and asynchronous control.
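The multiple-buffer scheme just described, where the CPU fills several frame buffers ahead while the GPU drains them, can be sketched without any GPU at all by using a counting semaphore to cap the number of frames in flight. This is a minimal simulation under stated assumptions: `FramePipeline`, `maxFramesInFlight = 3`, and the serial queue standing in for the GPU are all illustrative names, not part of the Metal API.

```swift
import Dispatch
import Foundation

// Simulation of the Metal in-flight buffering scheme: the CPU may fill up to
// `maxFramesInFlight` buffers ahead of the "GPU", which signals the semaphore
// each time it finishes a frame (the "rendering callback" of the text).
final class FramePipeline {
    let maxFramesInFlight = 3
    private let inFlight: DispatchSemaphore
    private var buffers: [[Float]]
    private let gpuQueue = DispatchQueue(label: "gpu.sim") // serial "GPU"
    private let lock = NSLock()
    private(set) var framesRendered = 0

    init(bufferLength: Int) {
        inFlight = DispatchSemaphore(value: maxFramesInFlight)
        buffers = Array(repeating: [Float](repeating: 0, count: bufferLength),
                        count: maxFramesInFlight)
    }

    // CPU side: wait for a free buffer slot, fill it, hand it to the GPU queue.
    func encodeFrame(index: Int, value: Float) {
        inFlight.wait()
        let slot = index % maxFramesInFlight
        buffers[slot] = [Float](repeating: value, count: buffers[slot].count)
        gpuQueue.async { [self] in
            // GPU side: "render" the frame, then release the buffer slot.
            lock.lock(); framesRendered += 1; lock.unlock()
            inFlight.signal()
        }
    }

    // Drain the serial queue so every submitted frame has been "rendered".
    func finish() { gpuQueue.sync {} }
}
```

In real Metal code the `inFlight.signal()` would live in a command buffer's completed handler; the point of the semaphore is that the CPU never stalls on a single shared buffer the way the OpenGL ES flow described earlier does.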
The method provided by the embodiment of the application is executed by a background system, for example, the method can be executed by a background server. The background server may be one server device, or may be a server cluster composed of a plurality of servers.
The embodiment of the application provides a video coding rendering method which comprises a step 101 to a step 106. Referring to fig. 1, fig. 1 is a flowchart of a method for video encoding and rendering according to an embodiment of the present disclosure.
Step 101: video data is collected.
In a practical application scenario, the system controls the starting of the acquisition device and sets the output tone of the device's output signal to obtain video data. Video acquisition captures image information in real time. The method comprises starting the device camera, obtaining the output data of the output device, setting the output tone, and obtaining video data, wherein the video data may be core media library type image data frames, and starting the device camera may be realized through the data stream provided by iOS for managing and coordinating input devices and output devices.
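A hedged Swift sketch of the capture step, using AVFoundation's `AVCaptureSession` (the iOS facility for managing and coordinating input and output devices alluded to above). The class name and queue label are illustrative; real code must also request camera permission and handle configuration errors.

```swift
import AVFoundation

// Sketch of Step 101: wire the camera into a capture session whose video
// output delivers one Core Media sample buffer (CMSampleBuffer) per frame.
final class VideoCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let outputQueue = DispatchQueue(label: "capture.output")

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureVideoDataOutput()
        // Ask for a pixel format the hardware encoder accepts directly (NV12).
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String:
                kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
        ]
        output.setSampleBufferDelegate(self, queue: outputQueue)
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    // Called once per captured frame with a Core Media-type image data frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Hand the frame to the encoder (Step 102).
    }
}
```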
Step 102: and compressing the acquired video data to generate a video data packet.
Each acquired frame of data is compressed by video coding to produce a binary video data packet in a specific format. The acquired core media library (Core Media) type image data frames are converted into video core library (Core Video) type image data frames, and the VideoToolbox then performs H.265 encoding on each frame to form a video stream.
In a possible implementation manner, the compressing the obtained video data to generate the video data packet includes the following steps, steps A1 to A7:
A1, creating an object for managing and coordinating the data flow from the input device to the output device, and creating an encoder, setting the encoding width and height, the format, the encoded-frame-data callback, and the like.
And A2, setting the related attributes of the data-stream object, for example, setting the encoder attributes, setting the key frame interval, setting the code rate, and setting the cancellation of B-frame encoding.
And A3, calling the data-stream object to prepare for encoding and decoding; in a practical application scenario, a prompt instruction is generated to tell the encoder that encoding starts.
That is, the system sends an encoding start instruction to the encoder and performs primary encoding on each frame of the video core library type image data frames to obtain a primary encoding result.
And A4, calling the data-stream object to encode; after the encoding succeeds, a callback function is triggered.
That is, in response to the primary encoding being successful, the callback function is triggered.
And A5, obtaining and processing the encoded callback data; if a frame is a key frame, inserting a start code and the VPS, SPS and PPS, and re-encoding to complete the H.265 video data stream.
And A6, calling to stop the encoder.
And A7, finally calling to destroy the encoder.
In a practical application scenario, the system may complete encoding with the data-stream object and stop the encoder; finally, the data-stream object is called to destroy the encoder.
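Steps A1 to A7 map onto the VideoToolbox compression-session lifecycle. The sketch below is an illustrative outline, not the patent's exact implementation: `width`, `height`, and the bit-rate figure are assumed values, and the output callback elides the start-code and VPS/SPS/PPS insertion of step A5. Note that "cancelling B-frame encoding" is expressed in VideoToolbox by disabling frame reordering.

```swift
import VideoToolbox

// A1: create the encoder, fixing width/height, HEVC format, and a frame callback.
func makeHEVCEncoder(width: Int32 = 1280, height: Int32 = 720) -> VTCompressionSession? {
    var session: VTCompressionSession?
    VTCompressionSessionCreate(
        allocator: nil, width: width, height: height,
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: nil, imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: { _, _, status, _, sampleBuffer in
            // A4/A5: per-frame callback; key frames get start code + VPS/SPS/PPS.
            guard status == noErr, sampleBuffer != nil else { return }
        },
        refcon: nil, compressionSessionOut: &session)
    guard let session else { return nil }

    // A2: encoder attributes — real-time mode, key-frame interval, bit rate.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxKeyFrameInterval,
                         value: 60 as CFNumber)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AverageBitRate,
                         value: 2_000_000 as CFNumber)
    // Disabling frame reordering is how B frames are cancelled in VideoToolbox.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering,
                         value: kCFBooleanFalse)

    // A3: tell the encoder that encoding starts.
    VTCompressionSessionPrepareToEncodeFrames(session)
    return session
}

// A6/A7: stop, then destroy, the encoder.
func tearDown(_ session: VTCompressionSession) {
    VTCompressionSessionCompleteFrames(session, untilPresentationTimeStamp: .invalid)
    VTCompressionSessionInvalidate(session)
}
```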
Step 103: and uploading the video data packet to a server.
The video data packet is uploaded to the server through the network, and the server then distributes the video data to the corresponding client.
Step 104: and acquiring the video data distributed to the corresponding client by the server.
Step 105: and decoding the video data acquired by the server to obtain the image data of each frame.
The video data acquired from the server is processed: a decoder is created, the corresponding parameters are set, the final decoding work is completed, and the image frame data of the video is obtained. The client is controlled to acquire the video stream data and decode it through the VideoToolbox to obtain a plurality of video core library type image data frames.
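The decoding step is the mirror image of encoding: a `VTDecompressionSession` built from the stream's format description turns each compressed sample back into a per-frame pixel buffer. A hedged sketch, assuming the format description has already been constructed from the stream's VPS/SPS/PPS and eliding error handling.

```swift
import VideoToolbox

// Step 105: create a decoder for the given HEVC format description.
func makeDecoder(format: CMVideoFormatDescription) -> VTDecompressionSession? {
    var session: VTDecompressionSession?
    // Ask for NV12 output so decoded frames feed straight into Metal textures.
    let attrs: [CFString: Any] = [
        kCVPixelBufferPixelFormatTypeKey:
            kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
    ]
    VTDecompressionSessionCreate(
        allocator: nil, formatDescription: format,
        decoderSpecification: nil,
        imageBufferAttributes: attrs as CFDictionary,
        outputCallback: nil,  // use the block-based decode call below instead
        decompressionSessionOut: &session)
    return session
}

// Decode one compressed sample; each successful callback yields one frame.
func decode(_ sample: CMSampleBuffer, with session: VTDecompressionSession,
            handler: @escaping (CVImageBuffer) -> Void) {
    VTDecompressionSessionDecodeFrame(
        session, sampleBuffer: sample,
        flags: [._EnableAsynchronousDecompression],
        infoFlagsOut: nil) { status, _, imageBuffer, _, _ in
            if status == noErr, let imageBuffer { handler(imageBuffer) }
        }
}
```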
Step 106: and rendering and drawing the obtained image data.
In a possible implementation manner, the rendering and drawing process on the obtained image data may include the following steps, B1 to B7:
and B1, rendering layers of advanced 3D graphics and calculation data in parallel with a graphics processor to create a rendered layer.
And B2, calling a default device object of the creating system, acquiring GPU (graphics processing Unit) devices, and providing interfaces for creating cache, textures and the like.
And B3, setting a transformation matrix of YUV (brightness, chroma and concentration) and RGB (three primary colors Red, green and Blue).
And B4, setting a rendering pipeline through the rendering pipeline object.
And B5, setting a texture data pixel color format, an image width and the like by rendering the texture object.
And B6, setting vertex, texture coordinate and rasterization through shader processing.
And B7, obtaining the set equipment parameters, creating a coded 3D graphic rendering instruction, and finishing rendering.
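The YUV-to-RGB transformation matrix of step B3 is plain arithmetic and can be checked outside Metal. The pure-Swift sketch below applies the video-range BT.601 conversion — the same constants a Metal fragment shader would embed — to a single pixel; the function name and the choice of BT.601 coefficients (rather than BT.709) are illustrative assumptions.

```swift
// Step B3 in miniature: video-range BT.601 YUV -> RGB conversion for one pixel.
// A Metal fragment shader applies the same 3x3 matrix to each sampled texel.
func yuvToRGB(y: Int, u: Int, v: Int) -> (r: Int, g: Int, b: Int) {
    let yf = 1.164 * Double(y - 16)  // luma, expanded from video range 16...235
    let uf = Double(u - 128)         // chroma components are centred on 128
    let vf = Double(v - 128)

    let r = yf + 1.596 * vf
    let g = yf - 0.392 * uf - 0.813 * vf
    let b = yf + 2.017 * uf

    // Clamp to the displayable 0...255 range and round to integers.
    func clamp(_ x: Double) -> Int { Int(min(255, max(0, x)).rounded()) }
    return (clamp(r), clamp(g), clamp(b))
}
```

In shader form this is a single matrix multiply per texel, with the luma and chroma planes bound as two separate textures created from the decoded pixel buffer.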
Therefore, in the method provided by the embodiment of the application, the VideoToolbox encodes an H.265 video stream without B frames, thereby achieving the purposes of improving encoding efficiency, reducing phone power consumption, enhancing the compression rate, and reducing the size of the video file. Metal supports multithreaded execution, and its resource control and synchronous and asynchronous control enhance video rendering performance and improve fluency.
The foregoing provides some specific implementation manners of a video encoding rendering method for the embodiments of the present application, and based on this, the present application also provides a corresponding apparatus. The device provided by the embodiment of the present application will be described in terms of functional modularity. Referring to fig. 2, fig. 2 is a schematic structural diagram of a video encoding and rendering apparatus according to an embodiment of the present disclosure.
In this embodiment, the apparatus may include:
a video data acquisition module 201, configured to acquire video data;
a video data packet generating module 202, configured to compress the obtained video data to generate a video data packet;
an upload module 203, configured to upload the video data packet to a server, where the server is configured to distribute the video data packet to a corresponding client;
a distribution data obtaining module 204, configured to obtain video data distributed by the server to a corresponding client;
a decoding processing module 205, configured to perform decoding processing on the video data acquired from the server to obtain image data of each frame;
and the rendering processing module 206 is configured to perform rendering and drawing processing on the obtained image data.
The device further comprises:
the first setting module is used for creating an encoder and setting the encoding width and height, the format and the encoding frame data callback;
the second setting module is used for setting the attribute of the encoder, setting the interval of the key frames, setting the code rate and setting the B frame of the cancellation coding;
and the instruction sending module is used for sending an encoding starting instruction to the encoder.
The device further comprises:
and the encoder destroying module is used for destroying the encoder in response to the acquisition of the encoding stopping instruction.
An embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the steps of a method of video encoding rendering as described above according to instructions in the program code.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of a method for video coding rendering as described above.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the system or the device disclosed by the embodiment, the description is simple because the system or the device corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
It is further noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for video encoding rendering, comprising:
collecting video data;
compressing the obtained video data to generate a video data packet;
uploading the video data packets to a server, wherein the server is used for distributing the video data packets to corresponding clients;
acquiring video data distributed to a corresponding client by the server;
decoding the video data acquired from the server to obtain image data of each frame;
and rendering and drawing the obtained image data.
2. The method of claim 1, wherein the capturing video data comprises:
and starting the acquisition device, and setting the output signal output tone of the equipment to obtain video data.
3. The method according to claim 1, wherein the compressing the acquired video data to generate a video data packet comprises:
converting the acquired video data into video core library type image data frames;
and coding each frame of the video core library type image data frames to determine a video data packet.
4. The method according to claim 1, wherein before compressing the acquired video data to generate video data packets, further comprising:
establishing an encoder, and setting the encoding width and height, the format and the callback of encoding frame data;
setting encoder attribute, setting key frame interval, setting code rate and setting canceling encoding B frame;
and sending an encoding starting instruction to the encoder.
5. The method of claim 3, wherein said encoding each frame of said video core library type image data frames to determine a video data packet comprises:
performing primary coding on each frame of the video core library type image data frames to obtain a primary coding result;
triggering a callback function in response to the success of the primary encoding;
determining callback data according to the preliminary coding result and the callback function;
and judging whether each frame of the callback data is a key frame, inserting a start code, a video parameter set, a sequence parameter set and a picture parameter set in response to a judgment result that the frame is a key frame, and encoding to complete the video data stream to obtain a primary encoding result.
6. The method according to claim 4, wherein said compressing the acquired video data to generate video data packets further comprises:
and destroying the encoder in response to acquiring the encoding stop instruction.
7. The method according to claim 1, wherein the rendering and drawing the obtained image data includes:
creating a rendering layer and setting equipment parameters;
creating a rendering command encoder according to the device parameters;
and rendering and drawing the obtained image data according to the rendering command encoder.
8. A video encoding rendering apparatus, comprising:
the video data acquisition module is configured to acquire video data;
the video data packet generating module is configured to compress the acquired video data to generate a video data packet;
the uploading module is configured to upload the video data packet to a server, the server being configured to distribute the video data packet to a corresponding client;
the distributed data acquisition module is configured to acquire the video data distributed to the corresponding client by the server;
the decoding processing module is configured to decode the video data acquired from the server to obtain the image data of each frame;
and the rendering processing module is configured to render and draw the obtained image data.
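The apparatus modules in claim 8 form a capture → compress → distribute → decode → render pipeline. A minimal sketch of that wiring, with each stub standing in for one claimed module (none of this is the patent's code; the string transforms are placeholders for real compression and decoding):

```python
# Illustrative wiring of the claimed modules into one pipeline.
def acquire_video_data():                         # video data acquisition module
    return ["raw0", "raw1"]

def compress(frames):                             # video data packet generating module
    return [f"pkt({f})" for f in frames]

def upload_and_distribute(packets):               # uploading module + server distribution
    return packets

def decode(packets):                              # decoding processing module
    return [p.removeprefix("pkt(").removesuffix(")") for p in packets]

def render(frames):                               # rendering processing module
    return [f"drawn:{f}" for f in frames]

result = render(decode(upload_and_distribute(compress(acquire_video_data()))))
```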
9. A computer device, comprising a processor and a memory:
the memory is configured to store program code and transmit the program code to the processor;
the processor is configured to perform the steps of the video encoding rendering method according to any one of claims 1-7 according to instructions in the program code.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the video encoding rendering method according to any one of claims 1-7.
CN202310083576.8A 2023-02-08 2023-02-08 Video coding rendering method, device, equipment and storage medium Pending CN115883922A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310083576.8A CN115883922A (en) 2023-02-08 2023-02-08 Video coding rendering method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115883922A true CN115883922A (en) 2023-03-31

Family

ID=85760902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310083576.8A Pending CN115883922A (en) 2023-02-08 2023-02-08 Video coding rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115883922A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104053073A (en) * 2013-03-15 2014-09-17 株式会社理光 DISTRIBUTION CONTROL SYSTEM, DISTRIBUTION SYSTEM and DISTRIBUTION CONTROL METHOD
US8995965B1 (en) * 2010-03-25 2015-03-31 Whatsapp Inc. Synthetic communication network method and system
CN105227963A (en) * 2015-08-31 2016-01-06 北京暴风科技股份有限公司 Method and system for collecting streaming media from a terminal and automatically identifying and adjusting its orientation
CN110213609A (en) * 2019-06-12 2019-09-06 珠海读书郎网络教育有限公司 Method, apparatus and storage medium for mic-linked co-streaming in web education live broadcasting

Similar Documents

Publication Publication Date Title
CN109600666B (en) Video playing method, device, medium and electronic equipment in game scene
WO2018045927A1 (en) Three-dimensional virtual technology based internet real-time interactive live broadcasting method and device
US11418832B2 (en) Video processing method, electronic device and computer-readable storage medium
CN104096362B Improving rate-control bit allocation of a video stream based on the player's region of interest
CN109168014A (en) Live broadcasting method, apparatus, device and storage medium
CN107665128B (en) Image processing method, system, server and readable storage medium
KR20180103715A (en) Method for inverse tone mapping of an image with visual effects
CN113301342B (en) Video coding method, network live broadcasting method, device and terminal equipment
WO2023241459A1 (en) Data communication method and system, and electronic device and storage medium
CN112511896A (en) Video rendering method and device
US10237563B2 (en) System and method for controlling video encoding using content information
CN110049347B (en) Method, system, terminal and device for configuring images on live interface
US10997795B2 (en) Method and apparatus for processing three dimensional object image using point cloud data
CN106507115B Decoding method, apparatus and terminal device for VR video based on iOS devices
US20170221174A1 (en) Gpu data sniffing and 3d streaming system and method
CN112533005B (en) Interaction method and system for VR video slow live broadcast
CN114938408B (en) Data transmission method, system, equipment and medium of cloud mobile phone
CN115225615B Unreal Engine pixel streaming method and device
CN115883922A (en) Video coding rendering method, device, equipment and storage medium
KR102238091B1 (en) System and method for 3d model compression and decompression
CN113286149B (en) Cloud conference self-adaptive multi-layer video coding method, system and storage medium
CN111406404A (en) Compression method, decompression method, system and storage medium for obtaining video file
CN114205662B (en) Low-delay video rendering method and device of iOS (integrated operation system) terminal
CN112702625B (en) Video processing method, device, electronic equipment and storage medium
CN115225902A (en) High-resolution VR cloud game solution method based on scatter coding and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230331