CN113923507A - Low-delay video rendering method and device for Android terminal - Google Patents

Low-delay video rendering method and device for Android terminal

Info

Publication number
CN113923507A
CN113923507A (application number CN202111513970.8A)
Authority
CN
China
Prior art keywords
video stream
stream data
rendering
buffer queue
rendering method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111513970.8A
Other languages
Chinese (zh)
Other versions
CN113923507B (en)
Inventor
沙宗超
贾宏伟
郭建君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Weiling Times Technology Co Ltd
Original Assignee
Beijing Weiling Times Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Weiling Times Technology Co Ltd filed Critical Beijing Weiling Times Technology Co Ltd
Priority to CN202111513970.8A
Publication of CN113923507A
Application granted
Publication of CN113923507B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4331 - Caching operations, e.g. of an advertisement for later insertion during playback
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44012 - Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/478 - Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4781 - Games

Abstract

The invention provides a low-delay video rendering method for an Android terminal, together with a corresponding device, an electronic device and a storage medium, and relates to the technical field of data processing. The rendering method defines a custom buffer queue and stores the video stream data delivered by the server into this custom buffer queue; starts a new thread to decode the video stream data in the buffer queue; and starts a new thread to render the decoded video stream data according to a frame loss algorithm. The invention can mitigate the operation delay problem and reduce perceived operation delay in a cloud gaming scenario when network transmission is unstable or the network jitters.

Description

Low-delay video rendering method and device for Android terminal
Technical Field
The invention relates to the field of data processing, and in particular to a low-delay video rendering method and device for an Android terminal, an electronic device and a storage medium.
Background
Android video playback schemes have accumulated many years of industry experience, and many video playing and rendering schemes are available on the market. In a cloud gaming scenario, however, low latency is required: the result of a control action must be visible as feedback as soon as possible. Under existing network conditions, noticeable operation delay easily appears in a 60 fps scenario.
Disclosure of Invention
The embodiments of the invention provide a low-delay video rendering method and device for an Android terminal, which can mitigate the operation delay problem and reduce perceived operation delay when existing network transmission is unstable or the network jitters in a cloud gaming scenario.
In a first aspect, an embodiment of the present invention provides a low-delay video rendering method for an Android end, where the rendering method includes:
custom-defining a buffer queue, and storing the video stream data delivered by the server into the custom buffer queue;
starting a new thread to decode the video stream data in the buffer queue;
and starting a new thread to render the decoded video stream data according to a frame loss algorithm.
Optionally, when decoding, the return value of the MediaCodec dequeueInputBuffer method is read in real time, and whether to decode the video stream data read from the buffer queue is determined according to the return value.
Optionally, during decoding, cached video data is read first; if none is read, the next cached video data continues to be read, and if data is read, whether to decode is judged according to the return value: if the return value is greater than or equal to 0, the video stream data is decoded, and if the return value is less than 0, the next video stream data is read.
Optionally, sleep is set when the video stream data in the buffer queue is read, so that CPU occupation is reduced.
Optionally, the starting of the new thread to render the decoded video stream data according to a frame loss algorithm includes:
reading a return value of the MediaCodec dequeueOutputBuffer method, and judging whether the decoded video stream data has been acquired according to the return value;
and judging whether the obtained decoded video stream data is rendered or discarded according to a frame loss algorithm.
Optionally, if the return value is greater than or equal to 0, the decoded video stream data is acquired, and if the return value is less than 0, acquisition of the decoded video stream data continues.
Optionally, the frame loss algorithm includes:
setting different frame loss thresholds for the 30 fps and 60 fps scenarios;
reading the length of a custom buffer queue in the rendering method, if the length of the buffer queue is greater than the threshold value, discarding the decoded video stream data, and if the length of the buffer queue is less than the threshold value, rendering the decoded video stream data.
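For illustration, the following is a minimal Java sketch of this frame loss decision. The class name FrameDropPolicy is hypothetical, and the 5-frame and 0-frame thresholds are the example values given for 30 fps and 60 fps in the detailed description below, not values fixed by the claims.

```java
// Hypothetical helper illustrating the frame loss decision described above.
final class FrameDropPolicy {
    private final int threshold;

    FrameDropPolicy(int targetFps) {
        // Different thresholds for the 30 fps and 60 fps scenarios; the values
        // follow the example in the detailed description (5 frames / 0 frames).
        this.threshold = (targetFps >= 60) ? 0 : 5;
    }

    /** Returns true if the decoded frame should be discarded instead of rendered. */
    boolean shouldDrop(int bufferQueueLength) {
        // If the custom buffer queue has grown past the threshold, the client is
        // falling behind the stream, so the frame is dropped to keep latency low.
        return bufferQueueLength > threshold;
    }
}
```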
In a second aspect, an embodiment of the present invention provides an Android-end low-latency video rendering apparatus, where the rendering apparatus includes:
the buffer module stores the video stream data issued by the server into a predefined buffer queue;
the decoding module starts a new thread to decode the video stream data in the cache queue;
and the rendering module starts a new thread to render the decoded video stream data according to a frame loss algorithm.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, where the memory stores a computer program thereon, and the processor implements the method according to any one of the first aspect when executing the program.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium on which is stored a computer program which, when executed by a processor, implements the method of any one of the first aspects.
Advantageous effects
The embodiments of the invention provide a low-delay video rendering method and device for an Android terminal, an electronic device and a storage medium. The rendering method defines a custom buffer queue and stores the video stream data delivered by the server into this custom buffer queue; starts a new thread to decode the video stream data in the buffer queue; and starts a new thread to render the decoded video stream data according to a frame loss algorithm. The method can mitigate the operation delay problem and reduce perceived operation delay when existing network transmission is unstable or the network jitters in a cloud gaming scenario.
It should be understood that this summary is not intended to identify key or essential features of the embodiments of the invention, nor is it intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions of one or more embodiments of the present specification or of the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some of the embodiments described in this specification, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 shows a flowchart of a low-latency video rendering method on an Android side according to an embodiment of the present invention;
FIG. 2 illustrates a flowchart of the separate decoding thread according to an embodiment of the invention;
FIG. 3 illustrates a flowchart of the separate rendering thread according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram illustrating an Android-side low-latency video rendering apparatus according to an embodiment of the present invention;
fig. 5 shows a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in one or more embodiments of the present disclosure, the technical solutions in one or more embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in one or more embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all embodiments. All other embodiments that can be derived by a person skilled in the art from one or more of the embodiments described herein without making any inventive step shall fall within the scope of protection of this document.
In the related art, MediaCodec first became available in Android 4.1 (API 16), initially for direct access to the media codecs of the device. It provides an extremely low-level interface. The MediaCodec class exists in both the Java and C++ layers, but only the former is publicly accessible.
In Android 4.3 (API 18), MediaCodec was extended with a way to provide input through a Surface (via the createInputSurface method), which allows input to come from a camera preview or from OpenGL ES rendering. Android 4.3 was also the first release in which MediaCodec was covered by CTS tests (CTS is a device compatibility test suite introduced by Google to ensure a consistent user experience across different devices; Google also provides a compatibility standard document, the CDD).
Android 4.3 also introduced MediaMuxer, which allows the output of an AVC codec (a raw H.264 elementary stream) to be converted into the .mp4 format, either muxed together with an audio stream or converted on its own.
From API 16 onward, Android provides the MediaCodec class so that developers can handle audio and video encoding and decoding more flexibly. Compared with high-level APIs such as MediaPlayer/VideoView, MediaCodec is a low-level API, so it offers a more complete, flexible and rich interface with which developers can implement more flexible functionality. The MediaCodec class can be used to access the Android low-level multimedia codecs, e.g. the encoder/decoder components. It is part of Android's low-level multimedia support infrastructure and is commonly used together with MediaExtractor, MediaSync, MediaMuxer, MediaCrypto, MediaDrm, Image, Surface and AudioTrack. Android 5.0 (API 21) introduced an "asynchronous mode" that allows applications to provide a callback method that is executed when buffers become available. Broadly speaking, a codec processes input data to produce output data. MediaCodec processes data asynchronously and uses a set of input and output buffers. Briefly, an empty input buffer is requested or received, filled with data and passed to the codec for processing. The codec finishes processing the data and writes the result to an empty output buffer. Finally, an output buffer filled with result data is requested or received; once its data has been consumed, it is released back to the codec for reuse.
Basic use:
the MediaCodec API of all sync patterns follows one pattern:
create and configure a MediaCodec object;
loop until finished:
if an input buffer is ready, read a chunk of input and copy it into the input buffer;
if an output buffer is ready, copy the data out of the output buffer;
release the MediaCodec object;
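A minimal sketch of this synchronous pattern for an H.264 video decoder is shown below. It assumes API level 21+ buffer accessors and a hypothetical readAccessUnit() source of encoded access units; it is a generic MediaCodec usage illustration, not the decoding loop of the embodiments described later.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;

// Generic synchronous-mode sketch for an H.264 decoder; readAccessUnit() is a
// hypothetical stand-in for the real source of encoded access units.
final class SyncDecodeSketch {
    static void decode(Surface surface, int width, int height) throws IOException {
        // 1. Create and configure a MediaCodec object.
        MediaFormat format =
                MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        MediaCodec codec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        codec.configure(format, surface, null, 0);
        codec.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        boolean outputDone = false;

        // 2. Loop until finished.
        while (!outputDone) {
            if (!inputDone) {
                int inIndex = codec.dequeueInputBuffer(10_000);
                if (inIndex >= 0) {                       // an input buffer is ready
                    byte[] accessUnit = readAccessUnit(); // hypothetical data source
                    if (accessUnit == null) {
                        codec.queueInputBuffer(inIndex, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        ByteBuffer in = codec.getInputBuffer(inIndex);
                        in.put(accessUnit);
                        codec.queueInputBuffer(inIndex, 0, accessUnit.length, 0, 0);
                    }
                }
            }
            int outIndex = codec.dequeueOutputBuffer(info, 10_000);
            if (outIndex >= 0) {                           // an output buffer is ready
                codec.releaseOutputBuffer(outIndex, true); // render straight to the Surface
                outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
            }
        }

        // 3. Release the MediaCodec object.
        codec.stop();
        codec.release();
    }

    private static byte[] readAccessUnit() {
        return null; // placeholder: return the next encoded frame / NAL unit, or null at EOS
    }
}
```

Rendering straight to the Surface with releaseOutputBuffer(index, true) keeps the output path short; the embodiments below add a custom buffer queue and a frame loss policy on top of this pattern.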
one example of MediaCodec handles one type of data, (e.g., MP3 audio or h.264 video), encoding or decoding. It operates on the original data and all information in any header, such as ID3 (typically in bytes at the beginning or end of an mp3 file, with information about the artist, title, album name, year, genre, etc. of the mp3 attached, which is referred to as ID3 information) is erased. It does not communicate with any advanced system components, does not play audio through speakers, or acquires video stream data over a network, it is simply an intermediate layer that buffers the data and returns it.
Some codecs are particular about their buffers: they may require special memory alignment, or have certain minimum and maximum sizes. To accommodate this wide range of possibilities, buffer allocation is implemented by the codec itself rather than at the application level. You do not hand MediaCodec a buffer that already contains your data; you request a buffer from it and then copy your data in. This seems contrary to the "zero copy" principle, but in most cases the codec does not need to copy or adjust the data to meet its requirements, so most of the buffer can be used directly, e.g. read straight from disk or the network into the buffer without further copying.
The input to MediaCodec must be delivered in "access units": a frame when encoding H.264 video, and a NAL unit when decoding. Nevertheless it behaves more like a stream: you cannot submit a single chunk and expect it to appear at the output immediately; in practice the codec may queue several buffers before producing output.
The present invention is described below with specific examples, and it should be noted that the descriptions in the examples of the present application are only for clearly illustrating the technical solutions in the examples of the present application, and do not limit the technical solutions provided in the examples of the present application.
Fig. 1 shows a flowchart of a low-latency video rendering method on an Android side according to an embodiment of the present invention. Referring to fig. 1, the rendering method includes:
s20, storing the video stream data sent by the server into a predefined buffer queue;
specifically, a buffer queue is customized, and video stream data sent by a server is put into the customized buffer queue; because the video stream data is not uniform due to network reasons, the frame data can be lost when the video stream data is failed to be put into the decoder, and other situations such as screen splash and the like can occur, a buffer queue is defined by user, the video stream data obtained from the server side is put into the buffer queue and is not directly put into the decoder, and the situations are avoided.
S40, starting a new thread to decode the video stream data in the buffer queue;
FIG. 2 illustrates a flowchart of the separate decoding thread according to an embodiment of the invention. As shown in FIG. 2, decoding is started in a new thread, the return value of the MediaCodec dequeueInputBuffer method is read in real time, and whether video stream data can be fed to the decoder is determined according to that return value. The decoding thread is an independent loop thread: it first reads the cached video data; if none is read, it continues to read the next cached video data; if data is read, whether to decode is judged according to the return value of the MediaCodec dequeueInputBuffer method. If the return value is greater than or equal to 0, the video stream data is decoded; if the return value is less than 0, the next video stream data is read. A sleep can also be set in the loop when no video stream data is read, reducing CPU occupation. A sketch of this loop is given below.
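The following sketch shows what the custom buffer queue of S20 and this decoding loop could look like. It assumes API level 21+ MediaCodec buffer accessors, an already configured and started decoder, and illustrative class names (VideoStreamBuffer, DecodeThread) that are not taken from the patent; presentation timestamps are omitted for brevity.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.LinkedBlockingQueue;

import android.media.MediaCodec;

// Hypothetical custom buffer queue of S20: the network layer parks encoded frames
// here instead of pushing them straight into the decoder.
final class VideoStreamBuffer {
    private final LinkedBlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

    /** Called from the network thread for every video packet delivered by the server. */
    void offerFrame(byte[] encodedFrame) {
        queue.offer(encodedFrame);
    }

    /** Called from the decoding thread; returns null when nothing is buffered. */
    byte[] pollFrame() {
        return queue.poll();
    }

    /** Queue length, read by the frame loss algorithm in the rendering thread. */
    int length() {
        return queue.size();
    }
}

// Independent decoding loop in the spirit of FIG. 2 (illustrative names only).
final class DecodeThread extends Thread {
    private final MediaCodec codec;      // already configured and started
    private final VideoStreamBuffer buffer;
    private volatile boolean running = true;

    DecodeThread(MediaCodec codec, VideoStreamBuffer buffer) {
        this.codec = codec;
        this.buffer = buffer;
    }

    @Override
    public void run() {
        while (running) {
            byte[] frame = buffer.pollFrame();
            if (frame == null) {
                // No cached video data yet: sleep briefly to reduce CPU occupation,
                // then try to read the next cached video data.
                try { Thread.sleep(1); } catch (InterruptedException e) { return; }
                continue;
            }
            // dequeueInputBuffer >= 0 means an input buffer is free and the frame can
            // be fed to the decoder; < 0 means move on and read the next data.
            int inIndex = codec.dequeueInputBuffer(0);
            if (inIndex >= 0) {
                ByteBuffer in = codec.getInputBuffer(inIndex);
                in.clear();
                in.put(frame);
                codec.queueInputBuffer(inIndex, 0, frame.length, 0, 0);
            }
        }
    }

    void shutdown() { running = false; }
}
```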
S60, starting a new thread to render the decoded video stream data according to a frame loss algorithm;
FIG. 3 illustrates a flowchart of the separate rendering thread according to an embodiment of the present invention. As shown in FIG. 3, the frame loss algorithm calculates, from the length of the buffer queue (the frame loss threshold), whether the currently decoded frame can be rendered. A new thread is started to render the decoded video stream data according to the frame loss algorithm; the rendering thread is an independent loop thread. It reads the return value of the MediaCodec dequeueOutputBuffer method and determines, according to that value, whether decoded video stream data has been obtained: if the return value is greater than or equal to 0, the decoded video stream data is obtained; if the return value is less than 0, the thread keeps trying to obtain decoded video stream data. It then judges, according to the frame loss algorithm, whether the obtained decoded video stream data is rendered or directly discarded. Different frame loss thresholds (the number of frames that may be cached in the custom buffer queue) are set for the 30 fps and 60 fps scenarios, for example 5 frames at 30 fps and 0 frames at 60 fps. The length of the custom buffer queue is read; if the queue length is greater than the threshold, the decoded video stream data is discarded, and if it is less than the threshold, the decoded video stream data is rendered. A sketch of this loop follows.
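The rendering loop could then look like the following sketch, which reuses the hypothetical FrameDropPolicy and VideoStreamBuffer defined above; it is an illustration of the flow of FIG. 3 under the stated assumptions, not the patent's implementation.

```java
import android.media.MediaCodec;

// Independent rendering loop in the spirit of FIG. 3, reusing the hypothetical
// FrameDropPolicy and VideoStreamBuffer sketched earlier.
final class RenderThread extends Thread {
    private final MediaCodec codec;      // the same decoder the DecodeThread feeds
    private final VideoStreamBuffer buffer;
    private final FrameDropPolicy policy;
    private volatile boolean running = true;

    RenderThread(MediaCodec codec, VideoStreamBuffer buffer, FrameDropPolicy policy) {
        this.codec = codec;
        this.buffer = buffer;
        this.policy = policy;
    }

    @Override
    public void run() {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (running) {
            // dequeueOutputBuffer >= 0 means a decoded frame is available; negative
            // values (try-again, format change, etc.) mean keep polling.
            int outIndex = codec.dequeueOutputBuffer(info, 10_000);
            if (outIndex < 0) {
                continue;
            }
            // Frame loss algorithm: if the custom buffer queue has grown past the
            // threshold, discard the frame; otherwise render it to the Surface.
            boolean render = !policy.shouldDrop(buffer.length());
            codec.releaseOutputBuffer(outIndex, render);
        }
    }

    void shutdown() { running = false; }
}
```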
It should be noted that, in order to support devices running Android 4.1 and above, the frame loss algorithm strategy can also be combined with the synchronous MediaCodec API for decoding and rendering.
A buffer queue is custom-defined, and the video stream data obtained from the server is put into the buffer queue instead of directly into the decoder, which avoids the problems that uneven video stream data caused by the network fails to be fed into the decoder, frame data is lost and the screen becomes corrupted. A sleep is set when cyclically reading the cached video data, reducing CPU occupation. With this Android-side low-delay video rendering method, the operation delay problem can be mitigated and perceived operation delay reduced when existing network transmission is unstable or the network jitters in a cloud gaming scenario.
Based on the same inventive concept, an embodiment of the present invention further provides an Android-side low-latency video rendering device, which can be used to implement the Android-side low-latency video rendering method described in the foregoing embodiments, as described below. Since the principle by which the device solves the problem is similar to that of the method, the implementation of the device may refer to the implementation of the method, and repeated parts are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the system described in the following embodiments is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 shows a block diagram of a low-latency video rendering apparatus on an Android side according to an embodiment of the present invention. As shown in fig. 4, the rendering apparatus includes:
the buffer module 20 is configured to store the video stream data sent by the server into a predefined buffer queue;
the decoding module 40 is configured to start a new thread to decode video stream data in the buffer queue;
and a rendering module 60, configured to start a new thread to render the decoded video stream data according to a frame loss algorithm.
The embodiment of the invention provides an Android-side low-delay video rendering device. The rendering device custom-defines a buffer queue through the buffer module 20 and puts the video stream data delivered by the server into the custom buffer queue; starts a new thread through the decoding module 40 to decode the video stream data in the buffer queue; and starts a new thread through the rendering module 60 to render the decoded video stream data according to a frame loss algorithm. A sketch of how these modules could be wired together follows.
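As a usage illustration, the following sketch wires the three modules together in the way the embodiment describes, reusing the hypothetical classes from the earlier sketches; all names are illustrative, not the patent's code.

```java
import android.media.MediaCodec;

// Illustrative wiring of the buffer module, decoding module and rendering module,
// reusing the hypothetical classes from the sketches above.
final class LowLatencyRenderer {
    private final VideoStreamBuffer buffer = new VideoStreamBuffer(); // buffer module 20
    private final DecodeThread decodeThread;                          // decoding module 40
    private final RenderThread renderThread;                          // rendering module 60

    LowLatencyRenderer(MediaCodec configuredAndStartedCodec, int targetFps) {
        FrameDropPolicy policy = new FrameDropPolicy(targetFps);
        decodeThread = new DecodeThread(configuredAndStartedCodec, buffer);
        renderThread = new RenderThread(configuredAndStartedCodec, buffer, policy);
    }

    /** The network layer calls this for every encoded frame received from the server. */
    void onFrameFromServer(byte[] encodedFrame) {
        buffer.offerFrame(encodedFrame);
    }

    void start() {
        decodeThread.start();
        renderThread.start();
    }

    void stop() {
        decodeThread.shutdown();
        renderThread.shutdown();
    }
}
```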
An embodiment of the present invention also provides a computer electronic device. FIG. 5 shows a schematic structural diagram of an electronic device to which an embodiment of the present invention can be applied. As shown in FIG. 5, the electronic device includes a central processing unit (CPU) 501 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for system operation. The CPU 501, the ROM 502 and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read out from it can be installed into the storage section 508 as needed.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware. The described units or modules may also be provided in a processor, and may be described as: a processor includes a buffer module, a decoding module and a rendering module, where the names of these modules do not form a limitation on the module itself under certain circumstances, for example, the buffer module may also be described as "a buffer module for storing video stream data delivered by a server into a predefined buffer queue".
As another aspect, the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may be a computer-readable storage medium included in the Android-end low-latency video rendering apparatus in the foregoing embodiment; or it may be a computer-readable storage medium that exists separately and is not built into the electronic device. The computer readable storage medium stores one or more programs for executing a method for Android-based low-latency video rendering described in the present invention by one or more processors.
The foregoing description is only an explanation of the preferred embodiments of the invention and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present invention.

Claims (10)

1. A low-delay video rendering method for an Android terminal, characterized by comprising the following steps:
storing video stream data issued by a server into a predefined buffer queue;
starting a new thread to decode the video stream data in the buffer queue;
and starting a new thread to render the decoded video stream data according to a frame loss algorithm.
2. The rendering method according to claim 1, wherein a return value of the MediaCodec dequeueInputBuffer method is read in real time when decoding is performed, and whether to decode the video stream data read from the buffer queue is determined according to the return value.
3. The rendering method according to claim 2, wherein the video cache data is read first during decoding, and if not, the next video cache data is read continuously, and if yes, whether decoding is performed is judged according to the return value; and if the return value is greater than or equal to 0, decoding the video stream data, and if the return value is less than 0, reading the next video stream data.
4. The rendering method according to claim 2, wherein a sleep is set when the video stream data in the buffer queue is read, so as to reduce CPU occupation.
5. The rendering method of claim 1, wherein the starting of the new thread to render the decoded video stream data according to a frame loss algorithm comprises:
reading a return value of the MediaCodec dequeueOutputBuffer method, and judging whether the decoded video stream data has been acquired according to the return value;
and judging whether the obtained decoded video stream data is rendered or discarded according to a frame loss algorithm.
6. The rendering method according to claim 5, wherein if the return value is greater than or equal to 0, the decoded video stream data is acquired, and if the return value is less than 0, the decoded video stream data is continuously acquired.
7. The rendering method of claim 5, wherein the frame loss algorithm comprises:
setting different frame loss thresholds for the 30 fps and 60 fps scenarios;
and reading the length of a custom buffer queue in the rendering method, discarding the decoded video stream data if the length of the buffer queue is greater than the threshold value, and rendering the decoded video stream data if the length of the buffer queue is less than the threshold value.
8. An Android-side low-latency video rendering device, the rendering device comprising:
the buffer module stores the video stream data issued by the server into a predefined buffer queue;
the decoding module starts a new thread to decode the video stream data in the cache queue;
and the rendering module starts a new thread to render the decoded video stream data according to a frame loss algorithm.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the computer program, implements the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202111513970.8A 2021-12-13 2021-12-13 Low-delay video rendering method and device for Android terminal Active CN113923507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111513970.8A CN113923507B (en) 2021-12-13 2021-12-13 Low-delay video rendering method and device for Android terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111513970.8A CN113923507B (en) 2021-12-13 2021-12-13 Low-delay video rendering method and device for Android terminal

Publications (2)

Publication Number Publication Date
CN113923507A true CN113923507A (en) 2022-01-11
CN113923507B CN113923507B (en) 2022-07-22

Family

ID=79248623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111513970.8A Active CN113923507B (en) 2021-12-13 2021-12-13 Low-delay video rendering method and device for Android terminal

Country Status (1)

Country Link
CN (1) CN113923507B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090198573A1 (en) * 2008-01-31 2009-08-06 Iwin, Inc. Advertisement Insertion System and Method
US20110258338A1 (en) * 2010-04-14 2011-10-20 Adobe Systems Incorporated Media Quality Enhancement Among Connected Media Communication Devices
US20190090162A1 (en) * 2014-03-05 2019-03-21 Interdigital Patent Holdings, Inc. Pcp handover in a mesh network after a change of role of a station associated with a first node receiving from another node an indication of association
CN105898320A (en) * 2015-12-09 2016-08-24 乐视网信息技术(北京)股份有限公司 Panorama video decoding method and device and terminal equipment based on Android platform
CN106375793A (en) * 2016-08-29 2017-02-01 东方网力科技股份有限公司 Superposition method and superposition system of video structured information, and user terminal
CN108093258A (en) * 2018-01-11 2018-05-29 珠海全志科技股份有限公司 Coding/decoding method, computer installation and the computer readable storage medium of bit stream data
CN110832875A (en) * 2018-07-23 2020-02-21 深圳市大疆创新科技有限公司 Video processing method, terminal device and machine-readable storage medium
CN109302637A (en) * 2018-11-05 2019-02-01 腾讯科技(成都)有限公司 Image processing method, image processing apparatus and electronic equipment
CN111556325A (en) * 2019-02-12 2020-08-18 广州艾美网络科技有限公司 Audio and video combined rendering method, medium and computer equipment
CN109922360A (en) * 2019-03-07 2019-06-21 腾讯科技(深圳)有限公司 Method for processing video frequency, device and storage medium
CN111836104A (en) * 2020-07-09 2020-10-27 海信视像科技股份有限公司 Display apparatus and display method
CN112218117A (en) * 2020-09-29 2021-01-12 北京字跳网络技术有限公司 Video processing method and device
CN112600815A (en) * 2020-12-08 2021-04-02 努比亚技术有限公司 Video display method, terminal and computer readable storage medium
CN113411660A (en) * 2021-01-04 2021-09-17 腾讯科技(深圳)有限公司 Video data processing method and device and electronic equipment
CN112929741A (en) * 2021-01-21 2021-06-08 杭州雾联科技有限公司 Video frame rendering method and device, electronic equipment and storage medium
CN113254120A (en) * 2021-04-02 2021-08-13 荣耀终端有限公司 Data processing method and related device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAOLIANG MA: "An improved Real-Time Video Communication System", 2018 IEEE Visual Communications and Image Processing (VCIP) *
马兆良 (ZHAOLIANG MA): "Research on Key Technologies of Low-Latency Video Coding and Communication Systems", CNKI Outstanding Master's Theses Full-text Database *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116916095A (en) * 2023-09-12 2023-10-20 深圳云天畅想信息科技有限公司 Smooth display method, device and equipment of cloud video and storage medium
CN116916095B (en) * 2023-09-12 2024-01-12 深圳云天畅想信息科技有限公司 Smooth display method, device and equipment of cloud video and storage medium

Also Published As

Publication number Publication date
CN113923507B (en) 2022-07-22

Similar Documents

Publication Publication Date Title
US11347370B2 (en) Method and system for video recording
US10930318B2 (en) Gapless video looping
US20100050221A1 (en) Image Delivery System with Image Quality Varying with Frame Rate
CN113457160A (en) Data processing method and device, electronic equipment and computer readable storage medium
JP2006148875A (en) Creation of image based video using step-images
CN111225171B (en) Video recording method, device, terminal equipment and computer storage medium
CN114071226B (en) Video preview graph generation method and device, storage medium and electronic equipment
JP2009503630A (en) Event queuing in an interactive media environment
JP2009017535A (en) Coding of multimedia signal
WO2017080175A1 (en) Multi-camera used video player, playing system and playing method
CN113542757A (en) Image transmission method and device for cloud application, server and storage medium
CN113923507B (en) Low-delay video rendering method and device for Android terminal
WO2020155956A1 (en) First-frame equalization current-limiting method and apparatus, computer device and readable storage medium
CN114339412B (en) Video quality enhancement method, mobile terminal, storage medium and device
CN113411660B (en) Video data processing method and device and electronic equipment
US8280220B2 (en) Reproduction apparatus, data processing system, reproduction method, program, and storage medium
CN116546228A (en) Plug flow method, device, equipment and storage medium for virtual scene
CN115914745A (en) Video decoding method and device, electronic equipment and computer readable medium
CN112804579A (en) Video playing method and device, computer equipment and readable storage medium
CN114205662B (en) Low-delay video rendering method and device of iOS (integrated operation system) terminal
CN112511887B (en) Video playing control method, corresponding device, equipment, system and storage medium
JP2010204892A (en) Video analysis device, video analysis method and video analysis program
US9336557B2 (en) Apparatus and methods for processing of media signals
CN116886974B (en) Method and device for optimizing decoding rendering performance of terminal
CN110022480B (en) H265 hardware coding method based on AMD display card and live broadcast platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant