CN110933457B - Multi-node low-delay parallel coding method for 8K ultra-high definition - Google Patents

Multi-node low-delay parallel coding method for 8K ultra-high definition

Info

Publication number
CN110933457B
CN110933457B
Authority
CN
China
Prior art keywords
node
coding
slice
slave
master
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911211710.8A
Other languages
Chinese (zh)
Other versions
CN110933457A (en)
Inventor
谢亚光
李日
朱建国
陈勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Arcvideo Technology Co ltd
Original Assignee
Hangzhou Arcvideo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Arcvideo Technology Co ltd filed Critical Hangzhou Arcvideo Technology Co ltd
Priority to CN201911211710.8A priority Critical patent/CN110933457B/en
Publication of CN110933457A publication Critical patent/CN110933457A/en
Application granted granted Critical
Publication of CN110933457B publication Critical patent/CN110933457B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233 Processing of audio elementary streams
    • H04N21/2335 Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234309 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a multi-node low-delay parallel coding method for 8K ultra-high definition. The method adopts master-slave multi-node coding in which video coding is slice-based: a complete frame to be coded is divided into several slices that are assigned to different coding nodes for encoding. Each slice is constrained to contain an integral number of CTU rows; the number of rows is variable and is dynamically adjusted according to the coding time of each node. After each node finishes coding, the compressed bitstream is returned to the master node, which splices the slice data into a complete frame. The invention has the following beneficial effects: real-time 8K encoding in HEVC, AVS2 and other formats is achieved; coding performance can be improved and coding delay reduced by adding coding servers; and the coding time of the master and slave coding nodes can be balanced, so that the overall system is optimal and the delay is lowest.

Description

Multi-node low-delay parallel coding method for 8K ultra-high definition
Technical Field
The invention relates to the technical field of video encoding, and in particular to a multi-node low-delay parallel coding method for 8K ultra-high definition.
Background
HEVC is the latest-generation video coding standard, published by the International Telecommunication Union (ITU-T) in 2013. Compared with the previous-generation standard H.264, HEVC doubles the compression efficiency, which can greatly reduce the bit rate of the 8K ultra-high-definition video now being popularized and lower bandwidth consumption.
AVS2 is the latest-generation standard for ultra-high-definition video applications, released in China in December 2016. The coding framework of AVS2 is similar to that of HEVC, and its coding efficiency is slightly better. At present, AVS2 is the only video coding standard adopted by China's central radio and television broadcaster, so it has broad application prospects.
8K resolution is a digital video standard. On 23 August 2012, the International Telecommunication Union, a United Nations agency, adopted the 7680x4320 resolution proposed by Japan's broadcaster NHK as the international 8K ultra-high-definition television (Super Hi-Vision, SHV) standard. As an "ultra-high-definition video system" beyond current digital television, SHV has broad application prospects. Meanwhile, the Ultra HD Forum has published ultra-high-definition specifications covering both 4K and 8K, and the Ultra HD Forum Guidelines v2.1 adopt HEVC and AVS2 as 8K coding standards.
8K resolution is 4 times that of 4K and 16 times that of HD, so the video complexity is 16 times that of HD; moreover, the encoding complexity of HEVC and AVS2 is far higher than that of the previous-generation H.264 and AVS+, which poses great challenges for real-time encoders. With current servers, even the strongest dual-CPU general-purpose rack server cannot achieve 8K@60fps real-time encoding; if 8K real-time encoding is required, two or even more server nodes must encode in parallel. Currently, there is no hardware encoding chip on the market for AVS2. For HEVC, there are solutions supporting 8K, such as encoding schemes based on the NTT codec card, but they are not yet mature. Because the performance of one hardware encoding card is only enough for one channel of 4K rather than 8K, four 4K cards are typically run concurrently, each encoding one quarter of the 8K picture and outputting an independent stream; the four streams are finally combined into one 8K channel at the playback end. Strictly speaking, this is an interim solution: four channels of 4K are not one channel of 8K, a customized player is required, and there are frame-alignment problems when splicing at the playback end. In addition, once a core-level bug occurs in the encoding card, the debugging and repair cycle is long, which is unfavorable for safe broadcasting; such solutions still need to mature.
Therefore, for some time to come, 8K video coding must still rely on server-based software encoding. Since one server cannot computationally achieve 8K real-time encoding, dual-node or even multi-node parallel encoding may be required. A master-slave multi-node scheme based on GOP-level parallelism has been proposed; it has the advantages of simple implementation and independent operation, with each node encoding on its own and requiring little interaction with the other nodes. However, it also has disadvantages: the delay is large, and the bit rate cannot be allocated reasonably and effectively among the nodes. In particular, when more nodes are added to improve overall coding performance, the coding delay grows even larger.
Disclosure of Invention
The invention provides an 8K ultra-high-definition-oriented multi-node low-delay parallel coding method that reduces coding delay and overcomes the above defects in the prior art.
To this end, the invention adopts the following technical scheme:
a multi-node low-delay parallel coding method for 8K ultra-high definition specifically comprises the following steps:
(1) deploying a master node and slave nodes interconnected through high-speed network cards, with one master node and CJ slave nodes; the complete coding and transcoding system is deployed on the master node, while only the encoding kernel and the modules necessary for communicating with the master node are deployed on the slave nodes;
(2) initializing the number of CTU rows of the Slices encoded by the master node and the slave nodes, where the number of CTU rows of the whole frame is M, the number of CTU rows of the master node's Slice is Nz, and the number of CTU rows of each slave node's Slice is Nc; initially the master and slave nodes are set equal, with Nc = M/(1+CJ) and Nz = M - CJ × Nc;
(3) each time the master node decodes a frame, it obtains a frame to be encoded and splits the current frame among the nodes for encoding; the encoder then sends the corresponding data to be encoded to the corresponding slave nodes, using an independent network card and an independent thread for each node, which ensures the lowest transmission delay without mutual interference;
(4) after receiving its Slice data, each slave node starts encoding and completes the Slice encoding; once each node finishes, the compressed Slice data is sent back to the master node, which splices the data into complete frame data;
(5) dynamically adjusting the number of CTU rows of the Slices encoded by the master and slave nodes; after a period of time the Slice encoding times of the master and slave nodes stabilize, and the number of CTU rows of each node's Slice also stabilizes.
The method adopts master-slave multi-node coding, and video coding is slice-based, i.e. each frame of the video is divided into several slices. A Slice is a concept defined in common coding specifications such as H.264 and HEVC/AVS2: a contiguous set of CTUs that can be encoded and decoded independently of the other slices of the current frame. The specifications place no requirement on the number of CTUs per Slice. The invention uses this property to divide a complete frame to be coded into several slices that are assigned to different coding nodes for encoding. After each node finishes encoding, the compressed bitstream is returned to the master node, which splices the bitstream data into a complete frame. In addition, each slice is constrained to contain an integral number of CTU rows; the number of rows is variable and is dynamically adjusted according to the coding time of each node. The method achieves real-time 8K encoding in HEVC, AVS2 and other formats, and coding performance can be improved and coding delay reduced by adding coding servers. The method can be extended to future video formats such as H.266 (VVC) and AVS3, and to 16K or even higher resolutions. It balances the coding time of the master and slave coding nodes so that the overall system is optimal and the delay is lowest.
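For illustration only, the per-frame flow on the master node can be summarized as the following minimal sketch. It is not part of the claimed method; all helper names (split_into_slices, send_async, receive_compressed_slice, master_encode) are assumptions, and split_into_slices is sketched further below under step (3).

```python
def encode_frame(frame, master_encode, slaves, nz, nc, ctu_h):
    """Per-frame flow on the master node (illustrative sketch): split the frame
    into Slices, dispatch the slave Slices, encode the first Slice locally,
    then collect and splice the compressed Slices in top-to-bottom order."""
    parts = split_into_slices(frame, nz, nc, ctu_h, len(slaves))  # see sketch under step (3)
    for slave, part in zip(slaves, parts[1:]):       # one network card / one thread per slave
        slave.send_async(part)                       # hypothetical non-blocking send
    payloads = [master_encode(parts[0])]             # master encodes the top Slice itself
    payloads += [slave.receive_compressed_slice() for slave in slaves]
    return b"".join(payloads)                        # splice into one coded frame
```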
Preferably, in step (2), the initialization method is as follows: let the number of pixel rows of a CTU row be CTU_h and the resolution of the video be Width × Height; the number of CTU rows of the entire frame is M = Height/CTU_h, rounded up if not divisible exactly.
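A minimal sketch of this initialization (the function name is illustrative, and integer division is assumed for M/(1+CJ) so that Nc is a whole number of CTU rows):

```python
import math

def init_ctu_rows(height, ctu_h, cj):
    """Initial CTU-row counts for one master node and cj slave nodes."""
    m = math.ceil(height / ctu_h)   # CTU rows of the whole frame, rounded up
    nc = m // (1 + cj)              # CTU rows of each slave-node Slice (integer division assumed)
    nz = m - cj * nc                # CTU rows of the master-node Slice
    return m, nz, nc
```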
Preferably, in step (3), the current frame is split among the nodes as follows: in top-to-bottom order, the 1st Slice of the frame to be encoded consists of the first Nz CTU rows, and each subsequent Slice consists of Nc CTU rows, allocated respectively to the 1st, 2nd, … CJ-th slave nodes.
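A minimal sketch of this split, assuming the frame is handled as an array of pixel rows; the clamping of the last Slice when M was rounded up is an assumed implementation detail, not spelled out in the patent:

```python
def split_into_slices(frame, nz, nc, ctu_h, cj):
    """Top-to-bottom split: the first Nz CTU rows form the master's Slice,
    then Nc CTU rows form each of the CJ slave Slices."""
    cuts = [0, nz * ctu_h]
    for _ in range(cj):
        cuts.append(cuts[-1] + nc * ctu_h)
    cuts[-1] = min(cuts[-1], len(frame))   # clamp if M was rounded up (assumed detail)
    return [frame[cuts[i]:cuts[i + 1]] for i in range(len(cuts) - 1)]
```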
Preferably, in step (4), the encoding process is consistent with that of an ordinary encoder; after each node finishes encoding, the reconstructed data of the current Slice is stored, and the data of several upper and lower boundary rows is transmitted to the adjacent nodes, where it is used as reference data by the adjacent nodes' Slices of the next frame.
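A sketch of this boundary exchange (names are illustrative; send() stands for a hypothetical transport call over the master-slave links, and the 64-row value is the default mentioned in the detailed description below):

```python
BOUNDARY_ROWS = 64   # pixel rows shared with each neighbour ("generally 64 lines" below)

def exchange_boundaries(node_id, recon, send, num_nodes):
    """After encoding, share reconstructed boundary rows with adjacent nodes so
    that their Slices of the next frame can use them as reference pixels."""
    if node_id > 0:                              # a node above exists: send our top rows
        send(node_id - 1, recon[:BOUNDARY_ROWS])
    if node_id < num_nodes - 1:                  # a node below exists: send our bottom rows
        send(node_id + 1, recon[-BOUNDARY_ROWS:])
```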
Preferably, in step (5), the number of CTU rows of the master-node and slave-node Slices is dynamically adjusted as follows: during encoding, each node records the encoding time of each Slice and, over a fixed period Duration, computes its average Slice encoding time T(i), i = 0, 1, 2 … CJ, where T(0) is the average Slice encoding time of the master node and T(i) is that of the i-th slave node; when the period Duration elapses, all slave nodes transmit their average encoding times to the master node, and the master node adjusts the number of CTU rows of the master and slave Slices accordingly.
Preferably, the specific method by which the master node adjusts the number of CTU rows of the master and slave Slices according to the average coding times is as follows: first obtain the average Slice encoding time of all slave nodes, Tc = (T(1) + T(2) + … + T(CJ))/CJ, and the average encoding time of the master node, Tz = T(0). If Tc and Tz differ by no more than a fixed proportion Thr, i.e. ABS(Tc - Tz)/Tz ≤ Thr, where ABS denotes the absolute value, the number of CTU rows is left unchanged for the current period. Otherwise, if Tc > Tz and ABS(Tc - Tz)/Tz > Thr, Nc is decreased by 1 and Nz = M - CJ × Nc; conversely, if Tc < Tz and ABS(Tc - Tz)/Tz > Thr, Nc is increased by 1 and Nz = M - CJ × Nc. The updated Nz and Nc take effect from the next frame after the period.
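A minimal sketch of this adjustment rule (function and variable names are illustrative; the detailed description below gives Duration ≈ 5 s and Thr ≈ 10% as typical values):

```python
def adjust_ctu_rows(times, m, cj, nz, nc, thr):
    """One adjustment at the end of a Duration period.

    times : [T(0), T(1), ..., T(CJ)], average Slice encoding time of each node."""
    tz = times[0]                       # master-node average
    tc = sum(times[1:]) / cj            # average over the CJ slave nodes
    if abs(tc - tz) / tz <= thr:
        return nz, nc                   # difference within Thr: keep the current split
    nc = nc - 1 if tc > tz else nc + 1  # slaves slower -> shrink their Slices, else grow them
    nz = m - cj * nc                    # the master takes the remaining CTU rows
    return nz, nc                       # takes effect from the next frame after the period
```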
The invention has the following beneficial effects: real-time 8K encoding in HEVC, AVS2 and other formats is achieved; coding performance can be improved and coding delay reduced by adding coding servers; and the coding time of the master and slave coding nodes can be balanced, so that the overall system is optimal and the delay is lowest.
Drawings
Fig. 1 is an overall framework diagram of the present invention.
Detailed Description
The invention is further described with reference to the following figures and detailed description.
In the embodiment shown in Fig. 1, the 8K ultra-high-definition-oriented multi-node low-delay parallel coding method specifically includes the following steps:
(1) A master node and slave nodes connected through high-speed network cards are deployed, with one master node and CJ slave nodes. The complete coding and transcoding system is deployed on the master node, including input signal acquisition, input audio/video decoding, input audio/video preprocessing (such as chroma space conversion, HDR conversion, image quality enhancement, etc.), multiplexing of the encoded audio/video, IP distribution, and so on. Only the encoding kernel and the modules necessary for communicating with the master node are deployed on the slave nodes.
(2) Initially, the number of CTU rows of the Slices encoded by the master and slave nodes is initialized: the number of CTU rows of the master node's Slice is Nz, and the number of CTU rows of each slave node's Slice is Nc. The initialization method is as follows: let the number of pixel rows of a CTU row be CTU_h (CTU_h is configurable but does not change once set for the whole encoding stage; the default is 64, and it can be set to 32 or 16 depending on the encoding configuration), and let the resolution of the video be Width × Height; for 8K video, Width = 7680 and Height = 4320. The number of CTU rows of the entire frame is M = Height/CTU_h, rounded up if not divisible exactly. Initially, the master and slave CTU row counts are set equal: Nc = M/(1+CJ), Nz = M - CJ × Nc (a short worked example of this arithmetic follows after step (5)).
(3) Each time a frame is decoded, the master node obtains a frame to be encoded and must decide how to split the current frame among the nodes for encoding. The method is as follows: in top-to-bottom order, the 1st Slice of the frame to be encoded consists of the first Nz CTU rows, and each subsequent Slice consists of Nc CTU rows, allocated respectively to the 1st, 2nd, … CJ-th slave nodes. The encoder then sends the corresponding data to be encoded to the corresponding slave nodes, using an independent network card and an independent thread for each node, which ensures the lowest transmission delay without mutual interference.
(4) After each slave node receives its Slice data, it starts encoding and completes the Slice encoding. The encoding process is consistent with that of an ordinary encoder, and parallel techniques such as WPP can be used. After the Slice encoding of each node is finished, the compressed Slice data is sent back to the master node, which splices the data into complete frame data. In addition, after each node finishes encoding, it stores the reconstructed data of the current Slice and transmits several upper and lower boundary rows (generally 64 rows) to the adjacent nodes, where this data is used as reference by the adjacent nodes' Slices of the next frame.
(5) The number of CTU rows of each node's Slice is dynamically adjusted. Initially, as described in step (2), the master node's CTU row count is Nz and each slave node's is Nc. During encoding, each node records the encoding time of each Slice and, over a fixed period Duration (typically Duration = 5 seconds), computes its average Slice encoding time T(i), i = 0, 1, 2 … CJ, where T(0) is the average Slice encoding time of the master node and T(i) is that of the i-th slave node. When the period Duration elapses, all slave nodes transmit their average encoding times to the master node, which adjusts the number of CTU rows of the master and slave Slices as follows: first obtain the average Slice encoding time of all slave nodes, Tc = (T(1) + T(2) + … + T(CJ))/CJ, and the average encoding time of the master node, Tz = T(0). If Tc and Tz differ by no more than a fixed proportion Thr (typically Thr = 10%), i.e. ABS(Tc - Tz)/Tz ≤ Thr, where ABS denotes the absolute value, the number of CTU rows is left unchanged for the current period. Otherwise, if Tc > Tz and ABS(Tc - Tz)/Tz > Thr, Nc is decreased by 1 and Nz = M - CJ × Nc; conversely, if Tc < Tz and ABS(Tc - Tz)/Tz > Thr, Nc is increased by 1 and Nz = M - CJ × Nc. The updated Nz and Nc take effect from the next frame after the period. After a period of time, the Slice encoding times of the master and slave nodes stabilize, and the number of CTU rows of each node's Slice also stabilizes.
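As a quick check of the arithmetic in step (2) with the defaults given above (CTU_h = 64, Height = 4320; the choice of CJ = 3 slave nodes here is only an example, not a value from the patent):

```python
# M  = ceil(4320 / 64) = 68   CTU rows in the whole 8K frame
# Nc = 68 // (1 + 3)   = 17   CTU rows per slave-node Slice
# Nz = 68 - 3 * 17     = 17   CTU rows for the master-node Slice
```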
Apart from video encoding itself, the master node completes all other coding and transcoding tasks, such as acquisition, decoding and preprocessing of the input signal, all audio processing including decoding, encoding and multiplexing, and distribution of the encoded bitstream. The slave nodes act only as coprocessors that help with part of the video encoding. The master and slave nodes exchange data through devices such as high-speed network cards and divide the work in a Slice-parallel manner. After the master node's encoding kernel obtains a frame to be encoded, it divides the frame into N groups of CTU rows, each group forming one Slice, where N is the total number of nodes (master and slaves). The encoding kernel then transmits the data of each group of CTU rows to the corresponding node over TCP/IP, and each node encodes one Slice. The master and slave nodes are connected by optical modules at 40 Gbps or even higher rates and communicate using the TCP protocol. After the Slice encoding of all nodes for the current frame is finished, all encoded data is collected by the master node's encoding module, which re-assembles the stream in Slice order, completing the full encoded video frame.
Since the master node does much more processing than the slave nodes, if the pure video-encoding load of the master and slave nodes were the same, the total computation on the master node would be far larger than on a slave node, and the master node's encoding would therefore be slower. By the short-plank (bottleneck) effect, the master node would slow down the encoding of the whole frame. The invention balances the computational load of the master and slave nodes by dynamically adjusting the size of each Slice (i.e. its number of CTU rows), so that there is no significant difference in the encoding time of each node. Because encoding is real-time, the total amount of data to be encoded per unit time is fixed. The slave nodes all occupy the same position, so they undertake the same encoding task: the number of CTU rows of each slave node's Slice is the same, Nc, while the master node's Slice has a different row count, Nz. If Nc is increased, Nz obviously decreases; the slave nodes' encoding workload increases and, with the total computing hardware unchanged, their encoding speed drops while the master node's rises, and vice versa. By counting the encoding time of each Slice of each node in real time, measuring the differences between nodes, and adjusting the Slice sizes to balance the Slice encoding time of each node, the short-plank effect is finally eliminated.
The invention can achieve real-time 8K HEVC/AVS2 encoding on two nodes of 2RU Intel Xeon rack servers, with a coding delay within 200 milliseconds. With 4-node encoding, the 8K coding delay can be within 100 milliseconds.

Claims (4)

1. A multi-node low-delay parallel coding method for 8K ultra-high definition is characterized by comprising the following steps:
(1) deploying a master node and slave nodes interconnected through high-speed network cards, with one master node and CJ slave nodes, wherein the complete coding and transcoding system is deployed on the master node, and only the encoding kernel and the modules necessary for communicating with the master node are deployed on the slave nodes;
(2) initializing the number of CTU rows of the Slices encoded by the master node and the slave nodes, wherein the number of CTU rows of the whole frame is M, the number of CTU rows of the master node's Slice is Nz, and the number of CTU rows of each slave node's Slice is Nc; initially the master and slave nodes are set equal, with Nc = M/(1+CJ) and Nz = M - CJ × Nc;
(3) each time the master node decodes a frame, obtaining a frame to be encoded, splitting the current frame among the nodes for encoding, and then sending the corresponding data to be encoded to the corresponding slave nodes, wherein the sending to each node uses an independent network card and an independent thread, ensuring the lowest transmission delay without mutual interference;
(4) after receiving its Slice data, each slave node starting encoding and completing the Slice encoding, wherein after the Slice encoding of each node is finished, the compressed Slice data is sent back to the master node, and the master node splices the data into complete frame data;
(5) dynamically adjusting the number of CTU rows of the Slices encoded by the master node and the slave nodes, wherein after a period of time the Slice encoding times of the master and slave nodes stabilize and the number of CTU rows of each node's Slice also stabilizes; the dynamic adjustment method is as follows: during encoding, each node records the encoding time of each Slice and, over a fixed period Duration, computes its average Slice encoding time T(i), i = 0, 1, 2 … CJ, where T(0) is the average Slice encoding time of the master node and T(i) is that of the i-th slave node; when the period Duration elapses, all slave nodes transmit their average encoding times to the master node, and the master node adjusts the number of CTU rows of the master and slave Slices accordingly; the specific adjustment method is as follows: first obtain the average Slice encoding time of all slave nodes, Tc = (T(1) + T(2) + … + T(CJ))/CJ, and the average encoding time of the master node, Tz = T(0); if Tc and Tz differ by no more than a fixed proportion Thr, i.e. ABS(Tc - Tz)/Tz ≤ Thr, where ABS denotes the absolute value, the number of CTU rows is left unchanged for the current period; otherwise, if Tc > Tz and ABS(Tc - Tz)/Tz > Thr, Nc is decreased by 1 and Nz = M - CJ × Nc; conversely, if Tc < Tz and ABS(Tc - Tz)/Tz > Thr, Nc is increased by 1 and Nz = M - CJ × Nc; the updated Nz and Nc take effect from the next frame after the period.
2. The 8K ultra-high-definition-oriented multi-node low-delay parallel coding method according to claim 1, wherein in step (2) the initialization method is as follows: let the number of pixel rows of a CTU row be CTU_h and the resolution of the video be Width × Height; the number of CTU rows of the entire frame is M = Height/CTU_h, rounded up if not divisible exactly.
3. The 8K ultra-high-definition-oriented multi-node low-delay parallel coding method according to claim 1 or 2, wherein in step (3) the current frame is split among the nodes as follows: in top-to-bottom order, the 1st Slice of the frame to be encoded consists of the first Nz CTU rows, and each subsequent Slice consists of Nc CTU rows, allocated respectively to the 1st, 2nd, … CJ-th slave nodes.
4. The 8K ultra-high-definition-oriented multi-node low-delay parallel coding method according to claim 1 or 2, wherein in step (4) the encoding process is consistent with that of an ordinary encoder; after each node finishes encoding, the reconstructed data of the current Slice is stored, and the data of several upper and lower boundary rows is transmitted to the adjacent nodes, where it is used as reference data by the adjacent nodes' Slices of the next frame.
CN201911211710.8A 2019-12-02 2019-12-02 Multi-node low-delay parallel coding method for 8K ultra-high definition Active CN110933457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911211710.8A CN110933457B (en) 2019-12-02 2019-12-02 Multi-node low-delay parallel coding method for 8K ultra-high definition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911211710.8A CN110933457B (en) 2019-12-02 2019-12-02 Multi-node low-delay parallel coding method for 8K ultra-high definition

Publications (2)

Publication Number Publication Date
CN110933457A CN110933457A (en) 2020-03-27
CN110933457B (en) 2022-01-11

Family

ID=69848264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911211710.8A Active CN110933457B (en) 2019-12-02 2019-12-02 Multi-node low-delay parallel coding method for 8K ultra-high definition

Country Status (1)

Country Link
CN (1) CN110933457B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112788024B (en) * 2020-12-31 2023-04-07 上海网达软件股份有限公司 Method and system for real-time coding of 8K ultra-high-definition video
CN112911346A (en) * 2021-01-27 2021-06-04 北京淳中科技股份有限公司 Video source synchronization method and device
CN114374848B (en) * 2021-12-20 2024-03-19 杭州当虹科技股份有限公司 Video coding optimization method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101267564B (en) * 2008-04-16 2011-06-15 中国科学院计算技术研究所 A multi-processor video coding chip device and method
US9781477B2 (en) * 2010-05-05 2017-10-03 Cavium, Inc. System and method for low-latency multimedia streaming
CN102868888B (en) * 2012-04-27 2014-11-26 北京航空航天大学 Dynamic slice control method oriented to parallel video encoding

Also Published As

Publication number Publication date
CN110933457A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
CN110933457B (en) Multi-node low-delay parallel coding method for 8K ultra-high definition
US10200698B2 (en) Determining chroma quantization parameters for video coding
KR102063385B1 (en) Content adaptive entropy coding for next generation video
US10555002B2 (en) Long term reference picture coding
US20110305273A1 (en) Parallel multiple bitrate video encoding
US20070086528A1 (en) Video encoder with multiple processors
US20050243922A1 (en) High definition scalable array encoding system and method
WO1995035628A1 (en) Video compression
CN101466038A (en) Method for encoding stereo video
US11343501B2 (en) Video transcoding method and device, and storage medium
WO2003036984A1 (en) Spatial scalable compression
CN110650345B (en) Master-slave multi-node coding method for 8K ultra-high definition
CN106454271B (en) Processing system for video and method
US20180020222A1 (en) Apparatus and Method for Low Latency Video Encoding
CN101094406A (en) Method and device for transferring video data stream
CN102055970A (en) Multi-standard video decoding system
Lee et al. Reduced complexity single core based HEVC video codec processor for mobile 4K-UHD applications
US10841585B2 (en) Image processing apparatus and method
WO2016006746A1 (en) Device for super-resolution image processing
US20070297505A1 (en) Method and device for video encoding and decoding
CN104780377A (en) Parallel high efficiency video coding (HEVC) system and method based on distributed computer system
Nakamura et al. Low delay 4K 120fps HEVC decoder with parallel processing architecture
CN114205595A (en) Low-delay transmission method and system based on AVS3 coding and decoding
KR20100054586A (en) System and method for multiplexing stereoscopic high-definition video through gpu acceleration and transporting the video with light-weight compression and storage media having program source thereof
Sugito et al. UHD-2/8K 120-Hz Realtime Video Codec

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant