WO2020209437A1 - Apparatus and method for transcoding segmented images in real time - Google Patents

Apparatus and method for transcoding segmented images in real time

Info

Publication number
WO2020209437A1
Authority
WO
WIPO (PCT)
Prior art keywords
video stream
gpu
resolution
tile
task
Prior art date
Application number
PCT/KR2019/005776
Other languages
English (en)
Korean (ko)
Inventor
장준환
박우출
김용화
양진욱
윤상필
김현욱
조은경
최민수
이준석
양재영
Original Assignee
전자부품연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 전자부품연구원
Publication of WO2020209437A1



Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • The present invention relates to transcoding technology and, more particularly, to a real-time segmented image transcoding apparatus and method for transcoding tiles at high quality in real time using a plurality of GPUs (Graphics Processing Units).
  • 360 VR images are stereoscopic and require a higher resolution (4096×4096 or higher) than a general 4K image (3840×2160).
  • A 360 VR image also requires more bandwidth than a flat 2D image because it streams content covering the full 360°.
  • 360 VR video therefore not only demands a great deal of bandwidth but, by its nature, is never watched in its entirety at once: viewers see only a part of the video at any given moment, so much of the transmitted data goes unused. Various studies have addressed this problem, but a technology for effectively streaming 360 VR images without wasting bandwidth has not yet been developed.
  • An object of the present invention is to provide a real-time segmented image transcoding apparatus and method for spatially segmenting an original video stream and transcoding the segmented tiles in real time.
  • The real-time segmented image transcoding apparatus of the present invention includes an input unit that receives an original video stream, and a control unit that generates tiles by spatially dividing the input original video stream, encodes the frames of the generated tiles (tiled frames) in a parallel structure using a plurality of GPUs (Graphics Processing Units), and rearranges the encoded frames to generate a first video stream having a first resolution, a second video stream having a second resolution lower than the first resolution, and a third video stream having a third resolution lower than the second resolution.
  • The control unit may include an image space dividing unit that generates tiles by dividing the original video stream into a preset number of tiles; a GPU task management unit that calculates the amount of work associated with the frames of the generated tiles and allocates tasks to the plurality of GPUs according to the calculated amount of work; a GPU unit that contains the plurality of GPUs in a parallel structure, encodes the video stream for the task assigned to each GPU, and synchronizes the encoded video streams; and a video post-processing unit configured to rearrange the synchronized video streams to generate the first video stream, the second video stream, and the third video stream.
  • The image space division unit divides the tiles so that the numbers of horizontal and vertical pixels of each tile are multiples of 128.
  • For the last tile in each row and column, the image space division unit does not limit the number of pixels.
  • The GPU task manager may allocate tasks according to the average task completion time of each GPU and the size of its assigned task queue.
  • The GPU task manager may predict the task completion time of each GPU based on the average task time for each task type on that GPU and allocate tasks accordingly.
  • For each task, the frames of the tile are sequentially copied to the GPU in units of the GOP (Group of Pictures) size.
  • The GPU task manager may additionally transmit frame number information when delivering task-related information to each GPU.
  • The video post-processing unit may include multiplexers corresponding respectively to the first video stream, the second video stream, and the third video stream.
  • The real-time tile transcoding method includes: receiving, by a segmented image transcoding apparatus, an original video stream; generating tiles by spatially dividing the input original video stream; encoding the frames of the generated tiles in a parallel structure using a plurality of GPUs; and rearranging the encoded frames to generate a first video stream having a first resolution, a second video stream having a second resolution lower than the first resolution, and a third video stream having a third resolution lower than the second resolution.
  • The apparatus and method for real-time segmented image transcoding of the present invention can spatially divide an original video stream and transcode the divided tiles in a parallel structure across a plurality of GPUs.
  • By performing these operations quickly, a high-quality video stream can be provided in real time.
  • FIG. 1 is a block diagram illustrating a split image transcoding apparatus according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram for explaining an entire process of driving a divided image transcoding apparatus according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating a task management process according to an embodiment of the present invention.
  • FIG. 4 is a diagram for explaining task assignment according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating a post-processing process according to an embodiment of the present invention.
  • FIG. 6 is a flowchart illustrating a split image transcoding method according to an embodiment of the present invention.
  • FIG. 1 is a block diagram illustrating a split image transcoding apparatus according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram illustrating the entire process of driving the divided image transcoding apparatus according to an embodiment of the present invention.
  • Referring to FIGS. 1 and 2, the divided image transcoding apparatus 100 spatially divides an original video stream and transcodes the divided tiles in real time.
  • The split image transcoding apparatus 100 includes an input unit 10 and a control unit 30.
  • The input unit 10 receives an original video stream.
  • The input unit 10 may receive the original video stream in various ways, such as through a file, a network, or an application programming interface (API).
  • The original video stream may be a 4K stereo, high-definition video stream of 4096×4096 px or more, and formats such as H.264, HEVC (High Efficiency Video Coding), YUV420 raw frames, and RGB raw frames may be supported.
  • The control unit 30 generates tiles by spatially dividing the original video stream input from the input unit 10.
  • The controller 30 encodes the generated tiled frames in a parallel structure using a plurality of GPUs.
  • The controller 30 then rearranges the encoded frames to produce a first video stream having a first resolution, a second video stream having a second resolution lower than the first resolution, and a third video stream having a third resolution lower than the second resolution.
  • Here, the first resolution refers to a high resolution of high quality (HQ), the second resolution refers to a normal resolution of middle quality (MQ), and the third resolution refers to a low resolution of low quality (LQ).
  • The control unit 30 includes an image space division unit 31, a GPU task management unit 33, a GPU unit 35, and a video post-processing unit 37.
  • The image space division unit 31 generates tiles by dividing the original video stream into a preset number of tiles.
  • The image space division unit 31 divides the original video stream into tiles so that the numbers of rows and columns are even.
  • For example, the image space division unit 31 may divide the stream into grids such as 6×6, 6×8, 8×8, 8×12, or 12×12 tiles (width × height).
  • The image space division unit 31 divides each tile so that the numbers of vertical and horizontal pixels are multiples of 128.
  • For example, the image space division unit 31 may use tile sizes such as 256×256, 256×512, or 512×512 px (width × height).
  • However, the image space division unit 31 may leave the number of pixels unrestricted for the last horizontal tile at the bottom and the last vertical tile at the right. This allows the image to be divided flexibly into a plurality of tiles. Note that the image space division unit 31 performs only a logical division of the original video stream and does not move any data. A sketch of such a division is shown below.
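  • As a non-authoritative illustration of this logical division (not part of the original disclosure), the following Python sketch computes such a tile grid assuming a fixed tile size whose width and height are multiples of 128, rather than a preset tile count; the right-most column and bottom row simply keep whatever pixels remain. The function name `plan_tiles` and the dictionary layout are assumptions made for this example.

```python
def plan_tiles(frame_w, frame_h, tile_w=512, tile_h=512):
    """Logically partition a frame into a grid of tile_w x tile_h tiles
    (both multiples of 128). The right-most column and bottom row keep
    whatever pixels remain, so their size is unconstrained. Only tile
    coordinates are produced; no pixel data is copied or moved."""
    assert tile_w % 128 == 0 and tile_h % 128 == 0
    tiles, tile_id = [], 0
    for y in range(0, frame_h, tile_h):
        for x in range(0, frame_w, tile_w):
            tiles.append({
                "id": tile_id,
                "x": x, "y": y,
                "w": min(tile_w, frame_w - x),   # last column: remainder
                "h": min(tile_h, frame_h - y),   # last row: remainder
            })
            tile_id += 1
    return tiles

# e.g. an 8x8 grid over a 4096x4096 stereo source: 64 tiles of 512x512 px
grid = plan_tiles(4096, 4096, 512, 512)
```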
  • The GPU task management unit 33 calculates the amount of work associated with the frames of the tiles generated by the image space division unit 31.
  • The GPU task management unit 33 allocates tasks to the plurality of GPUs according to the calculated amount of work.
  • Here, a task means an encoding operation performed on a GPU.
  • The GPU task management unit 33 may allocate tasks according to the average task completion time of each GPU and the size of its assigned task queue.
  • The GPU task management unit 33 may predict the task completion time of each GPU based on the average task time for each task type on that GPU and allocate tasks accordingly.
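  • As a hedged sketch of this prediction (the description does not give an explicit formula), the predicted completion time for GPU g and task type t could be expressed as predicted_completion(g, t) = queued_work(g) + avg_task_time(g, t), where queued_work(g) is the work already waiting in the queue of g and avg_task_time(g, t) is the measured average time per task of type t (HQ, MQ, or LQ) on g; a new task would then be assigned to the GPU that minimizes this value.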
  • The GPU unit 35 includes a plurality of GPUs.
  • For example, the GPU unit 35 may include a first GPU, a second GPU, and so on up to an n-th GPU.
  • The GPU unit 35 may consist of GPUs of the same specification to ease compatibility between GPUs, but it is not limited thereto and may include GPUs of different specifications depending on the execution environment.
  • The GPU unit 35 arranges the plurality of GPUs in a parallel structure, and each GPU encodes the video stream for the task allocated to it by the GPU task management unit 33.
  • The video post-processing unit 37 synchronizes the video streams encoded by the GPU unit 35 and rearranges the synchronized video streams.
  • Through this rearrangement, the video post-processing unit 37 generates the first video stream, the second video stream, and the third video stream.
  • The video post-processing unit 37 may include separate multiplexers corresponding to the first video stream, the second video stream, and the third video stream.
  • FIG. 3 is a diagram illustrating a task management process according to an embodiment of the present invention, and FIG. 4 is a diagram illustrating task assignment according to an embodiment of the present invention.
  • FIG. 3(a) shows the existing job buffer status of each GPU, FIG. 3(b) shows a new job, and FIG. 3(c) shows the buffer status of each GPU after the new job has been allocated.
  • The GPU task management unit 33 includes a frame buffer 51, a work queue loader 53, and a load balancer 55.
  • The frame buffer 51 stores the tile frames produced by the logical division in the image space division unit 31. The frame buffer 51 temporarily holds the tile frames before transferring them to the work queue loader 53.
  • The work queue loader 53 calculates the amount of work associated with the tile frames held in the frame buffer 51 and allocates tasks to the GPU unit 35 according to the calculated amount of work.
  • The work queue loader 53 may predict the task completion time of each GPU based on the average task time for each task type (HQ/MQ/LQ) on that GPU and allocate tasks accordingly.
  • The work queue loader 53 may generate two HQ/MQ work commands for the frames of each tile and an LQ work command for all frames.
  • The work queue loader 53 tracks the average task completion time of each GPU and the size of its allocated work queue, and uses this information to allocate tasks to each GPU. For example, when the frames of a new tile arrive, the work queue loader 53 updates the estimated average task time for each GPU using update information received from the load balancer 55, sorts the GPUs in ascending order of expected completion time, and allocates the frames of the newly input tile to the GPU expected to finish first. If tiles remain after this allocation, the work queue loader 53 repeats the process for the remaining tiles.
  • The work queue loader 53 not only assigns tasks to the GPU unit 35 but may also copy the tile frames to the corresponding GPU. In doing so, the work queue loader 53 may copy the tile frames to the GPU sequentially, in batches of the GOP (Group of Pictures) size for each task, as sketched below.
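  • The following Python code is a minimal, non-authoritative sketch of this scheduling behaviour, not the patented implementation: a tile's frames are assigned to the GPU with the smallest predicted completion time (queued work plus average task time for the task type) and copied in GOP-sized batches. The class name `GpuScheduler`, the exponential-average update, and the placeholder `copy_to_gpu` are assumptions made for illustration.

```python
def copy_to_gpu(gpu, frames):
    """Stand-in for the actual transfer of tile frames into GPU memory."""
    pass


class GpuScheduler:
    """Each GPU has an estimated completion time built from its queued work
    and its average job time per job type (HQ / MQ / LQ). New tile jobs go
    to the GPU expected to finish earliest."""

    def __init__(self, gpu_ids, gop_size=30):
        self.gop_size = gop_size
        self.backlog = {g: 0.0 for g in gpu_ids}   # queued seconds of work per GPU
        self.avg_time = {g: {"HQ": 0.10, "MQ": 0.05, "LQ": 0.02} for g in gpu_ids}

    def update_avg(self, gpu, job_type, measured_time):
        """Fold feedback from the load balancer into the per-type average."""
        old = self.avg_time[gpu][job_type]
        self.avg_time[gpu][job_type] = 0.9 * old + 0.1 * measured_time

    def assign(self, tile_frames, job_type):
        """Pick the GPU with the smallest predicted completion time and hand
        it the tile's frames in GOP-sized batches."""
        gpu = min(self.backlog,
                  key=lambda g: self.backlog[g] + self.avg_time[g][job_type])
        self.backlog[gpu] += self.avg_time[gpu][job_type] * len(tile_frames)
        for i in range(0, len(tile_frames), self.gop_size):
            copy_to_gpu(gpu, tile_frames[i:i + self.gop_size])
        return gpu
```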
  • The work queue loader 53 may additionally include frame number information when transmitting task-related information to a GPU.
  • Here, the frame number information serves as time information.
  • The load balancer 55 receives the current work status of each GPU from the GPU unit 35.
  • Using the received information, the load balancer 55 calculates the average task time for each task type on each GPU.
  • The load balancer 55 transmits the calculated average task times to the work queue loader 53 so that the work queue loader 53 can use this information when allocating tasks.
  • FIG. 5 is a diagram illustrating a post-processing process according to an embodiment of the present invention.
  • Referring to FIG. 5, the video post-processing unit 37 includes a video synchronization unit 71 and a multiplexer unit 73.
  • The video synchronization unit 71 synchronizes the video streams encoded by the GPU unit 35.
  • Because of load balancing, the encoded video streams may not be produced in order.
  • The video synchronization unit 71 therefore first receives each encoded result and stores it in a buffer keyed by tile ID. When all tiles for the same frame time have been encoded, the video synchronization unit 71 forwards the corresponding frame to the multiplexer unit 73, as sketched below.
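  • The following Python sketch, offered only as a non-authoritative illustration, shows one way such a synchronizer could buffer out-of-order encoded tiles by frame number and tile ID and release a frame once every tile for that frame time is present; the class name `TileSynchronizer` and its callback interface are assumptions for this example.

```python
from collections import defaultdict


class TileSynchronizer:
    """Encoded tiles arrive out of order, are buffered by
    (frame_number, tile_id), and a complete frame is forwarded to the
    multiplexer only when every tile for that frame time is present."""

    def __init__(self, num_tiles, on_frame_ready):
        self.num_tiles = num_tiles
        self.on_frame_ready = on_frame_ready      # e.g. the multiplexer input
        self.pending = defaultdict(dict)          # frame_no -> {tile_id: bitstream}

    def push(self, frame_no, tile_id, encoded_tile):
        self.pending[frame_no][tile_id] = encoded_tile
        if len(self.pending[frame_no]) == self.num_tiles:
            # All tiles for this frame time are present: hand them over
            # in tile-ID order and drop the buffer entry.
            tiles = self.pending.pop(frame_no)
            self.on_frame_ready(frame_no, [tiles[i] for i in sorted(tiles)])
```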
  • The multiplexer unit 73 rearranges the frames received from the video synchronization unit 71 into a single media container.
  • The media container may be in MP4 or TS form.
  • In the MP4 case, frames are collected for a preset unit time (e.g. 3 or 5 seconds) and then delivered through a file, a network, or an API; in the TS case, each frame is delivered through a file, a network, or an API as soon as the work for that frame time is completed. A sketch of these two delivery modes follows.
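  • As a hedged sketch of these two delivery modes (the class name `SegmentedDelivery` and the `emit` callback are assumptions, not part of the disclosure), the following Python code buffers frames into fixed-length segments in MP4 mode and forwards each completed frame immediately in TS mode.

```python
class SegmentedDelivery:
    """mode='mp4': buffer frames for `segment_seconds`, then emit one segment.
    mode='ts' : emit each completed frame immediately.
    `emit` stands in for the file / network / API hand-off."""

    def __init__(self, mode="mp4", segment_seconds=3.0, fps=30, emit=print):
        self.mode = mode
        self.frames_per_segment = int(segment_seconds * fps)
        self.emit = emit
        self.buffer = []

    def on_frame(self, frame_no, mux_frame):
        if self.mode == "ts":
            self.emit(("ts", frame_no, mux_frame))        # deliver right away
            return
        self.buffer.append(mux_frame)
        if len(self.buffer) >= self.frames_per_segment:
            self.emit(("mp4-segment", self.buffer))       # deliver a whole segment
            self.buffer = []
```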
  • The multiplexer unit 73 generates a first video stream having a first resolution, a second video stream having a second resolution lower than the first resolution, and a third video stream having a third resolution lower than the second resolution.
  • To this end, the multiplexer unit 73 includes multiplexers 91, 93, and 95 corresponding respectively to the first video stream, the second video stream, and the third video stream.
  • Here, the first resolution may mean a high-quality high resolution, the second resolution a medium-quality normal resolution, and the third resolution a low-quality low resolution.
  • FIG. 6 is a flowchart illustrating a split image transcoding method according to an embodiment of the present invention.
  • In the split image transcoding method, an original video stream is spatially divided and the divided tiles are transcoded in a parallel structure across a plurality of GPUs.
  • The split image transcoding method allocates tasks to the plurality of GPUs according to the average task completion time of each GPU and the size of its allocated task queue, and by operating quickly in this way it can provide a high-quality video stream in real time.
  • The split image transcoding apparatus 100 receives an original video stream.
  • The split image transcoding apparatus 100 may receive the original video stream in various ways, such as through a file, a network, or an application programming interface (API).
  • The original video stream may be a 4K stereo, high-definition video stream of 4096×4096 px or more, and formats such as H.264, HEVC (High Efficiency Video Coding), YUV420 raw frames, and RGB raw frames may be supported.
  • The divided image transcoding apparatus 100 generates tiles by spatially dividing the input original video stream.
  • The split image transcoding apparatus 100 generates tiles by dividing the original video stream into a preset number of tiles.
  • The split image transcoding apparatus 100 may divide the original video stream into tiles so that the numbers of horizontal and vertical tiles are even and the numbers of vertical and horizontal pixels of each tile are multiples of 128. In this case, the split image transcoding apparatus 100 may leave the number of pixels unrestricted for the last horizontal tile at the bottom and the last vertical tile at the right.
  • In step S150, the split image transcoding apparatus 100 encodes the frames of the generated tiles in a parallel structure using a plurality of GPUs.
  • The split image transcoding apparatus 100 calculates the amount of work associated with the frames of the generated tiles and performs encoding by allocating tasks to the plurality of GPUs according to the calculated amount of work. In this way, the split image transcoding apparatus 100 can perform encoding in a parallel structure, optimized for the working state of each GPU.
  • The split image transcoding apparatus 100 then rearranges the encoded frames.
  • The split image transcoding apparatus 100 may synchronize the encoded video streams and rearrange the synchronized video streams.
  • Through this rearrangement, the split image transcoding apparatus 100 generates a first video stream having a first resolution, a second video stream having a second resolution lower than the first resolution, and a third video stream having a third resolution lower than the second resolution.
  • Here, the first resolution may mean a high-quality high resolution, the second resolution a medium-quality normal resolution, and the third resolution a low-quality low resolution.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an apparatus and method for transcoding segmented images in real time. An apparatus for transcoding segmented images according to the present invention comprises: an input unit for receiving an input of an original video stream; and a control unit for generating tiles by spatially segmenting the input original video stream, encoding the generated tiled frames in a parallel structure using a plurality of graphics processing units (GPUs), and generating, by rearranging the encoded frames, a first video stream having a first resolution, a second video stream having a second resolution lower than the first resolution, and a third video stream having a third resolution lower than the second resolution.
PCT/KR2019/005776 2019-04-09 2019-05-14 Appareil et procédé permettant de transcoder des images segmentées en temps réel WO2020209437A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190041176A KR102316495B1 (ko) 2019-04-09 2019-04-09 실시간 분할 영상 트랜스코딩 장치 및 방법
KR10-2019-0041176 2019-04-09

Publications (1)

Publication Number Publication Date
WO2020209437A1 true WO2020209437A1 (fr) 2020-10-15

Family

Family ID: 72750743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/005776 WO2020209437A1 (fr) 2019-04-09 2019-05-14 Appareil et procédé permettant de transcoder des images segmentées en temps réel

Country Status (2)

Country Link
KR (1) KR102316495B1 (fr)
WO (1) WO2020209437A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102577210B1 (ko) * 2020-11-12 2023-09-18 주식회사 네트워크디파인즈 적응형 미디어 편집 방법 및 장치

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120068285A (ko) * 2010-12-17 2012-06-27 주식회사 픽스트리 동영상 파일의 분산 트랜스코딩 방법
US20160219286A1 (en) * 2010-07-19 2016-07-28 Google Inc. Parallel video transcoding
US20170094290A1 (en) * 2015-09-24 2017-03-30 Tfi Digital Media Limited Method for distributed video transcoding
KR20180035087A (ko) * 2016-09-28 2018-04-05 가천대학교 산학협력단 멀티코어 시스템을 이용한 병렬 비디오 처리
KR20180067781A (ko) * 2016-12-12 2018-06-21 이에이트 주식회사 분할영상의 병렬 처리를 이용한 고화질 영상의 초고해상도 업스케일링 방법

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8966556B2 (en) * 2009-03-06 2015-02-24 Alcatel Lucent Real-time multi-media streaming bandwidth management
KR101923619B1 (ko) 2011-12-14 2018-11-30 한국전자통신연구원 멀티 gpu를 이용한 실시간 3차원 외형 복원 모델 생성 방법 및 그 장치
KR20150033194A (ko) * 2013-09-23 2015-04-01 삼성전자주식회사 병렬 부호화/복호화 방법 및 장치
KR102111436B1 (ko) * 2014-01-06 2020-05-18 에스케이 텔레콤주식회사 다중 영상의 단일 비트 스트림 생성방법 및 생성장치

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160219286A1 (en) * 2010-07-19 2016-07-28 Google Inc. Parallel video transcoding
KR20120068285A (ko) * 2010-12-17 2012-06-27 주식회사 픽스트리 동영상 파일의 분산 트랜스코딩 방법
US20170094290A1 (en) * 2015-09-24 2017-03-30 Tfi Digital Media Limited Method for distributed video transcoding
KR20180035087A (ko) * 2016-09-28 2018-04-05 가천대학교 산학협력단 멀티코어 시스템을 이용한 병렬 비디오 처리
KR20180067781A (ko) * 2016-12-12 2018-06-21 이에이트 주식회사 분할영상의 병렬 처리를 이용한 고화질 영상의 초고해상도 업스케일링 방법

Also Published As

Publication number Publication date
KR20200119435A (ko) 2020-10-20
KR102316495B1 (ko) 2021-10-25

Similar Documents

Publication Publication Date Title
CN104216671B (zh) 一种在多套拼接显示屏上实现同步协同显示的方法
US9741316B2 (en) Method and system for displaying pixels on display devices
KR101994599B1 (ko) 전송 동기화 이벤트에 따라 압축된 픽처의 전송을 제어하는 방법 및 장치
WO2020189817A1 (fr) Procédé et système de décodage distribué d'image divisée pour diffusion en continu à base de tuiles
WO2010056013A2 (fr) Appareil de codage/de décodage de film et procédé de traitement de film divisé en unités de tranches
US20110085019A1 (en) Multipoint control unit cascaded system, communications method and device
JPH03147491A (ja) 音声・映像通信装置およびそのインターフエース装置
EP1343315A1 (fr) Mur d'affichage vidéo
JP2008538484A5 (fr)
KR101668858B1 (ko) 다채널 비디오 스트림 전송 방법, 그리고 이를 이용한 관제 시스템
WO2011010857A2 (fr) Procédé et appareil de codage et décodage de canaux de couleurs dans un système de codage et décodage vidéo hiérarchiques
CA3098941C (fr) Systemes et procedes pour reduire la bande passante dans la transmission de signal video
US6614440B1 (en) System and method for load balancing in a multi-channel graphics system
WO2020209437A1 (fr) Appareil et procédé permettant de transcoder des images segmentées en temps réel
KR20120079255A (ko) 비디오데이터를 전송하는 대용량 비디오 매트릭스 장치 및 방법
CA2661768A1 (fr) Systeme video a spectateurs multiples et mise a l'echelle repartie, et methodes connexes
GB2526618A (en) Method for generating a screenshot of an image to be displayed by a multi-display system
WO2014205690A1 (fr) Procédé de codage vidéo par compression et codeur
KR20140050522A (ko) 멀티비전 가상화 시스템 및 가상화 서비스 제공 방법
WO2012030096A2 (fr) Procédé et appareil permettant de générer un paquet de commande
CN1744711A (zh) 数据接收器
Marrinan et al. Pxstream: Remote visualization for distributed rendering frameworks
WO2020085569A1 (fr) Dispositif et procédé de codage par division en temps réel
WO2010035943A2 (fr) Appareil et procédé de commande de tampon utilisant la durée de diffusion comme transmission d'image
CN112040164B (zh) 一种数据处理方法、装置、集成芯片及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19924365

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19924365

Country of ref document: EP

Kind code of ref document: A1