WO2020251124A1 - Distributed HEVC decoding method, apparatus and system based on a machine learning model using a block chain - Google Patents
- Publication number
- WO2020251124A1 (PCT/KR2019/015963)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sub
- bit stream
- decoding
- decoded
- node
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/149—Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Definitions
- the present application relates to a distributed HEVC decoding method, apparatus, and system using a block chain based on a machine learning model.
- HEVC has improved encoding/decoding efficiency
- the complexity of encoding/decoding is also greatly increased.
- a high-performance encoder or decoder and substantial storage space are required for encoding/decoding an image.
- a method of processing independent decoding of each divided bit stream by dividing one bit stream to be decoded may be adopted.
- high stability is required for parallel decoding of the split bit streams, and even if each split bit stream is decoded independently, the decoding work has high data dependency, so the decoding result of each split bit stream must be shared.
- the decoding result of each of the divided bit streams can be shared through a common memory device, but in the case of heterogeneous distributed decoders, sharing the decoding result of each divided bit stream has the limitation of requiring additional data transmission.
- the present application is intended to solve the above-described problems of the prior art by providing a distributed HEVC decoding method, apparatus, and system using a block chain based on a machine learning model, in which the decoding performance of each sub-node is predicted based on a machine learning model, the bit stream to be decoded is divided into a plurality of sub-bit streams based on the predicted decoding performance and allocated to the sub-nodes for parallel decoding, and the decoding result of each sub-node is shared based on the block chain, thereby solving the stability and data-dependency problems.
- the distributed HEVC decoding method using a block chain based on a machine learning model may include: receiving a bit stream to be decoded; obtaining image information to be decoded from the received bit stream; acquiring device information of sub-nodes connected to a master node; determining the sub-bit stream to be allocated to each sub-node through a machine learning model trained in advance, based on the image information and the device information; and allocating the determined sub-bit streams to the respective sub-nodes through a block chain.
- the distributed HEVC decoding method may further include receiving, for each sub-node, the decoding result of the sub-bit stream allocated to it, and decoding the bit stream to be decoded by synthesizing the received decoding results of the sub-bit streams.
- the distributed HEVC decoding method using a block chain based on a machine learning model may include reproducing the decoding target based on a decoding result of the decoding target bit stream.
- the distributed HEVC decoding method may further include processing boundary values by applying a deblocking filter to the decoding result of the sub-bit stream received from each of the sub-nodes.
- information of a sub-bit stream that can be decoded for each of the sub-nodes may be determined by using device information and image information to be decoded as inputs of the previously learned machine learning model.
- the device information may include at least one of floating-point operations per second (FLOPS), input/output operations per second (IOPS), memory information, single-instruction multiple-data (SIMD) processing capability, or thread count.
- the image information to be decoded may include at least one of CTU depth information, resolution information, and quantization parameter.
- the pre-trained machine learning model may be linked with at least one of a convolutional neural network (CNN), a deep neural network (DNN), and a recurrent neural network (RNN).
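As a rough illustration of how these features could feed such a model, the sketch below builds one input vector from hypothetical device and image attributes and scores it with a stand-in linear predictor. All field names, the weight vector, and the linear form are illustrative assumptions, not the patent's actual model.

```python
# Hypothetical sketch: combine device information (FLOPS, IOPS, memory,
# SIMD width, thread count) with image information (CTU depth, resolution,
# quantization parameter) into one feature vector, then predict how large
# a sub-bit stream a sub-node could decode. The linear "model" is a toy
# stand-in for the pre-trained network described in the text.

def build_features(device, image):
    """Concatenate device and image attributes into one feature vector."""
    return [
        device["flops"], device["iops"], device["memory_mb"],
        device["simd_width"], device["threads"],
        image["ctu_depth"], image["width"], image["height"], image["qp"],
    ]

def predict_decodable_bytes(features, weights, bias=0.0):
    """Toy stand-in for a pre-trained model: a linear score."""
    return bias + sum(w * x for w, x in zip(weights, features))

device = {"flops": 50e9, "iops": 1e5, "memory_mb": 4096,
          "simd_width": 128, "threads": 8}
image = {"ctu_depth": 3, "width": 3840, "height": 2160, "qp": 32}

feats = build_features(device, image)
```

In practice the predictor would be a trained CNN/DNN/RNN; the point here is only the shape of the input: one vector per sub-node, mixing device and image features.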
- the master node and the sub-node may share an encryption key for using the block chain.
- the distributed HEVC decoding apparatus using a block chain based on a machine learning model may include: an image acquisition unit that receives a bit stream to be decoded and obtains image information to be decoded from the received bit stream; a sub-bit stream determining unit that determines the sub-bit stream to be allocated to each sub-node through a pre-trained machine learning model, based on the image information and the device information of the connected sub-nodes; a sub-bit stream allocator that allocates the determined sub-bit streams to the respective sub-nodes through a block chain; a collection unit that receives, for each sub-node, the decoding result of the allocated sub-bit stream; and a main processor that decodes the bit stream to be decoded by synthesizing the received decoding results.
- the distributed HEVC decoding apparatus using a block chain based on a machine learning model may include an image reproducing unit that reproduces the decoding target based on a decoding result of the main processing unit.
- the main processor may process the boundary value by applying a deblocking filter to the decoding result of the sub-bit stream received from each of the sub-nodes.
- the sub-bit stream determining unit may determine information of a sub-bit stream that can be decoded for each of the sub-nodes by using the device information and the image information to be decoded as inputs of the pre-learned machine learning model.
- the distributed HEVC decoding system using a block chain based on a machine learning model may include: a master node that receives a bit stream to be decoded, obtains image information to be decoded from the received bit stream, determines sub-bit streams through a pre-trained machine learning model based on the image information and the device information of a plurality of connected sub-nodes, allocates the determined sub-bit streams to the respective sub-nodes through a block chain, and decodes the bit stream to be decoded by receiving the decoding result for each sub-bit stream; and a plurality of sub-nodes that decode the sub-bit streams allocated by the master node and return the decoding results to the master node.
- the decoding performance of the sub-node is predicted based on the machine learning model, and the decoding target bit stream is divided into a plurality of sub-bit streams and allocated to each of the sub-nodes based on the predicted decoding performance.
- the problem of stability and data dependence can be solved by decoding in parallel and sharing the decoding result of each sub-node based on the block chain.
- since the sub-bit streams obtained by dividing the bit stream to be decoded are allocated according to the performance of each sub-node, the decoding efficiency of the system can be improved.
- the effect obtainable in the present application is not limited to the effects as described above, and other effects may exist.
- FIG. 1 is a schematic configuration diagram of a distributed HEVC decoding system using a block chain based on a machine learning model according to an embodiment of the present application.
- FIG. 2 is a diagram for describing device information and image information to be decoded according to an embodiment of the present application.
- FIG. 3 is a schematic configuration diagram of a distributed HEVC decoding apparatus using a block chain based on a machine learning model according to an embodiment of the present application.
- FIG. 4 is a flowchart illustrating an operation of a distributed HEVC decoding method using a block chain based on a machine learning model according to an embodiment of the present application.
- FIG. 1 is a schematic configuration diagram of a distributed HEVC decoding system using a block chain based on a machine learning model according to an embodiment of the present application.
- a distributed HEVC decoding system 10 using a block chain based on a machine learning model according to an embodiment of the present application may include a distributed HEVC decoding apparatus 100 (hereinafter referred to as 'master node 100'), a plurality of sub-nodes 200, and a block chain network 300 (hereinafter referred to as 'block chain 300').
- the master node 100 and the sub node 200 may be subjects capable of performing tasks such as operation processing, signal processing, computing, and image processing (image encoding or image decoding).
- the master node 100 and the sub-node 200 may each be any one of a central processing unit (CPU), a graphic processing unit (GPU), a digital signal processor (DSP), another hardware processing unit, a processor, or a thread, or may be a user terminal (20b, referenced in FIG. 2) equipped with such a processing unit.
- the distributed HEVC decoding apparatus 100 using a block chain based on the machine learning model in the present application may be understood as the same configuration as the master node.
- when the CPU or the like of the user terminal 20b is formed of a plurality of cores (for example, a user terminal 20b equipped with a multi-core processor), each of the plurality of cores of that single user terminal 20b may also serve as a sub-node 200 in the present application.
- the user terminal 20b includes a smartphone, a smart pad, a tablet PC, and the like, as well as wireless communication devices based on a personal communication system (PCS), global system for mobile communication (GSM), personal digital cellular (PDC), personal handyphone system (PHS), international mobile telecommunication (IMT), code division multiple access (CDMA), wide-code division multiple access (W-CDMA), or wireless broadband internet (WiBro).
- the block chain 300 is a decentralized data storage technology that stores data in blocks connected in a chain and simultaneously replicates and stores the data on multiple nodes (computers, etc.).
- the biggest feature of the block chain 300 is that no central server exists, and all nodes can share the result of each node's processing.
- since the master node 100 and the sub-nodes 200 exchange data through the block chain 300, even if the bit stream to be decoded is divided into independent units and each sub-node 200 performs decoding independently, the data-dependency problem of the decoding operation is solved by allowing every decoding result to be shared by all of the sub-nodes 200 and the master node 100.
- data dependence is high because the current frame or region is encoded by referring to a previously encoded/decoded frame or to the encoding/decoding result of the surrounding region.
- the distributed HEVC decoding system 10 using a block chain based on a machine learning model can perform inter-prediction (inter-screen prediction) and intra-prediction (in-screen prediction) on the image to be decoded by using the data synchronization function of the block chain 300.
- the block chain 300 herein may perform a function like shared memory in a distributed processing system including the heterogeneous sub-nodes 200 and the master node 100.
- the master node 100 and the sub node 200 may share an encryption key for using the block chain 300.
- when a new sub-node 200 is connected to the master node 100, the master node 100 may be implemented to provide the encryption key to the newly connected sub-node 200. Since the master node 100 and the sub-nodes 200 in the present application are connected through the block chain network 300 based on a shared encryption key, the security of the data can be maintained.
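A minimal sketch of this key-sharing step, assuming an in-memory registry and a randomly generated symmetric key, follows; a real deployment would use an authenticated key exchange rather than handing the raw key over, and nothing here reflects the patent's concrete protocol.

```python
# Hedged sketch: when a new sub-node connects, the master node provides
# it with the symmetric encryption key used for blockchain traffic.
# The MasterNode class and its registry are illustrative assumptions.
import secrets

class MasterNode:
    def __init__(self):
        self.shared_key = secrets.token_bytes(32)  # 256-bit symmetric key
        self.sub_nodes = {}                        # node_id -> key handed out

    def connect(self, node_id):
        # Provide the shared encryption key to the newly connected sub-node.
        self.sub_nodes[node_id] = self.shared_key
        return self.shared_key

master = MasterNode()
key = master.connect("sub-node-1")  # every sub-node receives the same key
```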
- the master node 100 may receive a bit stream A to be decoded.
- FIG. 2 is a diagram for describing device information and image information to be decoded according to an embodiment of the present application.
- the image information 2a to be decoded may include at least one of CTU depth information (CTU Depth), resolution information, and a quantization parameter of the image 20a to be decoded.
- the image information 2a to be decoded may be included in the header file of the bit stream A received by the master node 100, and the master node 100 may obtain the image information 2a by referring to the header file.
- the device information 2b may include at least one of floating-point operations per second (FLOPS), input/output operations per second (IOPS), memory information, single-instruction multiple-data (SIMD) processing capability, or thread count. The device information 2b may be determined for the user terminal 20b corresponding to each sub-node 200, and the master node 100 may receive the device information 2b from each connected user terminal 20b.
- the device information and the image information to be decoded shown in FIG. 2 may be a feature to be learned of the machine learning model in the present application.
- the machine learning model in the present application may be a model that, based on the device information and the image information to be decoded, predicts information on the sub-bit stream that a specific device (for example, a sub-node in the present application) can decode (e.g., the size information or resolution information of the sub-bit stream).
- the master node 100 may obtain image information to be decoded from the received bit stream.
- the image information to be decoded may be obtained from a header file of the image to be decoded linked to the bit stream A received by the master node 100.
- the master node 100 may obtain device information of the sub-node 200 connected to the master node 100.
- the master node 100 may determine a sub-bit stream allocated to the sub-node through a machine learning model that is previously learned based on image information and device information. Specifically, the master node 100 may determine information of a sub-bit stream that can be decoded for each sub-node 200 by using device information and image information to be decoded as inputs of a previously learned machine learning model.
- the information on the decodable sub-bit stream may include at least one of size information, time information, or resolution information of the sub-bit stream that the sub-node 200 can decode, determined based on the performance of the sub-node 200.
- the master node 100 may determine the information on the sub-bit stream to be decoded by each sub-node 200 so that the difference between the decoding end times of the sub-bit streams allocated to the sub-nodes 200 is less than or equal to a preset time (that is, so that the decoding end times are synchronized).
- this is because the master node 100 collects the decoding results of the sub-bit streams from the sub-nodes 200 and then decodes the entire bit stream to be decoded; even if the decoding result of one sub-node 200 is received before that of another sub-node 200, the master node must wait until the decoding of all sub-nodes 200 is completed.
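The allocation goal just described can be sketched as a proportional split: give each sub-node work in proportion to its predicted decoding speed, so that all finish times coincide. The hard-coded speeds stand in for the machine learning model's predictions; the unit of work ("units", e.g. frames or CTU rows) is an illustrative assumption.

```python
# Illustrative sketch: split the bit stream in proportion to each
# sub-node's predicted decoding speed so that all sub-nodes finish
# within a small tolerance of one another (synchronized end times).

def allocate(total_units, speeds):
    """Split total_units proportionally to speed; remainder to the fastest node."""
    total_speed = sum(speeds)
    shares = [int(total_units * s / total_speed) for s in speeds]
    shares[speeds.index(max(speeds))] += total_units - sum(shares)
    return shares

speeds = [40.0, 25.0, 10.0]      # predicted units decoded per second (assumed)
shares = allocate(1500, speeds)  # -> [800, 500, 200]
finish = [n / s for n, s in zip(shares, speeds)]
# each finish time equals total/sum(speeds) = 20 s, so end times align
```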
- the pre-trained machine learning model may be linked with at least one of a convolutional neural network (CNN), a deep neural network (DNN), and a recurrent neural network (RNN).
- however, the present invention is not limited thereto, and the model may be generated through various machine learning methods.
- the master node 100 may allocate the determined sub-bit streams to the respective sub-nodes 200 through the block chain 300. That is, according to the present application, even when the master node 100 has low specifications and low performance, making it difficult for the master node 100 to independently decode a large, high-resolution target image, the bit stream to be decoded is divided into a plurality of independent sub-bit streams, the plurality of sub-nodes 200 decode the divided sub-bit streams in a distributed manner, and the decoding results are synthesized to process the decoding of the entire bit stream. This has the effect of constructing a highly scalable distributed HEVC decoding system 10 even at low cost.
- the master node 100 may predict the performance of the sub-nodes 200 through the pre-trained machine learning model and allocate a throughput (sub-bit stream) suitable for the performance of each sub-node 200.
- the master node 100 may receive a decoding result of the sub node 200 for the sub bit stream allocated for each sub node 200 through the block chain 300.
- each sub-node 200 transmits and shares the decoding result of its allocated sub-bit stream on the block chain 300 network, and each sub-node 200 can also receive the decoding results of the other sub-nodes 200 through the block chain 300 and refer to them when decoding its own allocated sub-bit stream.
- the master node 100 receives the decoding result of the sub-bit stream of each sub-node 200.
- the decoding result of the sub-node 200 for the allocated sub-bit stream may mean the decoded segmented image itself obtained by decoding the sub-bit stream, or data associated with the decoded segmented image.
- the master node 100 may process the boundary value by applying a de-blocking filter to the decoding result of the sub-bit stream received from each of the sub-nodes 200.
- the de-blocking filter may mean an image filter applied to remove, through boundary value processing, the irregular edges and discontinuities that may form between the decoding results of the sub-bit streams, thereby improving the image quality of the reproduced image B obtained by decoding the entire bit stream and improving prediction performance.
- the master node 100 may decode the bit stream to be decoded by synthesizing the received decoding results of the sub-bit streams. According to an embodiment of the present application, the master node 100 may generate the reproduced image B, obtained by decoding the bit stream A to be decoded, by concatenating the decoded segmented images of the sub-bit streams in time series.
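The time-series synthesis step above can be sketched as follows; the representation of decoded segments as a mapping from start position to frame list is an assumption for illustration, and boundary filtering (the deblocking step) is omitted.

```python
# Minimal sketch: the master node orders the decoded segments by their
# position in the original stream and concatenates them into the
# reproduction sequence. Segment/frame structures are assumptions.

def synthesize(decoded_segments):
    """decoded_segments maps each segment's start position to its decoded frames."""
    frames = []
    for start in sorted(decoded_segments):  # time-series order
        frames.extend(decoded_segments[start])
    return frames

# results arrive out of order from the sub-nodes via the block chain
results = {3: ["f3", "f4"], 0: ["f0", "f1"], 2: ["f2"]}
playback = synthesize(results)  # ["f0", "f1", "f2", "f3", "f4"]
```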
- the master node 100 may reproduce the decoding target based on the decoding result of the decoding target bit stream.
- a reproduced image B based on a result of decoding a bit stream to be decoded may be reproduced by a display device or an image reproducing device provided in the master node 100.
- alternatively, the master node 100 may transmit the data for the reproduced image B to a separate display device or video playback terminal, and the reproduced image B may be reproduced on the separate display device or video playback terminal that received the data.
- when a sub-node 200 that has been allocated a sub-bit stream is disconnected from the master node 100 for a predetermined reason, or a new sub-node 200 is connected, the master node 100 may be implemented to redistribute the bit stream and newly allocate the re-divided sub-bit streams to the changed plurality of sub-nodes 200.
- although it has been described that one sub-bit stream is determined for one sub-node 200 based on the machine learning model and allocated to that sub-node 200, according to an embodiment, when the CPU of the sub-node 200 is composed of a plurality of cores (for example, a user terminal equipped with a multi-core processor), a sub-bit stream may also be allocated to each core of one sub-node 200.
- the plurality of sub-nodes 200 may decode a sub-bit stream allocated by the master node 100.
- the plurality of sub-nodes 200 may transmit a result of decoding the allocated sub-bit stream to the master node 100 through the block chain 300.
- FIG. 3 is a schematic configuration diagram of a distributed HEVC decoding apparatus using a block chain based on a machine learning model according to an embodiment of the present application.
- the distributed HEVC decoding apparatus 100 using a block chain based on a machine learning model may include an image acquisition unit 110, a sub-bit stream determining unit 120, a sub-bit stream allocator 130, a collection unit 140, a main processing unit 150, and an image reproducing unit 160.
- the image acquisition unit 110 may receive a bit stream to be decoded, and obtain image information to be decoded from the received bit stream.
- the image information to be decoded may be obtained from a header file of an image to be decoded linked to a bit stream received by the image acquisition unit 110.
- the sub bit stream determiner 120 may obtain device information of the connected sub node 200.
- the sub-bit stream determiner 120 may determine the sub-bit stream through the machine learning model trained in advance, based on the image information acquired by the image acquisition unit 110 and the device information of the connected sub-nodes 200. Specifically, the information of the decodable sub-bit stream may be determined for each sub-node 200 by using the device information and the image information to be decoded as inputs of the pre-trained machine learning model.
- the device information may include at least one of floating-point operations per second (FLOPS), input/output operations per second (IOPS), memory information, single-instruction multiple-data (SIMD) processing capability, or thread count.
- the image information to be decoded may include at least one of CTU depth information, resolution information, and quantization parameter.
- the sub-bit stream determiner 120 determining the information of the decodable sub-bit stream for each sub-node 200 may be understood, for example, as determining at least one of the size information or the resolution information of the sub-bit stream that the sub-node 200 can decode, based on the performance of the sub-node 200.
- the sub-bit stream determiner 120 may determine the information on the sub-bit stream to be decoded by each sub-node 200 so that the difference between the decoding end times of the sub-bit streams allocated to the sub-nodes 200 is less than or equal to a preset time (in other words, so that the decoding end times are synchronized).
- the sub-bit stream allocator 130 may allocate the sub-bit stream determined by the sub-bit stream determiner 120 to each of the sub-nodes 200 through the block chain 300.
- the collection unit 140 may receive a result of decoding of the sub-node 200 for the sub-bit stream allocated for each sub-node 200.
- the decoding result of the sub-node 200 for the allocated sub-bit stream received by the collection unit 140 may mean the decoded segmented image itself obtained by decoding each sub-bit stream, or data associated with the decoded segmented image.
- the main processing unit 150 may decode a bit stream to be decoded by synthesizing the decoding result of the received sub-bit stream.
- the main processing unit 150 may process the boundary value by applying a de-blocking filter to the decoding result of the sub-bit stream received from each of the sub-nodes 200.
- the image reproducing unit 160 may reproduce a decoding target based on the decoding result of the main processing unit 150.
- the image reproducing unit 160 may include a display device, an image reproducing device, and the like, and the reproduced image B based on the decoding result of the bit stream to be decoded may be played on the display device, image reproducing device, or the like of the image reproducing unit 160.
- alternatively, the image reproducing unit 160 may transmit the data for the reproduced image B based on the decoding result of the bit stream to be decoded to a separately provided display device or image reproducing terminal, and the reproduced image B may be reproduced on the separate display device or image reproducing terminal that received the data.
- FIG. 4 is a flowchart illustrating an operation of a distributed HEVC decoding method using a block chain based on a machine learning model according to an embodiment of the present application.
- the distributed HEVC decoding method using a block chain based on a machine learning model shown in FIG. 4 may be performed by the above-described distributed HEVC decoding apparatus 100 or distributed HEVC decoding system 10 using a block chain based on a machine learning model. Therefore, although omitted below, the descriptions of the distributed HEVC decoding apparatus 100 and the distributed HEVC decoding system 10 apply equally to the description of FIG. 4.
- in step S410, the image acquisition unit 110 may receive a bit stream to be decoded.
- in step S420, the image acquisition unit 110 may acquire image information to be decoded from the received bit stream.
- in step S430, the sub-bit stream determiner 120 may obtain device information of the sub-nodes connected to the master node.
- in step S440, the sub-bit stream determiner 120 may determine the sub-bit stream to be allocated to each sub-node 200 through the pre-trained machine learning model, based on the image information and the device information.
- in step S450, the sub-bit stream allocator 130 may allocate the determined sub-bit streams to the respective sub-nodes 200 through the block chain 300.
- in step S460, the collection unit 140 may receive, through the block chain 300, the decoding result of each sub-node 200 for its allocated sub-bit stream.
- in step S470, the main processing unit 150 may decode the bit stream to be decoded by synthesizing the received decoding results of the sub-bit streams.
- in step S480, the image reproducing unit 160 may reproduce the decoding target based on the decoding result of the decoding target bit stream.
- steps S410 to S480 may be further divided into additional steps or may be combined into fewer steps, according to an embodiment of the present disclosure.
- some steps may be omitted as necessary, and the order between steps may be changed.
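The overall flow of the method above can be sketched end to end. Every class and helper here is an illustrative stand-in: a toy "decoder" that upper-cases its slice, and an even split in place of the model-driven, performance-aware allocation; none of it is the patent's implementation.

```python
# End-to-end sketch of the Fig. 4 flow: receive the stream, split it into
# sub-bit streams, have each sub-node decode its slice, and synthesize
# the results in order. All components are toy stand-ins.

class StubNode:
    """Pretend sub-node whose 'decoding' upper-cases its slice."""
    def __init__(self, name):
        self.name = name

    def decode(self, chunk):
        return chunk.upper()

def split_stream(bitstream, nodes):
    """Even split, standing in for the model-driven allocation."""
    size = -(-len(bitstream) // len(nodes))  # ceiling division
    return {n: bitstream[i * size:(i + 1) * size]
            for i, n in enumerate(nodes)}

def distributed_decode(bitstream, nodes):
    plan = split_stream(bitstream, nodes)                # determine + allocate
    results = {n: n.decode(c) for n, c in plan.items()}  # collect results
    return "".join(results[n] for n in nodes)            # synthesize in order

nodes = [StubNode("a"), StubNode("b")]
out = distributed_decode("hevcstream", nodes)  # "HEVCSTREAM"
```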
- the distributed HEVC decoding method using a block chain based on a machine learning model may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
- the computer-readable medium may include program instructions, data files, data structures, and the like alone or in combination.
- the program instructions recorded in the medium may be specially designed and configured for the present invention, or may be known and usable by those skilled in computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- program instructions include not only machine language codes such as those produced by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
- the above-described hardware device may be configured to operate as one or more software modules to perform the operation of the present invention, and vice versa.
- the distributed HEVC decoding method using a block chain based on the machine learning model described above may also be implemented in the form of a computer program or application stored in a recording medium and executed by a computer.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Algebra (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present invention relates to a method, device, and system for distributed HEVC decoding based on a machine learning model using a blockchain. A distributed HEVC decoding method based on a machine learning model using a blockchain according to an embodiment of the present invention may comprise: receiving a bit stream of a decoding target; obtaining image information of the decoding target from the received bit stream; obtaining machine information of sub-nodes connected to a master node; determining sub-bit streams to be allocated to the sub-nodes by means of a pre-trained machine learning model, on the basis of the image information and the machine information; and allocating the determined sub-bit streams to the respective sub-nodes by means of a blockchain.
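The allocation step in the abstract — choosing which sub-bit stream goes to which sub-node from image information and machine information — could be sketched as follows. This is an illustrative stand-in only: the patent uses a pre-trained machine learning model, whereas this sketch substitutes a simple greedy capacity heuristic, and all names and numbers are hypothetical:

```python
def assign_substreams(substream_sizes, node_capacities):
    """Greedy stand-in for the trained model: give each sub-bit stream
    to the sub-node with the most remaining decoding capacity."""
    remaining = dict(node_capacities)   # node id -> remaining capacity score
    assignment = {}                     # sub-stream id -> node id
    # Place the largest sub-streams first so heavy work lands on fast nodes.
    for sid, size in sorted(substream_sizes.items(), key=lambda kv: -kv[1]):
        node = max(remaining, key=remaining.get)
        assignment[sid] = node
        remaining[node] -= size
    return assignment

# Hypothetical image info (sub-stream sizes) and machine info (capacities).
streams = {"s0": 40, "s1": 25, "s2": 10}
nodes = {"nodeA": 60, "nodeB": 30}
print(assign_substreams(streams, nodes))
# -> {'s0': 'nodeA', 's1': 'nodeB', 's2': 'nodeA'}
```

In the claimed system, the learned model would replace this heuristic, and the resulting assignment would be recorded to the sub-nodes via the blockchain rather than returned as a local dictionary.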
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020190068438A KR102296987B1 (ko) | 2019-06-11 | 2019-06-11 | Distributed HEVC decoding method, device, and system using a blockchain based on a machine learning model |
KR10-2019-0068438 | 2019-06-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020251124A1 true WO2020251124A1 (fr) | 2020-12-17 |
Family
ID=73780888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/015963 WO2020251124A1 (fr) | 2019-11-20 | Method, device, and system for distributed HEVC decoding based on a machine learning model using a blockchain
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102296987B1 (fr) |
WO (1) | WO2020251124A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113591041A (zh) * | 2021-09-28 | 2021-11-02 | 环球数科集团有限公司 | Distributed coding system for preventing code injection or source code decompilation |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102516548B1 (ko) * | 2021-03-10 | 2023-03-31 | 숭실대학교 산학협력단 | Blockchain-based system and method for preventing video forgery, and computer program therefor |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009225378A (ja) * | 2008-03-18 | 2009-10-01 | Fujitsu Microelectronics Ltd | Decoding method, decoding device, encryption device, authentication method, and authentication device |
KR20180052651A (ko) * | 2015-09-03 | 2018-05-18 | 미디어텍 인크. | Method and apparatus of neural-network-based processing in video coding |
KR101976787B1 (ko) * | 2018-12-21 | 2019-05-09 | (주)소프트제국 | Electronic document distribution method using smart contracts on a blockchain |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018107500A (ja) * | 2016-12-22 | 2018-07-05 | キヤノン株式会社 | Encoding device, encoding method and program, decoding device, decoding method and program |
-
2019
- 2019-06-11 KR KR1020190068438A patent/KR102296987B1/ko active IP Right Grant
- 2019-11-20 WO PCT/KR2019/015963 patent/WO2020251124A1/fr active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009225378A (ja) * | 2008-03-18 | 2009-10-01 | Fujitsu Microelectronics Ltd | Decoding method, decoding device, encryption device, authentication method, and authentication device |
KR20180052651A (ko) * | 2015-09-03 | 2018-05-18 | 미디어텍 인크. | Method and apparatus of neural-network-based processing in video coding |
KR101976787B1 (ko) * | 2018-12-21 | 2019-05-09 | (주)소프트제국 | Electronic document distribution method using smart contracts on a blockchain |
Non-Patent Citations (2)
Title |
---|
LIU WEI: "Implementation of parallel S/W-H/W for 3D- HEVC decoder", 3D-HEVC S/W-H/W FOR 3D-HEVC DECODER, 31 August 2016 (2016-08-31), pages 11 - 37 * |
PAN SUJING: "Implementation of Heterogeneous cluster Scheduler for a suitable HEVC Encoder based on Support Vector Machine", vol. 22, 31 August 2018 (2018-08-31), pages 12 - 16 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113591041A (zh) * | 2021-09-28 | 2021-11-02 | 环球数科集团有限公司 | Distributed coding system for preventing code injection or source code decompilation |
CN113591041B (zh) * | 2021-09-28 | 2021-12-31 | 环球数科集团有限公司 | Distributed coding system for preventing code injection or source code decompilation |
Also Published As
Publication number | Publication date |
---|---|
KR20200141651A (ko) | 2020-12-21 |
KR102296987B1 (ko) | 2021-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12003743B2 (en) | Video stream decoding method and apparatus, terminal device, and storage medium | |
JP2022137130A (ja) | Method and apparatus for encoding/decoding an image unit comprising image data represented by a luminance channel and at least one chrominance channel | |
WO2011034380A2 (fr) | Method and apparatus for encoding and decoding images based on skip mode | |
WO2011052897A2 (fr) | Method and apparatus for encoding/decoding motion vectors by spatial segmentation, and method and apparatus for encoding/decoding images using the same | |
WO2013141671A1 (fr) | Method and apparatus for inter-layer intra prediction | |
WO2012173389A2 (fr) | Method and apparatus for transmitting and receiving multimedia content in a multimedia system | |
WO2010068020A9 (fr) | Apparatus and method for decoding/encoding multi-view video | |
WO2012057528A2 (fr) | Adaptive intra-prediction encoding and decoding method | |
WO2020251124A1 (fr) | Method, device, and system for distributed HEVC decoding based on a machine learning model using a blockchain | |
WO2015060638A1 (fr) | Adaptive real-time transcoding method and streaming server therefor | |
WO2016195325A1 (fr) | Method and apparatus for transmitting content with tracking information inserted therein, and method and apparatus for receiving such content | |
WO2012081877A2 (fr) | Multi-view video encoding/decoding apparatus and method | |
JP7068481B2 (ja) | Method, apparatus, and computer program for decoding | |
WO2013002620A2 (fr) | Method and apparatus for encoding motion information using skip mode, and method and apparatus for decoding the same | |
CN112203085B (zh) | Image processing method, apparatus, terminal, and storage medium | |
WO2017043702A1 (fr) | Method for transmitting an encrypted packet in a communication system | |
WO2013133587A1 (fr) | Method and apparatus for processing video signals | |
CN112203086B (zh) | Image processing method, apparatus, terminal, and storage medium | |
US10944982B1 (en) | Rendition switch indicator | |
WO2011108841A2 (fr) | Method and apparatus for generating video packets | |
WO2012173317A1 (fr) | Multi-thread encoding and decoding method, encoder and decoder using the same, and computer-readable recording medium | |
WO2023033300A1 (fr) | Encoding and decoding of video data | |
CN110381320A (zh) | Signal transmission system, and signal encoding/decoding method and apparatus | |
US10462482B2 (en) | Multi-reference compound prediction of a block using a mask mode | |
Lee et al. | Exploring the Video Coding for Machines Standard: Current Status and Future Directions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19932479 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19932479 Country of ref document: EP Kind code of ref document: A1 |