WO2019009490A1 - Method and apparatus for encoding/decoding an image - Google Patents
Method and apparatus for encoding/decoding an image
- Publication number
- WO2019009490A1 (PCT/KR2018/001542)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- unit
- encoding
- encoding unit
- dnn
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/002—Image coding using neural networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/154—Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/33—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
Definitions
- The method and apparatus according to an embodiment of the present invention modify the original signal before encoding and the decoded signal after decoding in order to improve the efficiency of encoding and decoding an image.
- The video data is encoded by a codec according to a predetermined data compression standard, for example, a Moving Picture Experts Group (MPEG) standard, and then stored in a recording medium in the form of a bitstream or transmitted over a communication channel.
- A method of reconstructing an image comprising: obtaining, from a bitstream, a residual signal of a compressed image produced by down-sampling an image; decoding the compressed image using the residual signal and a prediction signal obtained by performing prediction; and reconstructing the image by performing up-sampling on the decoded compressed image using a deep neural network (DNN).
- The DNN has a network structure determined through learning of the up-sampling process using information generated in the down-sampling process, whereby the image reconstruction method may be provided.
- A method of compressing an image comprising: determining a compressed image by performing down-sampling on an image using a DNN; performing prediction based on the compressed image to determine a prediction signal; determining a residual signal based on the compressed image and the prediction signal; and generating a bitstream including information on the residual signal, wherein the DNN has a network structure determined through learning of the down-sampling process using information generated in the up-sampling process, whereby an image compression method may be provided.
- An apparatus for reconstructing an image comprising: a residual signal obtaining unit that obtains, from a bitstream, a residual signal of a compressed image produced by down-sampling an image; and a reconstruction unit that decodes the compressed image using the residual signal and a prediction signal obtained by performing prediction, and reconstructs the image by performing up-sampling on the decoded compressed image using a DNN, wherein the DNN has a network structure determined through learning of the up-sampling process using information generated in the down-sampling process, whereby an image reconstruction apparatus may be provided.
- the encoding and decoding efficiency can be improved by reducing the amount of data processing performed in the encoding and decoding of images having a large amount of information.
- FIG. 1A is a block diagram of an image restoration apparatus for restoring an image according to an embodiment.
- FIG. 1B shows a block diagram of an image compression apparatus 150 for compressing an image according to an embodiment.
- FIG. 2A is a flowchart illustrating an image restoration process that the image restoration apparatus 100 can perform according to an embodiment.
- FIG. 2B is a flowchart illustrating an image compression process that can be performed by the image compression apparatus 150 according to an embodiment of the present invention.
- FIG. 3 is a diagram for explaining a process in which a compressed image is reconstructed through a coding and decoding process according to an embodiment.
- FIG. 4A is a diagram for explaining a deep convolutional neural network included in the DNN.
- FIGS. 4b through 4f show an exemplary structure of various Convolutional Neural Networks (CNN).
- FIG. 5A is a diagram for explaining the up-sampling operation of spatial information using DNN according to an embodiment.
- FIG. 5B is a view for explaining a down-sampling operation of spatial information using DNN according to an embodiment.
- FIG. 6 is a diagram for explaining that the types of filter kernels used in up-sampling or down-sampling may be different according to an embodiment.
- FIG. 7A is a diagram for explaining a feature of performing filtering using a plurality of filter kernels in a predetermined layer among a plurality of layers included in a DNN according to an embodiment.
- FIG. 7B is a diagram for explaining a filtering process using characteristic maps determined by filtering with a plurality of sizes of filter kernels according to an exemplary embodiment.
- FIG. 8 is a diagram illustrating loss information generated in a DNN for downsampling according to an embodiment.
- FIG. 9 is a diagram illustrating loss information generated in DNN for up-sampling.
- FIG. 10 illustrates a process in which at least one encoding unit is determined by dividing a current encoding unit according to an embodiment.
- FIG. 11 illustrates a process in which at least one encoding unit is determined by dividing a non-square encoding unit according to an embodiment.
- FIG. 12 illustrates a process in which an encoding unit is divided based on at least one of block type information and division type information according to an embodiment.
- FIG. 13 illustrates a method of determining a predetermined encoding unit among odd number of encoding units according to an embodiment.
- FIG. 14 shows a sequence in which a plurality of encoding units are processed when a current encoding unit is divided to determine a plurality of encoding units according to an embodiment.
- FIG. 15 illustrates a process in which, when encoding units cannot be processed in a predetermined order according to an embodiment, it is determined that the current encoding unit is divided into an odd number of encoding units.
- FIG. 16 illustrates a process in which a first encoding unit is divided into at least one encoding unit according to an embodiment of the present invention.
- FIG. 17 shows that when the non-square second coding unit, determined by dividing the first coding unit according to an embodiment, satisfies a predetermined condition, the forms into which the second coding unit can be divided are limited.
- FIG. 18 illustrates a process of dividing a square encoding unit when the division type information cannot indicate division into four square encoding units according to an embodiment.
- FIG. 19 illustrates that the processing order among a plurality of coding units may be changed according to the division process of the coding unit according to an embodiment.
- FIG. 20 illustrates a process of determining the depth of an encoding unit according to a change in type and size of an encoding unit when the encoding unit is recursively divided according to an embodiment to determine a plurality of encoding units.
- FIG. 21 illustrates a depth index (PID) for coding unit classification and depth that can be determined according to the type and size of coding units according to an exemplary embodiment.
- FIG. 22 shows that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
- FIG. 23 shows a processing block serving as a reference for determining a determination order of a reference encoding unit included in a picture according to an embodiment.
- A method of reconstructing an image comprising: obtaining, from a bitstream, a residual signal of a compressed image produced by down-sampling an image; decoding the compressed image using the residual signal and a prediction signal obtained by performing prediction; and reconstructing the image by performing up-sampling on the decoded compressed image using a deep neural network (DNN), wherein the DNN has a network structure determined through learning of the up-sampling process using information generated in the down-sampling process.
- The step of reconstructing the image in the image reconstruction method may include performing up-sampling using a deep convolutional neural network including a plurality of hidden layers.
- The step of performing the up-sampling using the convolutional neural network in the image reconstruction method may include performing up-sampling by filtering each of the plurality of layers using at least one of a plurality of filter kernels, and the plurality of filter kernels may be of a different type from the filter kernels used when the image was down-sampled.
- the step of performing the upsampling of the image reconstruction method may include performing filtering using at least one filter kernel in each of the plurality of layers of the DNN.
- The filtering of the image reconstruction method may include: performing filtering using a plurality of filter kernels in a layer, among the plurality of layers, in which a plurality of filter kernels are used; concatenating the plurality of signals obtained from the filtering result; and performing filtering at the next layer using the concatenated signals as its input.
- The step of concatenating the plurality of signals in the image reconstruction method may include: performing padding when the characteristic maps containing the plurality of signals have different sizes; and concatenating the padded characteristic maps.
- The DNN used in the image reconstruction method is trained so that the sum of loss information, determined by comparing the image reconstructed through up-sampling with the original image before down-sampling, is reduced, and at least some of the loss information is also used in the learning process of the DNN for down-sampling.
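The "loss information determined by comparison between the reconstructed image and the original image" can be illustrated with a minimal sketch. The patent does not specify the loss function, so mean-squared error is assumed here purely for illustration; the function name is hypothetical.

```python
import numpy as np

def reconstruction_loss(original, reconstructed):
    """Assumed loss: mean-squared error between the original image and the
    image reconstructed after down-sampling, coding, decoding and up-sampling."""
    original = np.asarray(original, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    return float(np.mean((original - reconstructed) ** 2))

img = np.ones((4, 4))
assert reconstruction_loss(img, img) == 0.0       # identical images incur no loss
assert reconstruction_loss(img, img + 1.0) == 1.0 # any deviation increases it
```

During training, such a scalar would be one term of the loss sum that the up-sampling DNN (and, per the claim, the down-sampling DNN) is driven to reduce.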
- A method of compressing an image comprising: determining a compressed image by performing down-sampling on an image using a DNN; performing prediction based on the compressed image to determine a prediction signal; determining a residual signal based on the compressed image and the prediction signal; and generating a bitstream including information on the residual signal, wherein the DNN has a network structure determined through learning of the down-sampling process using information generated in the up-sampling process, whereby an image compression method may be provided.
- the step of determining a compressed image of the image compression method may include a step of determining a compressed image using a deep convolutional neural network including a plurality of layers.
- the step of determining a compressed image of the image compression method may include generating a compressed image by performing filtering using at least one of a plurality of filter kernels for each of a plurality of layers.
- The filtering of the image compression method may include: performing filtering using a plurality of filter kernels in a layer, among the plurality of layers, in which a plurality of filter kernels are used; concatenating the plurality of signals obtained from the convolution result; and performing filtering at the next layer using the concatenated signals as its input.
- The step of generating a bitstream in the image compression method may include generating a bitstream containing sampling information that indicates the degree to which at least one of the size and the frame rate of the image is reduced by down-sampling.
- A DNN for down-sampling is trained so that the sum of at least one piece of loss information, indicating the loss caused by down-sampling using the DNN, is reduced.
- Some of the loss information is determined by comparing the original image before down-sampling with the image obtained by decoding the compressed image and then performing up-sampling on it.
- An apparatus for reconstructing an image comprising: a residual signal obtaining unit that obtains, from a bitstream, a residual signal of a compressed image produced by down-sampling an image; and a reconstruction unit that decodes the compressed image using the residual signal and a prediction signal obtained by performing prediction, and reconstructs the image by performing up-sampling on the decoded compressed image using a DNN, wherein the DNN has a network structure determined through learning of the up-sampling process using information generated in the down-sampling process.
- The term "part" refers to a software or hardware component, such as an FPGA or an ASIC, and a "part" performs certain roles. However, "part" is not limited to software or hardware. A "part" may be configured to reside on an addressable storage medium and may be configured to execute on one or more processors.
- Thus, a "part" includes components such as software components, object-oriented software components, class components and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays and variables.
- The functions provided in the components and "parts" may be combined into a smaller number of components and "parts" or further separated into additional components and "parts".
- The "image" may be a static image, such as a still image of a video, or a dynamic image, such as a moving picture, that is, the video itself.
- A "signal" or "sample" hereinafter refers to data assigned to a sampling position of an image, i.e., data to be processed.
- Pixel values of an image in the spatial domain and transform coefficients in the transform domain may be samples.
- a unit including at least one of these samples may be defined as a block.
- FIG. 1A shows a block diagram of an image restoration apparatus 100 for restoring an image according to an embodiment.
- The image reconstruction apparatus 100 includes a bitstream obtaining unit 110, which obtains a bitstream and extracts information related to an encoded image, and a reconstruction unit 120, which reconstructs the image using the information obtained from the bitstream.
- The reconstruction unit 120 may acquire, through the bitstream obtained by the bitstream obtaining unit 110, various information used in the encoding process of the image, and may reconstruct the image by performing a decoding process with that information.
- the restoration unit 120 may execute a program command stored in a memory and / or a storage device.
- the restoring unit 120 may include at least one processor including a central processing unit (CPU), a graphics processing unit (GPU), and the like.
- FIG. 2A is a flowchart illustrating an image restoration process that the image restoration apparatus 100 can perform according to an embodiment.
- The bitstream obtaining unit 110 of the image reconstruction apparatus 100 may obtain, from the bitstream, a residual signal for a compressed image produced by down-sampling an image, according to an exemplary embodiment.
- the residual signal obtained from the bitstream by the image reconstruction apparatus 100 according to an embodiment may be a result of encoding based on the downsampled image in the compression process of the image.
- FIG. 3 is a diagram for explaining a process in which a compressed image is reconstructed through a coding and decoding process according to an embodiment.
- The original image 300 may be encoded (304) to generate a bitstream, the result of transformation into the frequency domain.
- The amount of information in the original image 300 can be reduced through the encoding process.
- The encoding (304) may include generating a residual signal corresponding to the difference between the original image 300 and a prediction signal, transforming the residual signal from the spatial domain into the frequency domain, quantizing the transformed residual signal, and entropy-coding the quantized residual signal to generate a bitstream.
- Decoding (306) the bitstream transforms the residual signal back from the frequency domain into the spatial domain, and the reconstructed image 309 can be generated based on the residual signal.
- a compressed image 303 obtained by downsampling 302 the original image 300 can be generated, and the compressed image 303 can be encoded 304.
- The decoded compressed image 307 can be determined, and the reconstructed image 309 can be determined by performing up-sampling 308 on the decoded compressed image 307.
- Down-sampling 302 and up-sampling 308 may be performed using a deep neural network (DNN); various embodiments of down-sampling 302 and up-sampling 308 using a DNN will be described later.
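The pipeline of FIG. 3 (original 300 → down-sampling 302 → compressed image 303 → codec 304/306 → decoded compressed image 307 → up-sampling 308 → reconstructed image 309) can be sketched minimally. The learned DNNs are replaced here, purely as an assumption for illustration, by fixed operations: 2x2 average pooling stands in for the down-sampling DNN and nearest-neighbour expansion for the up-sampling DNN; the codec stage is omitted.

```python
import numpy as np

def downsample(image, factor=2):
    """Stand-in for the down-sampling DNN (302): 2x2 average pooling."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(image, factor=2):
    """Stand-in for the up-sampling DNN (308): nearest-neighbour expansion."""
    return image.repeat(factor, axis=0).repeat(factor, axis=1)

original = np.arange(16, dtype=np.float64).reshape(4, 4)
compressed = downsample(original)      # the "compressed image" (303), quarter the samples
reconstructed = upsample(compressed)   # the "reconstructed image" (309), original size
assert compressed.shape == (2, 2)
assert reconstructed.shape == original.shape
```

The point of the patent is precisely that both stand-ins are replaced by DNNs whose structures are learned jointly, each using information generated by the other.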
- The image reconstruction apparatus 100 may decode the compressed image using the residual signal and a prediction signal obtained by performing prediction, according to an embodiment of the present invention.
- The reconstruction unit 120 of the image reconstruction apparatus 100 may divide the compressed image to be decoded based on a predetermined data unit.
- the reconstruction unit 120 may divide an image into a plurality of maximum encoding units, and decode the image using an encoding unit determined by dividing the maximum encoding unit recursively.
- the decompression unit 120 may perform a prediction process to decode a signal included in an encoding unit.
- the restoration unit 120 may add the prediction signal determined through the prediction process and the residual signal obtained in step S200.
- The reconstruction unit 120 may perform predetermined processes for decoding an image (e.g., in-loop filtering, DPB storage, entropy decoding, and the like) in addition to adding the prediction signal and the residual signal.
- The image decoding process using the prediction signal and the residual signal may include various processes readily performed by those skilled in the art.
- the image reconstruction apparatus 100 may perform up-sampling using the DNN to reconstruct the decoded compressed image.
- the image decoded in step S202 may correspond to a result obtained by decoding information obtained by encoding a compressed image from a bitstream.
- the restoring unit 120 may restore the image by performing up-sampling using the DNN for the compressed image decoded in step S202.
- FIG. 4A is a diagram for explaining a deep convolutional neural network included in the DNN.
- the image restoration apparatus 100 may use a DNN including a plurality of layers to perform upsampling.
- The reconstruction unit 120 may use a deep convolutional neural network, a type of DNN that performs convolution operations across a plurality of layers, to carry out the up-sampling.
- The deep convolutional neural network may include a plurality of layers (e.g., a plurality of layers including the first layer 410 and the nth layer 420).
- Each of the plurality of layers constituting the deep convolutional neural network may include convolution layers, which generate a plurality of feature maps using filter kernels, and activation layers.
- the convolution layers may each include a plurality of nodes.
- the convolution layer may generate a plurality of characteristic maps using a plurality of filter kernels.
- the characteristic maps generated by the nodes of the convolution layer can be input to the activation layer.
- the restoration unit 120 may perform convolution operation and activation for each of a plurality of nodes 411, 412, and 413 included in the first layer 410.
- The reconstruction unit 120 performs a convolution operation on the input signal of the first layer (for example, the input 400, a compressed signal) in the convolution layers CL1_1, CL1_2, ..., CL1_a included in the first layer, and a different filter kernel may be used for the convolution operation in each of the convolution layers CL1_1, CL1_2, ..., CL1_a.
- The convolution operation result may be input to the activation layer associated with each convolution layer, to activate the result of the convolution operation in each of the convolution layers CL1_1, CL1_2, ..., CL1_a.
- the restoration unit 120 may determine a plurality of characteristic maps of the first layer 410 by activating the convolution operation result.
- the number of characteristic maps obtained in a specific layer may be proportional to the number of filter kernels.
- the characteristic map acquired in a specific layer can be used as an input value of the next layer. That is, the characteristic map obtained in the first layer 410 is input to the nth layer 420 (n > 1), and convolution operation and activation can be performed.
- a predetermined signal processing process performed in each layer including convolution operation and activation is referred to as a filtering process.
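The "filtering" step just described (convolve with each filter kernel, then activate, yielding one characteristic map per kernel) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the kernels are hypothetical, ReLU is assumed as the activation, and a plain valid convolution is used.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 2-D valid convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def filtering(image, kernels, activation=lambda x: np.maximum(x, 0.0)):
    """One 'filtering' step of a layer: convolve with each kernel, then activate.
    The number of characteristic maps equals the number of filter kernels."""
    return [activation(conv2d_valid(image, k)) for k in kernels]

image = np.arange(25, dtype=np.float64).reshape(5, 5)
kernels = [np.ones((3, 3)) / 9.0, np.eye(3)]   # two hypothetical 3x3 filter kernels
feature_maps = filtering(image, kernels)
assert len(feature_maps) == len(kernels)        # maps proportional to kernel count
assert feature_maps[0].shape == (3, 3)
```

The maps produced here would then serve as the input of the next layer, as the surrounding text describes.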
- the output signal 440 can be acquired by passing through a fully connected layer 430.
- The fully connected layer may be connected to the first layer 410 through the nth layer 420.
- a fully connected layer (FC) can assign different weights to all previous layers.
- The manner in which the fully connected layer (FC) weights the previous layers can be learned, and the learning can involve a variety of methods, including a supervised learning approach.
- The reconstruction unit 120 can improve the deep convolutional neural network by changing, through learning, the manner in which the fully connected layer (FC) weights the preceding layers.
- The activation layer may impart nonlinear characteristics to the output of the convolution layer.
- Deep convolutional neural networks can learn nonlinear functions or parameters using activation layers.
- Activation layers can use the activation function.
- the activation function may include, but is not limited to, sigmoid function, tanh function, and ReLU (rectified linear unit) function.
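The three activation functions named above can be written out directly; this is a straightforward illustration of the standard definitions, not anything specific to the patent.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid: squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Hyperbolic tangent: squashes any input into the range (-1, 1)."""
    return np.tanh(x)

def relu(x):
    """Rectified linear unit: passes positive values, zeroes negatives."""
    return np.maximum(x, 0.0)

x = np.array([-2.0, 0.0, 2.0])
assert np.allclose(relu(x), [0.0, 0.0, 2.0])
assert np.allclose(sigmoid(0.0), 0.5)
assert np.allclose(tanh(0.0), 0.0)
```

Any of these can play the role of the activation layer applied to each convolution result.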
- The deep convolutional neural network can determine the weights of the nodes included in each of the convolution layers.
- the nodes included in each of the convolutional layers may generate characteristic maps using different filter kernels.
- The deep convolutional neural network can adjust the weights of the filter kernels that generate the characteristic maps by adjusting the weights of the nodes.
- the restoration unit 120 may change the weights of the nodes included in the convolution layers.
- The process in which the reconstruction unit 120 changes the weights of the nodes included in the convolution layers is referred to as backpropagation.
- The reconstruction unit 120 may train the convolutional neural network through backpropagation.
- The reconstruction unit 120 may decode a compressed image, i.e., an image down-sampled using a DNN, and then up-sample the decoded compressed image using a DNN.
- The down-sampling or up-sampling process using a DNN may correspond to compressing or reducing at least one of spatial information, such as the resolution of an image, and temporal information, such as the frame rate.
- FIGS. 4b through 4f show an exemplary structure of various Convolutional Neural Networks (CNN).
- FIG. 4B the structure of CNN according to another embodiment is shown.
- The CNN 450 of FIG. 4B may be a network composed of a plurality of parallel layers, that is, a plurality of convolution layers and pooling layers arranged side by side.
- the result output from the previous layer in the CNN 450 may be input to a plurality of separated parallel layers.
- The plurality of separated parallel layers may apply different filters. For example, a parallel layer may first apply a 1x1 convolution to reduce dimensionality and then apply a 3x3 or 5x5 convolution; other layers may apply a convolution after performing 3x3 max pooling.
- a layer applying only 1x1 convolution can serve as an identity loop for holding initial information.
- The outputs of the plurality of parallel layers that have undergone convolution may finally be concatenated and output as the calculation result of the current layer.
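The parallel-branch structure of the CNN 450 (1x1, 3x3 and 5x5 branches whose results are concatenated) can be sketched for a single-channel input. The kernels below are hypothetical placeholders; same-padding is used so every branch keeps the input size, which is what makes the final concatenation possible.

```python
import numpy as np

def same_conv(image, kernel):
    """Same-padded 2-D convolution, so every branch keeps the input size."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def parallel_block(image):
    """Parallel layers with different filters, concatenated into the output."""
    branches = [
        same_conv(image, np.ones((1, 1))),        # 1x1 branch, preserves the input
        same_conv(image, np.ones((3, 3)) / 9.0),  # 3x3 averaging branch
        same_conv(image, np.ones((5, 5)) / 25.0), # 5x5 averaging branch
    ]
    return np.stack(branches)                     # concatenate along a channel axis

x = np.random.rand(8, 8)
out = parallel_block(x)
assert out.shape == (3, 8, 8)    # one feature map per parallel branch
assert np.allclose(out[0], x)    # the 1x1 branch acts as an identity loop
```

Note how the 1x1 branch reproduces the input exactly, matching the text's remark that a 1x1-only layer can serve as an identity loop holding initial information.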
- layers need not always be stacked sequentially.
- The structure of the CNN 450 is based on the observation that networks optimized non-sequentially can be less error-prone than sequential networks.
- FIG. 4C the structure of CNN according to another embodiment is shown.
- the CNN 460 in FIG. 4C is a network using the concept of a skip layer.
- the CNN 460 has a structure in which the input of the past layer is added to the output of the current layer.
- the result of adding the outputs of the past layer and the current layer can be the input of the next layer.
- The convolution and pooling processes may be performed across multiple layers, which can make the result excessively small; in this case, the detail information of the result may disappear.
- The CNN 460 has the effect of preserving fine detail by reusing past results in the convolution and pooling process.
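The skip connection of the CNN 460 (the past layer's input added to the current layer's output, the sum feeding the next layer) reduces to one line. The `blur` layer below is a hypothetical stand-in for a layer that loses detail.

```python
import numpy as np

def skip_layer(x, layer):
    """Skip connection: the input of the past layer is added to the
    output of the current layer; the sum becomes the next layer's input."""
    return layer(x) + x

# Hypothetical detail-losing layer: heavy attenuation toward zero.
blur = lambda x: 0.1 * x

x = np.array([1.0, -2.0, 3.0])
out = skip_layer(x, blur)
assert np.allclose(out, 1.1 * x)  # the original signal survives alongside the layer output
```

Even when the layer output nearly vanishes, the added identity path carries the original signal forward, which is the "reinforcing fine detail by recycling past results" effect described above.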
- FIG. 4D the structure of CNN according to another embodiment is shown.
- the CNN 470 in FIG. 4D is a network using the concept of a skip layer like the CNN 460 in FIG. 4C.
- The CNN 470 differs from the CNN 460 in that the connections between layers are denser: past results can be added as the input of a layer at an arbitrary position.
- the CNN 470 may use the result calculated by the convolution operation of the past layer as the input of the layer at an arbitrary position.
- FIG. 4E shows the structure of a CNN according to another embodiment.
- the CNN 480 of FIG. 4E is a network using a multi-resolution pyramid structure.
- CNN 480 may divide the results of the previous convolution layer into pyramids of different stages. For example, in step 1, the resolution is not scaled, in step 2, the resolution is scaled by 1/2 x 1/2, and in step 3, the resolution is scaled by 1/4 x 1/4.
- the results of the various steps thus obtained can be concatenated and used as inputs to the fully connected layer.
- a convolution layer is not affected by the size of the image, but in an ordinary network the size of the input image has to be fixed because the fully connected layer is limited by the size of its input.
- in the CNN 480, features output at the various pyramid levels are used as the input of the fully connected layer, so the size of the fully connected layer's input is predetermined regardless of the size of the image.
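- the fixed-size property of the pyramid output can be sketched as follows; the pyramid levels and channel count are assumed values, and max pooling is used as the per-bin operation:

```python
import numpy as np

def pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map into a fixed-length vector:
    each pyramid level splits the plane into level x level bins."""
    c, h, w = fmap.shape
    feats = []
    for n in levels:
        hs = np.linspace(0, h, n + 1).astype(int)
        ws = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                feats.append(fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]].max(axis=(1, 2)))
    return np.concatenate(feats)  # length = C * (1 + 4 + 16), regardless of H and W

rng = np.random.default_rng(0)
small = pyramid_pool(rng.standard_normal((8, 16, 16)))
large = pyramid_pool(rng.standard_normal((8, 57, 91)))  # a different input size
print(small.shape, large.shape)  # (168,) (168,)
```

Both inputs produce a 168-element vector, which is what lets the fully connected layer accept images of arbitrary size.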
- FIG. 4F shows the structure of a CNN according to another embodiment.
- the CNN 490 in FIG. 4F is a network having a structure for performing batch normalization before or after the non-linear function ReLu.
- the batch normalization layer is located at the front end of the hidden layer and controls the distribution of the inputs.
- since the batch normalization layer is a layer absorbed into the network, it can optimize its related variables (scale, shift) through back propagation.
- a method of improving the distribution of the input may be to normalize the data input to each layer to a mean of 0 and a variance of 1, multiply the result by the scale variable gamma, and add the shift variable beta.
- the scale and shift variables can be determined through learning.
- CNN 490 can prevent problems such as gradient vanishing or gradient exploding by normalizing the convolution results.
- the learning time can be shortened through batch normalization, and the accuracy of learning can be improved.
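- a minimal sketch of the batch normalization step described above (normalize to mean 0 / variance 1, then scale by gamma and shift by beta); the tensor shapes are assumptions:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each channel over the batch to mean 0 / variance 1,
    then rescale by gamma and shift by beta (learned during training)."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = 10.0 * rng.standard_normal((16, 4, 8, 8)) + 3.0  # (batch, channels, H, W)
gamma = np.ones((1, 4, 1, 1))  # scale variable
beta = np.zeros((1, 4, 1, 1))  # shift variable
y = batch_norm(x, gamma, beta)
print(y.mean(), y.var())  # close to 0.0 and 1.0
```

Keeping each layer's input near mean 0 / variance 1 is what prevents the gradients from vanishing or exploding as they propagate through many layers.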
- CNNs of the various structures described above with reference to FIGS. 4A to 4F can be applied, as can combinations of them or combinations with known learning networks. Accordingly, it should be noted that the CNN structures described above are merely examples for convenience of description, and variously modified CNNs can be used in the present embodiment.
- FIG. 5A is a diagram for explaining the up-sampling operation of spatial information using DNN according to an embodiment.
- the decompression unit 120 may perform up-sampling of the decoded compressed image spatially using DNN.
- the restoration unit 120 may use a DNN for performing various operations related to the convolution operation for upsampling.
- the decompression unit 120 may perform an up-sampling operation using a DNN to restore the spatial component of a compressed image to the spatial component of the original image before compression; this operation may include transposed convolution, un-pooling, and the like.
- the decompression unit 120 may use a DNN including a plurality of layers 510, 520, and 530 to perform up-sampling on a plurality of frames included in the compressed image 500. At each layer, a pre-convolution for up-sampling can be performed.
- the restoring unit 120 can determine a frame whose resolution is increased according to a pre-convolution result performed in each layer.
- the reconstruction unit 120 may perform pre-convolution on a frame of the compressed image 500 in the first layer 510, and may determine a feature map having a size of Wu_1 x Hu_1 x Au_1.
- Wu_1 and Hu_1 may represent the width and height of the characteristic map determined in the first layer 510 and Au_1 may correspond to the number of filter kernels 512 used in the first layer 510.
- the width Wu_1 and the height Hu_1 of the characteristic map determined in the first layer 510 may be larger than the width W0 and the height H0 of the frame of the compressed image input to the first layer 510.
- the reconstruction unit 120 may perform pre-convolution in the second layer 520 and may determine a characteristic map having a size of Wu_2 x Hu_2 x Au_2.
- Wu_2 and Hu_2 may represent the characteristic map width and height determined in the second layer 520 and Au_2 may correspond to the number of filter kernels 522 used in the second layer 520.
- the input of the second layer 520 may correspond to the output of the first layer 510.
- the width Wu_2 and the height Hu_2 of the characteristic map determined in the second layer 520 are larger than the width Wu_1 and height Hu_1 of the characteristic map of the first layer 510 according to an embodiment.
- the decompression unit 120 may perform up-sampling of the compressed image 500 using a DNN including n layers.
- the characteristic map determined by the up-sampling performed in the nth layer 530 may have a size of Wu_n x Hu_n x Au_n.
- the restoration unit 120 may determine the restored image 540 using the characteristic map of the nth layer having a size larger than the frame of the compressed image 500.
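- the growth of the plane size from layer to layer can be illustrated with the standard transposed-convolution output-size formula; the kernel/stride/padding choices and the 160 x 90 starting size below are hypothetical, not values from the embodiment:

```python
def transposed_conv_size(w, h, k, stride, pad):
    """Output plane size of a transposed convolution: (in - 1) * stride + k - 2 * pad."""
    return (w - 1) * stride + k - 2 * pad, (h - 1) * stride + k - 2 * pad

w, h = 160, 90  # W0 x H0 of a compressed frame (assumed)
for n, (k, s, p) in enumerate([(4, 2, 1), (4, 2, 1), (4, 2, 1)], start=1):
    w, h = transposed_conv_size(w, h, k, s, p)
    print(f"layer {n}: Wu_{n} x Hu_{n} = {w} x {h}")
# each layer doubles the plane size: 320 x 180, 640 x 360, 1280 x 720
```

With these choices each layer exactly doubles Wu and Hu, so the nth-layer map is larger than the compressed frame, matching the size relationships described above.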
- the decompression unit 120 of the image decompression apparatus 100 may up-sample the compressed image using the DNN.
- the compressed image may be an image temporally compressed using a DNN (e.g., a compressed image whose frame rate is lower than that of the original image).
- the restoration unit 120 may use a DNN (e.g., a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), or the like) to perform temporal up-sampling.
- according to an embodiment, the restoration unit 120 may perform temporal up-sampling by inserting an additional frame between two input frames among a plurality of frames included in the compressed image.
- the up-sampling process using the DNN can be performed in consideration of the frame rate multiplication factor (for example, up-sampling from 30 fps to 60 fps) and the number of frames to be added between two frames. To generate a frame at time t, at least two of the frames of the previous time zones (t-1, t-2, ...) and the frames of the later time zones (t+1, t+2, ...) can be used.
- the decompression unit 120 may perform temporal up-sampling using frames of predetermined time zones according to the number of frames required for up-sampling. For example, if the number of frames required for temporal up-sampling at time t is two, the reconstructing unit 120 may perform up-sampling using frames of time zones t-1 and t+1. If the number of frames required for temporal up-sampling at time t is three, the reconstructing unit 120 may perform up-sampling using frames of the t-1, t-2, t+1 time zones or the t-1, t+1, t+2 time zones.
- the restoring unit 120 may use the frames of time zones t-1, t-2, t+1, and t+2. That is, according to an embodiment, the restoring unit 120 may use the frames of whatever time zones are required in order to perform the temporal up-sampling of time zone t.
- the decompression unit 120 may perform filtering on frames of different time periods used for temporal upsampling to determine a characteristic map for each frame.
- the restoring unit 120 may concatenate the characteristic maps determined for each time zone to determine a characteristic map for the frame of the time zone to be restored.
- the restoring unit 120 may perform filtering (e.g., convolution) on the concatenated characteristic map, thereby temporally restoring the frame at time t.
- the method of concatenating the characteristic maps for each time zone may correspond to the method used in the spatial up-sampling process described above.
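- the per-time-zone filtering, concatenation, and final filtering described above can be sketched as below; the learned filters are replaced by an identity feature extractor and a fixed averaging step, which is only an illustrative assumption:

```python
import numpy as np

def frame_features(frame, w):
    """Toy per-frame filtering standing in for the convolutional feature maps."""
    return w * frame

rng = np.random.default_rng(0)
f_prev = rng.standard_normal((3, 8, 8))  # frame of time zone t-1
f_next = rng.standard_normal((3, 8, 8))  # frame of time zone t+1

# Characteristic maps determined for each time zone are concatenated ...
stacked = np.concatenate([frame_features(f_prev, 1.0),
                          frame_features(f_next, 1.0)], axis=0)
# ... and a final filtering step maps them to the inserted frame at time t.
# Here that step is fixed averaging rather than a learned convolution.
f_t = 0.5 * stacked[:3] + 0.5 * stacked[3:]
print(stacked.shape, f_t.shape)  # (6, 8, 8) (3, 8, 8)
```

A trained DNN would replace both the feature extraction and the averaging with learned convolutions, but the channel-wise concatenation step is the same.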
- the bitstream acquisition unit 110 of the image reconstruction apparatus 100 may obtain, from the bitstream, temporal up-sampling information, which is information indicating the frame of a time zone requiring temporal up-sampling.
- the restoring unit 120 may perform time upsampling based on the obtained temporal upsampling information.
- the image restoration apparatus 100 may perform up-sampling by obtaining change information indicating how much spatial and temporal upsampling should be performed from a bitstream.
- the decompression unit 120 may increase the resolution of the compressed image based on the change information obtained from the bitstream. For example, when the obtained change information indicates that the resolution of the original image is twice that of the compressed image, the restoration unit 120 may perform up-sampling using the DNN to double the resolution of the compressed image.
- as another example, when the obtained change information indicates that the frame rate of the original image is twice that of the compressed image, the decompression unit 120 may perform up-sampling using the DNN to double the frame rate of the compressed image.
- the above-described characteristic of the change information obtained by the image restoration apparatus 100 is merely an example illustrating that the image compression apparatus 150 can generate a bitstream including information indicating the degree of compression of the image; the change information should be interpreted as being able to include various information that can indicate the degree of compression.
- the image restoration apparatus 100 may perform the up-sampling in consideration of its own capabilities. By performing up-sampling in consideration of the amount of computation available to the image restoration apparatus 100, an image optimized for reproduction can be restored. For example, if the display (not shown) included in the image restoration apparatus 100 supports only FHD (Full HD) resolution of 1920 x 1080 as its maximum resolution, and the resolution of the compressed image is 1280 x 720, the restoration unit 120 may perform up-sampling that increases the resolution of the compressed image by only 1.5 times.
- the restoring unit 120 may restore a compressed image having a frame rate of 30 fps by performing up-sampling that doubles the frame rate.
- as another example, the restoring unit 120 may perform up-sampling in which the frame rate of the compressed image is doubled and the resolution of the compressed image is increased by 1.5 times.
- the image restoration apparatus 100 may use a filter kernel for each layer in order to perform up-sampling using DNN.
- the types of filter kernels usable for each layer may be different from the types of filter kernels used for down-sampling. That is, the size and number of filter kernels used in the layers included in the DNN for down-sampling and the DNN for up-sampling may be different.
- FIG. 6 is a diagram for explaining that the types of filter kernels used in up-sampling or down-sampling may be different according to an embodiment.
- in each layer, filtering using a filter kernel (for example, a transposed convolution operation) can be performed as a process for up-sampling.
- the type of filter kernel that can be used in the filtering for up-sampling according to one embodiment may be different from the type of filter kernel used in the filtering for down-sampling. For example, even when the filter kernel sizes used in the DNN for down-sampling are 3x3, 3x3, and 5x5, the restoration unit 120 may determine the filter kernel sizes used in the DNN for up-sampling to be 3x3, 5x5, and 7x7.
- the size and number of the filter kernels available in each layer of the DNN for up-sampling according to an exemplary embodiment may be different from the size and number of the filter kernels used in the DNN for down-sampling.
- FIG. 7A is a diagram for explaining a feature of performing filtering using a plurality of filter kernels in a predetermined layer among a plurality of layers included in a DNN according to an embodiment.
- the restoration unit 120 may perform filtering using one kind of filter kernel for each layer.
- the decompression unit 120 may use a DNN including a plurality of layers to recover a compressed image, which is the input 700 of the DNN.
- the restoration unit 120 may perform filtering using A_a filter kernels having a size of Fw_a x Fh_a in the a-th layer 710 among the plurality of layers, thereby determining A_a characteristic maps having a size of W_a x H_a.
- according to one embodiment, the restoration unit 120 may perform filtering using filter kernels of a plurality of sizes in a predetermined layer.
- the restoration unit 120 may perform filtering using the filter kernel 722 having a size of Fw_b1 x Fh_b1, Fw_b2 x Fh_b2, Fw_b3 x Fh_b3, etc. in the b-th layer 720 among the plurality of layers.
- filter kernels having different sizes may include different numbers of filter kernels.
- A_b1 filter kernels with Fw_b1 x Fh_b1 size, A_b2 filter kernels with Fw_b2 x Fh_b2 size, and A_b3 filter kernels with Fw_b3 x Fh_b3 size can be used for filtering.
- the restoration unit 120 may perform filtering using filter kernels of a plurality of size types, and may determine as many characteristic maps as the number of filter kernels used.
- the restoring unit 120 can determine A_b number of characteristic maps by performing filtering using the filter kernels 722 having the sizes of Fw_b1 x Fh_b1, Fw_b2 x Fh_b2, and Fw_b3 x Fh_b3, where A_b is A_b1 + A_b2 + A_b3.
- the restoring unit 120 may determine a restored image as the output 725 by performing a restoration process using the A_b characteristic maps of W_b x H_b size.
- FIG. 7B is a diagram for explaining a filtering process using characteristic maps determined by filtering with a plurality of sizes of filter kernels according to an exemplary embodiment.
- the decompression unit 120 of the image restoration apparatus 100 may determine characteristic maps using filter kernels having a plurality of sizes in an arbitrary layer.
- the decompression unit 120 may perform filtering using the filter kernels 732 having a plurality of sizes in the n-th layer 730, which is one of a plurality of layers included in the DNN.
- the characteristic maps 740, 742, and 744 having a plurality of sizes can be determined.
- the restoration unit 120 may perform filtering in the nth layer 730 using filter kernels having a size of Fw_n1 x Fh_n1, thereby determining A_n1 characteristic maps having a size of (W_n - Fw_n1 + 1) x (H_n - Fh_n1 + 1). Furthermore, the restoration unit 120 may perform filtering using the filter kernels of the other sizes to determine A_n2 characteristic maps having a size of (W_n - Fw_n2 + 1) x (H_n - Fh_n2 + 1) and A_n3 characteristic maps having a size of (W_n - Fw_n3 + 1) x (H_n - Fh_n3 + 1).
- the decompression unit 120 may perform a padding operation such that the property maps generated for each size of the filter kernel have the same size.
- the padded characteristic maps may have the same size as the input of the layer. Referring to FIG. 7B, the characteristic maps 740, 742, and 744 of a plurality of sizes generated for each filter kernel size may be padded so as to have the same size, W_n x H_n, as the characteristic map or frame of the compressed image input to the nth layer 730. Accordingly, the padded characteristic maps 741, 743, and 745 may have the same size (W_n x H_n). According to an exemplary embodiment, the input and output of a layer using filter kernels of a plurality of sizes may thus have the same plane size.
- that is, the restoration unit 120 may perform filtering using the filter kernels 732 having the sizes Fw_n1 x Fh_n1, Fw_n2 x Fh_n2, and Fw_n3 x Fh_n3 to determine the characteristic maps 740, 742, and 744, and may perform padding on the maps 740, 742, and 744 to determine characteristic maps 741, 743, and 745 padded to the same size.
- the restoring unit 120 may determine the output of the nth layer 730 by concatenating the padded characteristic maps 741, 743, and 745. As a result, when a feature map of W_n x H_n size is input to the nth layer and filtering is performed, A_n1 + A_n2 + A_n3 feature maps having W_n x H_n size can be output.
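- the size bookkeeping of the nth layer 730 can be sketched as follows; only the output shapes are modeled (the convolutions themselves are stubbed out), and the kernel counts A_n1, A_n2, A_n3 are assumed values:

```python
import numpy as np

def valid_conv_maps(x, n_maps, k):
    """Stub for filtering with n_maps k x k kernels ('valid' convolution):
    a k x k kernel shrinks each side of the plane by k - 1."""
    h, w = x.shape
    return np.zeros((n_maps, h - k + 1, w - k + 1))

def pad_to(maps, h, w):
    """Pad each map back up to h x w, matching the layer's input plane size."""
    dh, dw = h - maps.shape[1], w - maps.shape[2]
    return np.pad(maps, ((0, 0), (dh // 2, dh - dh // 2), (dw // 2, dw - dw // 2)))

H_n, W_n = 32, 32
x = np.zeros((H_n, W_n))            # input of the nth layer (assumed size)
kernel_counts = {3: 4, 5: 8, 7: 4}  # kernel size -> number of kernels (assumed)
padded = [pad_to(valid_conv_maps(x, a, k), H_n, W_n)
          for k, a in kernel_counts.items()]
out = np.concatenate(padded, axis=0)  # output of the nth layer
print(out.shape)  # (16, 32, 32): A_n1 + A_n2 + A_n3 maps of W_n x H_n size
```

After padding, every branch's maps share the W_n x H_n plane size, so they can be concatenated along the map axis regardless of the kernel sizes used.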
- the image restoring apparatus 100 may use various types of data units to decode an image and perform upsampling.
- the various embodiments described above may be performed on the basis of various data units that can be used in the image encoding process.
- the restoration unit 120 may perform decoding, up-sampling, and down-sampling of an image using various data units including a video, a sequence, a frame, a slice, a slice segment, a maximum encoding unit, an encoding unit, a prediction unit, and a transformation unit.
- the restoration unit 120 may determine the subjective image quality for each frame.
- the bitstream obtaining unit 110 may obtain change information indicating how much downsampling has been performed for each picture.
- the decompression unit 120 may perform a downsampling or upsampling process for each maximum encoding unit.
- the data units used by the restoration unit 120 to perform a predetermined process should not be construed as limited to the above-described embodiments, and various data units may be used within a range available to those skilled in the art. Features of the various data units that the image restoration apparatus 100 can use will be described later with reference to FIGS. 10 to 23.
- an image compressing apparatus 150 capable of compressing an image to be reconstructed by the image reconstruction apparatus 100 will be described with reference to various embodiments.
- FIG. 1B shows a block diagram of an image compression apparatus 150 for compressing an image according to an embodiment.
- the image compressing apparatus 150 may include a compression unit 160 for compressing an original image by performing a down-sampling process, and a bitstream generating unit 170 for generating a bitstream including information on the compressed image.
- compression unit 160 may execute program commands stored in memory and / or storage devices.
- the compression unit 160 may include at least one processor including a central processing unit (CPU), a graphics processing unit (GPU), and the like.
- FIG. 2B is a flowchart illustrating an image compression process that can be performed by the image compression apparatus 150 according to an embodiment of the present invention.
- in step S210, the image compression apparatus 150 may perform down-sampling of the image using the DNN to determine a compressed image.
- FIG. 3 is a diagram for explaining a process of generating a compressed image through an encoding and decoding process according to an embodiment.
- the compression unit 160 may reduce the amount of information that the original signal 300 has through the process of encoding the original signal 300.
- the encoding process may include a process of generating a residual signal corresponding to the difference between the original signal 300 and a prediction signal, a process of transforming the residual signal, which is a spatial domain component, into a frequency domain component, a process of quantizing the transformed residual signal, and a process of generating a bitstream by entropy coding the quantized residual signal.
- a bitstream decoding process 306 may be performed on the residual signal so that the residual signal, which is a frequency domain component, is transformed into a spatial domain component, and a compressed image 309 can be generated based on the residual signal.
- the bitstream generator 170 may generate a bitstream including a result of transforming the original image 300 into a frequency domain by performing a coding process (step 304).
- the image compression apparatus 150 may down-sample 302 the original image 300 to generate a compressed image 303, and may perform encoding 304 on the compressed image 303.
- the compression unit 160 may perform not only an encoding process but also a corresponding decoding process for error-free decoding.
- the compression unit 160 may obtain the decoded compressed image 307 by performing a decoding process, and may up-sample 308 the decoded compressed image 307 to determine the compressed image 309.
- the bitstream generating unit 170 may generate a bitstream including information about the compressed image 309 and may transmit the bitstream to the image reconstruction apparatus 100 capable of reconstructing the compressed image.
- down-sampling 302 and up-sampling 308 may be performed using a DNN (Deep Neural Network); various embodiments of down-sampling 302 and up-sampling 308 using such a DNN will be described later.
- the image compression apparatus 150 may decode the compressed image using the prediction signal obtained by performing the residual signal and the prediction according to an embodiment of the present invention.
- the compression unit 160 of the image compression apparatus 150 may divide the original image to be compressed based on a predetermined data unit. For example, the compression unit 160 may divide an image into a plurality of maximum encoding units, and encode the image using encoding units determined by dividing each maximum encoding unit recursively. According to an exemplary embodiment, the compression unit 160 may perform a prediction process to encode a signal included in an encoding unit.
- the image compression apparatus may determine a residual signal based on the compressed image and the prediction signal according to an embodiment.
- the compression unit 160 may determine a residual signal by subtracting the predicted signal determined in step S212 from the compressed image determined in step S210.
- the compression unit 160 may perform predetermined processes (e.g., in-loop filtering, a DPB storing process, entropy coding, and the like) for additionally encoding the image with respect to the residual signal.
- the image encoding process using the residual signal can be included in various processes that can be easily performed by those skilled in the art.
- the bitstream generator 170 of the image compression apparatus 150 may generate a bitstream including information related to the encoded residual signal.
- FIG. 4A illustrates a deep convolutional neural network included in the DNN.
- the image compression apparatus 150 may use a DNN including a plurality of layers to perform downsampling.
- the compression unit 160 may use a deep convolutional neural network, which performs a convolution operation over a plurality of layers, as a DNN capable of performing down-sampling.
- the deep convolutional neural network may include a plurality of layers (e.g., a plurality of layers including the first layer 410 and the nth layer 420).
- each of the plurality of layers constituting the deep convolutional neural network may include convolution layers for generating a plurality of feature maps using filter kernels, and activation layers.
- the convolution layers may each include a plurality of nodes.
- the convolution layer may generate a plurality of characteristic maps using a plurality of filter kernels.
- the characteristic maps generated by the nodes of the convolution layer can be input to the activation layer.
- the compression unit 160 may perform convolution operation and activation for each of a plurality of nodes 411, 412, 413, etc. included in the first layer 410.
- the compression unit 160 may perform a convolution operation on the input signal of the first layer (for example, the input 400, which is the signal to be compressed) in the convolution layers CL1_1, CL1_2, ..., CL1_a included in the first layer, and different filter kernels may be used for the convolution operation in each of the convolution layers CL1_1, CL1_2, ..., CL1_a.
- a convolution operation result may be input to an activation layer associated with each convolution layer to activate a result of the convolution operation in each of the convolution layers CL1_1, CL1_2, ..., CL1_a.
- the compression unit 160 may determine a plurality of characteristic maps of the first layer 410 by activating the result of the convolution operation.
- the number of characteristic maps obtained in a specific layer may be proportional to the number of filter kernels.
- the characteristic map acquired in a specific layer can be used as an input value of the next layer. That is, the characteristic map obtained in the first layer 410 is input to the nth layer 420 (n > 1), and convolution operation and activation can be performed.
- a predetermined signal processing process performed in each layer including convolution operation and activation is referred to as a filtering process.
- the features of the DNN that the image compression apparatus 150 can use according to an exemplary embodiment may be the same as or similar to those of the DNN used by the image restoration apparatus 100 described with reference to FIG. 4A, and a detailed description thereof will be omitted.
- FIG. 5B is a view for explaining a down-sampling operation of spatial information using DNN according to an embodiment.
- the compression unit 160 may downsample the original image spatially using DNN.
- the compression unit 160 may use a DNN to perform various operations associated with the convolution operation for downsampling.
- the downsampling operation performed by the compression unit 160 may include operations such as convolution, pooling, and the like.
- according to an exemplary embodiment, the compression unit 160 may use a DNN including a plurality of layers 560, 570, and 580 to perform down-sampling on a plurality of frames included in the original image 550. At each layer, a convolution for down-sampling can be performed. The compression unit 160 may determine a frame whose resolution is decreased according to the convolution result of each layer. According to one embodiment, the compression unit 160 may perform a convolution on a frame of the original image 550 in the first layer 560, thereby determining a characteristic map having a size of Wd_1 x Hd_1 x Ad_1.
- Wd_1 and Hd_1 may represent the width and height of the characteristic map determined in the first layer 560 and Ad_1 may correspond to the number of filter kernels 562 used in the first layer 560.
- the width Wd_1 and the height Hd_1 of the characteristic map determined in the first layer 560 according to an exemplary embodiment may be smaller than the width W0 and the height H0 of the frame of the original image input to the first layer 560.
- the compression unit 160 may perform convolution in the second layer 570, and as a result, may determine a characteristic map having a size of Wd_2 x Hd_2 x Ad_2.
- Wd_2 and Hd_2 may represent the property map width and height determined in the second layer 570 and Ad_2 may correspond to the number of filter kernels 572 used in the second layer 570.
- the input of the second layer 570 may correspond to the output of the first layer 560.
- the width Wd_2 and the height Hd_2 of the characteristic map determined in the second layer 570 are smaller than the width Wd_1 and height Hd_1 of the characteristic map of the first layer 560 according to an embodiment.
- the compression unit 160 may perform downsampling of the original image 550 using DNN including n layers.
- the characteristic map determined by performing the downsampling performed in the nth layer 580 may have a size of Wd_n x Hd_n x Ad_n.
- the compression unit 160 may determine the compressed image 540 using the characteristic map of the nth layer having a size smaller than the frame of the original image 550.
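- conversely to the up-sampling case, a strided convolution shrinks the plane size layer by layer; the formula below is the standard convolution output size, with hypothetical kernel/stride/padding choices and a hypothetical 1280 x 720 original frame:

```python
def conv_size(w, h, k, stride, pad):
    """Output plane size of a strided convolution: (in + 2 * pad - k) // stride + 1."""
    return (w + 2 * pad - k) // stride + 1, (h + 2 * pad - k) // stride + 1

w, h = 1280, 720  # W0 x H0 of an original frame (assumed)
for n, (k, s, p) in enumerate([(5, 2, 2), (3, 2, 1), (3, 2, 1)], start=1):
    w, h = conv_size(w, h, k, s, p)
    print(f"layer {n}: Wd_{n} x Hd_{n} = {w} x {h}")
# each layer halves the plane size: 640 x 360, 320 x 180, 160 x 90
```

With these choices Wd_n and Hd_n shrink at every layer, so the nth-layer map is smaller than the original frame, matching the size relationships described above.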
- the compression unit 160 of the image compression apparatus 150 may temporally downsample an original image using DNN.
- the compressed image may be an image temporally compressed using a DNN (e.g., a compressed image whose frame rate is lower than that of the original image).
- the compression unit 160 may compress a plurality (for example, two or more) of frames of the original image using a DNN (for example, a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), or the like).
- the compression unit 160 may perform a temporal downsampling operation to remove a predetermined frame among the frames of the original image.
- the down-sampling process using the DNN can be performed in consideration of the frame rate multiplication factor (e.g., down-sampling from 60 fps to 30 fps) and the number of frames to be removed.
- the compression unit 160 may use at least two of the frames of the previous time zones (t-1, t-2, ...) and the frames of the later time zones (t+1, t+2, ...).
- the compression unit 160 may perform temporal down-sampling using frames of predetermined time zones according to the number of frames required for down-sampling. For example, if the number of frames required for temporal down-sampling at time t is two, the compression unit 160 can perform down-sampling using frames of time zones t-1 and t+1. In another example, if the number of frames required for temporal down-sampling at time t is three, the compression unit 160 may perform down-sampling on the frame of time zone t using frames of the t-1, t-2, t+1 time zones or the t-1, t+1, t+2 time zones.
- the compression unit 160 may perform the down-sampling using the frames of time zones t-1, t-2, t+1, and t+2.
- that is, the compression unit 160 may use the frames of whatever time zones are required in order to perform the temporal down-sampling of time zone t.
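- frame-rate reduction by frame removal can be sketched as a simple selection of every other frame; a real encoder would choose which frames to drop using the motion and scene analysis described below, so the uniform stride here is only an illustrative assumption:

```python
def temporal_downsample(frames, factor=2):
    """Drop frames uniformly: e.g. 60 fps -> 30 fps when factor == 2."""
    return frames[::factor]

frames_60fps = list(range(60))  # one second of frame indices at 60 fps
frames_30fps = temporal_downsample(frames_60fps)
print(len(frames_30fps), frames_30fps[:4])  # 30 [0, 2, 4, 6]
```

The removed frames (the odd indices here) are what the decoder's temporal up-sampling later regenerates from the neighboring time zones.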
- the compression unit 160 may perform filtering on the frames of the different time zones used for temporal down-sampling to determine a characteristic map for each frame.
- when the compression unit 160 determines, in consideration of motion information (e.g., a global motion vector and a local motion vector) of the characteristic maps determined for each time zone, that there is little change among a plurality of frames, it can perform temporal down-sampling to remove frames included between the plurality of frames.
- when the compression unit 160 compares the characteristic maps determined for the respective time zones and determines that the frames of the plurality of time zones belong to different scenes, it can determine that temporal down-sampling is not to be performed.
- the compression unit 160 of the image compression apparatus 150 may determine which frame is to be subjected to temporal down-sampling, and the bitstream generation unit 170 may generate a bitstream including temporal down-sampling information, which is information indicating which frame has been removed.
- the image compression apparatus 150 may generate a bitstream including change information indicating how much spatial and temporal downsampling is to be performed.
- for example, when the compression unit 160 performs down-sampling that halves the resolution of the original image, the bitstream generation unit 170 may generate a bitstream containing change information indicating that the resolution of the original image is twice that of the compressed image.
- as another example, when the compression unit 160 performs down-sampling that reduces the frame rate of the original image by a factor of 1/2, the bitstream generation unit 170 may generate a bitstream containing change information indicating that the frame rate of the compressed image has been reduced by a factor of 1/2.
- the above-described feature of the change information included in the bitstream that the image compression apparatus 150 can generate is merely an example illustrating that the image compression apparatus 150 can generate a bitstream including information indicating the degree of compression of the image; the change information should be interpreted as including various information that can indicate the degree of compression.
- the image restoration apparatus 100 may use a filter kernel for each layer in order to perform up-sampling using DNN.
- the types of filter kernels usable in each layer of the DNN for upsampling may be different from the types of filter kernels usable in each layer of the DNN for downsampling.
- FIG. 6 is a diagram for explaining that the types of filter kernels used in up-sampling or down-sampling may be different according to an embodiment.
- the compression unit 160 may use a DNN including n layers 610, 620, ..., 630 to generate a downsampled compressed image 635.
- in each layer, filtering using a filter kernel (for example, a convolution operation) can be performed as part of the process for downsampling.
- the size of the filter kernels 612, 622, and 632 used for filtering for each layer may be at least one kind of size.
- filtering may be performed using A_1 filter kernels having a size of 5x5 in the first layer 610
- filtering may be performed using A_2 filter kernels having a size of 3x3 in the second layer 620
- filtering may be performed using A_n filter kernels having a size of 3x3 in the nth layer 630.
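- The per-layer sizing described above can be sketched as follows. This is a minimal illustration, not the patent's network: the strides, the 64x64 input size, and the use of unpadded (valid) convolutions are assumptions made only to show how each layer's kernel size shrinks the feature maps.

```python
def conv_output_size(in_w, in_h, kernel, stride=1):
    """Spatial size of a valid (unpadded) convolution output."""
    return (in_w - kernel) // stride + 1, (in_h - kernel) // stride + 1

# Hypothetical n=3 layer downsampling stack mirroring FIG. 6:
# a 5x5 kernel in the first layer 610, 3x3 kernels afterwards.
layers = [(5, 2), (3, 2), (3, 2)]  # (kernel size, stride) per layer

w, h = 64, 64  # assumed original-image size
for kernel, stride in layers:
    w, h = conv_output_size(w, h, kernel, stride)
```

With these assumed strides the 64x64 input shrinks to a much smaller spatial size, which is the downsampling effect the text attributes to the DNN.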
- the compression unit 160 may use DNN to upsample the downsampled compressed image 635 through n layers.
- in each layer of the DNN for upsampling, filtering using a filter kernel (e.g., a transposed convolution operation) can be performed as a process for upsampling.
- the type of filter kernel that can be used in the filtering for upsampling according to one embodiment may be different from the type of filter kernel used in the filtering for downsampling. For example, even if the filter kernel sizes used in the DNN for downsampling are 3x3, 3x3, and 5x5, the compression unit 160 can determine the filter kernel sizes used in the DNN for upsampling to be 3x3, 5x5, and 7x7.
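- The size relationship for the upsampling direction can be sketched with transposed-convolution arithmetic. The kernel sizes 3x3, 5x5, and 7x7 come from the example above, while the strides, the 6x6 input, and the absence of padding are illustrative assumptions.

```python
def tconv_output_size(in_w, in_h, kernel, stride=1):
    """Spatial size of an unpadded transposed-convolution output."""
    return (in_w - 1) * stride + kernel, (in_h - 1) * stride + kernel

# Hypothetical upsampling stack with the 3x3, 5x5, 7x7 kernel sizes
# mentioned in the text; strides are assumed.
layers = [(3, 2), (5, 2), (7, 2)]
w, h = 6, 6  # assumed size of the downsampled compressed image
for kernel, stride in layers:
    w, h = tconv_output_size(w, h, kernel, stride)
```

Each transposed-convolution layer expands the spatial size, reversing the shrinkage of the downsampling DNN.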
- the size and number of the filter kernels available in each layer of the DNN for upsampling used by the compression unit 160 according to an exemplary embodiment may be different from the size and number of the filter kernels used in the DNN for downsampling.
- the compression unit 160 can downsample the original image using the DNN for downsampling, and generate the encoded residual signal using the downsampled compressed image.
- the compression unit 160 may decode the residual signal and then perform upsampling using the DNN to determine the reconstructed signal, and the learning processes of the DNNs used in the downsampling and upsampling processes may be shared.
- the learning process of the DNN will be described later with reference to embodiments.
- FIG. 7A is a diagram for explaining the feature of performing filtering using a plurality of filter kernels in a predetermined layer among a plurality of layers included in a DNN, according to an embodiment.
- the compression unit 160 may perform filtering using one kind of filter kernel for each layer.
- the compression unit 160 may use a DNN including a plurality of layers to compress an original image, which is an input 700 of DNN.
- the compression unit 160 may determine A_a characteristic maps having a size of W_a x H_a by performing filtering using A_a filter kernels having a size of Fw_a x Fh_a in the a-th layer 710 among the plurality of layers.
- the compression unit 160 may, according to one embodiment, perform filtering using filter kernels of a plurality of sizes in a predetermined layer.
- the compression unit 160 may perform filtering using the filter kernels 722 having sizes of Fw_b1 x Fh_b1, Fw_b2 x Fh_b2, Fw_b3 x Fh_b3, etc. in the b-th layer 720 among the plurality of layers.
- different numbers of filter kernels may be used for the different kernel sizes.
- A_b1 filter kernels with Fw_b1 x Fh_b1 size, A_b2 filter kernels with Fw_b2 x Fh_b2 size, and A_b3 filter kernels with Fw_b3 x Fh_b3 size can be used for filtering.
- the compression unit 160 may perform filtering using filter kernels of a plurality of sizes, and may determine as many characteristic maps as the number of filter kernels used.
- the compression unit 160 can determine A_b characteristic maps by performing filtering using the filter kernels 722 having sizes of Fw_b1 x Fh_b1, Fw_b2 x Fh_b2, and Fw_b3 x Fh_b3, where A_b = A_b1 + A_b2 + A_b3.
- the compression unit 160 may determine a compressed image, which is the output 725 of the DNN, by performing the remaining compression process using the A_b characteristic maps of size W_b x H_b.
- FIG. 7B is a diagram for explaining a filtering process using characteristic maps determined by filtering with a plurality of sizes of filter kernels according to an exemplary embodiment.
- the compression unit 160 of the image compression apparatus 150 may determine characteristic maps using filter kernels having a plurality of sizes in an arbitrary layer.
- the compression unit 160 may perform filtering using the filter kernels 732 having a plurality of sizes in the n-th layer 730, which is one of a plurality of layers included in the DNN.
- characteristic maps 740, 742, and 744 having a plurality of sizes can be determined.
- the compression unit 160 may determine A_n1 characteristic maps having a size of (W_n - Fw_n1 + 1) x (H_n - Fh_n1 + 1) by performing filtering using the filter kernels having a size of Fw_n1 x Fh_n1 in the n-th layer 730. Further, the compression unit 160 may perform filtering using the filter kernels of the other sizes to determine A_n2 characteristic maps having a size of (W_n - Fw_n2 + 1) x (H_n - Fh_n2 + 1) and A_n3 characteristic maps having a size of (W_n - Fw_n3 + 1) x (H_n - Fh_n3 + 1).
- the compression unit 160 may perform a padding operation such that the characteristic maps generated for each size of the filter kernel have the same size.
- the padded characteristic maps may have the same size as the input of the layer. Referring to FIG. 7B, padding can be performed on the characteristic maps 740, 742, and 744 of a plurality of sizes generated for each size of the filter kernel so that they have the same size as W_n x H_n, which is the size of the characteristic maps input to the n-th layer 730.
- the padded characteristic maps 741, 743, and 745 may have the same size (W_n x H_n).
- the input and output of a layer using a plurality of sizes of filter kernels may have the same plane size.
- the compression unit 160 may perform filtering using filter kernels of a plurality of sizes, and may determine as many characteristic maps as the number of filter kernels used. That is, the compression unit 160 can perform filtering using the filter kernels 732 having sizes of Fw_n1 x Fh_n1, Fw_n2 x Fh_n2, and Fw_n3 x Fh_n3 to determine the characteristic maps 740, 742, and 744, and can perform padding on the characteristic maps 740, 742, and 744 to determine the characteristic maps 741, 743, and 745 padded to the same size.
- the compression unit 160 may determine the output of the n-th layer 730 by concatenating the padded characteristic maps 741, 743, and 745. As a result, when characteristic maps of size W_n x H_n are input to the n-th layer and filtering is performed, A_n1 + A_n2 + A_n3 characteristic maps having a size of W_n x H_n can be output.
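- The pad-and-concatenate behaviour of the n-th layer can be sketched as follows. The use of zero padding placed at the bottom-right is an assumption (the text only requires that all maps end up the same W_n x H_n size before concatenation), and the map contents are dummy values.

```python
def pad_to(fmap, out_w, out_h, value=0.0):
    """Zero-pad a 2-D feature map (list of rows) to out_h x out_w,
    keeping the original values in the top-left corner."""
    h, w = len(fmap), len(fmap[0])
    padded = [row + [value] * (out_w - w) for row in fmap]
    padded += [[value] * out_w for _ in range(out_h - h)]
    return padded

def concat_padded(maps, out_w, out_h):
    """Pad every map to the layer input size and stack along channels."""
    return [pad_to(m, out_w, out_h) for m in maps]

# Three maps produced by 1x1-, 2x2- and 3x3-sized kernels on a 4x4 input
maps = [
    [[1.0] * 4 for _ in range(4)],  # 4x4 (valid conv with 1x1 kernel)
    [[2.0] * 3 for _ in range(3)],  # 3x3 (valid conv with 2x2 kernel)
    [[3.0] * 2 for _ in range(2)],  # 2x2 (valid conv with 3x3 kernel)
]
stacked = concat_padded(maps, 4, 4)
```

After padding, all three maps share the 4x4 input size and can be concatenated along the channel axis, mirroring the A_n1 + A_n2 + A_n3 output described above.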
- FIG. 8 is a diagram illustrating loss information generated in a DNN for downsampling according to an embodiment.
- the compression unit 160 of the image compression apparatus 150 may determine a downsampled image 808 using a downsampling DNN 806 that can downsample the original image 800 .
- the downsampled image 808 determined through the downsampling DNN 806 may differ greatly from the original image 800 in terms of the structural features of the image (e.g., hue, contrast, histogram, etc.). If the downsampled image 808 is significantly different from the original image 800, the encoding efficiency may be degraded.
- the structural information storage unit 802 of the compression unit 160 may determine a structure-preserved image 804 whose spatial size is reduced relative to the original image 800 while the structural features of the original image 800 are preserved, and the compression unit 160 can compare the downsampled image 808 and the structure-preserved image 804 with each other.
- the downsampled image 808 and the structure-preserved image 804 may have the same or similar spatial resolution according to an exemplary embodiment.
- the structural information storage unit 802 may generate a structure-preserved image 804 in consideration of various structural features such as brightness, contrast, histogram, image compression rate, encoding quality, and compression history information, and the compression unit 160 may generate a downsampled image 808 according to the result of the comparison with the structure-preserved image 804.
- the structural information may include predetermined information based on the original image 800, and may also include structural information determined based on input signals or parameter information.
- the structural information storage unit 802 may use structural features such as the brightness, contrast, and histogram of the original image 800 to generate a structure-preserved image 804 of reduced size or resolution that has characteristics similar to the structural features of the original image 800.
- the structural information storage unit 802 may generate the structure-preserved image 804 based on the encoding quality or the compression ratio indicating the degree of entropy coding of the original image 800.
- the spatial resolution of the structure-preserved image 804 may be determined according to a predetermined encoding quality or an encoding quality determined based on information input from the user or from outside, and the spatial resolution of the compressed image resulting from the downsampling performed by the downsampling DNN can thus be determined accordingly.
- the structural information storage unit 802 may generate the downsampled image 808 using the compression history information stored in the image compression apparatus 150.
- the image compression apparatus 150 can determine the spatial resolution of the structure-preserved image 804 using compression history information stored in a storage unit (not shown) or received from the outside, and the spatial size of the downsampled image 808 can be determined accordingly. More specifically, a user's preferred encoding quality or compression rate can be determined from the compression history information available to the image compression apparatus 150, and the sizes of the structure-preserved image 804 and the downsampled image 808 may be determined according to the encoding quality determined based on that compression history information.
- the sizes of the structure-preserved image 804 and the downsampled image 808 may be determined according to the encoding quality that has been used most often according to the compression history information. For example, they may be determined based on the encoding qualities that have been used more than a predetermined threshold number of times according to the compression history information (for example, by using the average of the encoding qualities used more than the predetermined threshold number of times).
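- A minimal sketch of choosing an encoding quality from compression history as described above. The history format (a quality-to-usage-count mapping) and the fallback rule when no quality exceeds the threshold are assumptions for illustration.

```python
def quality_from_history(history, threshold):
    """Pick a target encoding quality from compression history:
    average the qualities used more often than `threshold`, falling
    back to the single most-used quality when none qualifies."""
    frequent = [q for q, count in history.items() if count > threshold]
    if frequent:
        return sum(frequent) / len(frequent)
    return max(history, key=history.get)

# hypothetical usage counts per encoding quality
history = {30: 12, 35: 9, 40: 2}
```

The selected quality could then drive the sizes of the structure-preserved image and the downsampled image.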
- the structural information storage unit 802 may generate a structure-preserved image 804 based on the type of the original image 800.
- for example, depending on the type of the original image 800, the structural information or the image quality may remain similar to that of the original image 800 even if the image is downsampled and restored later, and for certain types this may hold even if the resolution is reduced by m%.
- the structural information storage unit 802 can determine the ratio by which the spatial resolution is to be reduced (i.e., the "reduction information") in consideration of the type of the original image 800, and can thereby generate the structure-preserved image 804.
- the reduction information may be determined by the structural information storage unit 802, but it may also be determined arbitrarily according to the input of the user.
- the reduction information according to an exemplary embodiment may be encoded and transmitted through a bitstream.
- the downsampling DNN 806 may downsample the original image 800 based on the reduction information.
- the structure of the downsampling DNN 806 required to perform downsampling may differ according to the reduction ratio indicated by the reduction information. For example, in order to reduce the original image 800 by the maximum ratio, all layers in the downsampling DNN 806 should be used, whereas when the original image 800 is reduced by a ratio smaller than the maximum ratio, some layers of the downsampling DNN 806 need not be used.
- the downsampling DNN 806 can adjust the degree of reduction of the original image 800 using only some of its layers, and at this time, the layers to be used for downsampling in the downsampling DNN 806 can be determined based on the reduction information.
- the downsampling DNN 806 is a network learned in consideration of structural information of an image, a compressed bit amount, and a restoration network. At this time, the learning of the downsampling DNN 806 is performed in such a manner that the connection relations and the weights of the plurality of network nodes constituting the downsampling DNN 806 are updated based on the input / output data set provided for learning.
- the downsampling DNN 806 may be an always updateable network.
- the compression unit 160 may determine first loss information 812 indicating the magnitude of the difference between the structure-preserved image 804 and the downsampled image 808 according to one embodiment. According to an embodiment, the compression unit 160 may determine second loss information 814 indicating the spatial complexity of the downsampled image 808. According to one embodiment, the compression unit 160 may determine the second loss information 814 by calculating a total variation value to measure the spatial complexity of the downsampled image 808.
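- The total variation measure used for the second loss information 814 can be sketched as follows, assuming the common anisotropic form (sum of absolute differences between neighbouring pixels); the patent does not fix a particular variant.

```python
def total_variation(img):
    """Anisotropic total variation of a 2-D image (list of rows):
    sum of absolute differences between horizontally and vertically
    adjacent pixels. Lower values mean a spatially simpler image."""
    h, w = len(img), len(img[0])
    tv = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                tv += abs(img[y + 1][x] - img[y][x])
    return tv

flat = [[5.0] * 4 for _ in range(4)]   # constant image: no variation
edge = [[0.0, 0.0, 9.0, 9.0]] * 4      # image with one vertical edge
```

A constant image has zero total variation, while sharp edges raise it, so minimizing this term pushes the downsampled image toward lower spatial complexity (and hence fewer bits).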
- FIG. 9 is a diagram illustrating loss information generated in DNN for up-sampling.
- the downsampled image 908 may be upsampled through the upsampling DNN 910, and the resulting reconstructed image 916 may be determined.
- the input of the up-sampling DNN 910 may be a down-sampled image 908 or a decoded image after the down-sampled image 908 is encoded.
- the compression unit 160 may determine the third loss information 918 and the fourth loss information 920 by comparing the original image 900 and the restored image 916 according to an embodiment.
- the third loss information 918 may represent the L1-norm value of the difference between the original image 900 and the reconstructed image 916, and the fourth loss information 920 may represent the L2-norm value of the difference between the original image 900 and the reconstructed image 916.
- the L1-norm may be the result of adding the absolute values of the vector components representing the difference between the original image 900 and the reconstructed image 916.
- the L2-norm may represent the square root of the sum of the squares of the vector components representing the difference between the original image 900 and the reconstructed image 916.
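- The two norms can be written directly from these definitions; for simplicity the images are flattened to 1-D lists of pixel values.

```python
import math

def l1_norm(original, restored):
    """Sum of absolute pixel differences (third loss information)."""
    return sum(abs(o - r) for o, r in zip(original, restored))

def l2_norm(original, restored):
    """Square root of the sum of squared pixel differences
    (fourth loss information)."""
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(original, restored)))

original = [10.0, 20.0, 30.0, 40.0]
restored = [11.0, 18.0, 30.0, 43.0]
```

The L1 term penalizes all errors proportionally, while the L2 term penalizes large pixel errors more heavily.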
- the compression unit 160 may train the DNN for upsampling and the DNN for downsampling through the following Equation (1).
- Loss_DS may correspond to a sum of at least one piece of loss information indicating the loss caused by downsampling.
- Loss_US may correspond to a sum of at least one piece of loss information determined by a comparison between the reconstructed (upsampled) image and the original image before downsampling is performed.
- a, b, c, and d may correspond to predetermined weights.
- the compression unit 160 may share certain loss information to determine Loss_DS and Loss_US.
- the compression unit 160 may determine Loss_DS and Loss_US based on the fourth loss information, as shown in Equation (1).
- the information shared in the process of determining Loss_DS and Loss_US should not be interpreted as being limited to the above-described embodiments; various kinds of loss information, within a range easily practicable by a person skilled in the art, should be construed as being commonly usable in the process of determining Loss_DS and Loss_US.
- the DNN for upsampling, which can be used by the restoration unit 120 of the image restoration apparatus 100 according to an exemplary embodiment, may be trained so that the sum of the loss information determined by comparing the image that is decoded and then upsampled with the original image is reduced.
- the restoration unit 120 may perform training so that Loss_US has a minimum value based on the weighted third loss information and fourth loss information.
- the restoration unit 120 may perform upsampling using the trained DNN, thereby giving priority to restoration performance, by training the DNN for upsampling so that Loss_US has a minimum value.
- some of the at least one loss information used in the learning process of the DNN for upsampling may also be used in the learning process of the DNN for downsampling.
- the fourth loss information used for determining Loss_US may be one piece of the loss information used in the determination of Loss_DS.
- the DNN for downsampling used by the compression unit 160 of the image compression apparatus 150 may be trained so that the sum of at least one piece of loss information indicating the loss caused by downsampling is reduced.
- the compression unit 160 may perform training so that Loss_DS has a minimum value based on the weighted first loss information, second loss information, and fourth loss information.
- the compression unit 160 may perform downsampling using the trained DNN, thereby giving priority to both compression performance and reconstruction performance, by training the DNN for downsampling so that Loss_DS has a minimum value.
- at least one piece of the loss information used in the learning process of the DNN for downsampling may be determined based on the result of a comparison between the image reconstructed by performing upsampling after the compressed image is decoded and the original image before the downsampling is performed, and that comparison result may be the one used in the learning process of the DNN for upsampling.
- the fourth loss information may be used not only in the learning process of DNN for downsampling but also in the learning process of DNN for upsampling.
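- Since Equation (1) itself is not reproduced in this excerpt, the following only sketches the weighted-sum structure described above: Loss_DS combines the first, second, and shared fourth loss information with weights a, b, d, and Loss_US combines the third and shared fourth loss information with weights c, d. The particular weight values below are placeholders.

```python
def loss_ds(first, second, fourth, a, b, d):
    """Downsampling objective: weighted sum of the first, second and
    (shared) fourth loss information."""
    return a * first + b * second + d * fourth

def loss_us(third, fourth, c, d):
    """Upsampling objective: weighted sum of the third and the
    (shared) fourth loss information."""
    return c * third + d * fourth

fourth = 2.0  # L2 difference between original and reconstruction, shared
ds = loss_ds(first=1.0, second=0.5, fourth=fourth, a=1.0, b=0.1, d=0.5)
us = loss_us(third=1.5, fourth=fourth, c=1.0, d=0.5)
```

Because the fourth loss appears in both sums, reducing the reconstruction error improves both training objectives at once, which is the coupling between the two DNNs that the text describes.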
- the various embodiments described above may be performed by the compression unit 160 according to one embodiment, based on various data units that can be used in the encoding of the image.
- the compression unit 160 may perform encoding, downsampling, and upsampling of an image using various data units including a video, a sequence, a frame, a slice, a slice segment, a maximum encoding unit, an encoding unit, a prediction unit, and a transformation unit.
- the bitstream generator 170 may generate a bitstream including change information indicating how much the original image is compressed through downsampling for each picture.
- the compression unit 160 may perform a downsampling or upsampling process for each maximum encoding unit.
- the DNN related model described above can be implemented as a software module.
- when implemented as a software module (e.g., a program module containing instructions), the DNN model may be stored in a computer-readable recording medium.
- the DNN model may be integrated in the form of a hardware chip to become a part of the image restoration apparatus 100 or the image compression apparatus 150 described above.
- the DNN model may be made in the form of a dedicated hardware chip for artificial intelligence, or as part of a conventional general-purpose processor (e.g., a CPU or an application processor) or a graphics-dedicated processor (e.g., a GPU).
- the DNN model may also be provided in the form of downloadable software.
- a computer program product may include a product (e.g., a downloadable application) in the form of a software program that is electronically distributed through an electronic marketplace. For electronic distribution, at least a portion of the software program may be stored on a storage medium or may be created temporarily.
- the storage medium may be a server of a manufacturer, a server of an electronic market, or a storage medium of a relay server.
- Hereinafter, a method of determining a data unit of an image according to an embodiment will be described with reference to FIGS. 10 to 23.
- FIG. 10 illustrates a process in which the image restoring apparatus 100 determines at least one encoding unit by dividing a current encoding unit according to an embodiment.
- the image restoring apparatus 100 can determine the shape of an encoding unit using block type information, and can determine the shape into which an encoding unit is divided using division type information. That is, the division method of the encoding unit indicated by the division type information can be determined according to which block shape the block type information used by the image restoration apparatus 100 represents.
- the image restoration apparatus 100 may use block type information indicating that the current encoding unit has a square shape. For example, the image restoration apparatus 100 may determine, according to the division type information, whether to leave a square encoding unit undivided, divide it vertically, divide it horizontally, or divide it into four encoding units. Referring to FIG. 10, if the block type information of the current coding unit 1000 indicates a square shape, the image restoring apparatus 100 may determine not to divide the coding unit 1010a, which has the same size as the current coding unit 1000, according to the division type information indicating that the current block is not divided, or may determine the divided coding units 1010b, 1010c, and 1010d based on the division type information indicating a predetermined division method.
- the image restoring apparatus 100 may determine two encoding units 1010b obtained by dividing the current encoding unit 1000 in the vertical direction, based on the division type information indicating division in the vertical direction, according to an embodiment.
- the image reconstruction apparatus 100 can determine two coding units 1010c obtained by dividing the current coding unit 1000 in the horizontal direction based on the division type information indicating that the image is divided in the horizontal direction.
- the image restoration apparatus 100 can determine four coding units 1010d obtained by dividing the current coding unit 1000 in the vertical and horizontal directions, based on the division type information indicating division in the vertical and horizontal directions.
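- The four outcomes for a square coding unit (no division, vertical division, horizontal division, or four-way division) can be sketched as rectangle arithmetic; the (x, y, w, h) representation and the mode names are illustrative choices, not the patent's data structures.

```python
def split_square_unit(x, y, w, h, mode):
    """Split a square coding unit at (x, y) of size w x h.
    mode: 'none', 'vertical', 'horizontal' or 'quad'."""
    if mode == "none":
        return [(x, y, w, h)]
    if mode == "vertical":      # two units side by side
        return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    if mode == "horizontal":    # two units stacked
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    if mode == "quad":          # four equal squares
        hw, hh = w // 2, h // 2
        return [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    raise ValueError(mode)
```

For a 64x64 unit, the modes yield one 64x64 unit, two 32x64 units, two 64x32 units, or four 32x32 units respectively.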
- the division types into which a square coding unit can be divided should not be interpreted as limited to the above-described forms, and may include various forms that the division type information can represent.
- the predetermined divisional form in which the square encoding unit is divided will be described in detail by way of various embodiments below.
- FIG. 11 illustrates a process in which the image restoring apparatus 100 determines at least one encoding unit by dividing a non-square encoding unit according to an embodiment.
- the image restoration apparatus 100 may use block type information indicating that the current encoding unit is a non-square type.
- the image restoring apparatus 100 may determine, according to the division type information, whether to divide the current non-square coding unit or whether to divide it by a predetermined method. Referring to FIG. 11, if the block type information of the current coding unit 1100 or 1150 indicates a non-square shape, the image restoring apparatus 100 may determine not to divide the current coding unit 1100 or 1150 according to the division type information indicating that the current block is not divided.
- the image restoration apparatus 100 may determine the shape into which an encoding unit is divided using the division type information.
- the division type information may indicate the number of at least one encoding unit generated by dividing the encoding unit. Referring to FIG. 11, if the division type information indicates that the current encoding unit 1100 or 1150 is divided into two encoding units, the image reconstruction apparatus 100 may divide the current encoding unit 1100 or 1150 based on the division type information to determine the two encoding units 1120a and 1120b, or 1170a and 1170b, included in the current encoding unit.
- when the image restoring apparatus 100 divides the non-square current coding unit 1100 or 1150 based on the division type information, the current encoding unit can be divided in consideration of the position of its long side. For example, the image restoring apparatus 100 may determine a plurality of encoding units by dividing the current encoding unit 1100 or 1150 in the direction that divides its long side, in consideration of the shape of the current encoding unit 1100 or 1150.
- the image restoring apparatus 100 may determine an odd number of encoding units included in the current encoding unit 1100 or 1150.
- for example, the image reconstruction apparatus 100 may divide the current encoding unit 1100 or 1150 into three encoding units 1130a, 1130b, and 1130c, or 1180a, 1180b, and 1180c.
- the image restoration apparatus 100 may determine an odd number of encoding units included in the current encoding unit 1100 or 1150, and the sizes of the determined encoding units may not be the same.
- the size of a predetermined encoding unit 1130b or 1180b among the determined odd number of encoding units 1130a, 1130b, 1130c, 1180a, 1180b, and 1180c may be different from that of the other encoding units 1130a, 1130c, 1180a, and 1180c. That is, the encoding units that can be determined by dividing the current encoding unit 1100 or 1150 may have a plurality of sizes, and in some cases the odd number of encoding units 1130a, 1130b, 1130c, 1180a, 1180b, and 1180c may each have different sizes.
- when the division type information indicates that the current encoding unit is divided into an odd number of encoding units, the image restoring apparatus 100 may set a predetermined restriction on at least one of the odd number of encoding units generated by the division. Referring to FIG. 11, the image restoring apparatus 100 may make the decoding process for the encoding unit 1130b or 1180b located in the middle among the encoding units 1130a, 1130b, 1130c, 1180a, 1180b, and 1180c generated by dividing the current encoding unit 1100 or 1150 different from that of the other encoding units 1130a, 1130c, 1180a, and 1180c. For example, the image restoring apparatus 100 may restrict the central encoding unit 1130b or 1180b so that, unlike the other encoding units 1130a, 1130c, 1180a, and 1180c, it is not further divided, or is divided only a set number of times.
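- A split into three units where the middle one differs in size can be sketched as below; the 1:2:1 ratio is purely an assumption for illustration, since the text only states that the middle unit 1130b or 1180b may differ in size from the outer units.

```python
def ternary_split(x, y, w, h):
    """Split a non-square unit into three along its long side.
    A 1:2:1 ratio is assumed for illustration, so the middle unit
    differs in size from the outer two."""
    if w >= h:  # split the horizontal long side
        q = w // 4
        return [(x, y, q, h), (x + q, y, 2 * q, h), (x + 3 * q, y, q, h)]
    q = h // 4
    return [(x, y, w, q), (x, y + q, w, 2 * q), (x, y + 3 * q, w, q)]
```

Splitting a 64x32 unit this way yields two 16x32 outer units and one 32x32 middle unit, matching the description of a middle unit whose size differs from its neighbours.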
- FIG. 12 illustrates a process in which the image restoring apparatus 100 divides an encoding unit based on at least one of block type information and division type information according to an embodiment.
- the image restoring apparatus 100 may determine whether the first encoding unit 1200 of a square shape is divided into encoding units or not divided, based on at least one of the block type information and the division type information. According to an embodiment, when the division type information indicates that the first encoding unit 1200 is divided in the horizontal direction, the image restoring apparatus 100 may divide the first encoding unit 1200 in the horizontal direction to determine the second encoding unit 1210.
- the first encoding unit, the second encoding unit, and the third encoding unit used according to an embodiment are terms used to understand the relation before and after the division between encoding units. For example, if the first encoding unit is divided, the second encoding unit can be determined, and if the second encoding unit is divided, the third encoding unit can be determined.
- hereinafter, the relationship among the first encoding unit, the second encoding unit, and the third encoding unit can be understood to follow the above-described characteristic.
- the image restoring apparatus 100 may determine whether the determined second encoding unit 1210 is divided into encoding units or is not divided, based on at least one of the block type information and the division type information.
- the image restoring apparatus 100 may divide the non-square second encoding unit 1210, determined by dividing the first encoding unit 1200, into at least one third encoding unit (e.g., 1220a, 1220b, 1220c, 1220d, etc.) based on at least one of the block type information and the division type information, or may not divide the second encoding unit 1210.
- the image restoration apparatus 100 may obtain at least one of the block type information and the division type information, divide the first encoding unit 1200 based on the obtained information to determine a plurality of second encoding units (e.g., 1210) of various shapes, and the second encoding unit 1210 may then be divided according to the same method by which the first encoding unit 1200 was divided. That is, the second encoding unit 1210 may also be divided into third encoding units (e.g., 1220a, 1220b, 1220c, 1220d, etc.) based on at least one of the block type information and the division type information for the second encoding unit 1210. In other words, an encoding unit can be recursively divided based on at least one of the division type information and the block type information associated with each encoding unit.
- a square encoding unit may be determined in a non-square encoding unit, and a non-square encoding unit may be determined by dividing the square encoding unit recursively.
- the square third encoding unit 1220c, which is one of the odd number of third encoding units 1220b, 1220c, and 1220d, may be divided in the horizontal direction into a plurality of fourth encoding units.
- the non-square fourth encoding unit 1240 which is one of the plurality of fourth encoding units, may be further divided into a plurality of encoding units.
- the non-square-shaped fourth encoding unit 1240 may be further divided into odd-numbered encoding units 1250a, 1250b, and 1250c.
- the image restoring apparatus 100 may determine whether each of the third encoding units 1220a, 1220b, 1220c, and 1220d is divided into encoding units based on at least one of the block type information and the division type information, or may determine that the second encoding unit 1210 is not divided.
- the image restoration apparatus 100 may divide the non-square second encoding unit 1210 into an odd number of third encoding units 1220b, 1220c, and 1220d according to an embodiment.
- the image restoring apparatus 100 may set a predetermined restriction on a predetermined third encoding unit among odd numbered third encoding units 1220b, 1220c, and 1220d.
- for example, the image restoration apparatus 100 may restrict the encoding unit 1220c located in the middle among the odd number of third encoding units 1220b, 1220c, and 1220d so that it is not further divided or is divided only a set number of times.
- Referring to FIG. 12, the image restoring apparatus 100 may restrict the encoding unit 1220c located in the middle among the odd number of third encoding units 1220b, 1220c, and 1220d included in the non-square second encoding unit 1210 so that it is not further divided, is divided into a predetermined division type (for example, divided into four encoding units, or divided into a form corresponding to the form into which the second encoding unit 1210 was divided), or is divided only a predetermined number of times (for example, only n times, n > 0).
- the above restrictions on the encoding unit 1220c located in the center are merely examples and should not be construed as limited to the above-described embodiments; they should be interpreted as including various restrictions under which the encoding unit 1220c located in the center can be decoded differently from the other encoding units 1220b and 1220d.
- the image restoring apparatus 100 may obtain at least one of block type information and division type information used for dividing a current encoding unit at a predetermined position in a current encoding unit.
- FIG. 13 illustrates a method by which the image restoration apparatus 100 determines an encoding unit at a predetermined position among an odd number of encoding units, according to an embodiment.
- at least one of the block type information and the division type information of the current encoding unit 1300 may be obtained from a sample at a predetermined position among the plurality of samples included in the current encoding unit 1300 (for example, the sample 1340 located in the middle).
- the predetermined position in the current encoding unit 1300 from which at least one of the block type information and the division type information can be obtained should not be limited to the middle position shown in FIG. 13.
- by acquiring at least one of the block type information and the division type information from the predetermined position, the image restoration apparatus 100 may determine whether the current encoding unit is divided into encoding units of various types and sizes or is not divided.
- the image restoring device 100 may select one of the encoding units.
- the method for selecting one of the plurality of encoding units may vary, and such methods will be described later through various embodiments.
- the image restoring apparatus 100 may divide the current encoding unit into a plurality of encoding units and determine a predetermined encoding unit.
- the image restoring apparatus 100 may use information indicating the positions of odd-numbered encoding units in order to determine an encoding unit located in the middle among the odd-numbered encoding units. Referring to FIG. 13, the image restoring apparatus 100 may divide the current encoding unit 1300 to determine odd number of encoding units 1320a, 1320b, and 1320c. The image restoring apparatus 100 can determine the center encoding unit 1320b by using information on the positions of odd number of encoding units 1320a, 1320b, and 1320c.
- the image restoring apparatus 100 may determine the positions of the encoding units 1320a, 1320b, and 1320c based on information indicating the positions of predetermined samples included in the encoding units 1320a, 1320b, and 1320c. More specifically, the image restoration apparatus 100 may determine the encoding unit 1320b located in the center by determining the positions of the encoding units 1320a, 1320b, and 1320c based on information indicating the positions of the upper left samples 1330a, 1330b, and 1330c of the encoding units 1320a, 1320b, and 1320c.
- Information indicating the positions of the upper left samples 1330a, 1330b, and 1330c included in the encoding units 1320a, 1320b, and 1320c may include information about the positions or coordinates of the encoding units 1320a, 1320b, and 1320c in the picture.
- Information indicating the positions of the upper left samples 1330a, 1330b, and 1330c included in the encoding units 1320a, 1320b, and 1320c according to an embodiment may include information indicating the widths or heights of the encoding units 1320a, 1320b, and 1320c included in the current encoding unit 1300, and these widths or heights may correspond to information indicating differences between the in-picture coordinates of the encoding units 1320a, 1320b, and 1320c.
- the image restoration apparatus 100 may determine the encoding unit 1320b located in the center by directly using the information on the positions or coordinates of the encoding units 1320a, 1320b, and 1320c in the picture, or by using the information on the widths or heights of the encoding units, which corresponds to the differences between the coordinates.
- the information indicating the position of the upper left sample 1330a of the upper encoding unit 1320a may indicate the coordinates (xa, ya), the information indicating the position of the upper left sample 1330b of the middle encoding unit 1320b may indicate the coordinates (xb, yb), and the information indicating the position of the upper left sample 1330c of the lower encoding unit 1320c may indicate the coordinates (xc, yc).
- the image restoring apparatus 100 can determine the center encoding unit 1320b by using the coordinates of the upper left samples 1330a, 1330b, and 1330c included in the encoding units 1320a, 1320b, and 1320c.
- the encoding unit 1320b including the coordinates (xb, yb) of the sample 1330b positioned in the center may be determined as the encoding unit located in the middle of the encoding units 1320a, 1320b, and 1320c determined by dividing the current encoding unit 1300.
- However, the coordinates indicating the positions of the upper left samples 1330a, 1330b, and 1330c may indicate absolute positions in the picture; furthermore, the coordinates (dxb, dyb) indicating the relative position of the upper left sample 1330b of the middle encoding unit 1320b with respect to the position of the upper left sample 1330a of the upper encoding unit 1320a, and the coordinates (dxc, dyc) indicating the relative position of the upper left sample 1330c of the lower encoding unit 1320c, may also be used.
- the method of determining an encoding unit at a predetermined position by using the coordinates of a sample as information indicating the position of a sample included in the encoding unit should not be limited to the above-described method, and should be interpreted as including various arithmetic methods capable of using the coordinates of the sample.
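- The coordinate-based selection of the middle encoding unit described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the helper `middle_unit_index` and the example coordinates are assumptions, modeling three units stacked top to bottom whose upper left samples are (xa, ya), (xb, yb), and (xc, yc).

```python
def middle_unit_index(top_left_coords):
    """Return the index of the unit whose top-left sample lies between the
    others; for an odd split this is the encoding unit located in the middle.
    top_left_coords: list of (x, y) top-left sample coordinates, one per unit."""
    order = sorted(range(len(top_left_coords)),
                   key=lambda i: (top_left_coords[i][1], top_left_coords[i][0]))
    return order[len(order) // 2]   # median entry of the sorted positions

# Three units 1320a/1320b/1320c stacked inside a current unit (assumed sizes):
coords = [(0, 0), (0, 32), (0, 64)]   # (xa, ya), (xb, yb), (xc, yc)
print(middle_unit_index(coords))      # 1, i.e. the unit at (xb, yb)
```

The same sorting idea covers horizontally arranged units, since the key falls back to the x-coordinate when the y-coordinates are equal.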
- the image restoring apparatus 100 may divide the current encoding unit 1300 into the plurality of encoding units 1320a, 1320b, and 1320c, and may select an encoding unit at a predetermined position among the encoding units 1320a, 1320b, and 1320c.
- the image restoring apparatus 100 can select an encoding unit 1320b having a different size from among the encoding units 1320a, 1320b, and 1320c.
- the image restoring apparatus 100 may use the (xa, ya) coordinates indicating the position of the upper left sample 1330a of the upper encoding unit 1320a, the (xb, yb) coordinates indicating the position of the upper left sample 1330b of the middle encoding unit 1320b, and the (xc, yc) coordinates indicating the position of the upper left sample 1330c of the lower encoding unit 1320c, in order to determine the widths or heights of the encoding units 1320a, 1320b, and 1320c, respectively.
- the image restoring apparatus 100 may determine the sizes of the encoding units 1320a, 1320b, and 1320c using the coordinates (xa, ya), (xb, yb), and (xc, yc) indicating the positions of the encoding units 1320a, 1320b, and 1320c.
- the image reconstruction apparatus 100 may determine the width of the upper encoding unit 1320a as xb-xa and its height as yb-ya. According to an embodiment, the image reconstruction apparatus 100 may determine the width of the middle encoding unit 1320b as xc-xb and its height as yc-yb. The image restoration apparatus 100 may determine the width or height of the lower encoding unit using the width or height of the current encoding unit and the widths and heights of the upper encoding unit 1320a and the middle encoding unit 1320b.
- the image restoring apparatus 100 may determine an encoding unit having a different size from other encoding units based on the width and height of the determined encoding units 1320a, 1320b, and 1320c. Referring to FIG. 13, the image restoring apparatus 100 may determine a coding unit 1320b as a coding unit at a predetermined position while having a size different from that of the upper coding unit 1320a and the lower coding unit 1320c.
- However, the above-described process in which the image restoring apparatus 100 determines an encoding unit having a size different from the other encoding units is merely an example of determining an encoding unit at a predetermined position using the sizes of encoding units determined based on sample coordinates; various processes of determining an encoding unit at a predetermined position by comparing the sizes of encoding units determined according to predetermined sample coordinates may be used.
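- The size-based selection above can be sketched as follows. The helper is a hypothetical illustration, assuming three vertically stacked units whose heights follow from the y-coordinate differences of their upper left samples, with the last unit's height derived from the current unit's height:

```python
def pick_different_sized_unit(ys, current_height):
    """Return the index of the unit whose height differs from the others.
    ys: top-left y-coordinates (ya, yb, yc) of three vertically stacked units."""
    ya, yb, yc = ys
    heights = [yb - ya, yc - yb, current_height - yc]
    for i, h in enumerate(heights):
        if heights.count(h) == 1:   # this unit's size is unique
            return i
    return None                      # all units have the same size

print(pick_different_sized_unit((0, 16, 48), 64))  # 1 (heights 16, 32, 16)
```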
- the position of the sample considered for determining the position of an encoding unit should not be interpreted as being limited to the upper left; information about the position of an arbitrary sample included in the encoding unit may be used.
- the image restoring apparatus 100 may select an encoding unit at a predetermined position among the odd number of encoding units into which the current encoding unit is divided, in consideration of the shape of the current encoding unit. For example, if the current encoding unit is a non-square shape whose width is greater than its height, the image restoring apparatus 100 may determine an encoding unit at a predetermined position along the horizontal direction. That is, the image restoration apparatus 100 may determine one of the encoding units having different positions in the horizontal direction and set a restriction on that encoding unit. If the current encoding unit is a non-square shape whose height is greater than its width, the image restoration apparatus 100 may determine an encoding unit at a predetermined position along the vertical direction. That is, the image restoring apparatus 100 may determine one of the encoding units having different positions in the vertical direction and set a restriction on that encoding unit.
- the image restoring apparatus 100 may use information indicating positions of even-numbered encoding units in order to determine an encoding unit at a predetermined position among the even-numbered encoding units.
- the image restoration apparatus 100 can determine the even number of encoding units by dividing the current encoding unit and determine the encoding unit at a predetermined position using the information on the positions of the even number of encoding units.
- a concrete procedure for this is omitted because it corresponds to the above-described process of determining an encoding unit at a predetermined position (for example, the middle position) among an odd number of encoding units.
- the image reconstruction apparatus 100 may use at least one of the block type information and the division type information stored in a sample included in the middle encoding unit during the division process, in order to determine the encoding unit located in the middle among the plurality of encoding units.
- the image restoring apparatus 100 may divide the current encoding unit 1300 into the plurality of encoding units 1320a, 1320b, and 1320c based on at least one of the block type information and the division type information, and may determine the encoding unit 1320b positioned in the middle of the plurality of encoding units 1320a, 1320b, and 1320c.
- the image restoring apparatus 100 may determine a coding unit 1320b positioned at the center in consideration of a position where at least one of the block type information and the division type information is obtained.
- That is, at least one of the block type information and the division type information of the current encoding unit 1300 may be acquired from the sample 1340 located in the middle of the current encoding unit 1300, and when the current encoding unit 1300 is divided into the plurality of encoding units 1320a, 1320b, and 1320c based on at least one of the block type information and the division type information, the encoding unit 1320b including the sample 1340 may be determined as the encoding unit located in the middle.
- However, the information used for determining the encoding unit located in the middle should not be limited to at least one of the block type information and the division type information; various kinds of information may be used in the process of determining the encoding unit located in the middle.
- predetermined information for identifying a coding unit at a predetermined position may be obtained from a predetermined sample included in a coding unit to be determined.
- Referring to FIG. 13, in order to determine the encoding unit located in the middle among the plurality of encoding units 1320a, 1320b, and 1320c determined by dividing the current encoding unit 1300, the image restoring apparatus 100 may use at least one of the block type information and the division type information acquired from a sample at a predetermined position in the current encoding unit 1300 (for example, a sample located in the middle of the current encoding unit 1300).
- That is, the image restoration apparatus 100 may determine the sample at the predetermined position in consideration of the block shape of the current encoding unit 1300, and may determine, among the plurality of encoding units 1320a, 1320b, and 1320c determined by dividing the current encoding unit 1300, the encoding unit 1320b including the sample from which predetermined information (for example, at least one of the block type information and the division type information) can be obtained, and set a predetermined restriction on it.
- Referring to FIG. 13, the image restoring apparatus 100 may determine the sample 1340 located in the center of the current encoding unit 1300 as the sample from which predetermined information can be obtained, and
- the image restoring apparatus 100 may set a predetermined restriction on the encoding unit 1320b including the sample 1340 in the decoding process.
- However, the position of the sample from which predetermined information can be obtained should not be construed as limited to the above-mentioned position; it may be interpreted as a sample at an arbitrary position included in the encoding unit 1320b to be determined for the restriction.
- the position of a sample from which predetermined information can be obtained according to an embodiment may be determined according to the type of the current encoding unit 1300.
- the block type information may indicate whether the current encoding unit is square or non-square, and the position of the sample from which predetermined information can be obtained may be determined according to that shape.
- the image restoration apparatus 100 may use at least one of the information on the width of the current encoding unit and the information on its height to determine a sample located on the boundary that divides at least one of the width and the height of the current encoding unit in half as the sample from which predetermined information can be obtained.
- the image restoring apparatus 100 may determine one of the samples adjacent to the boundary that divides the long side of the current encoding unit in half as the sample from which predetermined information can be obtained.
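- The shape-dependent choice of the information-carrying sample can be sketched as follows. The helper name and the coordinate convention (sample positions relative to the unit's upper left corner) are illustrative assumptions, not from the patent:

```python
def info_sample_position(width, height):
    """Pick the sample from which block/division type information is read:
    the center for a square unit, or a sample adjacent to the boundary
    halving the long side for a non-square unit (assumed coordinates)."""
    if width == height:
        return (width // 2, height // 2)   # center of a square unit
    if width > height:
        return (width // 2, 0)             # on the boundary halving the width
    return (0, height // 2)                # on the boundary halving the height

print(info_sample_position(32, 32))  # (16, 16)
print(info_sample_position(64, 32))  # (32, 0)
```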
- the image restoring apparatus 100 may use at least one of the block type information and the division type information in order to determine an encoding unit at a predetermined position among a plurality of encoding units.
- the image restoration apparatus 100 may acquire at least one of the block type information and the division type information from a sample at a predetermined position included in an encoding unit, and may divide the plurality of encoding units generated by dividing the current encoding unit by using at least one of the division type information and the block type information obtained from the sample at the predetermined position included in each of the plurality of encoding units.
- the encoding unit may be recursively divided using at least one of the block type information and the division type information obtained from the sample at the predetermined position included in each encoding unit. Since the recursive division process of the encoding unit has been described with reference to FIG. 12, a detailed description thereof will be omitted.
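- The recursive division described above can be sketched as follows. The split decision read from the sample at the predetermined position is modeled by a caller-supplied callback; the unit representation and split modes are illustrative assumptions:

```python
def split_recursively(unit, get_split_info, depth=0, max_depth=3):
    """Recursively divide (x, y, w, h) units; each sub-unit independently
    reads its own split decision via get_split_info(unit)."""
    x, y, w, h = unit
    mode = get_split_info(unit) if depth < max_depth else None
    if mode == "vert":        # divide into two side-by-side halves
        parts = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    elif mode == "horz":      # divide into two stacked halves
        parts = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    else:
        return [unit]         # leaf: not divided further
    out = []
    for sub in parts:         # recursion: each part decides independently
        out += split_recursively(sub, get_split_info, depth + 1, max_depth)
    return out

# Split a 64x64 unit vertically once; the two halves are not divided further:
info = lambda u: "vert" if u[2] == 64 else None
print(split_recursively((0, 0, 64, 64), info))  # [(0, 0, 32, 64), (32, 0, 32, 64)]
```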
- the image restoring apparatus 100 may determine at least one encoding unit by dividing the current encoding unit, and may determine the order in which the at least one encoding unit is decoded according to a predetermined block.
- FIG. 14 illustrates a sequence in which a plurality of coding units are processed when the image restoring apparatus 100 determines a plurality of coding units by dividing a current coding unit according to an embodiment.
- the image restoring apparatus 100 may determine the second encoding units 1410a and 1410b by dividing the first encoding unit 1400 in the vertical direction according to the block type information and the division type information, may determine the second encoding units 1430a and 1430b by dividing the first encoding unit 1400 in the horizontal direction, or may determine the second encoding units 1450a, 1450b, 1450c, and 1450d by dividing the first encoding unit 1400 in the vertical and horizontal directions.
- the image restoring apparatus 100 may determine that the second encoding units 1410a and 1410b, determined by dividing the first encoding unit 1400 in the vertical direction, are processed in the horizontal direction 1410c.
- the image restoring apparatus 100 may determine the processing order of the second encoding units 1430a and 1430b determined by dividing the first encoding unit 1400 in the horizontal direction as the vertical direction 1430c.
- the image restoration apparatus 100 may determine that the second encoding units 1450a, 1450b, 1450c, and 1450d, determined by dividing the first encoding unit 1400 in the vertical and horizontal directions, are processed according to a predetermined order in which the encoding units located in one row are processed and the encoding units located in the next row are then processed (for example, a raster scan order or a z-scan order 1450e).
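- The three processing orders above can be sketched as follows. This is an illustrative sketch: the mode names and the (row, col) representation of sub-units are assumptions, not the patent's notation:

```python
def processing_order(split):
    """Visit order of sub-units as (row, col) pairs.
    'vert': two units side by side, processed left to right (1410c);
    'horz': two units stacked, processed top to bottom (1430c);
    'quad': four units, processed row by row (raster/z-scan 1450e)."""
    if split == "vert":
        return [(0, 0), (0, 1)]
    if split == "horz":
        return [(0, 0), (1, 0)]
    if split == "quad":
        return [(r, c) for r in range(2) for c in range(2)]
    raise ValueError(split)

print(processing_order("quad"))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```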
- the image restoration apparatus 100 may recursively divide encoding units.
- the image restoring apparatus 100 may determine the plurality of encoding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d by dividing the first encoding unit 1400, and may recursively divide each of the determined plurality of encoding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d.
- the method of dividing the plurality of encoding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d may be a method corresponding to the method of dividing the first encoding unit 1400.
- each of the plurality of encoding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d may be independently divided into a plurality of encoding units. Referring to FIG. 14,
- the image restoring apparatus 100 may determine the second encoding units 1410a and 1410b by dividing the first encoding unit 1400 in the vertical direction, and may independently determine whether or not to further divide each of the second encoding units 1410a and 1410b.
- the image restoring apparatus 100 may divide the left second encoding unit 1410a in the horizontal direction into the third encoding units 1420a and 1420b, and may not divide the right second encoding unit 1410b.
- the processing order of the encoding units may be determined based on the division process of the encoding units.
- the processing order of the divided coding units can be determined based on the processing order of the coding units immediately before being divided.
- the image restoring apparatus 100 may determine the order in which the third coding units 1420a and 1420b determined by dividing the left second coding unit 1410a are processed independently of the second coding unit 1410b on the right.
- since the third encoding units 1420a and 1420b are determined by dividing the left second encoding unit 1410a in the horizontal direction, the third encoding units 1420a and 1420b may be processed in the vertical direction 1420c.
- since the order in which the left second encoding unit 1410a and the right second encoding unit 1410b are processed corresponds to the horizontal direction 1410c, the right second encoding unit 1410b may be processed after the third encoding units 1420a and 1420b included in the left second encoding unit 1410a are processed in the vertical direction 1420c.
- the above description is intended to explain the process by which the processing order of encoding units is determined according to the encoding units before division; therefore, it should not be construed as limited to the above-described embodiments, and should be construed as including various methods in which the encoding units determined by division into various forms can be processed independently in a predetermined order.
- FIG. 15 illustrates a process of determining that the current encoding unit is divided into an odd number of encoding units when the image reconstruction apparatus 100 cannot process the encoding units in a predetermined order, according to an embodiment.
- the image restoring apparatus 100 may determine that the current encoding unit is divided into an odd number of encoding units based on the obtained block type information and division type information.
- the square first encoding unit 1500 may be divided into the non-square second encoding units 1510a and 1510b, and the second encoding units 1510a and 1510b may be independently divided into the third encoding units 1520a, 1520b, 1520c, 1520d, and 1520e.
- the image restoring apparatus 100 may determine the plurality of third encoding units 1520a and 1520b by dividing the left second encoding unit 1510a in the horizontal direction, and may divide the right second encoding unit 1510b into the odd number of third encoding units 1520c, 1520d, and 1520e.
- the image reconstruction apparatus 100 may determine whether the third encoding units 1520a, 1520b, 1520c, 1520d, and 1520e can be processed in a predetermined order, and thereby determine whether an encoding unit divided into an odd number exists.
- the image restoring apparatus 100 may recursively divide the first encoding unit 1500 to determine the third encoding units 1520a, 1520b, 1520c, 1520d, and 1520e.
- the image restoring apparatus 100 may determine, based on at least one of the block type information and the division type information, whether any of the first encoding unit 1500, the second encoding units 1510a and 1510b, and the third encoding units 1520a, 1520b, 1520c, 1520d, and 1520e is divided into an odd number of encoding units. For example, the encoding unit located on the right of the second encoding units 1510a and 1510b may be divided into the odd number of third encoding units 1520c, 1520d, and 1520e.
- the order in which the plurality of encoding units included in the first encoding unit 1500 are processed may be a predetermined order (for example, a z-scan order 1530), and the image restoration apparatus 100 may determine whether the third encoding units 1520c, 1520d, and 1520e, determined by dividing the right second encoding unit 1510b into an odd number, satisfy the condition that they can be processed according to the predetermined order.
- the image restoring apparatus 100 may determine whether the third encoding units 1520a, 1520b, 1520c, 1520d, and 1520e included in the first encoding unit 1500 satisfy the condition that they can be processed in the predetermined order, where the condition is whether at least one of the width and the height of the second encoding units 1510a and 1510b is divided in half along the boundaries of the third encoding units 1520a, 1520b, 1520c, 1520d, and 1520e.
- the third encoding units 1520a and 1520b, determined by dividing the height of the non-square left second encoding unit 1510a in half, satisfy the condition; however, since the boundaries of the third encoding units 1520c, 1520d, and 1520e, determined by dividing the right second encoding unit 1510b into three encoding units, do not divide the width or height of the right second encoding unit 1510b in half, the third encoding units 1520c, 1520d, and 1520e may be determined as not satisfying the condition. When the condition is not satisfied, the image reconstruction apparatus 100 may determine that the scan order is disconnected, and may determine, based on the determination result, that the right second encoding unit 1510b is divided into an odd number of encoding units. According to an embodiment, when an encoding unit is divided into an odd number of encoding units, the image restoring apparatus 100 may set a predetermined restriction on an encoding unit at a predetermined position among the divided encoding units; since this embodiment has been described above, a detailed description thereof will be omitted.
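- The halving condition used above can be sketched as follows (hypothetical helper; sizes are sample counts along one dimension of the parent unit):

```python
def boundaries_halve(parent_size, child_sizes):
    """True if an internal boundary of the children splits the parent
    dimension exactly in half, i.e. the processable-order condition holds."""
    edges, pos = [], 0
    for s in child_sizes[:-1]:
        pos += s
        edges.append(pos)       # positions of internal boundaries
    return parent_size // 2 in edges

# Left unit split into two equal halves: condition satisfied.
print(boundaries_halve(64, [32, 32]))      # True
# Right unit split into an odd triple: no boundary at the half point.
print(boundaries_halve(64, [21, 21, 22]))  # False
```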
- the image restoring apparatus 100 may divide the first encoding unit 1600 based on at least one of the block type information and the division type information acquired through the receiving unit 210.
- the square first encoding unit 1600 may be divided into four square encoding units or into a plurality of non-square encoding units. For example, referring to FIG. 16, the image reconstruction apparatus 100 may divide the first encoding unit 1600 into a plurality of non-square encoding units. More specifically, when the division type information indicates that an odd number of encoding units are determined by dividing the first encoding unit 1600 horizontally or vertically, the image restoring apparatus 100 may divide the square first encoding unit 1600 into the odd number of second encoding units 1610a, 1610b, and 1610c determined by dividing in the vertical direction, or into the second encoding units 1620a, 1620b, and 1620c determined by dividing in the horizontal direction.
- the image restoring apparatus 100 may determine whether the second encoding units 1610a, 1610b, 1610c, 1620a, 1620b, and 1620c included in the first encoding unit 1600 satisfy the condition that they can be processed in a predetermined order, where the condition is whether at least one of the width and the height of the first encoding unit 1600 is divided in half along the boundaries of the second encoding units 1610a, 1610b, 1610c, 1620a, 1620b, and 1620c.
- since the boundaries of the second encoding units 1610a, 1610b, and 1610c, determined by dividing the square first encoding unit 1600 in the vertical direction, do not divide the width of the first encoding unit 1600 in half, the first encoding unit 1600 may be determined as not satisfying the condition that it can be processed in the predetermined order.
- when the condition is not satisfied, the image restoration apparatus 100 may determine that the scan order is disconnected, and may determine, based on the determination result, that the first encoding unit 1600 is divided into an odd number of encoding units. According to an embodiment, when an encoding unit is divided into an odd number of encoding units, the image restoring apparatus 100 may set a predetermined restriction on an encoding unit at a predetermined position among the divided encoding units; since this embodiment has been described above, a detailed description thereof will be omitted.
- the image restoration apparatus 100 may determine the encoding units of various types by dividing the first encoding unit.
- the image restoring apparatus 100 may divide the square first encoding unit 1600, or the non-square first encoding unit 1630 or 1650, into encoding units of various forms.
- the image restoring apparatus 100 may divide the square first encoding unit 1700 into the non-square second encoding units 1710a, 1710b, 1720a, and 1720b based on at least one of the block type information and the division type information acquired through the receiving unit 210.
- the second encoding units 1710a, 1710b, 1720a, and 1720b may be independently divided. Accordingly, the image restoring apparatus 100 may determine whether or not to divide each of the second encoding units 1710a, 1710b, 1720a, and 1720b into a plurality of encoding units, based on at least one of the block type information and the division type information related to each of them.
- the image restoring apparatus 100 may determine the third encoding units 1712a and 1712b by dividing, in the horizontal direction, the non-square left second encoding unit 1710a determined by dividing the first encoding unit 1700 in the vertical direction. However, when the left second encoding unit 1710a is divided in the horizontal direction, the image restoring apparatus 100 may restrict the right second encoding unit 1710b such that it cannot be divided in the horizontal direction, the same direction in which the left second encoding unit 1710a was divided.
- if the left second encoding unit 1710a and the right second encoding unit 1710b are each independently divided in the horizontal direction, the third encoding units 1712a, 1712b, 1714a, and 1714b may be determined. However, this is the same result as the image restoring apparatus 100 dividing the first encoding unit 1700 into the four square second encoding units 1730a, 1730b, 1730c, and 1730d based on at least one of the block type information and the division type information, and this may be inefficient in terms of image decoding.
- the image restoring apparatus 100 may determine the third encoding units 1722a, 1722b, 1724a, and 1724b by dividing, in the vertical direction, the non-square second encoding unit 1720a or 1720b determined by dividing the first encoding unit 1700 in the horizontal direction.
- However, when one second encoding unit (for example, the upper second encoding unit 1720a) is divided in the vertical direction, the image restoring apparatus 100 may restrict the other second encoding unit (for example, the lower second encoding unit 1720b) such that it cannot be divided in the vertical direction, the same direction in which the upper second encoding unit 1720a was divided.
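- The sibling restriction in the two examples above reduces to a simple rule, sketched here (the function and direction names are illustrative assumptions, not from the patent):

```python
def sibling_split_allowed(first_dir, second_dir):
    """Once one non-square sibling has been divided in a direction, the other
    sibling may not be divided in that same direction, because the result
    would equal the four-square division of the parent unit."""
    return first_dir is None or first_dir != second_dir

print(sibling_split_allowed("horizontal", "horizontal"))  # False (restricted)
print(sibling_split_allowed("horizontal", "vertical"))    # True
```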
- FIG. 18 shows a process in which the image restoration apparatus 100 divides a square encoding unit when the division type information cannot indicate division into four square encoding units, according to an embodiment.
- the image restoring apparatus 100 may determine the second encoding units 1810a, 1810b, 1820a, 1820b, and the like by dividing the first encoding unit 1800 based on at least one of the block type information and the division type information.
- the division type information may include information on various forms into which an encoding unit can be divided, but the information on various forms may not include information for dividing into four square encoding units.
- according to such division type information, the image restoration apparatus 100 cannot divide the square first encoding unit 1800 into the four square second encoding units 1830a, 1830b, 1830c, and 1830d.
- based on the division type information, the image restoration apparatus 100 may determine the non-square second encoding units 1810a, 1810b, 1820a, 1820b, and the like.
- the image restoring apparatus 100 may independently divide the non-square second encoding units 1810a, 1810b, 1820a, and 1820b, respectively.
- Each of the second encoding units 1810a, 1810b, 1820a, 1820b, and the like may be divided in a predetermined order through a recursive method, which may be a division method corresponding to the method by which the first encoding unit 1800 is divided, based on at least one of the block type information and the division type information.
- the image restoring apparatus 100 may determine the square third encoding units 1812a and 1812b by dividing the left second encoding unit 1810a in the horizontal direction, and may determine the square third encoding units 1814a and 1814b by dividing the right second encoding unit 1810b in the horizontal direction. Further, the image restoring apparatus 100 may determine the square third encoding units 1816a, 1816b, 1816c, and 1816d by dividing both the left second encoding unit 1810a and the right second encoding unit 1810b in the horizontal direction. In this case, encoding units may be determined in the same form as when the first encoding unit 1800 is divided into the four square second encoding units 1830a, 1830b, 1830c, and 1830d.
- the image restoring apparatus 100 may determine the square third encoding units 1822a and 1822b by dividing the upper second encoding unit 1820a in the vertical direction, and may determine the square third encoding units 1824a and 1824b by dividing the lower second encoding unit 1820b in the vertical direction. Further, the image restoring apparatus 100 may determine the square third encoding units 1822a, 1822b, 1824a, and 1824b by dividing both the upper second encoding unit 1820a and the lower second encoding unit 1820b in the vertical direction. In this case, encoding units may be determined in the same form as when the first encoding unit 1800 is divided into the four square second encoding units 1830a, 1830b, 1830c, and 1830d.
- FIG. 19 illustrates that the processing order among a plurality of coding units may be changed according to the division process of the coding unit according to an embodiment.
- the image restoring apparatus 100 may divide the first encoding unit 1900 based on the block type information and the division type information.
- the image restoration apparatus 100 may determine second encoding units (for example, 1910a, 1910b, 1920a, 1920b, 1930a, 1930b, 1930c, 1930d, etc.) by dividing the first encoding unit 1900.
- the non-square second encoding units 1910a, 1910b, 1920a, and 1920b, which are determined by dividing the first encoding unit 1900 only in the horizontal direction or only in the vertical direction, may each be divided independently.
- the image restoring apparatus 100 may divide the second encoding units 1910a and 1910b, generated by dividing the first encoding unit 1900 in the vertical direction, in the horizontal direction to determine the third encoding units 1916a, 1916b, 1916c, and 1916d, and may divide the second encoding units 1920a and 1920b, generated by dividing the first encoding unit 1900 in the horizontal direction, in the vertical direction to determine the third encoding units 1926a, 1926b, 1926c, and 1926d. Since the process of dividing the second encoding units 1910a, 1910b, 1920a, and 1920b has been described in detail with reference to FIG. 17, a detailed description thereof will be omitted.
- the image restoration apparatus 100 may process an encoding unit in a predetermined order.
- the features of the processing of the encoding unit in the predetermined order have been described in detail with reference to FIG. 14, and a detailed description thereof will be omitted.
- the image reconstruction apparatus 100 may determine that the square first coding unit 1900 is divided into four square third coding units (1916a, 1916b, 1916c, and 1916d, or 1926a, 1926b, 1926c, and 1926d).
- the image restoring apparatus 100 may determine the order in which the third encoding units 1916a, 1916b, 1916c, 1916d, 1926a, 1926b, 1926c, and 1926d are processed according to the form in which the first encoding unit 1900 is divided.
- the image restoring apparatus 100 may determine the third encoding units 1916a, 1916b, 1916c, and 1916d by dividing, in the horizontal direction, the second encoding units 1910a and 1910b generated by dividing in the vertical direction, and may process the third encoding units 1916a, 1916b, 1916c, and 1916d according to the order 1917 of first processing, in the vertical direction, the third encoding units 1916a and 1916b included in the left second encoding unit 1910a, and then processing, in the vertical direction, the third encoding units 1916c and 1916d included in the right second encoding unit 1910b.
- the image reconstruction apparatus 100 may determine the third encoding units 1926a, 1926b, 1926c, and 1926d by dividing, in the vertical direction, the second encoding units 1920a and 1920b generated by dividing in the horizontal direction, and may process the third encoding units 1926a, 1926b, 1926c, and 1926d according to the order 1927 of first processing, in the horizontal direction, the third encoding units 1926a and 1926b included in the upper second encoding unit 1920a, and then processing, in the horizontal direction, the third encoding units 1926c and 1926d included in the lower second encoding unit 1920b.
- as described above, the second encoding units 1910a, 1910b, 1920a, and 1920b may each be divided to determine the third encoding units 1916a, 1916b, 1916c, 1916d, 1926a, 1926b, 1926c, and 1926d.
- the second encoding units 1910a and 1910b determined by dividing in the vertical direction and the second encoding units 1920a and 1920b determined by dividing in the horizontal direction are divided into different forms, but according to the third encoding units 1916a, 1916b, 1916c, 1916d, 1926a, 1926b, 1926c, and 1926d determined afterward, the result is that the first encoding unit 1900 is divided into encoding units of the same form.
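The dependence of the processing order on the division path (orders 1917 and 1927 above) can be sketched as follows; the grid coordinates and function name are hypothetical:

```python
def processing_order(first_split):
    """Processing order of the four third encoding units of a square,
    as (col, row) grid positions, depending on the direction of the
    first split (cf. orders 1917 and 1927). Illustrative sketch only.
    """
    if first_split == "vertical":
        # Second units are left/right halves; each half is then split
        # horizontally, so the left half's two units come first,
        # top to bottom.
        return [(0, 0), (0, 1), (1, 0), (1, 1)]
    # Second units are top/bottom halves; each half is then split
    # vertically, so the top half's two units come first, left to right.
    return [(0, 0), (1, 0), (0, 1), (1, 1)]

# Same four resulting units, but a different processing order.
assert set(processing_order("vertical")) == set(processing_order("horizontal"))
assert processing_order("vertical") != processing_order("horizontal")
```

This mirrors the passage above: the final partition is identical, yet the order in which the units are processed differs by division path.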
- FIG. 20 illustrates a process of determining the depth of an encoding unit according to a change in type and size of an encoding unit when the encoding unit is recursively divided according to an embodiment to determine a plurality of encoding units.
- the image restoration apparatus 100 may determine the depth of a coding unit according to a predetermined criterion.
- a predetermined criterion may be a length of a long side of a coding unit.
- when the length of the long side of the current encoding unit is 1/2^n (n>0) times the length of the long side of the encoding unit before division, the depth of the current encoding unit may be determined to be increased by n relative to the depth of the encoding unit before division.
- an encoding unit with an increased depth is expressed as a lower-depth encoding unit.
- the first coding unit 2000 may be divided to determine the second coding unit 2002, the third coding unit 2004, and the like of lower depths.
- if the size of the square first encoding unit 2000 is 2Nx2N, the second encoding unit 2002, determined by dividing the width and height of the first encoding unit 2000 by 1/2, may have a size of NxN.
- furthermore, the third encoding unit 2004, determined by dividing the width and height of the second encoding unit 2002 by 1/2, may have a size of N/2xN/2.
- in this case, the width and height of the third encoding unit 2004 correspond to 1/2² of the width and height of the first encoding unit 2000.
- according to an embodiment, a non-square first encoding unit may be divided based on block type information indicating a non-square form (for example, block type information '1: NS_VER' indicating a non-square whose height is longer than its width, or '2: NS_HOR' indicating a non-square whose width is longer than its height).
- the image reconstruction apparatus 100 may divide the non-square first encoding unit 2010 or 2020 to determine second encoding units (e.g., 2002, 2012, 2022, etc.) or third encoding units (e.g., 2004, 2014, 2024, etc.).
- the image restoration apparatus 100 may determine a second encoding unit (e.g., 2002, 2012, 2022, etc.) by dividing at least one of the width and the height of the first encoding unit 2010 of Nx2N size. That is, the image restoring apparatus 100 may divide the first encoding unit 2010 in the horizontal direction to determine the second encoding unit 2002 of NxN size or the second encoding unit 2022 of NxN/2 size, or may divide it in the horizontal and vertical directions to determine the second encoding unit 2012 of N/2xN size.
- the image restoring apparatus 100 may divide at least one of the width and the height of the first encoding unit 2020 of 2NxN size to determine a second encoding unit (e.g., 2002, 2012, 2022, etc.). That is, the image restoring apparatus 100 may divide the first encoding unit 2020 in the vertical direction to determine the second encoding unit 2002 of NxN size or the second encoding unit 2012 of N/2xN size, or may divide it in the vertical and horizontal directions to determine the second encoding unit 2022 of NxN/2 size.
- the image reconstruction apparatus 100 may divide at least one of the width and the height of the second encoding unit 2002 of NxN size to determine a third encoding unit (e.g., 2004, 2014, 2024, etc.). That is, the image restoring apparatus 100 may divide the second encoding unit 2002 in the vertical and horizontal directions to determine the third encoding unit 2004 of N/2xN/2 size, or may determine the third encoding unit 2014 of N/2²xN/2 size or the third encoding unit 2024 of N/2xN/2² size.
- the image reconstruction apparatus 100 may divide at least one of the width and the height of the second encoding unit 2012 of N/2xN size to determine a third encoding unit (e.g., 2004, 2014, 2024, etc.). That is, the image restoring apparatus 100 may divide the second encoding unit 2012 in the horizontal direction to determine the third encoding unit 2004 of N/2xN/2 size or the third encoding unit 2024 of N/2xN/2² size, or may divide it in the vertical and horizontal directions to determine the third encoding unit 2014 of N/2²xN/2 size.
- the image restoring apparatus 100 may divide at least one of the width and the height of the second encoding unit 2022 of NxN/2 size to determine a third encoding unit (e.g., 2004, 2014, 2024, etc.). That is, the image restoring apparatus 100 may divide the second encoding unit 2022 in the vertical direction to determine the third encoding unit 2004 of N/2xN/2 size or the third encoding unit 2014 of N/2²xN/2 size, or may divide it in the vertical and horizontal directions to determine the third encoding unit 2024 of N/2xN/2² size.
- the image restoring apparatus 100 may divide a square-shaped encoding unit (for example, 2000, 2002, 2004) into a horizontal direction or a vertical direction.
- for example, the first encoding unit 2000 of 2Nx2N size may be divided in the vertical direction to determine the first encoding unit 2010 of Nx2N size, or divided in the horizontal direction to determine the first encoding unit 2020 of 2NxN size.
- when the depth is determined based on the length of the longest side of an encoding unit, the depth of an encoding unit determined by dividing the first encoding unit 2000, 2002, or 2004 of 2Nx2N size in the horizontal or vertical direction may be the same as the depth of the first encoding unit 2000, 2002, or 2004.
- in contrast, the width and height of the third encoding unit 2014 or 2024 may correspond to 1/2² of the width and height of the first encoding unit 2010 or 2020.
- the depth of the first coding unit 2010 or 2020 is D
- the depth of the second coding unit 2012 or 2014 which is half the width and height of the first coding unit 2010 or 2020, is D + 1
- the depth of the third encoding unit 2014 or 2024, whose width and height are 1/2² of those of the first encoding unit 2010 or 2020, may be D+2.
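The long-side depth rule above (D, D+1, D+2) can be sketched as follows; the function and its arguments are illustrative assumptions, not the patent's notation:

```python
from math import log2

def depth(unit_w, unit_h, top_w, top_h, base_depth=0):
    """Depth of a coding unit relative to the undivided first unit,
    derived from the ratio of long-side lengths: halving the long side
    n times increases the depth by n (cf. the criterion above)."""
    long_side = max(unit_w, unit_h)
    top_long_side = max(top_w, top_h)
    return base_depth + int(log2(top_long_side // long_side))

N = 16
assert depth(2 * N, 2 * N, 2 * N, 2 * N) == 0    # first unit 2000: D
assert depth(N, N, 2 * N, 2 * N) == 1            # second unit 2002: D+1
assert depth(N // 2, N // 2, 2 * N, 2 * N) == 2  # third unit 2004: D+2
assert depth(N, 2 * N, 2 * N, 2 * N) == 0        # Nx2N unit 2010: long side unchanged
```

The last assertion illustrates why the non-square units 2010 and 2020 keep the depth of the 2Nx2N unit: dividing only one dimension leaves the long side intact.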
- FIG. 21 illustrates depths that may be determined according to the forms and sizes of coding units, and a part index (PID) for distinguishing the coding units, according to an embodiment.
- the image restoration apparatus 100 may determine second encoding units of various forms by dividing the square first encoding unit 2100. Referring to FIG. 21, the image restoring apparatus 100 may divide the first encoding unit 2100 in at least one of the vertical direction and the horizontal direction according to the division type information to determine the second encoding units 2102a, 2102b, 2104a, 2104b, 2106a, 2106b, 2106c, and 2106d. That is, the image restoring apparatus 100 may determine the second encoding units 2102a, 2102b, 2104a, 2104b, 2106a, 2106b, 2106c, and 2106d based on the division type information for the first encoding unit 2100.
- the depths of the second encoding units 2102a, 2102b, 2104a, 2104b, 2106a, 2106b, 2106c, and 2106d, determined according to the division type information for the square first encoding unit 2100, may be determined based on the lengths of their long sides. For example, since the length of one side of the square first encoding unit 2100 is the same as the length of the long side of the non-square second encoding units 2102a, 2102b, 2104a, and 2104b, the first encoding unit 2100 and the non-square second encoding units 2102a, 2102b, 2104a, and 2104b have the same depth, D.
- on the other hand, when the image restoration apparatus 100 divides the first encoding unit 2100 into the four square second encoding units 2106a, 2106b, 2106c, and 2106d based on the division type information, the length of one side of the second encoding units 2106a, 2106b, 2106c, and 2106d is 1/2 of the length of one side of the first encoding unit 2100. Therefore, the depth of the second encoding units 2106a, 2106b, 2106c, and 2106d may be D+1, one depth lower than D, the depth of the first encoding unit 2100.
- the image restoring apparatus 100 divides a first encoding unit 2110 having a height greater than a width in a horizontal direction according to division type information, and generates a plurality of second encoding units 2112a, 2112b, 2114a, 2114b, and 2114c.
- the image restoring apparatus 100 divides the first encoding unit 2120 having a shape having a width greater than the height in the vertical direction according to the division type information to generate a plurality of second encoding units 2122a, 2122b, 2124a, 2124b, and 2124c.
- the depths of the second encoding units 2112a, 2112b, 2114a, 2114b, and 2114c, determined according to the division type information for the non-square first encoding unit 2110 or 2120, may be determined based on the lengths of their long sides. For example, since the length of one side of the square second encoding units 2112a and 2112b is 1/2 of the length of the long side of the non-square first encoding unit 2110, whose height is longer than its width, the depth of the square second encoding units 2112a and 2112b is D+1, one depth lower than the depth D of the non-square first encoding unit 2110.
- the image restoring apparatus 100 may divide the non-square-shaped first coding unit 2110 into odd-numbered second coding units 2114a, 2114b, and 2114c based on the division type information.
- the odd number of second encoding units 2114a, 2114b and 2114c may include non-square second encoding units 2114a and 2114c and a square second encoding unit 2114b.
- the length of the long side of the non-square second encoding units 2114a and 2114c and the length of one side of the square second encoding unit 2114b are 1/2 of the length of one side of the first encoding unit 2110, so the depth of the second encoding units 2114a, 2114b, and 2114c may be D+1, one depth lower than the depth D of the first encoding unit 2110.
- the image restoration apparatus 100 may determine the depths of encoding units associated with the non-square first coding unit 2120, whose width is longer than its height, in a manner corresponding to the method of determining the depths of the encoding units associated with the first coding unit 2110.
- in determining an index (PID) for distinguishing the divided coding units, when the odd-numbered coding units are not all the same size, the image restoration apparatus 100 may determine the index based on the size ratio between the coding units. Referring to FIG. 21, the coding unit 2114b positioned at the center among the odd-numbered coding units 2114a, 2114b, and 2114c has the same width as the other coding units 2114a and 2114c, but its height may be twice the height of the coding units 2114a and 2114c. That is, in this case, the coding unit 2114b positioned at the center may include two of the other coding units 2114a or 2114c.
- therefore, if the index (PID) of the coding unit 2114b positioned at the center is 1 according to the scan order, the index of the coding unit 2114c positioned next to it may be 3, increased by 2. That is, there may be a discontinuity in the index values.
- the image restoring apparatus 100 may determine whether the odd-numbered encoding units are not all the same size based on whether there is a discontinuity in the indexes for distinguishing the divided encoding units.
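The index discontinuity described above can be sketched as follows; the height values and function names are hypothetical:

```python
def assign_pids(unit_heights):
    """Assign part indices (PIDs) to vertically stacked coding units,
    increasing each PID in proportion to the unit's height relative to
    the smallest unit (a sketch of the size-ratio rule above)."""
    smallest = min(unit_heights)
    pids, pid = [], 0
    for h in unit_heights:
        pids.append(pid)
        pid += h // smallest
    return pids

def has_discontinuity(pids):
    """True when consecutive PIDs jump by more than 1, i.e. the units
    are not all the same size."""
    return any(b - a != 1 for a, b in zip(pids, pids[1:]))

# Odd split of a tall unit: the middle unit is twice as tall (cf. 2114a-c).
assert assign_pids([16, 32, 16]) == [0, 1, 3]   # PID jumps from 1 to 3
assert has_discontinuity(assign_pids([16, 32, 16]))
assert not has_discontinuity(assign_pids([16, 16, 16, 16]))
```

The `[0, 1, 3]` result reproduces the example in the text: the center unit has PID 1 and the next unit has PID 3.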
- the image restoration apparatus 100 may determine whether the current coding unit is divided into a specific division form based on the index values for identifying the plurality of coding units divided and determined from the current coding unit. Referring to FIG. 21, the image restoring apparatus 100 may divide the first encoding unit 2110, a rectangle whose height is longer than its width, to determine the even-numbered encoding units 2112a and 2112b or the odd-numbered encoding units 2114a, 2114b, and 2114c.
- the image restoration apparatus 100 may use an index (PID) indicating each encoding unit to distinguish each of the plurality of encoding units.
- the PID may be obtained at a sample of a predetermined position of each coding unit (e.g., the upper left sample).
- the image restoring apparatus 100 may determine a coding unit of a predetermined position among the coding units divided and determined using an index for classifying a coding unit.
- referring to FIG. 21, the image reconstruction apparatus 100 may divide the first encoding unit 2110 into three encoding units 2114a, 2114b, and 2114c.
- the image restoring apparatus 100 may assign an index to each of the three encoding units 2114a, 2114b, and 2114c.
- the image restoring apparatus 100 may compare the indexes of the respective encoding units in order to determine the middle encoding unit among the encoding units divided into odd numbers.
- the image restoring apparatus 100 may determine the encoding unit 2114b, whose index corresponds to the middle value among the indices, as the encoding unit at the middle position among the encoding units determined by dividing the first encoding unit 2110.
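Selecting the middle coding unit via the middle index value can be sketched as follows (an illustrative helper, not the patent's procedure):

```python
def middle_unit(pids):
    """Pick the coding unit at the middle position of an odd split by
    choosing the PID with the middle value among the assigned PIDs."""
    ordered = sorted(pids)
    return ordered[len(ordered) // 2]

# Three units 2114a, 2114b, 2114c with uniform PIDs 0, 1, 2: middle is 1.
assert middle_unit([0, 1, 2]) == 1
# With the discontinuous PIDs 0, 1, 3 of unequal-size units,
# the middle value is still 1, the center unit 2114b.
assert middle_unit([0, 1, 3]) == 1
```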
- in determining the indexes for distinguishing the divided coding units, when the coding units are not all the same size, the image restoration apparatus 100 may determine the indexes based on the size ratio between the coding units.
- referring to FIG. 21, the coding unit 2114b generated by dividing the first coding unit 2110 may have the same width as the coding units 2114a and 2114c but twice their height.
- in this case, if the index (PID) of the coding unit 2114b positioned at the center is 1, the index of the coding unit 2114c positioned next to it may be 3.
- when the indexes increase non-uniformly in this way, the image restoration apparatus 100 may determine that the current coding unit is divided into a plurality of encoding units including an encoding unit whose size differs from that of the other encoding units.
- when the division type information indicates division into an odd number of coding units, the image reconstruction apparatus 100 may divide the current encoding unit into a form in which the coding unit at a predetermined position (for example, the middle coding unit) among the odd number of coding units has a size different from that of the other coding units.
- the image restoring apparatus 100 may determine an encoding unit having a different size by using an index (PID) for the encoding unit.
- however, the indexes described above, and the size or position of the encoding unit at the predetermined position to be determined, are specific examples for explaining an embodiment and should not be construed as limiting; various indexes and various positions and sizes of encoding units may be used.
- the image restoration apparatus 100 may use a predetermined data unit in which recursive division of the encoding unit starts.
- FIG. 22 shows that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
- a predetermined data unit may be defined as a data unit in which an encoding unit starts to be recursively segmented using at least one of block type information and partition type information. That is, it may correspond to a coding unit of the highest depth used in the process of determining a plurality of coding units for dividing the current picture.
- a predetermined data unit is referred to as a reference data unit for convenience of explanation.
- the reference data unit may represent a predetermined size and shape.
- the reference encoding unit may comprise samples of MxN.
- M and N may be equal to each other, and may be integers expressed as powers of 2. That is, the reference data unit may have a square or non-square form, and may later be divided into an integer number of encoding units.
- the image restoration apparatus 100 may divide the current picture into a plurality of reference data units. According to an exemplary embodiment, the image restoring apparatus 100 may divide a plurality of reference data units for dividing a current picture by using division information for each reference data unit.
- the segmentation process of the reference data unit may correspond to the segmentation process using a quad-tree structure.
- the image restoration apparatus 100 may determine in advance a minimum size that the reference data unit included in the current picture can have. Accordingly, the image restoring apparatus 100 can determine reference data units of various sizes having sizes equal to or larger than the minimum size, and determine at least one coding unit using block type information and division type information based on the determined reference data unit You can decide.
- the image restoring apparatus 100 may use a square-shaped reference encoding unit 2200 or a non-square-shaped reference encoding unit 2202.
- the form and size of the reference encoding unit may be determined for each of various data units (e.g., a sequence, a picture, a slice, a slice segment, a maximum encoding unit, and the like).
- the receiving unit 210 of the image reconstruction apparatus 100 may acquire at least one of the information on the type of the reference encoding unit and the size of the reference encoding unit from the bit stream for each of the various data units .
- the process of determining at least one encoding unit included in the square reference encoding unit 2200 has been described above through the process of dividing the current encoding unit 300 of FIG. 10, and the process of determining at least one encoding unit included in the non-square reference encoding unit 2202 has been described through the process of dividing the current encoding unit 1100 or 1150 of FIG. 11, so a detailed description thereof will be omitted.
- the image restoring apparatus 100 may use an index for identifying the size and form of the reference encoding unit. That is, the receiving unit 210 may obtain, from the bitstream, only an index for identifying the size and form of the reference encoding unit for each data unit satisfying a predetermined condition (for example, a data unit having a size equal to or smaller than a slice, such as a slice, a slice segment, or a maximum encoding unit) among the various data units (for example, a sequence, a picture, a slice, a slice segment, a maximum encoding unit, and the like).
- the image restoring apparatus 100 can determine the size and shape of the reference data unit for each data unit that satisfies the predetermined condition by using the index.
- when the information on the form of the reference encoding unit and the information on the size of the reference encoding unit are obtained from the bitstream and used for each relatively small data unit, the bitstream usage efficiency may be poor; therefore, instead of directly obtaining the information on the form and size of the reference encoding unit, only the index may be obtained and used. In this case, at least one of the size and form of the reference encoding unit corresponding to the index may be predetermined.
- that is, the image restoring apparatus 100 may select at least one of the predetermined sizes and forms of the reference encoding unit according to the index, thereby determining at least one of the size and form of the reference encoding unit included in the data unit serving as the basis for obtaining the index.
- the image restoring apparatus 100 may use at least one reference encoding unit included in one maximum encoding unit. That is, the maximum encoding unit for dividing an image may include at least one reference encoding unit, and the encoding unit may be determined through a recursive division process of each reference encoding unit. According to an exemplary embodiment, at least one of the width and the height of the maximum encoding unit may correspond to at least one integer multiple of the width and height of the reference encoding unit. According to an exemplary embodiment, the size of the reference encoding unit may be a size obtained by dividing the maximum encoding unit n times according to a quadtree structure.
- that is, the image restoring apparatus 100 may determine the reference encoding unit by dividing the maximum encoding unit n times according to the quad-tree structure, and according to various embodiments, the reference encoding unit may be divided using at least one of the block type information and the division type information.
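The quad-tree relationship between the maximum coding unit and the reference coding unit can be sketched as follows; the sizes and function name are illustrative:

```python
def reference_unit_size(max_w, max_h, n):
    """Size of the reference coding unit obtained by dividing the
    maximum coding unit n times with a quad-tree; each division halves
    both the width and the height."""
    return max_w >> n, max_h >> n

max_w = max_h = 128
ref_w, ref_h = reference_unit_size(max_w, max_h, 2)
assert (ref_w, ref_h) == (32, 32)
# Consistent with the text above: the maximum unit's width and height
# are integer multiples of the reference unit's width and height.
assert max_w % ref_w == 0 and max_h % ref_h == 0
```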
- FIG. 23 shows a processing block serving as a reference for determining a determination order of the reference encoding units included in the picture 2300 according to an embodiment.
- the image restoration apparatus 100 may determine at least one processing block that divides a picture.
- the processing block is a data unit including at least one reference encoding unit for dividing an image, and at least one reference encoding unit included in the processing block may be determined in a specific order. That is, the order of determination of at least one reference encoding unit determined in each processing block may correspond to one of various kinds of order in which the reference encoding unit can be determined, and the reference encoding unit determination order determined in each processing block May be different for each processing block.
- the determination order of the reference encoding units determined for each processing block may be one of various orders such as raster scan, Z scan, N scan, up-right diagonal scan, horizontal scan, and vertical scan; however, the determinable orders should not be construed as limited to these scan orders.
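Two of the listed orders, raster scan and Z scan, can be sketched for a power-of-two grid of units; the recursive quadrant formulation of the Z scan shown here is one common construction, not necessarily the patent's exact definition:

```python
def raster_scan(w, h):
    """Raster order: row by row, left to right."""
    return [(x, y) for y in range(h) for x in range(w)]

def z_scan(w, h):
    """Z order for a power-of-two grid, visiting the four quadrants in
    the order top-left, top-right, bottom-left, bottom-right, and
    recursing into each quadrant."""
    if w == 1 and h == 1:
        return [(0, 0)]
    hw, hh = w // 2, h // 2
    quads = [(0, 0), (hw, 0), (0, hh), (hw, hh)]
    return [(ox + x, oy + y) for ox, oy in quads for x, y in z_scan(hw, hh)]

assert raster_scan(2, 2) == [(0, 0), (1, 0), (0, 1), (1, 1)]
assert z_scan(4, 4)[:4] == [(0, 0), (1, 0), (0, 1), (1, 1)]
# Both orders visit exactly the same set of unit positions.
assert set(z_scan(4, 4)) == set(raster_scan(4, 4))
```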
- the image restoration apparatus 100 may obtain information on the size of the processing block from the bitstream to determine the size of at least one processing block included in the image.
- the size of such a processing block may be a predetermined size of a data unit represented by information on the size of the processing block.
- the receiver 210 of the image restoration apparatus 100 may obtain information on the size of the processing block from the bitstream for each specific data unit.
- information on the size of a processing block can be obtained from the bitstream in units of data such as an image, a sequence, a picture, a slice, a slice segment, and the like. That is, the receiving unit 210 may obtain the information on the size of the processing block from the bitstream for each of these several data units, and the image restoration apparatus 100 may determine the size of at least one processing block for dividing the picture by using the obtained information; the size of the processing block may be an integer multiple of the size of the reference encoding unit.
- the image restoration apparatus 100 may determine the sizes of the processing blocks 2302 and 2312 included in the picture 2300. For example, the image restoration apparatus 100 can determine the size of the processing block based on information on the size of the processing block obtained from the bitstream.
- for example, the image restoration apparatus 100 may determine the horizontal size of the processing blocks 2302 and 2312 to be four times the horizontal size of the reference encoding unit, and their vertical size to be four times the vertical size of the reference encoding unit.
- the image restoration apparatus 100 may determine an order in which at least one reference encoding unit is determined in at least one processing block.
- the image reconstruction apparatus 100 may determine each of the processing blocks 2302 and 2312 included in the picture 2300 based on the size of the processing block, and may determine the determination order of at least one reference encoding unit included in the processing blocks 2302 and 2312.
- the determination of the reference encoding unit may include determining the size of the reference encoding unit according to an embodiment.
- the image restoration apparatus 100 may obtain, from the bitstream, information on the determination order of at least one reference encoding unit included in at least one processing block, and may determine the order in which at least one reference encoding unit is determined based on the obtained information.
- the information on the determination order may define the order or direction in which the reference encoding units are determined within the processing block. That is, the order in which the reference encoding units are determined may be independently determined for each processing block.
- the image restoring apparatus 100 may obtain information on a determination order of a reference encoding unit from a bitstream for each specific data unit.
- the receiving unit 210 may acquire information on the order of determination of the reference encoding unit from a bitstream for each data unit such as an image, a sequence, a picture, a slice, a slice segment, and a processing block. Since the information on the determination order of the reference encoding unit indicates the reference encoding unit determination order in the processing block, the information on the determination order can be obtained for each specific data unit including an integer number of processing blocks.
- the image restoration apparatus 100 may determine at least one reference encoding unit based on the determined order according to an embodiment.
- the receiving unit 210 may obtain, from the bitstream, the information on the reference encoding unit determination order as information related to the processing blocks 2302 and 2312, and the image restoration apparatus 100 may determine the order in which at least one reference encoding unit included in the processing blocks 2302 and 2312 is determined, and determine at least one reference encoding unit included in the picture 2300 according to that order.
- the image restoration apparatus 100 can determine the determination order 2304 and 2314 of at least one reference encoding unit associated with each of the processing blocks 2302 and 2312. For example, when information on a determination order of reference encoding units is obtained for each processing block, the reference encoding unit determination order associated with each processing block 2302, 2312 may be different for each processing block.
- for example, when the reference encoding unit determination order 2304 related to one processing block 2302 is the raster scan order, the reference encoding units included in the processing block 2302 may be determined according to the raster scan order.
- in contrast, when the reference encoding unit determination order 2314 related to the other processing block 2312 is the reverse of the raster scan order, the reference encoding units included in the processing block 2312 may be determined according to the reverse of the raster scan order.
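Per-block determination orders such as 2304 (raster) and 2314 (reverse raster) can be sketched as follows; the block origins and unit counts are hypothetical:

```python
def reference_units_in_block(block_origin, units_per_row, units_per_col, order):
    """Positions of reference units inside one processing block, listed
    in the signalled determination order; only 'raster' and
    'reverse_raster' are sketched here (cf. orders 2304 and 2314)."""
    bx, by = block_origin
    raster = [(bx + x, by + y)
              for y in range(units_per_col) for x in range(units_per_row)]
    return raster if order == "raster" else raster[::-1]

# Hypothetical picture: two 4x4-unit processing blocks side by side,
# the left one scanned in raster order, the right one in reverse.
block_2302 = reference_units_in_block((0, 0), 4, 4, "raster")
block_2312 = reference_units_in_block((4, 0), 4, 4, "reverse_raster")
assert block_2302[0] == (0, 0) and block_2302[-1] == (3, 3)
assert block_2312[0] == (7, 3) and block_2312[-1] == (4, 0)
```

As the passage above states, the order is independent per processing block while the set of reference units in each block is fixed.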
- the image restoration apparatus 100 may decode the determined at least one reference encoding unit according to an embodiment.
- the image restoring apparatus 100 can decode the image based on the reference encoding unit determined through the above-described embodiment.
- the method of decoding the reference encoding unit may include various methods of decoding the image.
- the image restoring apparatus 100 may obtain block type information indicating a type of a current encoding unit or division type information indicating a method of dividing a current encoding unit from a bitstream.
- the block type information or the division type information may be included in a bitstream related to various data units.
- for example, the image restoration apparatus 100 may use block type information or division type information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, or a slice segment header.
- furthermore, the image restoring apparatus 100 may obtain the syntax corresponding to the block type information or the division type information from the bitstream for each maximum encoding unit, reference encoding unit, or processing block.
- meanwhile, the above-described embodiments of the present disclosure can be written as a program executable on a computer and can be implemented in a general-purpose digital computer that operates the program using a computer-readable recording medium.
- the computer-readable recording medium includes a storage medium such as a magnetic storage medium (e.g., ROM, floppy disk, hard disk, etc.), optical reading medium (e.g., CD ROM,
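The two determination orders described above (a raster scan order for processing block 2302 and its reverse for processing block 2312) can be sketched as follows. This is an illustrative sketch only; the function name and the width/height grid abstraction are assumptions for exposition, not part of the disclosure:

```python
def reference_unit_order(block_w, block_h, reverse=False):
    """Yield (x, y) positions of reference encoding units inside a
    processing block, in raster order (left-to-right, top-to-bottom)
    or, when reverse=True, in the reverse of the raster order."""
    positions = [(x, y) for y in range(block_h) for x in range(block_w)]
    return list(reversed(positions)) if reverse else positions

# Processing block 2302: raster scan order
assert reference_unit_order(2, 2) == [(0, 0), (1, 0), (0, 1), (1, 1)]
# Processing block 2312: reverse of the raster scan order
assert reference_unit_order(2, 2, reverse=True) == [(1, 1), (0, 1), (1, 0), (0, 0)]
```

Because the order is signaled per processing block, a decoder following this scheme simply selects the traversal before decoding the reference encoding units it visits.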
Claims (15)
- A method of reconstructing an image, the method comprising: obtaining, from a bitstream, a residual signal of a compressed image generated by down-sampling the image; decoding the compressed image by using the residual signal and a prediction signal obtained by performing prediction; and reconstructing the image by performing up-sampling on the decoded compressed image by using a deep neural network (DNN), wherein the DNN has a network structure predetermined through training of the up-sampling process using information generated during the down-sampling process.
- The image reconstruction method of claim 1, wherein the reconstructing of the image comprises performing the up-sampling by using a deep convolutional neural network including a plurality of hidden layers.
- The image reconstruction method of claim 2, wherein the performing of the up-sampling by using the deep convolutional neural network comprises performing the up-sampling by performing filtering in each of the plurality of layers by using at least one of a plurality of filter kernels, wherein the types of the plurality of filter kernels are different from the types of the filter kernels used when the image was down-sampled.
- The image reconstruction method of claim 2, wherein the reconstructing of the image comprises performing filtering by using at least one filter kernel in each of the plurality of layers of the DNN.
- The image reconstruction method of claim 1, wherein the DNN has been trained such that a sum of at least one piece of loss information, determined by comparing the image reconstructed through the up-sampling with the original image before the down-sampling, is reduced, and a part of the at least one piece of loss information is used in a process of training a DNN for down-sampling.
- The image reconstruction method of claim 5, wherein the DNN for down-sampling has been trained such that a sum of at least one piece of loss information, determined based on a difference between the original image before down-sampling and a structurally reconstructed image whose spatial size has been reduced based on structural features of the original image, is reduced, and the compressed image is an image down-sampled by the DNN for down-sampling on which that training has been performed.
- The image reconstruction method of claim 6, wherein the structural features include at least one of a luminance, a contrast, a histogram, an encoding quality, compression history information, and a type of the original image.
- A method of compressing an image, the method comprising: determining a compressed image by performing down-sampling on the image by using a DNN; determining a prediction signal by performing prediction based on the compressed image; determining a residual signal based on the compressed image and the prediction signal; and generating a bitstream including information about the residual signal, wherein the DNN has a network structure predetermined through training of the down-sampling process using information generated during the up-sampling process.
- The image compression method of claim 8, wherein the determining of the compressed image comprises determining the compressed image by using a deep convolutional neural network including a plurality of layers.
- The image compression method of claim 9, wherein the determining of the compressed image comprises generating the compressed image by performing filtering in each of the plurality of layers by using at least one of a plurality of filter kernels.
- The image compression method of claim 10, wherein the performing of the filtering comprises: performing filtering with a plurality of filter kernels in a layer, among the plurality of layers, in which the plurality of filter kernels are used; concatenating a plurality of signals obtained as a result of the filtering; and performing filtering in a next layer by using the concatenated signals as an input of the next layer.
- The image compression method of claim 8, wherein the generating of the bitstream comprises generating a bitstream including sampling information indicating a degree to which at least one of a size of the image and a frame rate of the image has been reduced by the down-sampling.
- The image compression method of claim 9, wherein the DNN is trained such that a sum of at least one piece of loss information indicating a loss caused by the down-sampling using the DNN is reduced, a part of the at least one piece of loss information is determined based on a result of comparing the image reconstructed through up-sampling with the original image before the down-sampling, and the comparison result is used in a process of training a DNN for up-sampling.
- The image compression method of claim 13, wherein the comparison result is used in the process of training the DNN for up-sampling.
- An apparatus for reconstructing an image, the apparatus comprising: a residual signal obtainer configured to obtain, from a bitstream, a residual signal of a compressed image generated by down-sampling the image; and a reconstructor configured to decode the compressed image by using the residual signal and a prediction signal obtained by performing prediction, and to reconstruct the image by performing up-sampling on the decoded compressed image by using a DNN, wherein the DNN has a network structure predetermined through training of the up-sampling process using information generated during the down-sampling process.
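The encode/decode pipeline recited in the claims above (down-sample with a DNN, encode a residual against a prediction signal, then decode and up-sample with a DNN) can be illustrated end to end. This is a minimal sketch, not the patented method: average pooling stands in for the trained down-sampling DNN, nearest-neighbor repetition stands in for the trained up-sampling DNN, and a zero prediction signal replaces the actual prediction step.

```python
import numpy as np

def downsample(img):
    # Stand-in for the down-sampling DNN: 2x2 average pooling.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Stand-in for the up-sampling DNN: nearest-neighbor 2x enlargement.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

# --- encoder side (image compression method) ---
original = np.arange(16, dtype=float).reshape(4, 4)
compressed = downsample(original)       # down-sampled "compressed image"
prediction = np.zeros_like(compressed)  # placeholder prediction signal
residual = compressed - prediction      # residual signal carried in the bitstream

# --- decoder side (image reconstruction method/apparatus) ---
decoded = prediction + residual         # decode the compressed image
restored = upsample(decoded)            # up-sampling restores the original size
assert restored.shape == original.shape
```

In the claimed scheme the two stand-in functions are replaced by DNNs whose structures were fixed by jointly training each sampling direction on information generated by the other.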
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020207000378A KR102285737B1 (ko) | 2017-07-06 | 2018-02-06 | Method for encoding/decoding image and device therefor |
US16/468,338 US11190784B2 (en) | 2017-07-06 | 2018-02-06 | Method for encoding/decoding image and device therefor |
CN201880013752.8A CN110337813B (zh) | 2017-07-06 | 2018-02-06 | 用于对图像进行编码/解码的方法及其装置 |
US16/750,615 US10986356B2 (en) | 2017-07-06 | 2020-01-23 | Method for encoding/decoding image and device therefor |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20170086137 | 2017-07-06 | ||
KRPCT/KR2017/007258 | 2017-07-06 | ||
PCT/KR2017/007258 WO2019009447A1 (ko) | 2017-07-06 | Method for encoding/decoding image and device therefor |
KR10-2017-0086137 | 2017-07-06 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/468,338 A-371-Of-International US11190784B2 (en) | 2017-07-06 | 2018-02-06 | Method for encoding/decoding image and device therefor |
US16/750,615 Continuation US10986356B2 (en) | 2017-07-06 | 2020-01-23 | Method for encoding/decoding image and device therefor |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019009490A1 true WO2019009490A1 (ko) | 2019-01-10 |
Family
ID=64950176
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/001542 WO2019009490A1 (ko) | 2018-02-06 | Method for encoding/decoding image and device therefor |
Country Status (5)
Country | Link |
---|---|
US (1) | US11190784B2 (ko) |
EP (1) | EP3567857A1 (ko) |
KR (1) | KR102285737B1 (ko) |
CN (1) | CN110337813B (ko) |
WO (1) | WO2019009490A1 (ko) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111147862A (zh) * | 2020-01-03 | 2020-05-12 | Nanjing University | End-to-end image compression method based on target coding |
WO2020238439A1 (zh) * | 2019-05-24 | 2020-12-03 | Zhejiang University | Method for enhancing video service quality in a bandwidth-constrained wireless ad hoc network |
WO2020246756A1 (en) | 2019-06-05 | 2020-12-10 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on image |
CN112183736A (zh) * | 2019-07-05 | 2021-01-05 | Samsung Electronics Co., Ltd. | Artificial intelligence processor and method of performing neural network operations thereof |
WO2021054697A1 (ko) * | 2019-09-17 | 2021-03-25 | Samsung Electronics Co., Ltd. | AI encoding method and apparatus for image, and AI decoding method and apparatus for image |
WO2021086016A2 (en) | 2019-10-28 | 2021-05-06 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence (ai) encoding and ai decoding on image |
CN114631315A (zh) * | 2019-10-29 | 2022-06-14 | Samsung Electronics Co., Ltd. | Image encoding method and device, and image decoding method and device |
US12072806B2 (en) | 2020-01-22 | 2024-08-27 | Alibaba Group Holding Limited | Compression and decompression module in a cache controller for reducing off-chip data traffic |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110235146A (zh) * | 2017-02-03 | 2019-09-13 | Siemens AG | Method and apparatus for detecting objects of interest in an image |
WO2018165753A1 (en) * | 2017-03-14 | 2018-09-20 | University Of Manitoba | Structure defect detection using machine learning algorithms |
US11190784B2 (en) | 2017-07-06 | 2021-11-30 | Samsung Electronics Co., Ltd. | Method for encoding/decoding image and device therefor |
WO2019208677A1 (ja) * | 2018-04-27 | 2019-10-31 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 符号化装置、復号装置、符号化方法および復号方法 |
EP3794828A1 (en) * | 2018-05-16 | 2021-03-24 | Isize Limited | Encoding and decoding image data |
WO2020080665A1 (en) | 2018-10-19 | 2020-04-23 | Samsung Electronics Co., Ltd. | Methods and apparatuses for performing artificial intelligence encoding and artificial intelligence decoding on image |
WO2020080765A1 (en) | 2018-10-19 | 2020-04-23 | Samsung Electronics Co., Ltd. | Apparatuses and methods for performing artificial intelligence encoding and artificial intelligence decoding on image |
CN111565314A (zh) * | 2019-02-13 | 2020-08-21 | Hefei Tuya Information Technology Co., Ltd. | Image compression method, codec network training method, apparatus, and electronic device |
JP7141007B2 (ja) * | 2019-05-10 | 2022-09-22 | Nippon Telegraph and Telephone Corporation | Encoding device, encoding method, and program |
US20210049473A1 (en) * | 2019-08-14 | 2021-02-18 | The Board Of Trustees Of The Leland Stanford Junior University | Systems and Methods for Robust Federated Training of Neural Networks |
KR20210067788A (ko) * | 2019-11-29 | 2021-06-08 | Samsung Electronics Co., Ltd. | Electronic device, system, and control method thereof |
KR20210067783A (ko) | 2019-11-29 | 2021-06-08 | Samsung Electronics Co., Ltd. | Electronic device, control method thereof, and system |
US20210177380A1 (en) * | 2019-12-13 | 2021-06-17 | Korea Advanced Institute Of Science And Technology | Method and apparatus for quantitative ultrasound imaging using single-ultrasound probe |
US20210183521A1 (en) * | 2019-12-13 | 2021-06-17 | Korea Advanced Institute Of Science And Technology | Method and apparatus for quantitative imaging using ultrasound data |
CN113052924A (zh) * | 2019-12-27 | 2021-06-29 | Wuxi Chison Medical Technologies Co., Ltd. | Image-quality compensation method between ultrasound image encoding and decoding, and convolutional neural network therefor |
KR102287942B1 (ko) * | 2020-02-24 | 2021-08-09 | Samsung Electronics Co., Ltd. | Method and apparatus for AI encoding and AI decoding of an image using pre-processing |
US11769276B2 (en) | 2020-03-05 | 2023-09-26 | Electronics And Telecommunications Research Institute | Method, apparatus, and storage medium using padding/trimming in compression neural network |
CN111787323B (zh) * | 2020-05-23 | 2021-09-03 | Tsinghua University | Variable-bit-rate generative compression method based on adversarial learning |
CN111757110A (zh) * | 2020-07-02 | 2020-10-09 | Zhongshi Gas Development (Xi'an) Co., Ltd. | Video encoding method, coding tree unit partitioning method, system, device, and readable storage medium |
KR102532006B1 (ko) * | 2020-07-24 | 2023-05-12 | Korea Electronics Technology Institute | Image region segmentation method and system applying the Self-Spatial Adaptive Normalization technique |
JP7481956B2 (ja) * | 2020-08-26 | 2024-05-13 | Toshiba Corporation | Inference apparatus, method, program, and training apparatus |
CN115668273A (zh) | 2020-09-15 | 2023-01-31 | 三星电子株式会社 | 电子装置、其控制方法和电子系统 |
CN114205646B (zh) * | 2020-09-18 | 2024-03-29 | Alibaba DAMO Academy (Hangzhou) Technology Co., Ltd. | Data processing method, apparatus, electronic device, and storage medium |
KR102615404B1 (ko) * | 2020-09-23 | 2023-12-20 | Electronics and Telecommunications Research Institute | Method, apparatus, system, and computer-readable recording medium for feature information |
KR20220045920A (ko) * | 2020-10-06 | 2022-04-13 | Korea Aerospace University Industry-Academic Cooperation Foundation | Method and apparatus for processing an image for machine vision |
US11973976B2 (en) * | 2021-03-26 | 2024-04-30 | Sharp Kabushiki Kaisha | Systems and methods for performing padding in coding of a multi-dimensional data set |
JP7543978B2 (ja) * | 2021-05-12 | 2024-09-03 | Yokogawa Electric Corporation | Apparatus, monitoring system, method, and program |
WO2022250397A1 (en) * | 2021-05-27 | 2022-12-01 | Samsung Electronics Co., Ltd. | Methods and apparatus for processing of high-resolution video content |
FR3124671B1 (fr) * | 2021-06-25 | 2023-07-07 | Fond B Com | Methods for decoding and encoding an image, and associated devices and signal |
US20230215052A1 (en) * | 2022-01-05 | 2023-07-06 | Kenneth Tang | Systems and methods for intelligently compressing whole slide images |
KR20240005485A (ko) * | 2022-07-05 | 2024-01-12 | Samsung Electronics Co., Ltd. | Electronic device for processing an image by using AI encoding/decoding and method for controlling the same |
WO2024029812A1 (ko) * | 2022-08-01 | 2024-02-08 | 배태면 | Adaptive encoding parameter management method and electronic device supporting the same |
CN115866252B (zh) * | 2023-02-09 | 2023-05-02 | Harbin Institute of Technology (Shenzhen) | Image compression method, apparatus, device, and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080140594A1 (en) * | 2001-12-21 | 2008-06-12 | International Business Machines Corporation | System for scaling images using neural networks |
KR101375663B1 (ko) * | 2007-12-06 | 2014-04-03 | 삼성전자주식회사 | 영상을 계층적으로 부호화/복호화하는 방법 및 장치 |
KR101425602B1 (ko) * | 2008-03-12 | 2014-07-31 | 삼성전자주식회사 | 영상 부호화/복호화 장치 및 그 방법 |
WO2016132148A1 (en) * | 2015-02-19 | 2016-08-25 | Magic Pony Technology Limited | Machine learning for visual processing |
Family Cites Families (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050129306A1 (en) | 2003-12-12 | 2005-06-16 | Xianglin Wang | Method and apparatus for image deinterlacing using neural networks |
ZA200607434B (en) | 2004-03-09 | 2008-08-27 | Thomson Res Funding Corp | Reduced resolution update mode for advanced video coding |
US20160360155A1 (en) | 2005-09-07 | 2016-12-08 | Vidyo, Inc. | System and method for scalable and low-delay videoconferencing using scalable video coding |
EP2360843A3 (en) | 2006-02-16 | 2013-04-03 | Vidyo, Inc. | System and method for thinning of scalable video coding bit-streams |
WO2014025741A2 (en) | 2012-08-06 | 2014-02-13 | Vid Scale, Inc. | Sampling grid information for spatial layers in multi-layer video coding |
WO2014052740A1 (en) | 2012-09-28 | 2014-04-03 | Vid Scale, Inc. | Adaptive upsampling for multi-layer video coding |
US20140177706A1 (en) | 2012-12-21 | 2014-06-26 | Samsung Electronics Co., Ltd | Method and system for providing super-resolution of quantized images and video |
US9251572B2 (en) | 2013-07-26 | 2016-02-02 | Qualcomm Incorporated | System and method of correcting image artifacts |
CN104754357B (zh) * | 2015-03-24 | 2017-08-11 | Tsinghua University | Intra-frame coding optimization method and apparatus based on convolutional neural networks |
CN104700099B (zh) | 2015-03-31 | 2017-08-11 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for recognizing traffic signs |
US9940539B2 (en) | 2015-05-08 | 2018-04-10 | Samsung Electronics Co., Ltd. | Object recognition apparatus and method |
KR102450971B1 (ko) | 2015-05-08 | 2022-10-05 | Samsung Electronics Co., Ltd. | Object recognition apparatus and method |
US9805305B2 (en) | 2015-08-07 | 2017-10-31 | Yahoo Holdings, Inc. | Boosted deep convolutional neural networks (CNNs) |
US11196992B2 (en) | 2015-09-03 | 2021-12-07 | Mediatek Inc. | Method and apparatus of neural network based processing in video coding |
KR101974261B1 (ko) | 2016-06-24 | 2019-04-30 | Korea Advanced Institute of Science and Technology | Encoding method and apparatus including a CNN-based in-loop filter, and decoding method and apparatus |
US10623775B1 (en) * | 2016-11-04 | 2020-04-14 | Twitter, Inc. | End-to-end video and image compression |
US11593632B2 (en) * | 2016-12-15 | 2023-02-28 | WaveOne Inc. | Deep learning based on image encoding and decoding |
KR20180100976A (ko) | 2017-03-03 | 2018-09-12 | Electronics and Telecommunications Research Institute | Method and apparatus for encoding/decoding an image using deep-neural-network-based learning of blurred images |
KR101885855B1 (ko) | 2017-03-30 | 2018-08-07 | Industry-Academic Cooperation Foundation, Dankook University | Image signal transmission using a high-resolution estimation technique |
CN109118459B (zh) * | 2017-06-23 | 2022-07-19 | Nankai University | Image salient-object detection method and apparatus |
US11190784B2 (en) | 2017-07-06 | 2021-11-30 | Samsung Electronics Co., Ltd. | Method for encoding/decoding image and device therefor |
US10986356B2 (en) | 2017-07-06 | 2021-04-20 | Samsung Electronics Co., Ltd. | Method for encoding/decoding image and device therefor |
KR102082815B1 (ko) | 2018-04-24 | 2020-02-28 | GDFLab Co., Ltd. | Artificial-intelligence-based resolution enhancement system |
KR102022648B1 (ko) | 2018-08-10 | 2019-09-19 | Samsung Electronics Co., Ltd. | Electronic device, method for controlling the same, and method for controlling a server |
- 2018
- 2018-02-06 US US16/468,338 patent/US11190784B2/en active Active
- 2018-02-06 KR KR1020207000378A patent/KR102285737B1/ko active IP Right Grant
- 2018-02-06 EP EP19183429.0A patent/EP3567857A1/en active Pending
- 2018-02-06 WO PCT/KR2018/001542 patent/WO2019009490A1/ko active Application Filing
- 2018-02-06 CN CN201880013752.8A patent/CN110337813B/zh active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080140594A1 (en) * | 2001-12-21 | 2008-06-12 | International Business Machines Corporation | System for scaling images using neural networks |
KR101375663B1 (ko) * | 2007-12-06 | 2014-04-03 | 삼성전자주식회사 | 영상을 계층적으로 부호화/복호화하는 방법 및 장치 |
KR101425602B1 (ko) * | 2008-03-12 | 2014-07-31 | 삼성전자주식회사 | 영상 부호화/복호화 장치 및 그 방법 |
WO2016132148A1 (en) * | 2015-02-19 | 2016-08-25 | Magic Pony Technology Limited | Machine learning for visual processing |
Non-Patent Citations (1)
Title |
---|
MAO, XIAO-JIAO ET AL.: "Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections", ARXIV, 30 August 2016 (2016-08-30), pages 1 - 17, XP055565060, Retrieved from the Internet <URL:https://arxiv.org/abs/1606.08921> * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020238439A1 (zh) * | 2019-05-24 | 2020-12-03 | 浙江大学 | 无线自组织网络带宽受限下的视频业务质量增强方法 |
CN113994691A (zh) * | 2019-06-05 | 2022-01-28 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on an image |
WO2020246756A1 (en) | 2019-06-05 | 2020-12-10 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on image |
EP3954127A4 (en) * | 2019-06-05 | 2022-06-08 | Samsung Electronics Co., Ltd. | APPARATUS AND METHOD FOR PERFORMING ARTIFICIAL INTELLIGENCE CODING AND ARTIFICIAL INTELLIGENCE DECODING ON AN IMAGE |
EP3767477A1 (en) * | 2019-07-05 | 2021-01-20 | Samsung Electronics Co., Ltd. | Artificial intelligence processor and method of performing neural network operation thereof |
CN112183736A (zh) * | 2019-07-05 | 2021-01-05 | Samsung Electronics Co., Ltd. | Artificial intelligence processor and method of performing neural network operations thereof |
US11495037B2 (en) | 2019-07-05 | 2022-11-08 | Samsung Electronics Co., Ltd. | Artificial intelligence processor and method of performing neural network operation thereof |
CN112183736B (zh) * | 2019-07-05 | 2024-07-05 | Samsung Electronics Co., Ltd. | Artificial intelligence processor and method of performing neural network operations thereof |
WO2021054697A1 (ko) * | 2019-09-17 | 2021-03-25 | Samsung Electronics Co., Ltd. | AI encoding method and apparatus for image, and AI decoding method and apparatus for image |
WO2021086016A2 (en) | 2019-10-28 | 2021-05-06 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence (ai) encoding and ai decoding on image |
EP4014173A4 (en) * | 2019-10-28 | 2022-09-28 | Samsung Electronics Co., Ltd. | APPARATUS AND METHOD FOR PERFORMING ARTIFICIAL INTELLIGENCE (AI) CODING AND AI DECODING ON AN IMAGE |
US11610341B2 (en) | 2019-10-28 | 2023-03-21 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence (AI) encoding and AI decoding on image |
US11810332B2 (en) | 2019-10-28 | 2023-11-07 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence (AI) encoding and AI decoding on image |
CN114631315A (zh) * | 2019-10-29 | 2022-06-14 | Samsung Electronics Co., Ltd. | Image encoding method and device, and image decoding method and device |
CN111147862A (zh) * | 2020-01-03 | 2020-05-12 | Nanjing University | End-to-end image compression method based on target coding |
US12072806B2 (en) | 2020-01-22 | 2024-08-27 | Alibaba Group Holding Limited | Compression and decompression module in a cache controller for reducing off-chip data traffic |
Also Published As
Publication number | Publication date |
---|---|
CN110337813A (zh) | 2019-10-15 |
KR20200009118A (ko) | 2020-01-29 |
US20200389658A1 (en) | 2020-12-10 |
EP3567857A1 (en) | 2019-11-13 |
KR102285737B1 (ko) | 2021-08-05 |
CN110337813B (zh) | 2023-04-18 |
US11190784B2 (en) | 2021-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019009490A1 (ko) | Method for encoding/decoding an image and apparatus therefor | |
WO2019009489A1 (ko) | Method for encoding/decoding an image and apparatus therefor | |
WO2019009447A1 (ko) | Method for encoding/decoding an image and apparatus therefor | |
WO2019009488A1 (ko) | Method and apparatus for encoding or decoding an image | |
WO2017065525A2 (ko) | Method and apparatus for encoding or decoding an image | |
WO2017122997A1 (ko) | Image encoding method and apparatus, and image decoding method and apparatus | |
WO2018221817A1 (ko) | Image decoding method and apparatus based on intra prediction in an image coding system | |
WO2016064185A1 (ko) | Method and apparatus for performing graph-based prediction by using an optimization function | |
WO2017014585A1 (ko) | Method and apparatus for processing a video signal by using a graph-based transform | |
WO2019009491A1 (ko) | Method and apparatus for encoding or decoding an image | |
WO2019143026A1 (ko) | Image processing method and apparatus using feature-map compression | |
WO2018070554A1 (ko) | Method and apparatus for encoding or decoding a luma block and a chroma block | |
WO2021101243A1 (en) | Apparatus and method for using AI metadata related to image quality | |
WO2017010850A1 (ko) | Method and apparatus for processing a video signal by using a separable graph-based transform | |
WO2019022537A1 (ko) | Intra-prediction-mode-based image processing method and apparatus therefor | |
WO2019143027A1 (ko) | Image pipeline processing method and apparatus | |
WO2021172956A1 (ko) | Image encoding/decoding method and apparatus for signaling image feature information, and method for transmitting a bitstream | |
WO2019143024A1 (ko) | Super-resolution method and apparatus using line-unit operations | |
WO2019009452A1 (ko) | Method and apparatus for encoding or decoding an image | |
WO2017091001A1 (ko) | Method and apparatus for post-processing an intra- or inter-prediction block based on pixel gradients | |
WO2022108361A1 (ko) | Neural network feature-map quantization method and apparatus | |
WO2016076624A1 (ko) | Video signal processing method using a graph-based transform and apparatus therefor | |
WO2016056709A1 (ko) | Image re-encoding method and apparatus therefor | |
WO2019143025A1 (ko) | Image processing method and apparatus using line input and output | |
WO2019190197A1 (ko) | Video signal processing method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18828862 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2018828862 Country of ref document: EP Effective date: 20190530 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18828862 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20207000378 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |