WO2020080751A1 - Encoding method and apparatus therefor, and decoding method and apparatus therefor - Google Patents
Encoding method and apparatus therefor, and decoding method and apparatus therefor
- Publication number
- WO2020080751A1 (PCT/KR2019/013344)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- image
- upscale
- information
- dnn
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims description 152
- 238000004891 communication Methods 0.000 claims description 44
- 238000013473 artificial intelligence Methods 0.000 description 1360
- 238000012549 training Methods 0.000 description 156
- 230000008569 process Effects 0.000 description 98
- 230000004913 activation Effects 0.000 description 31
- 230000006870 function Effects 0.000 description 28
- 238000010586 diagram Methods 0.000 description 19
- 238000012545 processing Methods 0.000 description 19
- 238000013139 quantization Methods 0.000 description 12
- 238000007906 compression Methods 0.000 description 11
- 230000006835 compression Effects 0.000 description 11
- 238000013528 artificial neural network Methods 0.000 description 9
- 238000004590 computer program Methods 0.000 description 7
- 238000013507 mapping Methods 0.000 description 7
- 230000005540 biological transmission Effects 0.000 description 6
- 230000008859 change Effects 0.000 description 5
- 230000009466 transformation Effects 0.000 description 5
- 239000013598 vector Substances 0.000 description 5
- 230000003044 adaptive effect Effects 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 4
- 238000012546 transfer Methods 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 3
- 230000001360 synchronised effect Effects 0.000 description 3
- 238000013527 convolutional neural network Methods 0.000 description 2
- 238000013500 data storage Methods 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012805 post-processing Methods 0.000 description 2
- 230000000306 recurrent effect Effects 0.000 description 2
- 230000008685 targeting Effects 0.000 description 2
- 230000001131 transforming effect Effects 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000013144 data compression Methods 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
Definitions
- the present disclosure relates to the field of image processing. More specifically, the present disclosure relates to an apparatus and method for encoding and decoding an image using a deep neural network.
- An image is encoded by a codec conforming to a predetermined data compression standard, for example a Moving Picture Experts Group (MPEG) standard, and is then stored in a recording medium in the form of a bitstream or transmitted through a communication channel.
- A method and apparatus for encoding and decoding an image according to an embodiment of the present disclosure address the technical problem of encoding and decoding an image at a low bitrate using a deep neural network (DNN).
- a computer-readable recording medium recording a program for executing a video encoding method and a video decoding method according to an embodiment of the present disclosure is disclosed.
- According to an embodiment of the present disclosure, a computer-readable recording medium storing AI-encoded data is disclosed. The AI-encoded data includes image data containing encoding information of a low-resolution image generated by AI-downscaling a high-resolution image, and AI data for AI upscale of the low-resolution image reconstructed from the image data. The AI data includes AI target data indicating whether AI upscale is applied to one or more frames, and AI auxiliary data on the upscale DNN information used for AI upscale of the one or more frames among a plurality of preset DNN setting information.
- According to an embodiment of the present disclosure, a video decoding method using AI upscale is provided, the method comprising: receiving a video file containing AI-encoded data that includes image data and AI data related to AI upscale of the image data; obtaining the AI data of the AI-encoded data from a metadata box of the video file, and obtaining the image data of the AI-encoded data from a media data box of the video file; decoding the image data to reconstruct a low-resolution image of a current frame; obtaining upscale DNN information of the current frame from the AI data; and generating a high-resolution image corresponding to the low-resolution image by AI-upscaling the low-resolution image according to the upscale DNN information of the current frame.
- According to an embodiment of the present disclosure, a video encoding method using AI downscale is provided, the method comprising: determining downscale DNN information for AI-downscaling a high-resolution image of a current frame into a low-resolution image; generating the low-resolution image of the current frame by AI-downscaling the high-resolution image of the current frame according to the downscale DNN information; generating AI data for upscale DNN information that corresponds to the downscale DNN information and is used for AI upscale of the low-resolution image of the current frame; obtaining image data by encoding the low-resolution image of the current frame; generating AI-encoded data including the image data and the AI data; and outputting a video file including a media data box into which the image data of the AI-encoded data is inserted and a metadata box into which the AI data of the AI-encoded data is inserted.
- According to an embodiment of the present disclosure, a video decoding apparatus that performs a video decoding method using AI upscale is provided, the apparatus comprising: a communication unit that receives a video file containing AI-encoded data that includes image data and AI data on AI upscale of the image data; a parsing unit that obtains the AI data of the AI-encoded data from a metadata box of the video file and obtains the image data of the AI-encoded data from a media data box of the video file; a first decoding unit that decodes the image data to reconstruct a low-resolution image of a current frame; and an AI upscale unit that obtains upscale DNN information of the current frame from the AI data and AI-upscales the low-resolution image according to the upscale DNN information of the current frame to generate a high-resolution image corresponding to the low-resolution image.
- According to an embodiment of the present disclosure, a video encoding apparatus is provided, the apparatus comprising: an AI downscale unit that determines downscale DNN information for AI-downscaling a high-resolution image of a current frame into a low-resolution image, generates the low-resolution image of the current frame by AI-downscaling the high-resolution image of the current frame according to the downscale DNN information, and generates AI data used for AI upscale of the low-resolution image of the current frame; and a communication unit that outputs a video file containing a media data box into which the image data is inserted and a metadata box into which the AI data is inserted.
- According to the present disclosure, a data structure of the AI data required for AI upscale of a low-resolution image is provided, along with a method for generating and parsing AI data according to the data structure.
- the efficiency of AI upscale may be improved.
- FIG. 1 is a diagram for explaining an AI encoding process and an AI decoding process according to an embodiment.
- FIG. 2 is a block diagram showing the configuration of an AI decoding apparatus according to an embodiment.
- FIG. 3 is an exemplary diagram showing a second DNN for AI upscale of a second image.
- FIG. 5 is an exemplary diagram illustrating a mapping relationship between various image related information and various DNN configuration information.
- FIG. 6 is a view showing a second image composed of a plurality of frames.
- FIG. 7 is a block diagram showing the configuration of an AI encoding apparatus according to an embodiment.
- FIG. 8 is an exemplary diagram illustrating a first DNN for AI downscale of an original image.
- FIG. 9 is a diagram for explaining a method of training the first DNN and the second DNN.
- FIG. 10 is a view for explaining the training process of the first DNN and the second DNN by a training device.
- FIG. 11 is an exemplary diagram illustrating an apparatus for AI downscale of an original image and an apparatus for AI upscale of a second image.
- FIG. 12 illustrates the structure of image data and AI data, and the corresponding relationship between image data and AI data.
- FIG. 13A illustrates the flow of data in the AI decoding apparatus when AI data is embedded in video data.
- FIG. 13B illustrates the flow of data in the AI decoding apparatus when AI data and video data are separated and included in two files.
- FIG. 14 shows an embodiment of AI-encoded data when AI data and video data are separated in a single file.
- FIG. 15A shows an embodiment of AI-encoded data when AI data is inserted into image data in a single file.
- FIG. 15B shows an embodiment of AI-encoded data when AI data is inserted into image data in a single file.
- 15C illustrates an embodiment of AI encoded data when some AI data is inserted into image data in a single file and the other AI data is separated from image data.
- FIG. 16 shows an embodiment of AI encoded data divided in units of video segments when AI data and image data are separated as shown in FIG. 14.
- FIG. 17 illustrates an embodiment of AI data and image data that are transmitted as two files.
- FIG. 18A shows an embodiment of a data structure that can be applied to the video AI data described in FIGS. 14 to 17.
- FIG. 18B shows an embodiment of a data structure applicable to the video segment AI data described in FIG. 16 or the frame group AI data of FIGS. 14, 15A to 15C, and FIG. 17.
- FIG. 19 shows a syntax table in which the data structure of FIG. 18A is implemented.
- FIG. 20 shows an embodiment of a data structure that can be applied to the frame group AI data or frame AI data described in FIGS. 14 to 17.
- FIG. 21 shows a syntax table in which the data structure of FIG. 20 is implemented.
- FIG. 22 is a flowchart of an embodiment of an image decoding method performed by an AI decoder.
- FIG. 23 is a flowchart of an embodiment of an image encoding method performed by an AI encoder.
- FIG. 24 is a block diagram showing the configuration of an image decoding apparatus according to an embodiment.
- FIG. 25 is a block diagram showing the configuration of an image encoding apparatus according to an embodiment.
- In the present specification, when one component is referred to as being 'connected' or 'coupled' to another component, the one component may be directly connected or coupled to the other component, but, unless stated otherwise, it may also be connected via another component in between.
- In the present specification, for a component expressed as a '~ unit' or 'module', two or more components may be combined into one component, or one component may be divided into two or more components according to subdivided functions.
- Each of the components to be described below may additionally perform some or all of the functions of other components in addition to its own main functions, and some of the main functions of each component may be performed exclusively by another component.
- In the present specification, an 'image' or 'picture' may represent a still image, a moving image composed of a plurality of consecutive still images (or frames), or a video.
- In the present specification, a 'DNN' refers to a deep neural network.
- the 'parameter' is a value used in a calculation process of each layer constituting a neural network, and may include, for example, a weight used when applying an input value to a predetermined calculation expression.
- the parameter may be expressed in a matrix form.
- the parameter is a value that is set as a result of training, and may be updated through separate training data as needed.
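For illustration, a minimal sketch (not from the patent) of a filter-kernel parameter held in matrix form and later replaced after retraining; the values are placeholders, not trained weights:

```python
import numpy as np

# A 3 x 3 filter-kernel weight matrix: the 'parameter' expressed in matrix form.
kernel = np.array([[0.1, 0.2, 0.1],
                   [0.2, 0.4, 0.2],
                   [0.1, 0.2, 0.1]])

# Parameters are set as a result of training and may be updated (replaced)
# through separate training data as needed.
kernel = np.full((3, 3), 0.25)
```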
- 'first DNN' means a DNN used for AI downscale of the image
- 'second DNN' means a DNN used for AI upscale of the image
- 'DNN setting information' includes information related to elements constituting the DNN and includes the aforementioned parameters.
- the first DNN or the second DNN may be set using DNN setting information.
- 'original image' refers to an image that is a target of AI encoding
- 'first image' refers to an image obtained as a result of AI downscale of the original image in the AI encoding process
- 'second image' refers to an image obtained through the first decoding in the AI decoding process
- 'third image' refers to an image obtained by AI upscaling the second image in the AI decoding process.
- 'AI downscale' refers to a process of reducing the resolution of an image based on AI
- 'first encoding' refers to an encoding process using a frequency transformation-based image compression method
- 'first decoding' refers to a decoding process using a frequency transformation-based image reconstruction method
- 'AI upscale' refers to a process of increasing the resolution of an image based on AI.
- FIG. 1 is a diagram for explaining an artificial intelligence (AI) encoding process and an AI decoding process according to an embodiment.
- the first image 115 is obtained by AI down-scaling 110 of the original image 105 having high resolution.
- Since the first encoding 120 and the first decoding 130 are performed on the first image 115, which has a relatively low resolution, the processed bitrate can be greatly reduced compared to the case where the first encoding 120 and the first decoding 130 are performed on the original image 105.
- Referring to FIG. 1, in the AI encoding process, the original image 105 is AI-downscaled 110 to obtain the first image 115, and the first image 115 is first-encoded 120. In the AI decoding process, AI-encoded data including the AI data and image data obtained as a result of the AI encoding is received, the second image 135 is obtained through the first decoding 130, and the second image 135 is AI-upscaled 140 to obtain the third image 145.
- the original image 105 is AI downscaled 110 to obtain the first image 115 of a predetermined resolution or a predetermined image quality.
- The AI downscale 110 is performed based on AI, and the AI for the AI downscale 110 is jointly trained with the AI for the AI upscale 140 of the second image 135. This is because, when the AI for the AI downscale 110 and the AI for the AI upscale 140 are trained separately, the difference between the original image 105, which is the AI encoding target, and the third image 145 reconstructed through AI decoding becomes larger.
- AI data may be used to maintain this linkage between the AI encoding process and the AI decoding process. Therefore, the AI data obtained through the AI encoding process must include information indicating the upscale target, and in the AI decoding process the second image 135 is AI-upscaled 140 according to the upscale target identified based on the AI data.
- the AI for the AI downscale 110 and the AI for the AI upscale 140 may be implemented as a deep neural network (DNN).
- To this end, the AI encoding apparatus provides the target information used when the first DNN and the second DNN were jointly trained, and the AI decoding apparatus may AI-upscale 140 the second image 135 to the targeted resolution based on the received target information.
- Regarding the first encoding 120 and the first decoding 130 of FIG. 1, the first encoding 120 may include a process of generating prediction data by predicting the first image 115, a process of generating residual data corresponding to the difference between the first image 115 and the prediction data, a process of transforming the residual data, which is a spatial-domain component, into frequency-domain components, a process of quantizing the residual data transformed into frequency-domain components, and a process of entropy-encoding the quantized residual data.
- The first encoding 120 may be implemented through one of the image compression methods using frequency transformation, such as MPEG-2, H.264 AVC (Advanced Video Coding), MPEG-4, HEVC (High Efficiency Video Coding), VC-1, VP8, VP9, and AV1 (AOMedia Video 1).
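For illustration, a schematic sketch of the first-encoding step order just described. The helper functions are trivial hypothetical stand-ins, not a real codec API; an actual codec such as HEVC implements each stage far more elaborately:

```python
import numpy as np

def predict(img):
    return np.zeros_like(img)             # trivial predictor: predict all zeros

def transform(residual):
    return np.fft.fft2(residual)          # stand-in frequency transformation

def quantize(coefficients, step=16.0):
    return np.round(coefficients / step)  # quantize frequency-domain residual

def entropy_encode(levels):
    return levels.tobytes()               # placeholder for entropy encoding

def first_encode(first_image):
    prediction = predict(first_image)     # prediction data
    residual = first_image - prediction   # spatial-domain residual data
    coefficients = transform(residual)    # residual in frequency-domain components
    levels = quantize(coefficients)       # quantized residual data
    return entropy_encode(levels)         # entropy-encoded image data

image_data = first_encode(np.ones((8, 8)))
```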
- the second image 135 corresponding to the first image 115 may be reconstructed through the first decoding 130 of image data.
- The first decoding 130 may include a process of entropy-decoding the image data to generate quantized residual data, a process of inverse-quantizing the quantized residual data, a process of transforming residual data of frequency-domain components into spatial-domain components, a process of generating prediction data, and a process of restoring the second image 135 using the prediction data and the residual data.
- The first decoding 130 may be implemented through an image reconstruction method corresponding to the image compression method using frequency transformation, such as MPEG-2, H.264, MPEG-4, HEVC, VC-1, VP8, VP9, or AV1, that was used in the first encoding 120 process.
- The AI-encoded data obtained through the AI encoding process may include image data obtained as a result of the first encoding 120 of the first image 115 and AI data related to the AI downscale 110 of the original image 105.
- the image data may be used in the first decoding 130 process, and the AI data may be used in the AI upscale 140 process.
- the image data can be transmitted in the form of a bitstream.
- the image data may include data obtained based on pixel values in the first image 115, for example, residual data that is a difference between prediction data of the first image 115 and the first image 115.
- the image data includes information used in the first encoding 120 of the first image 115.
- For example, the image data may include prediction mode information used for the first encoding 120 of the first image 115, motion information, and information related to the quantization parameters used in the first encoding 120.
- The image data may be generated according to the rules, for example the syntax, of the image compression method used in the first encoding 120 among the image compression methods using frequency transformation, such as MPEG-2, H.264 AVC, MPEG-4, HEVC, VC-1, VP8, VP9, and AV1.
- AI data is used for AI upscale 140 based on the second DNN.
- the AI data includes information that enables accurate AI upscale 140 of the second image 135 through the second DNN.
- The AI upscale 140 may be performed to the targeted resolution and/or image quality of the second image 135 based on the AI data.
- AI data may be transmitted together with image data in the form of a bitstream. Alternatively, depending on the implementation, AI data may be transmitted separately from the image data in the form of a frame or a packet. Image data and AI data obtained as a result of AI encoding may be transmitted through the same network or different networks.
- FIG. 2 is a block diagram showing the configuration of an AI decoding apparatus 200 according to an embodiment.
- the AI decoding apparatus 200 may include a receiving unit 210 and an AI decoding unit 230.
- the reception unit 210 may include a communication unit 212, a parsing unit 214, and an output unit 216.
- the AI decoder 230 may include a first decoder 232 and an AI upscaler 234.
- The receiving unit 210 receives and parses the AI-encoded data obtained as a result of AI encoding, distinguishes the image data from the AI data, and outputs them to the AI decoding unit 230.
- the communication unit 212 receives AI encoded data obtained as a result of AI encoding through a network.
- the AI encoded data obtained as a result of AI encoding includes image data and AI data.
- Image data and AI data may be received through a homogeneous network or a heterogeneous network.
- The parsing unit 214 parses the AI-encoded data received through the communication unit 212 to divide it into image data and AI data. For example, the header of the data obtained from the communication unit 212 may be read to distinguish whether the data is image data or AI data. In one example, the parsing unit 214 classifies the image data and the AI data through the header of the data received through the communication unit 212 and delivers them to the output unit 216, and the output unit 216 transfers the divided data to the first decoding unit 232 and the AI upscaler 234, respectively.
- At this time, it may also be confirmed that the image data included in the AI-encoded data is image data obtained through a predetermined codec (e.g., MPEG-2, H.264, MPEG-4, HEVC, VC-1, VP8, VP9, or AV1).
- In this case, the corresponding information may be transmitted to the first decoding unit 232 through the output unit 216 so that the image data can be processed with the identified codec.
- In one embodiment, the AI-encoded data parsed by the parsing unit 214 may be obtained from a data storage medium, including magnetic media such as hard disks, floppy disks, and magnetic tapes, optical recording media such as CD-ROMs and DVDs, and magneto-optical media such as floptical disks.
- the first decoder 232 restores the second image 135 corresponding to the first image 115 based on the image data.
- the second image 135 obtained by the first decoder 232 is provided to the AI upscaler 234.
- first decoding related information such as prediction mode information, motion information, and quantization parameter information included in the image data may be further provided to the AI upscaler 234.
- The AI upscaler 234, upon receiving the AI data, AI-upscales the second image 135 based on the AI data.
- AI up-scaling may be further performed using first decoding related information such as prediction mode information and quantization parameter information included in the image data.
- In FIG. 2, the receiving unit 210 and the AI decoding unit 230 are described as separate devices, but they may be implemented through a single processor. In this case, they may be implemented as a dedicated processor, or through a combination of a general-purpose processor, such as an AP, CPU, or GPU, and S/W. In the case of a dedicated processor, it may include a memory for implementing an embodiment of the present disclosure, or a memory processor for using an external memory.
- Also, the receiving unit 210 and the AI decoding unit 230 may be composed of a plurality of processors. In this case, they may be implemented as a combination of dedicated processors, or through a combination of a number of general-purpose processors, such as APs, CPUs, and GPUs, and S/W. Similarly, the AI upscaler 234 and the first decoder 232 may each be implemented with different processors.
- the AI data provided to the AI upscale unit 234 includes information that enables AI upscale of the second image 135.
- At this time, the upscale target should correspond to the downscale target of the first DNN. Therefore, the AI data should include information that can identify the downscale target of the first DNN.
- information included in the AI data includes difference information between the resolution of the original image 105 and the resolution of the first image 115, and information related to the first image 115.
- the difference information may be expressed as information (for example, resolution conversion rate information) of a resolution conversion degree of the first image 115 compared to the original image 105.
- Alternatively, the difference information may be expressed only as the resolution information of the original image 105.
- Here, the resolution information may be expressed as a horizontal/vertical screen size, or as a ratio (16:9, 4:3, etc.) together with the size of one axis.
- it may be expressed in the form of an index or a flag.
- The information related to the first image 115 may include information on at least one of the bitrate of the image data obtained as a result of the first encoding of the first image 115 and the codec type used in the first encoding of the first image 115.
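To make these fields concrete, the sketch below gathers the AI data items named above into one hypothetical record; the actual serialized form is defined by the disclosed data structures (FIGS. 18A to 21), not by this class:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AIData:
    resolution_conversion_rate: Optional[float] = None     # difference information
    original_resolution: Optional[Tuple[int, int]] = None  # e.g., (4096, 2160)
    first_image_bitrate_mbps: Optional[float] = None       # bitrate of the image data
    first_encoding_codec: Optional[str] = None             # e.g., "HEVC", "H.264", "AV1"

ai_data = AIData(resolution_conversion_rate=2.0,
                 first_image_bitrate_mbps=15.0,
                 first_encoding_codec="HEVC")
```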
- the AI upscaler 234 may determine an upscale target of the second image 135 based on at least one of difference information included in AI data and information related to the first image 115.
- the upscale target may indicate, for example, how much resolution the second image 135 should be upscaled to.
- The AI upscaler 234 AI-upscales the second image 135 through the second DNN to obtain the third image 145 corresponding to the upscale target.
- FIG. 3 is an exemplary view showing a second DNN 300 for AI upscale of the second image 135, and FIG. 4 illustrates a convolution operation in the first convolution layer 310 shown in FIG. 3.
- the second image 135 is input to the first convolution layer 310.
- 3X3X4 displayed on the first convolution layer 310 illustrated in FIG. 3 illustrates that convolution processing is performed on one input image using four filter kernels having a size of 3 ⁇ 3.
- four feature maps are generated by four filter kernels.
- Each feature map represents unique characteristics of the second image 135.
- each feature map may indicate vertical characteristics, horizontal characteristics, or edge characteristics of the second image 135.
- Through the convolution operation between the second image 135 and the filter kernel 430 shown in FIG. 4, the feature map 450 can be generated. Since four filter kernels are used in the first convolution layer 310, four feature maps may be generated through the convolution operation process using the four filter kernels.
- I1 to I49 displayed on the second image 135 represent pixels of the second image 135, and F1 to F9 displayed on the filter kernel 430 represent parameters of the filter kernel 430.
- M1 to M9 displayed on the feature map 450 represent samples of the feature map 450.
- In FIG. 4, the second image 135 includes 49 pixels, but this is only one example; when the second image 135 has a resolution of 4K, for example, it may include 3840 x 2160 pixels.
- In the convolution operation process, a multiplication operation is performed between each of the pixel values of I1, I2, I3, I8, I9, I10, I15, I16, and I17 of the second image 135 and each of F1, F2, F3, F4, F5, F6, F7, F8, and F9 of the filter kernel 430, and a value obtained by combining (for example, adding) the result values of the multiplication operations may be assigned as the value of M1 of the feature map 450.
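For illustration, a minimal NumPy sketch of this multiply-and-add. The pixel values and parameters are placeholders, and the stride is an assumption (the excerpt does not state it):

```python
import numpy as np

image = np.arange(1.0, 50.0).reshape(7, 7)   # I1..I49 (placeholder values)
kernel = np.full((3, 3), 1.0 / 9.0)          # F1..F9 (placeholder values)

# M1: multiply I1, I2, I3, I8, I9, I10, I15, I16, I17 by F1..F9, then add.
m1 = np.sum(image[0:3, 0:3] * kernel)

# Sliding the kernel over the image yields one feature map. Assuming stride 2
# (an assumption), the 7 x 7 input gives the nine samples M1..M9; four filter
# kernels (3X3X4) would give four such feature maps.
feature_map = np.array([[np.sum(image[i:i + 3, j:j + 3] * kernel)
                         for j in range(0, 5, 2)]
                        for i in range(0, 5, 2)])
print(m1, feature_map.shape)   # -> value of M1, (3, 3)
```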
- The parameters of the second DNN, for example the parameters of the filter kernels used in the convolution layers of the second DNN, such as the values of F1, F2, F3, F4, F5, F6, F7, F8, and F9 of the filter kernel 430, may be optimized through the joint training of the first DNN and the second DNN.
- As described above, the AI upscale unit 234 determines an upscale target corresponding to the downscale target of the first DNN based on the AI data, and determines the parameters corresponding to the determined upscale target as the parameters of the filter kernels used in the convolution layers of the second DNN.
- The convolution layers included in the first DNN and the second DNN may be processed according to the convolution operation process described with reference to FIG. 4, but the convolution operation process described in FIG. 4 is only an example, and the present disclosure is not limited thereto.
- feature maps output from the first convolution layer 310 are input to the first activation layer 320.
- the first activation layer 320 may impart a non-linear characteristic to each feature map.
- the first activation layer 320 may include a sigmoid function, a tanh function, and a rectified linear unit (ReLU) function, but is not limited thereto.
- the imparting of the nonlinear characteristic in the first activation layer 320 means changing and outputting some sample values of the feature map, which is the output of the first convolution layer 310. At this time, the change is performed by applying a nonlinear characteristic.
- the first activation layer 320 determines whether to transfer sample values of feature maps output from the first convolution layer 310 to the second convolution layer 330. For example, some of the sample values of the feature maps are activated by the first activation layer 320 and transferred to the second convolutional layer 330, and some sample values by the first activation layer 320 It is inactive and is not delivered to the second convolution layer 330. The unique characteristics of the second image 135 represented by the feature maps are emphasized by the first activation layer 320.
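For illustration, a minimal sketch of this pass/block behavior using ReLU as the nonlinearity; the sample values are placeholders:

```python
import numpy as np

# ReLU: negative samples are deactivated (zeroed) and effectively not delivered
# to the second convolution layer; positive samples pass through unchanged.
feature_map = np.array([[-1.2, 0.4],
                        [ 2.3, -0.7]])
activated = np.maximum(feature_map, 0.0)
print(activated)   # [[0.  0.4]
                   #  [2.3 0. ]]
```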
- the feature maps 325 output from the first activation layer 320 are input to the second convolution layer 330.
- One of the feature maps 325 illustrated in FIG. 3 is a result of the feature map 450 described in connection with FIG. 4 being processed in the first activation layer 320.
- 3X3X4 displayed on the second convolution layer 330 illustrates convolution processing on the input feature maps 325 using four filter kernels having a size of 3 ⁇ 3.
- the output of the second convolution layer 330 is input to the second activation layer 340.
- the second activation layer 340 may impart nonlinear characteristics to input data.
- the feature maps 345 output from the second activation layer 340 are input to the third convolution layer 350.
- 3X3X1 displayed on the third convolution layer 350 illustrated in FIG. 3 illustrates that convolution processing is performed to create one output image using one filter kernel having a size of 3 ⁇ 3.
- the third convolution layer 350 is a layer for outputting the final image, and generates one output using one filter kernel. According to an example of the present disclosure, the third convolution layer 350 may output the third image 145 through the convolution operation result.
- There may be a plurality of pieces of DNN setting information indicating the number of filter kernels and the filter-kernel parameters of the first convolution layer 310, the second convolution layer 330, and the third convolution layer 350 of the second DNN 300, and the plurality of DNN setting information should be associated with the plurality of DNN setting information of the first DNN.
- the linkage between the plurality of DNN configuration information of the second DNN and the plurality of DNN configuration information of the first DNN may be implemented through linkage learning of the first DNN and the second DNN.
- On the other hand, FIG. 3 shows that the second DNN 300 includes three convolution layers 310, 330, and 350 and two activation layers 320 and 340, but this is only an example; depending on the implementation, the number of convolution layers and activation layers can be variously changed. Also, depending on the implementation, the second DNN 300 may be implemented through a recurrent neural network (RNN), in which case the CNN structure of the second DNN 300 according to the example of the present disclosure is changed to an RNN structure.
- the AI upscale unit 234 may include at least one Arithmetic Logic Unit (ALU) for the above-described convolution operation and operation of the activation layer.
- ALU can be implemented as a processor.
- For the convolution operation, the ALU may include a multiplier that performs a multiplication operation between the sample values of the second image 135, or of the feature map output from the previous layer, and the sample values of the filter kernel, and an adder that adds the result values of the multiplications.
- For the operation of the activation layer, the ALU may include a multiplier that multiplies an input sample value by a weight used in a predetermined sigmoid, tanh, or ReLU function, and a comparator that compares the multiplied result with a predetermined value to determine whether to deliver the input sample value to the next layer.
- the AI upscaler 234 may store a plurality of DNN setting information that can be set in the second DNN.
- the DNN configuration information may include information on at least one of the number of convolution layers included in the second DNN, the number of filter kernels per convolution layer, and parameters of each filter kernel.
- the plurality of DNN configuration information may respectively correspond to various upscale targets, and the second DNN may operate based on DNN configuration information corresponding to a specific upscale target.
- the second DNN may have different structures according to the DNN configuration information.
- the second DNN may include three convolutional layers according to some DNN configuration information, and the second DNN may include four convolutional layers according to other DNN configuration information.
- DNN configuration information may include only the parameters of the filter kernel used in the second DNN.
- the structure of the second DNN is not changed, but only the parameters of the internal filter kernel can be changed according to the DNN configuration information.
- the AI upscaler 234 may acquire DNN configuration information for AI upscale of the second image 135 among the plurality of DNN configuration information.
- Each of the plurality of DNN setting information used herein is information for obtaining a third image 145 of a predetermined resolution and / or a predetermined image quality, and is trained in connection with the first DNN.
- Each of the plurality of DNN setting information is created in association with DNN setting information of the first DNN of the AI encoding apparatus 600, and the AI upscaler 234 acquires one DNN setting information among the plurality of DNN setting information according to the enlargement ratio corresponding to the reduction ratio of the DNN setting information of the first DNN. To this end, the AI upscaler 234 must check the information of the first DNN. In order for the AI upscaler 234 to check the information of the first DNN, the AI decoding apparatus 200 according to an embodiment receives AI data including the information of the first DNN from the AI encoding apparatus 600.
- In other words, the AI upscaler 234 can check the information targeted by the DNN setting information of the first DNN used to obtain the first image 115, using the information received from the AI encoding apparatus 600, and can obtain the DNN setting information of the second DNN trained jointly with it.
- input data may be processed based on the second DNN that operates according to the obtained DNN configuration information.
- Specifically, for each of the first convolution layer 310, the second convolution layer 330, and the third convolution layer 350 of the second DNN 300 shown in FIG. 3, the number of filter kernels included in the layer and the parameters of the filter kernels are set to the values included in the obtained DNN setting information.
- Specifically, the parameters of the 3 x 3 filter kernel used in any one convolution layer of the second DNN shown in FIG. 4 may be set to {1, 1, 1, 1, 1, 1, 1, 1, 1}, and if the DNN setting information is later changed, they can be replaced with the parameters {2, 2, 2, 2, 2, 2, 2, 2, 2} included in the changed DNN setting information.
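For illustration, a minimal sketch of such a setting change, which swaps only the filter-kernel parameters while the layer structure stays fixed; the setting record is hypothetical:

```python
import numpy as np

kernel = np.ones((3, 3))                           # {1, 1, ..., 1}
changed_setting = {"kernel": np.full((3, 3), 2.0)} # hypothetical changed DNN setting
kernel = changed_setting["kernel"]                 # {2, 2, ..., 2}
print(kernel)
```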
- The AI upscaler 234 may acquire the DNN setting information for upscaling the second image 135 among the plurality of DNN setting information based on information included in the AI data; the AI data used to obtain the DNN setting information will now be described in detail.
- the AI upscaler 234 may obtain DNN setting information for upscaling the second image 135 among the plurality of DNN setting information based on the difference information included in the AI data. For example, based on the difference information, the resolution of the original image 105 (eg, 4K (4096 * 2160)) is greater than the resolution of the first image 115 (eg, 2K (2048 * 1080)). If it is confirmed that it is twice as large, the AI upscaler 234 may acquire DNN setting information capable of doubling the resolution of the second image 135.
- As another example, the AI upscaler 234 may obtain the DNN setting information for AI-upscaling the second image 135 among the plurality of DNN setting information, based on the information related to the first image 115 included in the AI data.
- the AI upscaler 234 may determine a mapping relationship between image related information and DNN setting information in advance, and obtain DNN setting information mapped to the first image 115 related information.
- FIG. 5 is an exemplary diagram illustrating a mapping relationship between various image related information and various DNN configuration information.
- the AI encoding / AI decoding process does not only consider a change in resolution.
- The selection of DNN setting information can be made considering, individually or all together, resolutions such as SD, HD, and Full HD, bitrates such as 10 Mbps, 15 Mbps, and 20 Mbps, and codec information such as AV1, H.264, and HEVC.
- training considering each element in the AI training process should be performed in connection with the encoding and decoding processes (see FIG. 9).
- Therefore, DNN setting information for AI upscale of the second image 135 may be obtained based on the information related to the first image 115 received in the AI decoding process.
- the AI upscaler 234 matches the image related information shown on the left side of the table shown in FIG. 5 with the DNN setting information on the right side of the table, so that DNN setting information according to the image related information can be used.
- That is, if it is confirmed from the information related to the first image 115 that the resolution of the first image 115 is SD and the bitrate of the image data obtained as a result of the first encoding of the first image 115 is 10 Mbps, the AI upscaler 234 may use 'A' DNN setting information among the plurality of DNN setting information.
- Also, if it is confirmed from the information related to the first image 115 that the resolution of the first image 115 is HD, the bitrate of the image data obtained as a result of the first encoding is 15 Mbps, and the first image 115 was first-encoded with the H.264 codec, the AI upscaler 234 may use 'B' DNN setting information among the plurality of DNN setting information.
- Also, if it is confirmed from the information related to the first image 115 that the resolution of the first image 115 is Full HD, the bitrate of the image data obtained as a result of the first encoding of the first image 115 is 20 Mbps, and the first image 115 was first-encoded with the HEVC codec, the AI upscaler 234 may use 'C' DNN setting information among the plurality of DNN setting information; if the resolution of the first image 115 is Full HD, the bitrate of the image data is 15 Mbps, and the first image 115 was first-encoded with the HEVC codec, the AI upscaler 234 may use 'D' DNN setting information among the plurality of DNN setting information. One of the 'C' and 'D' DNN setting information is selected according to whether the bitrate of the image data obtained as a result of the first encoding of the first image 115 is 20 Mbps or 15 Mbps.
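For illustration, a minimal sketch of this FIG. 5 style matching: image-related information on the left (resolution, bitrate in Mbps, codec) is mapped to DNN setting information on the right. Here 'A' to 'D' stand in for full DNN setting information, and the codec of the 'A' row is left unspecified because the excerpt does not state it:

```python
DNN_SETTING_TABLE = {
    ("SD",      10, None):    "A",   # codec for this row not stated in the excerpt
    ("HD",      15, "H.264"): "B",
    ("Full HD", 20, "HEVC"):  "C",
    ("Full HD", 15, "HEVC"):  "D",
}

def select_dnn_setting(resolution, bitrate_mbps, codec):
    """Match the first-image-related information to DNN setting information."""
    return DNN_SETTING_TABLE.get((resolution, bitrate_mbps, codec))

print(select_dnn_setting("Full HD", 15, "HEVC"))  # -> 'D'
```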
- That the bitrates of the image data differ from each other means that the qualities of the restored images differ from each other.
- The first DNN and the second DNN may be jointly trained based on a predetermined image quality, and accordingly the AI upscale unit 234 may obtain DNN setting information according to the bitrate of the image data, which represents the image quality of the second image 135.
- As another example, the AI upscaler 234 may obtain the DNN setting information for AI-upscaling the second image 135 among the plurality of DNN setting information by considering together the information provided from the first decoder 232 (prediction mode information, motion information, quantization parameter information, etc.) and the information related to the first image 115 included in the AI data.
- For example, the AI upscaler 234 may receive, from the first decoder 232, the quantization parameter information used in the first encoding process of the first image 115, check, from the AI data, the bitrate of the image data obtained as a result of encoding the first image 115, and obtain DNN setting information corresponding to the quantization parameter and the bitrate.
- The bitrate is a value representing the first-encoded first image 115 as a whole, and the image quality of each frame may differ within the first image 115. Accordingly, by considering the prediction mode information, motion information, and/or quantization parameters that can be obtained for each frame from the first decoder 232, DNN setting information more suitable for the second image 135 can be obtained than when using only the AI data.
- In one embodiment, the AI data may include identifiers of mutually agreed DNN setting information.
- The identifier of the DNN setting information is information for distinguishing a pair of DNN setting information trained jointly between the first DNN and the second DNN, so that the second image 135 can be AI-upscaled to the upscale target corresponding to the downscale target of the first DNN.
- The AI upscaler 234 may AI-upscale the second image 135 using the DNN setting information corresponding to the identifier of the DNN setting information.
- an identifier indicating each of a plurality of DNN setting information settable in the first DNN and an identifier indicating each of a plurality of DNN setting information settable in the second DNN may be previously specified.
- the same identifier may be specified for a pair of DNN configuration information that can be set for each of the first DNN and the second DNN.
- the AI data may include an identifier of DNN setting information set in the first DNN for AI downscale of the original image 105.
- the AI upscaler 234 receiving the AI data may AI upscale the second image 135 using the DNN setting information indicated by the identifier included in the AI data among the plurality of DNN setting information.
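For illustration, a minimal sketch of this identifier-based pairing; all names and contents are placeholders:

```python
# The same identifier designates a jointly trained pair of DNN settings, one
# settable in the first (downscale) DNN and one in the second (upscale) DNN.
FIRST_DNN_SETTINGS  = {0: "downscale-setting-0", 1: "downscale-setting-1"}
SECOND_DNN_SETTINGS = {0: "upscale-setting-0",   1: "upscale-setting-1"}

dnn_setting_id = 1                              # identifier carried in the AI data
setting = SECOND_DNN_SETTINGS[dnn_setting_id]   # paired with the encoder's choice
print(setting)
```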
- AI data may include DNN configuration information.
- the AI upscaler 234 may acquire the DNN configuration information included in the AI data and then AI upscale the second image 135 using the DNN configuration information.
- As another example, the AI upscaler 234 may obtain DNN setting information by combining some of the values in a lookup table based on the information included in the AI data, and AI-upscale the second image 135 using the obtained DNN setting information.
- As another example, when the structure of the DNN corresponding to the upscale target is determined, the AI upscaler 234 may acquire DNN setting information corresponding to the determined DNN structure, for example the parameters of the filter kernels.
- As described above, the AI upscaler 234 obtains the DNN setting information of the second DNN through AI data including information related to the first DNN, and AI-upscales the second image 135 through the second DNN set with the obtained DNN setting information; compared with upscaling by directly analyzing the features of the second image 135, this can reduce memory usage and computation.
- In one embodiment, when the second image 135 is composed of a plurality of frames, the AI upscaler 234 may independently acquire DNN setting information for each predetermined number of frames, or may acquire common DNN setting information for all frames.
- FIG. 6 is a view showing a second image 135 composed of a plurality of frames.
- the second image 135 may be formed of frames corresponding to t0 to tn.
- the AI upscaler 234 may acquire DNN configuration information of the second DNN through AI data, and AI upscale the frames corresponding to t0 to tn based on the obtained DNN configuration information. That is, the frames corresponding to t0 to tn may be AI upscaled based on common DNN setting information.
- In another embodiment, the AI upscaler 234 may AI-upscale some of the frames corresponding to t0 to tn, for example the frames corresponding to t0 to ta, with 'A' DNN setting information obtained from the AI data, and AI-upscale the frames corresponding to ta+1 to tb with 'B' DNN setting information obtained from the AI data. Also, the AI upscaler 234 may AI-upscale the frames corresponding to tb+1 to tn with 'C' DNN setting information obtained from the AI data.
- In this way, the AI upscaler 234 may independently acquire DNN setting information for each group including a predetermined number of frames among the plurality of frames, and AI-upscale the frames included in each group with the independently acquired DNN setting information.
- In another embodiment, the AI upscaler 234 may independently acquire DNN setting information for each frame constituting the second image 135. That is, when the second image 135 is composed of three frames, the AI upscaler 234 may AI-upscale the first frame with DNN setting information obtained in relation to the first frame, AI-upscale the second frame with DNN setting information obtained in relation to the second frame, and AI-upscale the third frame with DNN setting information obtained in relation to the third frame. Depending on the method by which DNN setting information is obtained based on the information provided from the first decoding unit 232 (prediction mode information, motion information, quantization parameter information, etc.) and the information related to the first image 115 included in the AI data, DNN setting information may be obtained independently for each frame constituting the second image 135. This is because prediction mode information, quantization parameter information, and the like can be determined independently for each frame constituting the second image 135.
- In another embodiment, the AI data may include information indicating up to which frame the DNN setting information obtained based on the AI data is valid. For example, when the AI data includes information indicating that the DNN setting information is valid up to the ta frame, the AI upscaler 234 AI-upscales the t0 to ta frames with the DNN setting information obtained based on the AI data. Then, when other AI data includes information indicating that its DNN setting information is valid up to the tn frame, the AI upscaler 234 may AI-upscale the ta+1 to tn frames with the DNN setting information obtained based on the other AI data.
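For illustration, a minimal sketch of applying DNN setting information per frame range as in the examples above; the frame indices and setting labels are placeholders:

```python
# Frames t0..ta use setting 'A', ta+1..tb use 'B', and tb+1..tn use 'C'.
ranges = [(0, 3, "A"), (4, 7, "B"), (8, 10, "C")]   # (first, last, setting)

def setting_for_frame(t):
    for first, last, setting in ranges:
        if first <= t <= last:
            return setting
    return None

assert setting_for_frame(5) == "B"
assert setting_for_frame(9) == "C"
```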
- FIG. 7 is a block diagram showing the configuration of an AI encoding apparatus 600 according to an embodiment.
- the AI encoding apparatus 600 may include an AI encoding unit 610 and a transmission unit 630.
- the AI encoder 610 may include an AI downscaler 612 and a first encoder 614.
- the transmission unit 630 may include a data processing unit 632 and a communication unit 634.
- In FIG. 7, the AI encoding unit 610 and the transmission unit 630 are illustrated as separate devices, but they may be implemented through a single processor. In this case, they may be implemented as a dedicated processor, or through a combination of a general-purpose processor, such as an AP, CPU, or GPU, and S/W. In the case of a dedicated processor, it may include a memory for implementing an embodiment of the present disclosure, or a memory processor for using an external memory.
- Also, the AI encoding unit 610 and the transmission unit 630 may be composed of a plurality of processors. In this case, they may be implemented as a combination of dedicated processors, or through a combination of a number of general-purpose processors, such as APs, CPUs, and GPUs, and S/W. The AI downscaler 612 and the first encoder 614 may also be implemented with different processors.
- the AI encoder 610 performs AI downscale of the original image 105 and first encoding of the first image 115, and transmits AI data and image data to the transmitter 630.
- the transmission unit 630 transmits AI data and image data to the AI decoding apparatus 200.
- the image data includes data obtained as a result of the first encoding of the first image 115.
- the image data may include data obtained based on pixel values in the first image 115, for example, residual data that is a difference between prediction data of the first image 115 and the first image 115.
- the image data includes information used in the first encoding process of the first image 115.
- For example, the image data may include prediction mode information used to first-encode the first image 115, motion information, and quantization-parameter-related information used to first-encode the first image 115.
- the AI data includes information that enables the AI upscaler 234 to AI upscale the second image 135 to an upscale target corresponding to the downscale target of the first DNN.
- the AI data may include difference information between the original image 105 and the first image 115.
- the AI data may include information related to the first image 115.
- The information related to the first image 115 may include information on at least one of the resolution of the first image 115, the bitrate of the image data obtained as a result of the first encoding of the first image 115, and the codec type used in the first encoding of the first image 115.
- In one embodiment, the AI data may include identifiers of mutually agreed DNN setting information so that the second image 135 can be AI-upscaled to the upscale target corresponding to the downscale target of the first DNN.
- the AI data may include DNN setting information that can be set in the second DNN.
- the AI downscaler 612 may acquire the AI downscaled first image 115 from the original image 105 through the first DNN.
- the AI downscaler 612 may determine a downscale target of the original image 105 based on a predetermined criterion.
- the AI downscaler 612 may store a plurality of DNN setting information that can be set in the first DNN.
- The AI downscaler 612 acquires DNN setting information corresponding to a downscale target among the plurality of DNN setting information, and AI-downscales the original image 105 through the first DNN set with the obtained DNN setting information.
- Each of the plurality of DNN configuration information may be trained to obtain a first image 115 having a predetermined resolution and / or a predetermined image quality.
- For example, any one DNN setting information among the plurality of DNN setting information may be information for obtaining a first image 115 having a resolution 1/2 times that of the original image 105, for example a first image 115 of 2K (2048 x 1080) from an original image 105 of 4K (4096 x 2160).
- the AI downscaler 612 may obtain DNN setting information by combining a part of the lookup table values according to the downscale target, and AI downscale the original image 105 using the obtained DNN setting information.
- As another example, the AI downscaler 612 may determine the structure of the DNN corresponding to the downscale target, and obtain DNN setting information corresponding to the determined DNN structure, for example the parameters of the filter kernels.
- The plurality of DNN setting information for AI downscale of the original image 105 may have optimized values obtained by jointly training the first DNN and the second DNN.
- each DNN configuration information includes at least one of the number of convolutional layers included in the first DNN, the number of filter kernels per convolutional layer, and the parameters of each filter kernel.
- The AI downscaler 612 sets the first DNN with the DNN setting information determined for AI downscale of the original image 105, and the first image 115 of a predetermined resolution and/or a predetermined quality can be obtained through the first DNN.
- Each layer in the first DNN can process input data based on the information included in the DNN setting information.
- The downscale target may indicate, for example, by how much the resolution should be reduced from the original image 105 to obtain the first image 115.
- The AI downscale unit 612 may determine the downscale target based on at least one of a compression rate (e.g., the resolution difference between the original image 105 and the first image 115, or a target bitrate), a compression quality (e.g., a bitrate type), compression history information, and the type of the original image 105.
- the AI downscaler 612 may determine the downscale target based on a compression rate or a compression quality that is preset or received from a user.
- the AI downscaler 612 may determine the downscale target using compression history information stored in the AI encoding apparatus 600. For example, according to the compression history information available to the AI encoding apparatus 600, the encoding quality or compression rate preferred by the user may be determined, and the downscale target may be determined according to the encoding quality determined based on the compression history information. For example, the resolution and image quality of the first image 115 may be determined according to the encoding quality that has been used most frequently according to the compression history information.
- the AI downscaler 612 may also determine the downscale target based on an encoding quality that has been used more often than a predetermined threshold according to the compression history information (e.g., the average of the encoding qualities that have been used more often than the predetermined threshold).
- the AI downscaler 612 may determine the downscale target based on the resolution and type (eg, file format) of the original image 105.
- the AI downscaler 612 may independently determine a downscale target for every predetermined number of frames, or may determine a common downscale target for all frames.
- the AI downscaler 612 may divide frames constituting the original image 105 into a predetermined number of groups, and independently determine a downscale target for each group. The same or different downscale targets for each group can be determined. The number of frames included in the groups may be the same or different for each group.
- the AI downscaler 612 may independently determine the downscale target for each frame constituting the original image 105. The same or different downscale targets for each frame can be determined.
- FIG. 8 is an exemplary diagram illustrating a first DNN 700 for AI downscale of the original image 105.
- the original image 105 is input to the first convolution layer 710.
- the first convolution layer 710 performs convolution processing on the original image 105 using 32 filter kernels having a size of 5 x 5.
- the 32 feature maps generated as a result of the convolution process are input to the first activation layer 720.
- the first activation layer 720 may impart non-linear characteristics to 32 feature maps.
- the first activation layer 720 determines whether to transfer the sample values of the feature maps output from the first convolution layer 710 to the second convolution layer 730. For example, some of the sample values of the feature maps are activated by the first activation layer 720 and transferred to the second convolution layer 730, and some sample values are deactivated by the first activation layer 720 and not transferred to the second convolution layer 730. The information represented by the feature maps output from the first convolution layer 710 is emphasized by the first activation layer 720.
- the output 725 of the first activation layer 720 is input to the second convolution layer 730.
- the second convolution layer 730 performs convolution processing on the input data using 32 filter kernels having a size of 5 x 5.
- the 32 feature maps output as a result of the convolution process are input to the second activation layer 740, and the second activation layer 740 can impart nonlinear characteristics to the 32 feature maps.
- the output 745 of the second activation layer 740 is input to the third convolution layer 750.
- the third convolution layer 750 performs convolution processing on the input data using one filter kernel having a size of 5 x 5. As a result of the convolution process, one image may be output from the third convolution layer 750.
- the third convolution layer 750 is a layer for outputting the final image and obtains one output using one filter kernel. According to an example of the present disclosure, the third convolution layer 750 may output the first image 115 as a result of the convolution operation.
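- The layer arrangement of FIG. 8 can be sketched as follows in PyTorch. The layer counts and 5 x 5 kernel sizes come from the description above; the single-channel input, the stride-2 first convolution used to reduce the resolution, and the LeakyReLU activations are assumptions, since FIG. 8 does not specify them:

```python
import torch.nn as nn

class FirstDNN(nn.Module):
    """Sketch of the first DNN 700 of FIG. 8:
    Conv(5x5, 32) -> activation -> Conv(5x5, 32) -> activation -> Conv(5x5, 1)."""
    def __init__(self):
        super().__init__()
        # stride=2 in the first layer is an assumption about where the
        # resolution reduction happens; FIG. 8 does not specify it.
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2)
        self.act1 = nn.LeakyReLU()   # activation type assumed
        self.conv2 = nn.Conv2d(32, 32, kernel_size=5, padding=2)
        self.act2 = nn.LeakyReLU()
        self.conv3 = nn.Conv2d(32, 1, kernel_size=5, padding=2)

    def forward(self, x):
        x = self.act1(self.conv1(x))   # 32 feature maps through the first activation layer
        x = self.act2(self.conv2(x))   # 32 feature maps through the second activation layer
        return self.conv3(x)           # one output image: the first image 115
```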
- There may be a plurality of pieces of DNN setting information indicating the number of filter kernels of the first convolution layer 710, the second convolution layer 730, and the third convolution layer 750 of the first DNN 700, the parameters of the filter kernels, and the like.
- the plurality of DNN configuration information should be associated with the plurality of DNN configuration information of the second DNN.
- the linkage between the plurality of DNN configuration information of the first DNN and the plurality of DNN configuration information of the second DNN may be implemented through linkage learning of the first DNN and the second DNN.
- the first DNN 700 includes three convolution layers 710, 730, and 750 and two activation layers 720 and 740, but this is only an example, and depending on the implementation, the number of convolution layers and activation layers can be variously changed. Also, depending on the implementation, the first DNN 700 may be implemented as an RNN (recurrent neural network); in this case, the CNN structure of the first DNN 700 according to the example of the present disclosure is changed to an RNN structure.
- the AI downscaler 612 may include at least one ALU for convolution and activation layer computation.
- ALU can be implemented as a processor.
- the ALU may include a multiplier that performs a multiplication operation between the sample values of the original image 105 or of the feature map output from the previous layer and the sample values of the filter kernel, and an adder that sums the results of the multiplication.
- For the operation of the activation layer, the ALU may include a multiplier that multiplies an input sample value by a weight used in a predetermined sigmoid function, Tanh function, or ReLU function, and a comparator that compares the multiplication result with a predetermined value to determine whether to transfer the input sample value to the next layer.
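- The multiplier/adder and comparator paths described above can be sketched in plain Python as follows; this is a conceptual illustration of the two operations, not a hardware design:

```python
def conv_sample(window, kernel):
    """Multiplier/adder path: multiply feature-map samples by filter-kernel
    samples and accumulate the products."""
    return sum(w * k for w, k in zip(window, kernel))

def activation_gate(sample, weight=1.0, threshold=0.0):
    """Comparator path: scale the input sample by a weight and pass it to the
    next layer only if the scaled value exceeds the threshold (ReLU-like)."""
    value = weight * sample
    return value if value > threshold else 0.0
```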
- the first encoding unit 614, having received the first image 115 from the AI downscaler 612, first encodes the first image 115 so that the amount of information needed to transmit the first image 115 is reduced. As a result of the first encoding by the first encoding unit 614, image data corresponding to the first image 115 may be obtained.
- the data processing unit 632 processes at least one of the AI data and the image data into a predetermined form for transmission. For example, when the AI data and the image data need to be transmitted in the form of a bitstream, the data processing unit 632 processes the AI data so that it is expressed in the form of a bitstream, and transmits the AI data and the image data in the form of a single bitstream through the communication unit 634. As another example, the data processing unit 632 processes the AI data so that it is expressed in the form of a bitstream, and transmits the bitstream corresponding to the AI data and the bitstream corresponding to the image data separately through the communication unit 634. As another example, the data processing unit 632 processes the AI data so that it is expressed as a frame or a packet, and transmits the image data in bitstream form and the AI data in frame or packet form through the communication unit 634.
- the communication unit 634 transmits AI encoded data obtained as a result of AI encoding through a network.
- the AI encoded data obtained as a result of AI encoding includes image data and AI data.
- Image data and AI data may be transmitted through a homogeneous network or a heterogeneous network.
- the AI encoded data obtained as a result of processing by the data processing unit 632 may be stored in a data storage medium including a magnetic medium such as a hard disk, a floppy disk, or a magnetic tape, an optical recording medium such as a CD-ROM or a DVD, or a magneto-optical medium such as a floptical disk.
- FIG. 9 is a diagram for explaining a method of training the first DNN 700 and the second DNN 300.
- the original image 105 that is AI-encoded through the AI encoding process is restored to the third image 145 through the AI decoding process.
- In order for the third image 145 obtained as a result of AI decoding to maintain similarity with the original image 105, the AI encoding process and the AI decoding process need to be related to each other. That is, the information lost in the AI encoding process must be able to be restored in the AI decoding process, and for this, joint training of the first DNN 700 and the second DNN 300 is required.
- the quality loss information 830 is used for both training of the first DNN 700 and the second DNN 300.
- the original training image 801 is an image to which AI downscale is applied, and the first training image 802 is an image AI downscaled from the original training image 801.
- the third training image 804 is an AI upscaled image from the first training image 802.
- the original training image 801 includes a still image or a video composed of a plurality of frames.
- the original training image 801 may include a still image or a luminance image extracted from a video composed of a plurality of frames.
- the original training image 801 may include a still image or a patch image extracted from a video composed of a plurality of frames.
- the first training image 802, the second training image, and the third training image 804 are also composed of a plurality of frames.
- When a plurality of frames of the original training image 801 are sequentially input to the first DNN 700, a plurality of frames of the first training image 802, the second training image, and the third training image 804 may be sequentially obtained through the first DNN 700 and the second DNN 300.
- the original training image 801 is input to the first DNN 700.
- the original training image 801 input to the first DNN 700 is AI downscaled and output as the first training image 802, and the first training image 802 is input to the second DNN 300.
- the third training image 804 is output.
- In FIG. 9, the first training image 802 is input to the second DNN 300; according to an embodiment, a second training image obtained by first encoding and first decoding the first training image 802 may be input to the second DNN 300 instead.
- For inputting the second training image to the second DNN 300, any one codec among MPEG-2, H.264, MPEG-4, HEVC, VC-1, VP8, VP9, and AV1 may be used for the first encoding of the first training image 802 and the first decoding of the image data corresponding to the first training image 802.
- a legacy downscaled reduced training image 803 is obtained from the original training image 801.
- the legacy downscale may include at least one of bilinear scaling, bicubic scaling, Lanczos scaling, and stair-step scaling.
- the first DNN 700 and the second DNN 300 may be set with predetermined DNN setting information.
- structural loss information 810, complexity loss information 820, and quality loss information 830 may be determined.
- the structural loss information 810 may be determined based on a comparison result of the reduced training image 803 and the first training image 802.
- the structural loss information 810 may correspond to a difference between the structural information of the reduced training image 803 and the structural information of the first training image 802.
- the structural information may include various features that can be extracted from the image, such as luminance, contrast, and histogram of the image.
- the structural loss information 810 indicates to what extent structural information of the original training image 801 is maintained in the first training image 802. As the structural loss information 810 is smaller, structural information of the first training image 802 becomes similar to structural information of the original training image 801.
- the complexity loss information 820 may be determined based on the spatial complexity of the first training image 802. In one example, as spatial complexity, a total variance value of the first training image 802 may be used.
- the complexity loss information 820 is related to the bit rate of the image data obtained by first encoding the first training image 802. It is defined that the smaller the complexity loss information 820, the smaller the bit rate of the image data.
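- As an illustration, a "total variance" style spatial-complexity measure could be computed as the total variation below; the exact measure is not fixed by the disclosure, so this concrete formula is an assumption:

```python
import numpy as np

def total_variation(image: np.ndarray) -> float:
    """Sum of absolute differences between neighboring samples; larger values
    mean higher spatial complexity and, typically, a higher encoded bitrate."""
    horizontal = np.abs(np.diff(image, axis=1)).sum()
    vertical = np.abs(np.diff(image, axis=0)).sum()
    return float(horizontal + vertical)
```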
- the quality loss information 830 may be determined based on a comparison result of the original training image 801 and the third training image 804.
- Specifically, the quality loss information 830 may include at least one of an L1-norm value, an L2-norm value, an SSIM (Structural Similarity) value, a PSNR-HVS (Peak Signal-To-Noise Ratio-Human Vision System) value, an MS-SSIM (Multiscale SSIM) value, a VIF (Visual Information Fidelity) value, and a VMAF (Video Multimethod Assessment Fusion) value regarding the difference between the original training image 801 and the third training image 804.
- the structural loss information 810, the complexity loss information 820, and the quality loss information 830 are used for training of the first DNN 700, and the quality loss information 830 is used for training of the second DNN 300. That is, the quality loss information 830 is used for training of both the first DNN 700 and the second DNN 300.
- the first DNN 700 may update its parameters so that the final loss information determined based on the structural loss information 810, the complexity loss information 820, and the quality loss information 830 is reduced or minimized.
- the second DNN 300 may update its parameters so that the quality loss information 830 is reduced or minimized.
- the final loss information for training the first DNN 700 and the second DNN 300 may be determined as shown in Equation 1 below.
- LossDS = a * structural loss information + b * complexity loss information + c * quality loss information
- LossUS = d * quality loss information
- In Equation 1, LossDS represents the final loss information to be reduced or minimized for the training of the first DNN 700, and LossUS represents the final loss information to be reduced or minimized for the training of the second DNN 300.
- a, b, c, and d may correspond to predetermined weights.
- the first DNN 700 updates parameters in the direction in which LossDS of Equation 1 is reduced
- the second DNN 300 updates parameters in the direction in which LossUS is decreased.
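- As an illustration only, Equation 1 can be written as two small functions; the default weight values shown are hypothetical, not values given in the disclosure:

```python
def loss_ds(structural, complexity, quality, a=1.0, b=0.1, c=1.0):
    """LossDS of Equation 1: final loss information for training the first DNN."""
    return a * structural + b * complexity + c * quality

def loss_us(quality, d=1.0):
    """LossUS of Equation 1: final loss information for training the second DNN."""
    return d * quality
```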
- When the parameters are updated in the direction in which LossDS decreases, the first training image 802 obtained based on the updated parameters becomes different from the first training image 802 of the previous training process, and accordingly, the third training image 804 also becomes different from the third training image 804 of the previous training process. If the third training image 804 becomes different from the third training image 804 of the previous training process, the quality loss information 830 is also newly determined, and accordingly, the second DNN 300 updates its parameters.
- When the quality loss information 830 is newly determined, LossDS is also newly determined, so the first DNN 700 updates its parameters according to the newly determined LossDS. That is, a parameter update of the first DNN 700 causes a parameter update of the second DNN 300, and a parameter update of the second DNN 300 causes a parameter update of the first DNN 700.
- Accordingly, the parameters of the first DNN 700 and the parameters of the second DNN 300 can be optimized in association with each other.
- In FIG. 9, LossUS is determined according to the quality loss information 830, but this is only an example; LossUS may also be determined based on at least one of the structural loss information 810 and the complexity loss information 820, together with the quality loss information 830.
- The AI upscaler 234 of the AI decoding apparatus 200 and the AI downscaler 612 of the AI encoding apparatus 600 were described above as storing a plurality of pieces of DNN setting information; a method of training each of the plurality of pieces of DNN setting information stored in the AI upscaler 234 and the AI downscaler 612 is now described.
- As described in connection with Equation 1, the first DNN 700 updates its parameters in consideration of the degree of similarity between the structural information of the first training image 802 and the structural information of the original training image 801 (structural loss information 810), the bit rate of the image data obtained as a result of the first encoding of the first training image 802 (complexity loss information 820), and the difference between the third training image 804 and the original training image 801 (quality loss information 830).
- Specifically, the parameters of the first DNN 700 may be updated so that a first training image 802 that is similar in structure to the original training image 801 and yields a small bit rate of image data when first encoded can be obtained, and at the same time, so that the second DNN 300 upscaling the first training image 802 can obtain a third training image 804 similar to the original training image 801.
- As the values of the weights a, b, and c are adjusted, the direction in which the parameters of the first DNN 700 are optimized differs. For example, if the weight b is set high, the parameters of the first DNN 700 may be updated with more importance placed on lowering the bit rate than on the quality of the third training image 804. Also, if the weight c is set high, the parameters of the first DNN 700 may be updated with more importance placed on increasing the quality of the third training image 804 than on lowering the bit rate or on maintaining the structural information of the original training image 801.
- a direction in which parameters of the first DNN 700 are optimized may be different according to the type of codec used to first encode the first training image 802. This is because the second training image to be input to the second DNN 300 may vary according to the type of codec.
- the parameters of the first DNN 700 and the parameters of the second DNN 300 can be updated in association with each other based on the weight a, the weight b, the weight c, and the type of codec for the first encoding of the first training image 802. Accordingly, when each of the weights a, b, and c is set to a predetermined value, the codec type is determined as a predetermined type, and the first DNN 700 and the second DNN 300 are then trained in association with each other, the parameters of the first DNN 700 and the parameters of the second DNN 300 optimized in association with each other can be determined.
- And, by training the first DNN 700 and the second DNN 300 while changing the weights a, b, and c and the codec type, a plurality of pieces of DNN setting information trained in association with each other can be determined in the first DNN 700 and the second DNN 300.
- a plurality of DNN configuration information of the first DNN 700 and the second DNN 300 may be mapped to the first image related information.
- Specifically, when the first training image 802 output from the first DNN 700 is first encoded with a specific codec according to a specific bit rate, and the second training image obtained by first decoding the bitstream obtained as a result of the first encoding is input to the second DNN 300, a pair of DNN setting information mapped to the resolution of the first training image 802, the type of codec used for the first encoding of the first training image 802, and the bit rate of the bitstream obtained as a result of the first encoding of the first training image 802 can be determined. By variously changing the resolution of the first training image 802, the type of codec used for the first encoding of the first training image 802, and the bit rate of the bitstream obtained according to the first encoding of the first training image 802, the mapping relationship between the plurality of pieces of DNN setting information of the first DNN 700 and the second DNN 300 and the first image related information can be determined.
- FIG. 10 is a view for explaining a training process of the first DNN 700 and the second DNN 300 by the training apparatus 1000.
- Training of the first DNN 700 and the second DNN 300 described with reference to FIG. 9 may be performed by the training apparatus 1000.
- the training device 1000 includes a first DNN 700 and a second DNN 300.
- the training device 1000 may be, for example, an AI encoding device 600 or a separate server.
- DNN configuration information of the second DNN 300 obtained as a result of training is stored in the AI decoding apparatus 200.
- the training apparatus 1000 initially sets DNN setting information of the first DNN 700 and the second DNN 300 (S840, S845). Accordingly, the first DNN 700 and the second DNN 300 may operate according to predetermined DNN setting information.
- the DNN setting information may include information about at least one of the number of convolution layers included in the first DNN 700 and the second DNN 300, the number of filter kernels per convolution layer, the size of the filter kernel per convolution layer, and the parameters of each filter kernel.
- the training apparatus 1000 inputs the original training image 801 into the first DNN 700 (S850).
- the original training image 801 may include at least one frame constituting a still image or a video.
- the first DNN 700 processes the original training image 801 according to the initially set DNN setting information, and outputs the AI-downscaled first training image 802 from the original training image 801 (S855).
- FIG. 10 shows the first training image 802 output from the first DNN 700 being directly input to the second DNN 300, but the first training image 802 output from the first DNN 700 may be input to the second DNN 300 by the training apparatus 1000.
- the training apparatus 1000 may first encode and first decode the first training image 802 with a predetermined codec, and then input the second training image to the second DNN 300.
- the second DNN 300 processes the first training image 802 or the second training image according to the initially set DNN setting information, and outputs the third training image 804 AI upscaled from the first training image 802 or the second training image (S860).
- the training apparatus 1000 calculates complexity loss information 820 based on the first training image 802 (S865).
- the training apparatus 1000 compares the reduced training image 803 and the first training image 802 to calculate structural loss information 810 (S870).
- the training apparatus 1000 compares the original training image 801 and the third training image 804 to calculate quality loss information 830 (S875).
- the first DNN 700 updates the initially set DNN setting information through a back propagation process based on the final loss information (S880).
- the training apparatus 1000 may calculate final loss information for training of the first DNN 700 based on the complexity loss information 820, the structural loss information 810, and the quality loss information 830.
- the second DNN 300 updates the initially set DNN setting information through a back propagation process based on the quality loss information or the final loss information (S885).
- the training apparatus 1000 may calculate final loss information for training of the second DNN 300 based on the quality loss information 830.
- Thereafter, the training apparatus 1000, the first DNN 700, and the second DNN 300 repeat the processes S850 to S885, updating the DNN setting information, until the final loss information is minimized.
- During each repetition, the first DNN 700 and the second DNN 300 operate according to the DNN setting information updated in the previous process.
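- A compact sketch of the loop S850 to S885 in PyTorch follows; the concrete loss expressions and the optimizer are simplified stand-ins for the loss computations of FIG. 9, and legacy_downscale is a hypothetical helper supplied by the caller:

```python
import torch

def train_jointly(first_dnn, second_dnn, original_batches, legacy_downscale,
                  a=1.0, b=0.1, c=1.0, d=1.0, lr=1e-4):
    """Sketch of steps S850-S885 of FIG. 10 under the assumptions above."""
    opt_ds = torch.optim.Adam(first_dnn.parameters(), lr=lr)
    opt_us = torch.optim.Adam(second_dnn.parameters(), lr=lr)
    for original in original_batches:                          # S850
        first = first_dnn(original)                            # S855: AI-downscaled first training image
        third = second_dnn(first)                              # S860: AI-upscaled third training image
        reduced = legacy_downscale(original)                   # legacy-downscaled reduced training image
        structural = (first - reduced).abs().mean()            # S870: structural loss (sketch)
        complexity = ((first[..., :, 1:] - first[..., :, :-1]).abs().mean() +
                      (first[..., 1:, :] - first[..., :-1, :]).abs().mean())  # S865 (sketch)
        quality = (third - original).abs().mean()              # S875: quality loss (sketch)
        loss_ds = a * structural + b * complexity + c * quality  # Equation 1, LossDS
        loss_us = d * quality                                    # Equation 1, LossUS
        grads_ds = torch.autograd.grad(loss_ds, list(first_dnn.parameters()),
                                       retain_graph=True)      # S880: back propagation, first DNN
        grads_us = torch.autograd.grad(loss_us, list(second_dnn.parameters()))  # S885: second DNN
        for p, g in zip(first_dnn.parameters(), grads_ds):
            p.grad = g
        for p, g in zip(second_dnn.parameters(), grads_us):
            p.grad = g
        opt_ds.step()
        opt_us.step()
```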
- Table 1 shows the effects of AI encoding and AI decoding of the original image 105 and encoding and decoding of the original image 105 with HEVC according to an embodiment of the present disclosure.
- FIG. 11 is an exemplary diagram showing a device 20 for AI downscale of the original image 105 and a device 40 for AI upscale of the second image 135.
- the device 20 receives the original image 105 and provides the image data 25 and the AI data 30 to the device 40 using the AI downscaler 1124 and the transform-based encoding unit 1126.
- the image data 25 corresponds to the image data in FIG. 1
- the AI data 30 corresponds to the AI data in FIG. 1.
- the transform-based encoding unit 1126 corresponds to the first encoding unit 614 of FIG. 7, and the AI downscale unit 1124 corresponds to the AI downscale unit 612 of FIG. 7.
- the device 40 receives the AI data 30 and the image data 25 and acquires the third image 145 using the transform-based decoding unit 1146 and the AI upscaler 1144.
- the transform-based decoder 1146 corresponds to the first decoder 232 of FIG. 2
- the AI upscaler 1144 corresponds to the AI upscaler 234 of FIG. 2.
- the device 20 includes a CPU, a memory, and a computer program including instructions. The computer program is stored in the memory. In one embodiment, upon execution of the computer program by the CPU, the device 20 performs the functions described with respect to FIG. 11. In one embodiment, the functions described in connection with FIG. 11 are performed by a dedicated hardware chip and / or the CPU.
- the device 40 includes a CPU, a memory, and a computer program including instructions. The computer program is stored in the memory. In one embodiment, upon execution of the computer program by the CPU, the device 40 performs the functions described in connection with FIG. 11. In one embodiment, the functions described in connection with FIG. 11 are performed by a dedicated hardware chip and / or the CPU.
- the configuration control unit 1122 receives one or more input values 10.
- the one or more input values 10 may include at least one of a target resolution difference for the AI downscaler 1124 and the AI upscaler 1144, a bit rate of the image data 25, a bit rate type of the image data 25 (e.g., a variable bitrate type, a constant bitrate type, an average bitrate type, etc.), and a codec type for the transform-based encoding unit 1126.
- the one or more input values 10 may be stored in advance in the device 20 or include values input from a user.
- the configuration control unit 1122 controls operations of the AI downscaler 1124 and the transform-based encoding unit 1126 based on the received input value 10.
- the configuration control unit 1122 obtains DNN setting information for the AI downscaler 1124 according to the received input value 10, and sets the AI downscaler 1124 with the obtained DNN setting information.
- According to an embodiment, the configuration control unit 1122 transmits the received input value 10 to the AI downscaler 1124, and the AI downscaler 1124 may obtain the DNN setting information for AI downscaling the original image 105 based on the received input value 10.
- According to an embodiment, the configuration control unit 1122 may provide, along with the input value 10, additional information, for example, color format information to which AI downscale is applied (luminance component, chrominance component, red component, green component, blue component, etc.) and tone mapping information of a high dynamic range (HDR), to the AI downscaler 1124, and the AI downscaler 1124 may obtain the DNN setting information in consideration of the input value 10 and the additional information.
- According to an embodiment, the configuration control unit 1122 transmits at least a portion of the received input value 10 to the transform-based encoding unit 1126, so that the transform-based encoding unit 1126 first encodes the first image 115 with a bit rate of a specific value, a bit rate of a specific type, and a specific codec.
- the AI downscaler 1124 receives the original image 105 and performs the operations described with respect to at least one of FIGS. 1, 7, 8, 9, and 10 to obtain the first image 115.
- AI data 30 is provided to device 40.
- the AI data 30 may include at least one of resolution difference information between the original image 105 and the first image 115 and information related to the first image 115. Resolution difference information may be determined based on a target resolution difference of the input value 10, and information related to the first image 115 may be determined based on at least one of a target bit rate, a bit rate type, and a codec type.
- AI data 30 may include parameters used in the AI upscale process. AI data may be provided from the AI downscaler 1124 to the device 40.
- the first image 115 is processed by the transform-based encoding unit 1126 to obtain the image data 25, and the image data 25 is transmitted to the device 40.
- the transform-based encoding unit 1126 may process the first image 115 according to MPEG-2, H.264 AVC, MPEG-4, HEVC, VC-1, VP8, VP9 or AV1.
- the configuration control unit 1142 controls the operation of the AI upscaler unit 1144 based on the AI data 30.
- the configuration control unit 1142 obtains DNN setting information for the AI upscaler 1144 according to the received AI data 30, and sets the AI upscaler 1144 with the obtained DNN setting information.
- According to an embodiment, the configuration control unit 1142 transfers the received AI data 30 to the AI upscaler 1144, and the AI upscaler 1144 may obtain the DNN setting information for AI upscaling the second image 135 based on the AI data 30.
- According to an embodiment, the configuration control unit 1142 may provide, along with the AI data 30, additional information, for example, color format information to which AI upscale is applied (luminance component, chrominance component, red component, green component, blue component, etc.) and tone mapping information of a high dynamic range (HDR), to the AI upscaler 1144, and the AI upscaler 1144 may obtain the DNN setting information in consideration of the AI data 30 and the additional information.
- According to an embodiment, the AI upscaler 1144 receives the AI data 30 from the configuration control unit 1142, receives at least one of prediction mode information, motion information, and quantization parameter information from the transform-based decoder 1146, and may obtain the DNN setting information based on the AI data 30 and at least one of the prediction mode information, the motion information, and the quantization parameter information.
- the transformation-based decoder 1146 processes the image data 25 to restore the second image 135.
- the transform-based decoder 1146 may process the image data 25 according to MPEG-2, H.264 AVC, MPEG-4, HEVC, VC-1, VP8, VP9 or AV1.
- the AI upscaler 1144 AI upscales the second image 135 provided from the transform-based decoder 1146 based on the set DNN setting information to obtain a third image 145.
- the AI downscaler 1124 includes a first DNN, and the AI upscaler 1144 may include a second DNN.
- the DNN setting information for the first DNN and the second DNN is trained according to the training method described with reference to FIGS. 9 and 10.
- the upscale DNN is a DNN used to AI upscale a low-resolution image such as the second image 135 to a high-resolution image such as the third image 145 like the second DNN 300 of FIG. 3.
- the upscale DNN information represents DNN configuration information specified according to AI data, and an upscale DNN may be set based on the upscale DNN information.
- the low-resolution image represents an image having a small resolution, such as the first image 115 and the second image 135.
- the high-resolution image represents a high-resolution image such as the original image 105 and the third image 145.
- FIG. 12 illustrates the structure of the image data 1200 and the AI data 1240, and the corresponding relationship between the image data 1200 and the AI data 1240.
- Referring to FIG. 12, the video-frame group-frame hierarchy of the image data 1200 is described.
- the video 1202 of FIG. 12 is a data unit including all consecutive frames of the image data 1200.
- Parameter information of a video parameter set may be applied to all frames included in the video 1202.
- the video parameter set is included in video header 1204.
- the video 1202 may include a plurality of frame groups.
- a frame group is a data unit composed of one or more consecutive frames sharing parameter information of a frame group parameter set.
- the frame group may be a GOP (Group of Pictures) or a CVS (Coded Video Sequence).
- the frame group parameter set may be included in the frame group header.
- the frame group parameter set of the first frame group 1210 may be included in the first frame group header 1212.
- the frame group parameter set of the second frame group 1214 may be included in the second frame group header 1216.
- the frame group includes an IDR (Instantaneous Decoding Refresh) frame or an IRAP (Intra Random Access Pictures) frame encoded without referring to other frames, and the remaining frames of the frame group are encoded with reference to the IDR frame (or IRAP frame). Therefore, the first frame group 1210 is independently encoded without referring to other frame groups of the video 1202.
- the first frame 1220 which is the first coded frame of the first frame group 1210, is an IDR frame (or IRAP frame).
- the remaining frames of the first frame group 1210 including the second frame 1230 are encoded with reference to the first frame 1220.
- the frame represents one still image included in the video.
- the frame header may include a frame parameter set including parameter information applied to the frame.
- the first frame header 1222 of the first frame 1220 may include a set of frame parameters applied to the first frame 1220.
- the second frame header 1232 of the second frame 1230 may include a set of frame parameters applied to the second frame 1230.
- the AI data 1240 may be classified into video AI data 1242, frame group AI data 1250, and frame AI data 1260 according to an application range.
- the video AI data 1242 refers to AI data commonly applied to all frame groups included in the video.
- the frame group AI data 1250 refers to AI data commonly applied to frames included in the current frame group.
- the frame AI data 1260 refers to AI data applied to the current frame.
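- For illustration, the three application ranges of AI data could be modeled as nested records, each level carrying the data applied at its scope; the field names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FrameAIData:
    """AI data applied to the current frame only."""
    dnn_setting_id: Optional[int] = None

@dataclass
class FrameGroupAIData:
    """AI data applied to every frame of the current frame group."""
    dnn_setting_id: Optional[int] = None
    frames: List[FrameAIData] = field(default_factory=list)

@dataclass
class VideoAIData:
    """AI data applied to every frame group of the video."""
    dnn_setting_id: Optional[int] = None
    frame_groups: List[FrameGroupAIData] = field(default_factory=list)
```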
- Video AI data 1242 corresponds to video header 1204. Therefore, the video AI data 1242 can be decoded in parallel with the video header 1204. Alternatively, the video AI data 1242 may be decoded immediately before decoding of the video header 1204. Alternatively, the video AI data 1242 may be decoded immediately after decoding the video header 1204.
- the frame group AI data 1250 corresponds to the frame group header.
- the first frame group AI data 1252 corresponds to the first frame group header 1212.
- the second frame group AI data 1254 corresponds to the second frame group header 1216.
- the first frame group AI data 1252 and the second frame group AI data 1254 may be decoded in parallel with the first frame group header 1212 and the second frame group header 1216, respectively.
- the first frame group AI data 1252 and the second frame group AI data 1254 may be decoded immediately before decoding of the first frame group header 1212 and the second frame group header 1216, respectively.
- the first frame group AI data 1252 and the second frame group AI data 1254 may be decoded immediately after decoding of the first frame group header 1212 and the second frame group header 1216, respectively.
- the frame AI data 1260 corresponds to the frame header.
- the first frame AI data 1262 corresponds to the first frame header 1222.
- the second frame AI data 1264 corresponds to the second frame header 1232.
- the first frame AI data 1262 and the second frame AI data 1264 may be decoded in parallel with the first frame header 1222 and the second frame header 1232, respectively.
- the first frame AI data 1262 and the second frame AI data 1264 may be decoded immediately before decoding of the first frame header 1222 and the second frame header 1232, respectively.
- the first frame AI data 1262 and the second frame AI data 1264 may be decoded immediately after decoding of the first frame header 1222 and the second frame header 1232, respectively.
- the data processing unit 632 of FIG. 7 may generate AI encoded data in the form of a single file including both the image data 1200 and the AI data 1240.
- the communication unit 634 transmits AI encoding data in the form of a single file to the communication unit 212 of FIG. 2.
- a file means a collection of data stored in memory.
- the video file means a group of video data stored in the memory, and the video data may be implemented in the form of a bitstream.
- the AI data 1240 is not inserted into the image data 1200 but may be configured separately from the image data 1200 within a single file. Therefore, even though the AI encoded data is composed of a single file, since the AI data 1240 and the image data 1200 are separated from each other, the AI data 1240 and / or the image data 1200 can be obtained independently from the AI encoded data.
- the communication unit 212 may receive AI-encoded data.
- the parsing unit 214 may extract AI data and image data from AI encoded data.
- the output unit 216 transmits the image data to the first decoding unit 232 and the AI data to the AI upscaler unit 234.
- the first decoder 232 decodes the image data to generate a low-resolution image.
- the AI upscaler 234 obtains upscale DNN information suitable for upscaling the low-resolution image based on the AI data, and AI upscales the low-resolution image using the upscale DNN set according to the upscale DNN information.
- synchronization data for synchronization of AI data and image data may be included in the AI encoding data.
- the synchronization data may be included in the AI encoded data independently of the AI data and image data.
- the synchronization data may be included in AI data or image data.
- the parsing unit 214 may synchronize image data and AI data according to the synchronization data.
- the AI upscaler 234 may synchronize image data and AI data according to the synchronization data.
- By synchronizing the image data and the AI data in this way, appropriate upscale DNN information for AI upscaling the low-resolution image may be selected.
- the AI data 1240 may be embedded in the image data 1200.
- video AI data 1242 may be inserted into video header 1204.
- the video header 1204 may include video AI data 1242 along with a video parameter set. Therefore, the video AI data 1242 can be decoded together with the video parameter set.
- the video AI data 1242 may be inserted into a single file to be located before or after the video header 1204, independently from the video header 1204. Therefore, decoding of the video AI data 1242 may be performed immediately before or after decoding the video header 1204.
- the first frame group header 1212 may include the first frame group AI data 1252 together with the frame group parameter set. Therefore, the first frame group AI data 1252 can be decoded together with the frame group parameter set.
- the first frame group AI data 1252 may be inserted into a single file to be located before or after the first frame group header 1212, independently from the first frame group header 1212. Therefore, decoding of the first frame group AI data 1252 may be performed immediately before or immediately after decoding of the first frame group header 1212.
- the first frame header 1222 may include the first frame AI data 1262 together with the frame parameter set. Therefore, the first frame AI data 1262 can be decoded together with the frame parameter set.
- the first frame AI data 1262 may be inserted into a single file to be located before or after the first frame header 1222 independently of the first frame header 1222. Therefore, decoding of the first frame AI data 1262 may be performed immediately before or after decoding of the first frame header 1222.
- When AI data is embedded in the image data, the AI data cannot be independently decoded without decoding of the image data. Therefore, while the first decoding unit 232 decodes the image data, the AI data embedded in the image data is extracted from the image data. Then, the AI data extracted from the image data is transmitted from the first decoder 232 to the AI upscaler 234.
- a part of the AI data is inserted into the image data, and the other part of the AI data may be included in the AI encoded data independently of the image data.
- the video AI data may be included in the AI encoded data independently of the image data, and the frame group AI data and the frame AI data may be inserted in the image data.
- the first AI data existing independently of the image data may be separated from the image data in the parsing unit 214.
- the first AI data separated from the image data may be transmitted from the output unit 216 to the AI upscaler unit 234.
- the first AI data may be video AI data and / or video segment AI data.
- the second AI data inserted in the image data is extracted from the image data by the first decoder 232.
- the extracted second AI data is transmitted from the first decoder 232 to the AI upscaler 234.
- the AI upscaler 234 acquires upscale DNN information necessary for AI upscale of the low resolution image according to the first AI data and the second AI data.
- the second AI data may be frame group AI data and / or frame AI data.
- the data processing unit 632 may generate a file corresponding to the image data 1200 and a file corresponding to the AI data 1240 separately. In this case, the communication unit 634 transmits the AI encoded data to the communication unit 212 in the form of two files. The communication unit 634 may transmit the file corresponding to the image data 1200 and the file corresponding to the AI data 1240 over different communication channels. Also, the communication unit 634 may sequentially transmit the file corresponding to the image data 1200 and the file corresponding to the AI data 1240 with a time difference.
- the file corresponding to the AI data 1240 may be decoded dependently on the decoding process of the file corresponding to the image data 1200.
- When the file corresponding to the image data 1200 and the file corresponding to the AI data 1240 are separated, the file corresponding to the image data 1200 and / or the file corresponding to the AI data 1240 may include information for synchronization of the two files.
- FIG. 13B illustrates the flow of data in the AI decoding apparatus 200 when the AI data and the image data are separated into two files.
- the communication unit 212 may separately receive a file including the image data and a file including the AI data, instead of a single file containing the AI encoded data. Also, the communication unit 212 may acquire synchronization data necessary for synchronization of the image data and the AI data from the file including the image data or the file including the AI data. According to an embodiment, the synchronization data may be transmitted in a separate file. In FIG. 13B, the synchronization data is represented as data independent of the AI data, but according to an embodiment, the AI data or the image data may include the synchronization data.
- the parsing unit 214 may synchronize image data and AI data according to synchronization data.
- the output unit 216 may transmit the synchronized image data to the first decoding unit 232 and the synchronized AI data to the AI upscaler unit 234.
- Alternatively, the output unit 216 may transmit the image data to the first decoder 232, and transmit the AI data and the synchronization data to the AI upscale unit 234.
- the AI upscaler 234 AI upscales the low-resolution image output from the first decoder 232 by using the synchronization data and the upscale DNN information obtained according to the AI data.
- FIG. 14 shows an embodiment of AI encoded data in a single file, in which the AI data and the image data are separated from each other.
- AI-encoded data is included in a video file 1400 in a predetermined container format.
- the predetermined container format may be MP4, AVI, MKV, FLV, and the like.
- the video file 1400 includes a metadata box 1410 and a media data box 1430.
- the metadata box 1410 includes information about media data included in the media data box 1430.
- the metadata box 1410 may include information about the type of media data, the type of codec used to encode the media data, and the playback time of the media.
- the metadata box 1410 may include synchronization data 1415 and AI data 1420.
- the synchronization data 1415 and AI data 1420 are encoded according to an encoding method provided in a predetermined container format and stored in the metadata box 1410.
- the parsing unit 214 may extract synchronization data 1415 and AI data 1420 from the metadata box 1410. In addition, the parsing unit 214 may extract image data 1431 from the media data box 1430.
- the output unit 216 may transmit the image data 1431 to the first decoder 232 and the AI data 1420 to the AI upscaler unit 234 according to the synchronization data 1415. Alternatively, the output unit 216 transmits the synchronization data 1415 to the AI upscaler unit 234, and the AI upscaler unit 234 may synchronize the image data 1431 and the AI data 1420 according to the synchronization data 1415.
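- A minimal sketch of this parsing step follows; the boxes dictionary stands in for an already-parsed container (real MP4/MKV box parsing is omitted), and all key names are hypothetical:

```python
def parse_ai_encoded_file(boxes: dict):
    """Split AI encoded data in a FIG. 14 style container into its three parts."""
    metadata_box = boxes["meta"]            # metadata box 1410
    media_data_box = boxes["mdat"]          # media data box 1430
    sync_data = metadata_box["sync"]        # synchronization data 1415
    ai_data = metadata_box["ai"]            # AI data 1420
    image_data = media_data_box["image"]    # image data 1431 (first-encoded bitstream)
    return sync_data, ai_data, image_data
```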
- the AI data 1420 includes video AI data 1422, frame group AI data 1424, and frame AI data 1426.
- the video AI data 1422 is set to correspond to the video header 1432, the frame group AI data 1424 to the frame group header 1436, and the frame AI data 1426 to the frame header 1440.
- the frame group AI data 1424 and the frame AI data 1426 may be omitted from the AI data 1420.
- the frame AI data 1426 may be omitted from the AI data 1420.
- the synchronization data 1415 relates to synchronization of the video AI data 1422, the frame group AI data 1424, and the frame AI data 1426 with the video header 1432, the frame group header 1436, and the frame header 1440, respectively.
- the synchronization data 1415 may include reproduction order information or decoding order information of the image data 1431 of the media data box 1430.
- the AI upscaler 234 may obtain upscale DNN information necessary for AI upscale of the low resolution image from AI data determined according to reproduction order information or decoding order information of the synchronization data 1415.
- the parsing unit 214 or the AI upscaler unit 234 may determine the frame group AI data 1424 corresponding to the frame group 1434 and the frame AI data 1426 corresponding to the frame 1438 based on the synchronization data 1415. Accordingly, the AI upscaler 234 may obtain upscale DNN information for AI upscaling the low-resolution image of the frame 1438.
- the AI data 1420 of the metadata box 1410 may be decoded before the image data 1431 of the media data box 1430. Therefore, the AI upscaler 234 may acquire the upscale DNN information before decoding the image data 1431 according to the AI data 1420.
- the upscale DNN information can be applied to the entire video. Alternatively, for adaptive AI upscale in frame group units, different upscale DNN information for each frame group may be obtained. Alternatively, for adaptive AI upscale on a frame-by-frame basis, upscale DNN information may be obtained in advance differently for each frame.
- the AI upscaler 234 may decode the AI data 1420 in the metadata box 1410 according to the decoding order of the image data 1431 in the media data box 1430. Decoding of the video AI data 1422 may be performed immediately before or after decoding of the video header 1432. Alternatively, the video AI data 1422 may be decoded in parallel with the video header 1432. In order to decode the video AI data 1422 according to the decoding order of the video header 1432, synchronization data 1415 may be referenced.
- the AI upscaler 234 may perform decoding of the frame group AI data 1424 immediately before or after decoding the frame group header 1436 according to the first decoder 232. Alternatively, the AI upscaler 234 may decode the frame group AI data 1424 in parallel with the decoding of the frame group header 1436 according to the first decoder 232. In order to decode the frame group AI data 1424 according to the decoding order of the frame group header 1436, synchronization data 1415 may be referenced.
- the AI upscaler 234 may perform decoding of the frame AI data 1426 immediately before or after decoding the frame header 1440 according to the first decoder 232.
- the AI upscaler 234 may decode the frame AI data 1426 in parallel with the decoding of the frame header 1440 according to the first decoder 232.
- In order to decode the frame AI data 1426 according to the decoding order of the frame header 1440, the synchronization data 1415 may be referenced.
- the video file 1400 includes one metadata box 1410 and one media data box 1430.
- the video file 1400 may include two or more metadata boxes and two or more media data boxes. Accordingly, two or more image data segments in which the image data is divided into predetermined time units may be stored in two or more media data boxes.
- information on image data segments stored in the two or more media data boxes may be included in the two or more metadata boxes.
- two or more metadata boxes may each include AI data.
- FIG. 15A shows an embodiment of AI encoded data when AI data is embedded in image data.
- the video file 1500 includes a metadata box 1502 and a media data box 1504, like the video file 1400 of FIG. 14.
- the metadata box 1502 does not include AI data. Instead, the image data in which AI data is inserted is included in the media data box 1504.
- AI data may be encoded according to a video codec of image data.
- the video codec of the image data may be H.264, HEVC, AVS2.0, Xvid, or the like.
- the parsing unit 214 does not extract AI data from the metadata box 1502. Instead, the first decoder 232 may extract the AI data from the image data 1505 and transfer the extracted AI data to the AI upscaler 234. Based on the upscale DNN information obtained from the AI data, the AI upscaler 234 may AI upscale the low-resolution image reconstructed by the first decoder 232.
- the video file 1500 may not include synchronization data. Accordingly, the AI data is sequentially decoded as the image data 1505 is decoded.
- the video AI data 1508 may be located immediately after the video header 1506 including parameter information of the video. Accordingly, the video AI data 1508 may be decoded after video parameters included in the video header 1506 are decoded. According to an embodiment, unlike in FIG. 15A, video AI data 1508 may be located immediately before video header 1506.
- the frame group AI data 1514 may be located immediately after the frame group header 1512 including parameter information of the frame group 1510. Therefore, the frame group AI data 1514 may be decoded after the frame group parameters included in the frame group header 1512 are decoded. According to an embodiment, unlike in FIG. 15A, the frame group AI data 1514 may be located immediately before the frame group header 1512. The decoding order of the frame group header and the frame group AI data of the remaining frame groups decoded after the frame group 1510 may be determined in the same manner as the decoding order of the frame group header 1512 and the frame group AI data 1514.
- the frame AI data 1520 may be located immediately after the frame header 1518 including parameter information of the frame 1516. Accordingly, the frame AI data 1520 may be decoded after the frame parameters included in the frame header 1518 are decoded. According to an embodiment, unlike in FIG. 15A, the frame AI data 1520 may be located immediately before the frame header 1518.
- the decoding order of the frame header and frame AI data of the remaining frames decoded after the frame 1516 may also be determined in the same manner as the decoding order of the frame header 1518 and frame AI data 1520.
- FIG. 15B shows another embodiment of AI encoded data when AI data is embedded in image data.
- the video file 1520 includes a metadata box 1522 and a media data box 1524.
- the metadata box 1522 does not include AI data, and instead, the image data 1525 into which AI data is inserted is included in the media data box 1524.
- AI data is inserted into the corresponding data header in the video file 1520.
- the video AI data 1528 may be included in a video header 1526 including parameter information of the video. Accordingly, the video AI data 1528 may be decoded together with video parameters included in the video header 1526.
- the frame group AI data 1534 may be included in a frame group header 1532 including parameter information of the frame group 1530. Therefore, the frame group AI data 1534 can be decoded together with the frame group parameters included in the frame group header 1532. Frame group AI data of the remaining frame groups decoded after the frame group 1530 may also be included in the frame group header.
- the frame AI data 1540 may be included in the frame header 1538 including parameter information of the frame 1536. Accordingly, the frame AI data 1540 can be decoded together with the frame parameters included in the frame header 1538. Frame AI data of the remaining frames decoded after the frame 1536 may also be included in the frame header.
- FIG. 15C shows an embodiment of AI encoded data when some AI data is inserted into the image data and the remaining AI data is separated from the image data.
- the video file 1550 includes a metadata box 1552 and a media data box 1556.
- the metadata box 1552 contains video AI data 1554 applied to all frames of the video.
- the frame group AI data and the frame AI data are included in the video data of the media data box 1556.
- the video AI data 1554 included in the metadata box 1552 may be decoded before decoding the image data.
- the frame group AI data and frame AI data are sequentially decoded as the image data is decoded.
- the parsing unit 214 can extract the video AI data 1554 from the metadata box 1552.
- the output unit 216 may transmit the video AI data 1554 to the AI upscale unit 234.
- the output unit 216 may transmit the image data 1557 to the first decoding unit 232.
- the first decoder 232 may reconstruct the low-resolution image by decoding the image data 1557, and extract the frame group AI data 1564 and the frame AI data 1570.
- the first decoder 232 may transmit the frame group AI data 1564 and the frame AI data 1570 to the AI upscaler 234.
- the AI upscaler 234 may acquire upscale DNN information for AI upscale of the low-resolution image according to the video AI data 1554, the frame group AI data 1564, and the frame AI data 1570.
- the frame group AI data 1564 may be located immediately after the frame group header 1562 including parameter information of the frame group 1560. However, according to an embodiment, the frame group AI data 1564 may be located immediately before the frame group header 1562. Also, the frame group AI data 1564 may be included in the frame group header 1562.
- the frame AI data 1570 may be located immediately after the frame header 1568 including parameter information of the frame 1566. However, according to an embodiment, the frame AI data 1570 may be located immediately before the frame header 1568. Also, the frame AI data 1570 may be included in the frame header 1568.
- the frame group AI data may be additionally included in the metadata box 1552.
- a part of the frame group AI data may be included in the metadata box 1552.
- a part of the frame AI data may be included in the metadata box 1552.
- the frame group AI data 1514 and 1564 and the frame AI data 1520 and 1570 inserted into the media data box may be inserted in the form of an SEI (Supplemental Enhancement Information) message.
- the SEI message is a data unit that contains additional information about an image that is not necessarily required for decoding the image.
- the SEI message may be transmitted in frame group units or frame units.
- the SEI message may be extracted by the first decoding unit 232 and transmitted to the AI upscaler unit 234, similarly to the second AI data described above with reference to FIG. 13A.
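- As an illustration, frame or frame group AI data could be wrapped in a user-data-unregistered SEI payload as sketched below; the 16-byte UUID is a placeholder, and the NAL unit framing and emulation prevention of H.264/HEVC are omitted:

```python
SEI_USER_DATA_UNREGISTERED = 5   # H.264/HEVC SEI payload type for unregistered user data
AI_DATA_UUID = bytes(16)         # placeholder UUID identifying the AI data payload

def build_ai_data_sei(ai_payload: bytes) -> bytes:
    """Wrap frame or frame group AI data in an SEI payload (framing omitted)."""
    body = AI_DATA_UUID + ai_payload
    out = bytearray([SEI_USER_DATA_UNREGISTERED])
    size = len(body)
    while size >= 255:           # payload size is coded as 0xFF bytes plus a final byte < 255
        out.append(255)
        size -= 255
    out.append(size)
    out += body
    return bytes(out)
```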
- FIG. 16 shows an embodiment of AI encoded data divided in units of video segments when AI data and image data are separated as shown in FIG. 14.
- the AI encoded data may be divided into video segment units and stored in the video file 1600.
- the video segment is part of the video and contains frames for a certain period of time.
- a video segment may include one frame group or a plurality of frame groups. If each video segment includes one frame group, the video file 1600 may include as many video segments as the number of frame groups of the image data.
- each video segment may include a metadata box and a media data box.
- Metadata including AI data is divided and stored over a plurality of video segments. That is, AI data is divided and stored not only in the metadata box 1610 for the entire image data but also in the metadata boxes of the video segments. Therefore, when AI upscaling a low-resolution image of a specific portion of the image data, compared to the case where all the AI data is stored in the metadata box 1610 for the entire image data, the AI data for obtaining upscale DNN information suitable for the specific portion can be obtained more quickly.
- when decoding the current video segment, only the video AI data 1612 of the metadata box 1610 and the AI data in the segment metadata box 1630 of the current video segment data box 1620 are referenced; AI data in the metadata boxes of other video segments is not referenced. Therefore, the overhead associated with decoding AI data can be reduced.
- AI data need not all be transmitted at the start of playback; it may be distributed and transmitted in units of video segments. Since the AI data is then transmitted sequentially in portions, the overhead associated with decoding the AI data can be reduced, which makes division into video segments advantageous for transmission.
- the metadata box 1610 for the entire image data includes video AI data 1612.
- the video AI data 1612 is applied to all video segments included in the video.
- the metadata box 1610 may be decoded before the current video segment data box 1620.
- the current video segment data box 1620 includes a segment metadata box 1630 and a segment media data box 1640.
- the segment metadata box 1630 may include synchronization data 1631 and AI data 1632.
- the segment media data box 1640 includes video segment data 1641.
- the AI data 1632 of the current video segment data box 1620 may include video segment AI data 1634, frame group AI data 1636, and frame AI data 1638.
- the video segment AI data 1634 is applied to all frame groups included in the current video segment.
- the frame group AI data 1636 is applied to all frames included in the current frame group.
- Frame AI data 1638 is applied to the current frame.
- the frame group AI data 1636 and the frame AI data 1638 may be omitted from the AI data 1632 of the current video segment data box 1620.
- the frame AI data 1638 may be omitted from the AI data 1632 of the current video segment data box 1620.
- the video segment AI data 1634 may be omitted from the AI data 1632 of the current video segment data box 1620.
- the frame group AI data 1636 may serve as the video segment AI data 1634.
- according to an embodiment, only the video segment AI data 1634 may be included in the AI data 1632 of the current video segment data box 1620, with the frame group AI data 1636 and the frame AI data 1638 omitted. In this case, the same AI upscale may be applied to all frames of the video segment.
- the synchronization data 1631 synchronizes the video segment AI data 1634, the frame group AI data 1636, and the frame AI data 1638 with the video segment header 1642, the frame group header 1646, and the frame header 1648, respectively.
- the video segment header 1642 includes a video segment parameter commonly applied to frames included in the video segment.
- the synchronization data 1631 may include playback order information or decoding order information of the video segment data 1641 of the segment media data box 1640.
- the AI upscaler 234 may obtain upscale DNN information necessary for AI upscale of the low resolution image from the AI data determined according to the synchronization data 1631.
- the parsing unit 214 or the AI upscaler 234 may acquire, based on the synchronization data 1631, the frame group AI data 1636 corresponding to the frame group 1644 and the frame AI data 1638 corresponding to the frame 1649. Accordingly, the AI upscaler 234 may obtain upscale DNN information for AI upscale of the low-resolution image of the frame 1649.
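- A minimal Python sketch of this lookup is shown below. The dictionary layout of the synchronization data and the AI data is an assumption for illustration; the embodiment only requires that the synchronization data relate the AI data to the decoding or playback order of the video segment data.

```python
def ai_data_for_frame(sync_data: dict, ai_data: dict, frame_index: int):
    """Resolve the AI data governing one frame via synchronization data.

    sync_data["frame_to_group"] maps a frame index to its frame group
    index, mirroring the decoding order of the segment media data box.
    Frame-level AI data may be omitted, in which case the frame is
    governed by the frame group AI data alone.
    """
    group_index = sync_data["frame_to_group"][frame_index]
    frame_group_ai = ai_data["frame_group_ai_data"][group_index]
    frame_ai = ai_data["frame_ai_data"].get(frame_index)  # may be None
    return frame_group_ai, frame_ai
```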
- the AI data 1632 of the segment metadata box 1630 may be decoded before the video segment data 1641 of the segment media data box 1640. Accordingly, the AI upscaler 234 may acquire upscale DNN information before decoding of the video segment data 1641 according to the AI data 1632.
- the obtained upscale DNN information can be applied to the entire video segment.
- upscale DNN information may be previously obtained differently for each frame group.
- upscale DNN information may be obtained in advance differently for each frame.
- the AI upscaler 234 may decode the AI data 1632 of the segment metadata box 1630 according to the decoding order of the video segment data 1641 of the segment media data box 1640.
- the AI upscaler 234 may decode the frame group AI data 1636 in synchronization with the decoding order of the frame group header 1646 by the first decoder 232. Likewise, the AI upscaler 234 may decode the frame AI data 1638 in synchronization with the decoding order of the frame header 1648 by the first decoder 232.
- the remaining video segment data boxes after the current video segment data box 1620 may also be sequentially decoded in the same manner as the current video segment data box 1620.
- FIG. 17 illustrates an embodiment of AI data 1740 and image data 1700 transmitted as two separate files.
- when the communication unit 212 does not receive the AI data 1740, the low-resolution image obtained from the image data 1700 is not AI upscaled. When the communication unit 212 receives the AI data 1740, the AI data 1740 is transmitted to the AI upscaler 234 via the output unit 216, and upscale DNN information required for AI upscale of the low-resolution image is obtained according to the AI data 1740.
- the image data 1700 may include a video header 1710, a frame group header 1722 of the frame group 1720, and a frame header 1732 of the frame 1730.
- the AI data 1740 may include video AI data 1742, frame group AI data 1750, and frame AI data 1760. Since the image data 1700 and the AI data 1740 are transmitted as separate files, the image data 1700 and/or the AI data 1740 may include synchronization data necessary for synchronizing the image data 1700 and the AI data 1740.
- the synchronization data may indicate a decoding order or a playback order of the image data 1700.
- the parsing unit 214 or the AI upscaler 234 may match the playback order or decoding order of the video AI data 1742, the frame group AI data 1750, and the frame AI data 1760 with the playback order or decoding order of the video header 1710, the frame group header 1722, and the frame header 1732, according to the synchronization data.
- in FIG. 17, the dotted lines between the video AI data 1742 and the video header 1710, between the frame group AI data 1750 and the frame group header 1722, and between the frame AI data 1760 and the frame header 1732 indicate synchronization between the AI data and the corresponding data header.
- an identification number for matching the two data streams may be included in the image data 1700 and the AI data 1740.
- AI data 1740 may include an identification number of image data 1700 to which AI data 1740 is applied.
- the image data 1700 may include an identification number of AI data 1740 applied to the image data 1700.
- identification numbers may be included in both the image data 1700 and the AI data 1740. Accordingly, even when the image data 1700 and the AI data 1740 are not transmitted at the same time, the image data 1700 and the AI data 1740 may match each other according to the identification number.
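- The following sketch pairs separately received image data files and AI data files by such an identification number; the field name identification_number is an assumption for illustration.

```python
def match_image_and_ai_files(image_files: list, ai_files: list):
    """Pair image data and AI data transmitted as separate files.

    Each file dict is assumed to carry an "identification_number" field;
    an image file whose AI data has not arrived yet is paired with None,
    in which case its low-resolution image is not AI upscaled.
    """
    ai_by_id = {f["identification_number"]: f for f in ai_files}
    return [(img, ai_by_id.get(img["identification_number"]))
            for img in image_files]
```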
- when upscale DNN information is acquired in frame group units, the frame AI data 1760 may be omitted from the AI data 1740.
- the frame group AI data 1750 and the frame AI data 1760 may be omitted from the AI data 1740.
- FIG. 18A shows an embodiment of a data structure 1800 that can be applied to the video AI data described in FIGS. 14 to 17.
- the data structure 1800 of the video AI data includes elements related to upscale DNN information used for AI upscale.
- the elements include ai_codec_info 1802, ai_codec_applied_channel_info 1804, target_bitrate_info 1806, res_info 1808, ai_codec_DNN_info 1814, ai_codec_supplementary_info 1816, and the like.
- the arrangement order of the elements shown in FIG. 18A is only an example, and a person skilled in the art can change the arrangement order of the elements.
- ai_codec_info 1802 refers to AI target data indicating whether AI upscale is applied to a low resolution image.
- when ai_codec_info 1802 indicates that AI upscale is applied to the low-resolution image, the data structure 1800 includes AI auxiliary data for obtaining the upscale DNN information used for the AI upscale. Otherwise, the data structure 1800 does not include AI auxiliary data regarding the AI upscale.
- the AI auxiliary data includes ai_codec_applied_channel_info (1804), target_bitrate_info (1806), res_info (1808), ai_codec_DNN_info (1814), ai_codec_supplementary_info (1816), and the like.
- ai_codec_applied_channel_info 1804 is channel information indicating a color channel to which AI upscale is applied.
- the image may be expressed in RGB format, YUV format, YCbCr format, or the like. If the low-resolution image reconstructed from the image data is in YCbCr format, the low-resolution image includes a low-resolution image of the Y channel for luminance and low-resolution images of the Cb and Cr channels for chrominance.
- ai_codec_applied_channel_info 1804 may indicate the color channel to which AI upscale is applied among the three channels.
- the AI upscaler 234 may AI upscale the low-resolution image of the color channel indicated by ai_codec_applied_channel_info 1804.
- the AI upscaler 234 may acquire different upscale DNN information for each color channel.
- the AI upscale may be applied only to the low-resolution image of the Y channel for luminance. Since the human eye is more sensitive to luminance than to chrominance, the difference in subjective image quality perceived between AI upscaling the low-resolution images of all color channels and AI upscaling only the Y channel may be insignificant.
- ai_codec_applied_channel_info 1804 may indicate whether the low resolution image of the Cb channel and the low resolution image of the Cr channel are AI upscaled.
- the AI upscaler 234 may apply AI upscale only to the low resolution image of the Y channel.
- the AI upscaler 234 may apply AI upscale to the low-resolution images of all channels.
- target_bitrate_info 1806 is information indicating the bitrate of the image data obtained as a result of the first encoding by the first encoder 614.
- the AI upscaler 234 may acquire upscale DNN information suitable for the image quality of the low-resolution image according to target_bitrate_info 1806.
- res_info 1808 indicates resolution information related to the resolution of the AI upscaled high resolution image, such as the third image 145.
- res_info 1808 may include pic_width_org_luma 1810 and pic_height_org_luma 1812.
- pic_width_org_luma (1810) and pic_height_org_luma (1812) are high-resolution image width information and high-resolution image height information representing width and height of a high-resolution image, respectively.
- the AI upscaler 234 may determine the AI upscale ratio according to the resolution of the high-resolution image determined from pic_width_org_luma 1810 and pic_height_org_luma 1812 and the resolution of the low-resolution image restored by the first decoder 232.
- according to an embodiment, res_info 1808 may include, instead of pic_width_org_luma 1810 and pic_height_org_luma 1812, resolution ratio information indicating a resolution ratio between the low-resolution image and the high-resolution image. In that case, the AI upscaler 234 may determine the resolution of the high-resolution image from the resolution of the low-resolution image reconstructed by the first decoder 232 and the resolution ratio according to the resolution ratio information.
- the resolution ratio information may include vertical resolution ratio information and horizontal resolution ratio information.
- the AI upscaler 234 may acquire upscale DNN information suitable for AI upscale of the low resolution image according to the AI upscale ratio determined according to res_info 1808. Alternatively, the AI upscaler 234 may obtain upscale DNN information suitable for AI upscale of the low resolution image according to the resolution of the low resolution image determined according to res_info 1808 and the resolution of the high resolution image.
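- A small sketch of how res_info 1808 may drive the upscale ratio computation, under the assumption that res_info is available as a dict with the field names used above (the horizontal_ratio / vertical_ratio fields for the ratio-based variant are hypothetical names):

```python
def determine_upscale_ratio(res_info: dict, low_width: int, low_height: int):
    """Derive the AI upscale ratio of the current stream from res_info.

    Either the high-resolution dimensions (pic_width_org_luma /
    pic_height_org_luma) or explicit resolution ratio information may be
    signaled, as described above.
    """
    if "pic_width_org_luma" in res_info:
        return (res_info["pic_width_org_luma"] / low_width,
                res_info["pic_height_org_luma"] / low_height)
    return (res_info["horizontal_ratio"], res_info["vertical_ratio"])

# Example: a 1920x1080 low-resolution image signaled with a 4K target size
# yields a (2.0, 2.0) upscale ratio.
assert determine_upscale_ratio(
    {"pic_width_org_luma": 3840, "pic_height_org_luma": 2160},
    1920, 1080) == (2.0, 2.0)
```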
- ai_codec_DNN_info 1814 is information indicating mutually agreed (predetermined) upscale DNN information used for AI upscale of the low-resolution image.
- the AI upscaler 234 may determine one of a plurality of default DNN setting information previously stored as upscale DNN information according to ai_codec_applied_channel_info (1804), target_bitrate_info (1806), res_info (1808), and the like.
- the AI upscaler 234 may additionally consider other characteristics of the high-resolution image (genre, maximum luminance, color gamut, etc.) and determine one of a plurality of previously stored default DNN setting information as the upscale DNN information.
- when there are two or more pieces of default DNN setting information, ai_codec_DNN_info 1814 may indicate one of them. Then, using the upscale DNN information indicated by ai_codec_DNN_info 1814, the AI upscaler 234 may AI upscale the low-resolution image.
- according to an embodiment, ai_codec_DNN_info 1814 may indicate two or more pieces of upscale DNN information that can be applied to the current video file among the default DNN setting information.
- one or more upscale DNN information represented by ai_codec_DNN_info 1814 may be adaptively selected in a frame group or frame unit.
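- The selection described above can be pictured as a lookup over previously stored default DNN setting information, keyed by the AI auxiliary data. The table contents and key layout below are purely illustrative assumptions; only the selection criteria (color channel, target bitrate, resolution) come from the description above.

```python
# (channel, maximum target bitrate in kbps, high-resolution width) ->
# identifier of a previously stored default DNN setting (all values here
# are hypothetical).
DEFAULT_DNN_SETTINGS = [
    (("Y", 5000, 3840), "dnn_4k_low_bitrate"),
    (("Y", 20000, 3840), "dnn_4k_high_bitrate"),
]

def select_upscale_dnn(channel: str, target_bitrate: int, width: int) -> str:
    """Pick upscale DNN setting information matching the AI auxiliary data."""
    for (ch, max_bitrate, w), dnn_id in DEFAULT_DNN_SETTINGS:
        if ch == channel and target_bitrate <= max_bitrate and w == width:
            return dnn_id
    raise LookupError("no stored default DNN setting matches")

# A 4K stream at 4500 kbps on the Y channel maps to the low-bitrate DNN.
assert select_upscale_dnn("Y", 4500, 3840) == "dnn_4k_low_bitrate"
```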
- ai_codec_supplementary_info (1816) represents additional information on AI upscale.
- ai_codec_supplementary_info (1816) may include information necessary for determination of upscale DNN information applied to a video.
- the ai_codec_supplementary_info 1816 may include genre, HDR maximum illuminance, HDR color gamut, HDR PQ information, rate control type information, information on codec used for the first encoding, and the like.
- Video AI data having a data structure 1800 including the above elements may be applied to all frame groups.
- FIG. 18B shows an embodiment of a data structure 1820 that can be applied to the video segment AI data described in FIG. 16 or the frame group AI data of FIGS. 14, 15A to 15C, and FIG. 17.
- the data structure 1820 has a structure similar to the data structure 1800 of FIG. 18A.
- the data structure 1820 may additionally include AI auxiliary data dependency information (dependent_ai_condition_info 1824) indicating whether AI auxiliary data is the same between the successive previous data unit and the current data unit.
- the data structure 1820 may include dependent_ai_condition_info 1824 when ai_codec_info 1822 indicates that AI upscale is applied to the low-resolution image. If ai_codec_info 1822 indicates that AI upscale is not applied to the low-resolution image, dependent_ai_condition_info 1824 may be omitted from the data structure 1820.
- when dependent_ai_condition_info 1824 indicates that the AI auxiliary data is the same between the successive previous data unit and the current data unit, ai_codec_applied_channel_info 1826, target_bitrate_info 1828, res_info 1830, pic_width_org_luma 1832, pic_height_org_luma 1834, ai_codec_DNN_info 1836, and ai_codec_supplementary_info 1838 may be omitted from the data structure 1820. Instead, the channel information, target bitrate information, resolution information, DNN information, and additional information of the current data unit are determined to be the same as those of the previous data unit. Accordingly, when the same AI auxiliary data is applied to a plurality of data units, the size of the AI data may be reduced according to dependent_ai_condition_info 1824.
- when dependent_ai_condition_info 1824 indicates that the AI auxiliary data is not the same between the successive previous data unit and the current data unit, the data structure 1820 includes ai_codec_applied_channel_info 1826, target_bitrate_info 1828, res_info 1830, pic_width_org_luma 1832, pic_height_org_luma 1834, ai_codec_DNN_info 1836, and ai_codec_supplementary_info 1838.
- the upscale DNN information of the current data unit can be obtained independently of the upscale DNN information of the previous data unit.
- the AI auxiliary data dependency information of the first transmitted data unit may be omitted; from the second transmitted video segment onward, the AI auxiliary data dependency information is included in the header of the video segment.
- AI auxiliary data dependency information may indicate only whether or not specific AI auxiliary data is dependent.
- AI auxiliary data dependency information may indicate whether the current data unit inherits the resolution information of the previous data unit.
- the AI data inherited according to the AI auxiliary data dependency information may include at least one of ai_codec_info 1802, ai_codec_applied_channel_info 1804, target_bitrate_info 1806, res_info 1808, ai_codec_DNN_info 1814, and ai_codec_supplementary_info 1816.
- the data unit may be a video segment or a group of frames.
- dependent_ai_condition_info (1824) may indicate whether AI auxiliary data is identical between successive previous video segments and the current video segment.
- dependent_ai_condition_info 1824 may indicate whether AI auxiliary data is the same between the successive previous frame group and the current frame group.
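- The inheritance behavior of dependent_ai_condition_info 1824 can be sketched as follows; the list-of-dicts representation of the data units is an assumption for illustration.

```python
def resolve_ai_auxiliary_data(data_units: list) -> list:
    """Resolve AI auxiliary data across data units in decoding order.

    A unit whose dependent_ai_condition_info is set reuses the AI
    auxiliary data resolved for the preceding unit; otherwise it supplies
    its own. The first transmitted unit carries no dependency flag.
    """
    resolved = []
    previous = None
    for unit in data_units:
        if unit.get("dependent_ai_condition_info") and previous is not None:
            resolved.append(previous)
        else:
            previous = unit["ai_auxiliary_data"]
            resolved.append(previous)
    return resolved

# The second unit inherits the first unit's auxiliary data.
units = [{"ai_auxiliary_data": {"target_bitrate": 5000}},
         {"dependent_ai_condition_info": True}]
assert resolve_ai_auxiliary_data(units)[1] == {"target_bitrate": 5000}
```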
- FIG. 19 shows a syntax table 1900 on which the data structure 1800 of FIG. 18A is implemented.
- according to the syntax table 1900, syntax elements such as ai_codec_info, ai_codec_applied_channel_info, target_bitrate, pic_width_org_luma, pic_height_org_luma, ai_codec_DNN_info, and ai_codec_supplementary_info_flag are parsed.
- ai_codec_info is AI target data corresponding to ai_codec_info 1802 of FIG. 18A.
- when ai_codec_info indicates that AI upscale is allowed (if (ai_codec_info)), the syntax elements corresponding to AI auxiliary data are parsed. If ai_codec_info indicates that AI upscale is not allowed, the syntax elements corresponding to AI auxiliary data are not parsed.
- Syntax elements corresponding to AI auxiliary data include ai_codec_applied_channel_info, target_bitrate, pic_width_org_luma, pic_height_org_luma, ai_codec_DNN_info, and ai_codec_supplementary_info_flag.
- ai_codec_applied_channel_info is channel information corresponding to ai_codec_applied_channel_info 1804 of FIG. 18A.
- target_bitrate is target bitrate information corresponding to target_bitrate_info 1806 of FIG. 18A.
- pic_width_org_luma and pic_height_org_luma are high resolution image width information and high resolution image height information corresponding to pic_width_org_luma 1810 and pic_height_org_luma 1812, respectively, in FIG. 18A.
- ai_codec_DNN_info is DNN information corresponding to ai_codec_DNN_info 1814 in FIG. 18A.
- ai_codec_supplementary_info_flag is an additional information flag indicating whether ai_codec_supplementary_info 1816 of FIG. 18A is included in the syntax table 1900. When ai_codec_supplementary_info_flag indicates that the supplementary information used for AI upscale is not parsed, the supplementary information is not obtained. When ai_codec_supplementary_info_flag indicates that the supplementary information used for AI upscale is parsed (if (ai_codec_supplementary_info_flag)), the supplementary information is obtained.
- The obtained supplementary information may include genre_info, hdr_max_luminance, hdr_color_gamut, hdr_pq_type, rate_control_type, and codec_type.
- genre_info indicates the genre of the content of the image data.
- hdr_max_luminance indicates the HDR maximum luminance applied to the high-resolution image.
- hdr_color_gamut indicates the HDR color gamut applied to the high-resolution image.
- hdr_pq_type indicates the HDR PQ (perceptual quantizer) information applied to the high-resolution image.
- rate_control_type indicates the rate control type applied to the image data obtained as a result of the first encoding.
- codec_type indicates the codec used for the first encoding.
- according to an embodiment, only a specific syntax element among the syntax elements corresponding to the supplementary information may be parsed.
- the syntax table 1900 of FIG. 19 is only an example, and some of the elements of the data structure 1800 of FIG. 18A may be included in the syntax table 1900. Also, elements not included in the data structure 1800 may be included in the syntax table 1900.
- the syntax table 1900 of FIG. 19 may include AI auxiliary data dependency information such as dependent_ai_condition_info, similar to the data structure 1820 of FIG. 18B. Accordingly, the syntax table 1900 to which AI auxiliary data dependency information is added can be applied to a video segment or a frame group.
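- The parsing flow of syntax table 1900 can be rendered in Python roughly as follows. The toy Reader reads one byte per syntax element because the actual field widths are not reproduced here; only the control flow (the if (ai_codec_info) and if (ai_codec_supplementary_info_flag) branches) follows the description above.

```python
class Reader:
    """Toy bitstream reader: one byte per syntax element, for illustration."""
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0
    def read_uint(self) -> int:
        value = self.data[self.pos]
        self.pos += 1
        return value
    def read_flag(self) -> bool:
        return bool(self.read_uint())

def parse_video_ai_data(r: Reader) -> dict:
    d = {"ai_codec_info": r.read_flag()}
    if d["ai_codec_info"]:                        # if (ai_codec_info)
        d["ai_codec_applied_channel_info"] = r.read_uint()
        d["target_bitrate"] = r.read_uint()
        d["pic_width_org_luma"] = r.read_uint()
        d["pic_height_org_luma"] = r.read_uint()
        d["ai_codec_DNN_info"] = r.read_uint()
        if r.read_flag():                         # if (ai_codec_supplementary_info_flag)
            d["genre_info"] = r.read_uint()
            d["hdr_max_luminance"] = r.read_uint()
            d["hdr_color_gamut"] = r.read_uint()
            d["hdr_pq_type"] = r.read_uint()
            d["rate_control_type"] = r.read_uint()
            d["codec_type"] = r.read_uint()
    return d
```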
- FIG. 20 shows an embodiment of a data structure 2000 that can be applied to the frame group AI data or the frame AI data described in FIGS. 14 to 17.
- the data structure 2000 includes elements for adaptively determining upscale DNN information on a frame-by-frame basis.
- the elements include ai_codec_frame_info 2002, dependent_ai_condition_frame_info 2004, ai_codec_frame_DNN_info 2006, ai_codec_enhancement_flag 2008, ai_codec_artifact_removal_flag 2014, and the like.
- the arrangement order of the elements shown in FIG. 20 is only an example, and a person skilled in the art can change the arrangement order of the elements.
- ai_codec_frame_info is frame AI target data indicating whether AI upscale is allowed in the current frame.
- when ai_codec_frame_info 2002 indicates that AI upscale is allowed, the data structure 2000 includes frame AI auxiliary data related to AI upscale of the current frame. Otherwise, AI upscale is not applied to the current frame, and the data structure 2000 does not include the frame AI auxiliary data.
- the frame AI auxiliary data means AI auxiliary data applied to the frame.
- the data structure 2000 may include dependent_ai_condition_frame_info (2004).
- dependent_ai_condition_frame_info (2004) is frame AI auxiliary data dependency information indicating whether frame AI auxiliary data is the same between successive previous frames and the current frame.
- when dependent_ai_condition_frame_info 2004 indicates that the frame AI auxiliary data is the same between the successive previous frame and the current frame, the data structure 2000 does not include additional frame AI auxiliary data for the current frame, and the upscale DNN information of the current frame is determined to be the same as the upscale DNN information of the previous frame.
- when dependent_ai_condition_frame_info 2004 indicates that the frame AI auxiliary data is not the same between the successive previous frame and the current frame, the data structure 2000 includes additional frame AI auxiliary data for the current frame, and the upscale DNN information of the current frame is obtained independently of the upscale DNN information of the previous frame.
- the additional frame AI auxiliary data may include ai_codec_frame_DNN_info (2006), ai_codec_enhancement_flag (2008), and ai_codec_artifact_removal_flag (2014).
- ai_codec_frame_DNN_info (2006) is frame DNN information indicating upscale DNN information of the current frame among a plurality of upscale DNN information for upper data units of the current frame.
- when ai_codec_DNN_info 1814 of FIG. 18A indicates two or more pieces of upscale DNN information for a video, ai_codec_frame_DNN_info 2006 may determine the upscale DNN information of the current frame from among them. Likewise, when ai_codec_DNN_info 1836 of FIG. 18B indicates two or more pieces of upscale DNN information, ai_codec_frame_DNN_info 2006 may determine the upscale DNN information of the current frame from among them. If ai_codec_DNN_info 1814 of FIG. 18A or ai_codec_DNN_info 1836 of FIG. 18B indicates only one piece of upscale DNN information, ai_codec_frame_DNN_info 2006 may be omitted.
- ai_codec_enhancement_flag (2008) is AI enhancement information indicating whether an AI upscale accuracy enhancement process is activated.
- when ai_codec_enhancement_flag 2008 indicates that the AI upscale accuracy enhancement process is activated, some of the samples of the AI upscaled high-resolution image are adjusted according to encoding parameter information. If ai_codec_enhancement_flag 2008 indicates that the AI upscale accuracy enhancement process is not activated, the enhancement process is omitted.
- the encoding parameter is generated when the encoding end encodes the original image 105 or the first image 115.
- Coding parameters may be generated according to a prediction, transformation, and in-loop filtering process of a data unit (maximum coding unit, coding unit, prediction unit, transformation unit, or pixel unit).
- Coding parameters include information such as motion vectors, predicted motion vectors, intra modes, residual signal related information, and SAO parameters.
- the encoding parameter information is information necessary for the enhancement process according to the encoding parameter.
- the encoding parameter information may include encoding parameter type information indicating a type of encoding parameter referenced for the enhancement process and encoding parameter map information indicating an application area of the enhancement process in the current frame.
- the data structure 2000 may include encoding parameter type information encod_param_type (2010) and encoding parameter map information encod_param_map (2012).
- encod_param_type (2010) may represent a motion vector.
- encod_param_map (2012) may indicate an application area of an enhancement process according to a motion vector in an AI upscaled high resolution image.
- pixels of the application area may be modified according to a motion vector.
- encod_param_type (2010) may indicate two or more coding parameters.
- encod_param_map (2012) may indicate an application area of an enhancement process for each encoding parameter.
- encod_param_type (2010) and encod_param_map (2012) may be omitted in the data structure 2000. Accordingly, one or more encoding parameters referenced in the enhancement process and an application area of each encoding parameter may be determined in advance.
- ai_codec_artifact_removal_flag is artifact removal information indicating whether artifact removal of AI upscaled high resolution image is performed.
- according to the artifact removal information, artifacts may be removed by correcting pixels of the low-resolution image before AI upscale, or by correcting pixels of the high-resolution image after AI upscale according to the second DNN.
- the artifact removal information may include artifact type information indicating an artifact type and artifact map information indicating an artifact area in which the artifact is located. Also, the artifact removal information may include artifact type number information indicating the number of artifact types in the image. Accordingly, the data structure 2000 may include as many pairs of artifact type information and artifact map information as the number indicated by the artifact type number information.
- Types of artifacts include contour artifacts, ringing artifacts, aliasing artifacts, and the like.
- one or more artifact areas are determined for each type of artifact. For example, one or more artifact regions may be determined for contour artifacts, and one or more artifact regions may be determined for ringing artifacts.
- the data structure 2000 may include num_artifact_type (2016), which is the number of artifact types. If ai_codec_artifact_removal_flag (2014) indicates that artifact removal of an AI upscaled high resolution image is not performed, num_artifact_type (2016) and the like are omitted in the data structure 2000.
- the data structure 2000 may include artifact_type 2018, which is artifact type information, as many times as the number indicated by num_artifact_type 2016. Also, the data structure 2000 may include artifact map information for each artifact_type 2018. The artifact map information of the data structure 2000 may include num_artifact_map 2020 indicating the number of artifact regions. In addition, the data structure 2000 may include map_x_pos 2022, map_y_pos 2024, map_width 2026, and map_height 2028 indicating the location and size of each artifact area.
- a part of dependent_ai_condition_frame_info (2004), ai_codec_frame_DNN_info (2006), ai_codec_enhancement_flag (2008), and ai_codec_artifact_removal_flag (2014) of FIG. 20 may be omitted.
- the data structure 2000 of FIG. 20 may be applied to a frame group instead of a frame.
- the same upscale DNN information, AI enhancement information, and artifact removal information may be applied to all frames included in the frame group.
- when the data structure 2000 is applied to a frame group, the data structure 2000 may include ai_codec_frame_group_info, dependent_ai_condition_frame_group_info, and ai_codec_frame_group_DNN_info instead of ai_codec_frame_info 2002, dependent_ai_condition_frame_info 2004, and ai_codec_frame_DNN_info 2006.
- the data structure 1800 of FIG. 18A may be applied to video AI data.
- the data structure 2000 of FIG. 20 may be applied to frame AI data.
- the video AI data according to the data structure 1800 includes AI data commonly applied to all frames, and the frame AI data according to the data structure 2000 includes AI data adaptively applied to individual frames.
- the data structure 1800 of FIG. 18A may be applied to video AI data.
- the data structure 2000 of FIG. 20 may be applied to frame group AI data.
- the video AI data according to the data structure 1800 includes AI data commonly applied to all frames, and the frame group AI data according to the data structure 2000 includes AI data adaptively applied to frame groups; the same AI data is applied to all frames included in a frame group.
- the data structure 1820 of FIG. 18B may be applied to video segment AI data or frame group AI data.
- the data structure 2000 of FIG. 20 may be applied to frame AI data.
- the video segment AI data or frame group AI data according to the data structure 1820 includes AI data commonly applied to all frames of the video segment or frame group, and the frame AI data according to the data structure 2000 includes AI data adaptively applied to individual frames.
- the data structure 1820 of FIG. 18B may be applied to video segment AI data.
- the data structure 2000 of FIG. 20 may be applied to frame group AI data.
- the video segment AI data according to the data structure 1820 includes AI data commonly applied to all frames of the video segment, and the frame group AI data according to the data structure 2000 includes AI data adaptively applied to frame groups; the same AI data is applied to all frames included in a frame group.
- FIG. 21 shows a syntax table 2100 in which the data structure 2000 of FIG. 20 is implemented.
- according to the syntax table 2100, syntax elements such as ai_codec_frame_info, dependent_ai_condition_frame_info, ai_codec_frame_DNN_info, ai_codec_enhancement_flag, and ai_codec_artifact_removal_flag are parsed.
- ai_codec_frame_info is frame AI target data corresponding to ai_codec_frame_info (2002) in FIG. 20.
- when ai_codec_frame_info indicates that AI upscale is allowed (if (ai_codec_frame_info)), the syntax elements corresponding to the frame AI auxiliary data are parsed. If ai_codec_frame_info indicates that AI upscale is not allowed, the syntax elements corresponding to the frame AI auxiliary data are not parsed.
- the syntax elements corresponding to the frame AI auxiliary data may include dependent_ai_condition_frame_info, ai_codec_frame_DNN_info, ai_codec_enhancement_flag, and ai_codec_artifact_removal_flag.
- dependent_ai_condition_frame_info, ai_codec_frame_DNN_info, ai_codec_enhancement_flag, and ai_codec_artifact_removal_flag of FIG. 21 correspond to dependent_ai_condition_frame_info 2004, ai_codec_frame_DNN_info 2006, ai_codec_enhancement_flag 2008, and ai_codec_artifact_removal_flag 2014 of FIG. 20, respectively.
- when ai_codec_frame_info indicates that AI upscale is allowed, dependent_ai_condition_frame_info is obtained. When dependent_ai_condition_frame_info indicates that the frame AI auxiliary data is the same between the successive previous frame and the current frame, the upscale DNN information of the current frame is determined to be the same as the upscale DNN information of the previous frame.
- when dependent_ai_condition_frame_info indicates that the frame AI auxiliary data is not the same between the successive previous frame and the current frame, the upscale DNN information, AI enhancement information, and artifact removal information of the current frame are determined according to ai_codec_frame_DNN_info, ai_codec_enhancement_flag, and ai_codec_artifact_removal_flag.
- ai_codec_frame_DNN_info indicates upscale DNN information of the current frame among the plurality of upscale DNN information for the upper data unit of the current frame.
- ai_codec_enhancement_flag indicates whether the AI upscale accuracy improvement process is activated.
- when ai_codec_enhancement_flag indicates that the enhancement process is activated (if (ai_codec_enhancement_flag)), encod_param_type indicating the encoding parameter type and encod_param_map indicating the encoding parameter application area are obtained.
- ai_codec_artifact_removal_flag indicates whether artifact removal of an AI upscaled high resolution image is performed.
- when ai_codec_artifact_removal_flag indicates that artifact removal of the high-resolution image is performed (if (ai_codec_artifact_removal_flag)), num_artifact_type indicating the number of artifact types is obtained. For each artifact type, artifact_type indicating the type of artifact and num_artifact_map indicating the number of artifact regions are obtained, together with map_x_pos, map_y_pos, map_width, and map_height indicating the location and size of each artifact area.
- the syntax table 2100 of FIG. 21 is only an example, and some of the elements of the data structure 2000 of FIG. 20 may be included in the syntax table 2100. Also, elements not included in the data structure 2000 may be included in the syntax table 2100.
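- Mirroring the sketch given for syntax table 1900, the control flow of syntax table 2100 may be rendered as follows, reusing the toy Reader from that sketch; field widths remain assumptions.

```python
def parse_frame_ai_data(r: Reader) -> dict:
    d = {"ai_codec_frame_info": r.read_flag()}
    if not d["ai_codec_frame_info"]:
        return d                                  # no frame AI auxiliary data
    d["dependent_ai_condition_frame_info"] = r.read_flag()
    if d["dependent_ai_condition_frame_info"]:
        return d                                  # inherit the previous frame's data
    d["ai_codec_frame_DNN_info"] = r.read_uint()
    if r.read_flag():                             # if (ai_codec_enhancement_flag)
        d["encod_param_type"] = r.read_uint()
        d["encod_param_map"] = r.read_uint()
    if r.read_flag():                             # if (ai_codec_artifact_removal_flag)
        d["artifacts"] = []
        for _ in range(r.read_uint()):            # num_artifact_type
            entry = {"artifact_type": r.read_uint(), "regions": []}
            for _ in range(r.read_uint()):        # num_artifact_map
                entry["regions"].append({
                    "map_x_pos": r.read_uint(),
                    "map_y_pos": r.read_uint(),
                    "map_width": r.read_uint(),
                    "map_height": r.read_uint(),
                })
            d["artifacts"].append(entry)
    return d
```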
- FIG. 22 is a flowchart of an embodiment of an image decoding method performed by an AI decoder.
- In step 2210, a video file including AI encoded data, which includes image data and AI data related to AI upscale of the image data, is received.
- the reception of AI-encoded data according to step 2210 may be performed by the communication unit 212.
- In step 2220, the AI data of the AI encoded data is obtained from the metadata box of the video file, and the image data of the AI encoded data is obtained from the media data box of the video file.
- the acquisition of the image data and the AI data according to step 2220 may be performed by the parsing unit 214.
- the AI data may be obtained from the image data by the first decoder 232.
- the AI encoded data may include synchronization data regarding synchronization of the image data and the AI data.
- the synchronization data may indicate a relationship between image data and AI data according to a decoding order or reproduction order of image data.
- the synchronization data may include information on synchronization of the video header and video AI data, synchronization of the frame group header and frame group AI data, and synchronization of the frame header and frame AI data.
- In step 2230, the image data is decoded, and the low-resolution image of the current frame is restored.
- the reconstruction of the low-resolution image according to step 2230 may be performed by the first decoder 232.
- In step 2240, upscale DNN information of the current frame is obtained from the AI data. The acquisition of the upscale DNN information of the current frame according to step 2240 may be performed by the AI upscaler 234.
- the AI data may be composed of video AI data, frame group AI data, and frame AI data according to the hierarchical structure of AI data in FIG. 12. Additionally, the AI data may include video segment AI data of FIG. 16. AI data according to the hierarchical structure may indicate upscale DNN information applied to a corresponding layer.
- the AI data may include video AI data.
- upscale DNN information applied to all frames of image data may be obtained from a plurality of default DNN setting information according to the video AI data.
- the AI data may include frame group AI data.
- upscale DNN information applied to all frames of the frame group may be adaptively obtained from the plurality of default DNN configuration information according to the frame group AI data.
- the AI data may include frame AI data.
- upscale DNN information applied to a frame may be adaptively obtained from a plurality of default DNN setting information according to the frame AI data.
- the AI data may include frame group AI data together with video AI data.
- when the AI data includes only the video AI data and the frame group AI data, one or more pieces of upscale DNN information may be obtained from a plurality of default DNN setting information according to the video AI data, and the upscale DNN information applied to the frames of a frame group may be selected from the one or more pieces of upscale DNN information according to the frame group AI data.
- the AI data may include frame AI data together with video AI data.
- one or more upscale DNN information may be obtained from a plurality of default DNN configuration information according to the video AI data.
- upscale DNN information applied to a frame may be selected from the one or more upscale DNN information.
- the AI data may include frame AI data together with the frame group AI data.
- when the AI data includes only the frame group AI data and the frame AI data, one or more pieces of upscale DNN information may be obtained from a plurality of default DNN setting information according to the frame group AI data, and the upscale DNN information applied to a frame may be selected from the one or more pieces of upscale DNN information according to the frame AI data.
- the AI data may include video segment AI data.
- upscale DNN information applied to all frames of the video segment may be obtained from a plurality of default DNN configuration information according to the video segment AI data.
- the AI data may include video AI data together with video segment AI data.
- one or more upscale DNN information may be obtained from a plurality of default DNN configuration information according to the video AI data.
- upscale DNN information applied to all frames of the video segment may be selected from the one or more upscale DNN information.
- the AI data may include frame group AI data or frame AI data together with video segment AI data.
- when the AI data includes frame group AI data or frame AI data together with video segment AI data, one or more pieces of upscale DNN information may be obtained from a plurality of default DNN setting information according to the video segment AI data, and the upscale DNN information applied to the frame group or frame may be selected from the one or more pieces of upscale DNN information.
- the AI data may include video AI data, video segment AI data, frame group AI data, and frame AI data.
- when the AI data includes all of the video AI data, the video segment AI data, the frame group AI data, and the frame AI data, one or more pieces of upscale DNN information applicable to the video may be limited from a plurality of default DNN setting information according to the video AI data.
- one or more upscale DNN information applicable to the video segment may be selected from one or more upscale DNN information applicable to the video.
- one or more upscale DNN information applicable to the frame group may be selected from one or more upscale DNN information applicable to the video segment.
- upscale DNN information applied to a frame may be selected from one or more upscale DNN information applicable to the frame group.
- some of the video AI data, video segment AI data, frame group AI data, and frame AI data may be excluded from the hierarchical AI data structure described above.
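- The layer-by-layer narrowing described above can be summarized in a short sketch. Representing each AI data layer as a set of applicable DNN identifiers is an assumption for illustration; omitted layers simply pass the candidate set through.

```python
def resolve_upscale_dnn(default_dnns: list, video_ai: set,
                        segment_ai: set | None = None,
                        group_ai: set | None = None,
                        frame_dnn: str | None = None):
    """Narrow the default DNN settings layer by layer, then pick one.

    Each present layer restricts the candidates of the layer above it;
    frame AI data, when present, selects exactly one candidate.
    """
    candidates = [d for d in default_dnns if d in video_ai]
    for layer in (segment_ai, group_ai):
        if layer is not None:                 # a layer may be excluded
            candidates = [d for d in candidates if d in layer]
    if frame_dnn is not None:
        assert frame_dnn in candidates, "frame AI data must pick a candidate"
        return frame_dnn
    return candidates

# Video allows {A, B}, the segment narrows to {A}, and no frame-level
# selection is needed.
assert resolve_upscale_dnn(["A", "B", "C"], {"A", "B"}, {"A"}) == ["A"]
```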
- the AI data may include AI target data indicating whether AI upscale is applied, and, when AI upscale is applied, AI auxiliary data on the upscale DNN information used for the AI upscale.
- the video AI data may include video AI target data indicating whether AI upscale is applied to the video data, and video AI auxiliary data regarding one or more pieces of upscale DNN information applicable to AI upscale of frames included in the video data. When the video AI target data indicates that AI upscale is applied to frames included in the video data, one or more pieces of upscale DNN information are obtained according to the video AI auxiliary data.
- the video segment AI data may include video segment AI target data indicating whether AI upscale is applied to the video segment, and video segment AI auxiliary data regarding one or more pieces of upscale DNN information applicable to AI upscale of frames included in the video segment. When the video segment AI target data indicates that AI upscale is applied to frames included in the video segment, the one or more pieces of upscale DNN information of the video segment are obtained according to the video segment AI auxiliary data.
- the frame group AI data may include frame group AI target data indicating whether AI upscale is applied to the frame group, and frame group AI auxiliary data regarding one or more pieces of upscale DNN information applicable to AI upscale of frames included in the frame group. When the frame group AI target data indicates that AI upscale is applied to frames included in the frame group, the one or more pieces of upscale DNN information of the frame group are obtained according to the frame group AI auxiliary data.
- the frame AI data may include frame AI target data indicating whether AI upscale is applied to the frame, and frame AI auxiliary data on the upscale DNN information used for AI upscale of the frame. When the frame AI target data indicates that AI upscale is applied to the current frame, the upscale DNN information of the current frame is obtained according to the frame AI auxiliary data.
- the video segment AI data may include video segment AI auxiliary data dependency information indicating whether video segment AI auxiliary data is identical between successive previous video segments and the current video segment.
- when the video segment AI auxiliary data dependency information indicates that the video segment AI auxiliary data is the same between the successive previous video segment and the current video segment, the video segment AI auxiliary data of the current video segment is determined to be the same as the video segment AI auxiliary data of the previous video segment.
- the frame group AI data may include frame group AI auxiliary data dependency information indicating whether the frame group AI auxiliary data is the same between the successive previous frame group and the current frame group. When the frame group AI auxiliary data dependency information indicates that the frame group AI auxiliary data is the same between the previous frame group and the current frame group, the frame group AI auxiliary data of the current frame group is determined to be the same as the frame group AI auxiliary data of the previous frame group.
- the frame AI data may include frame AI auxiliary data dependency information indicating whether the frame AI auxiliary data is the same between the successive previous frame and the current frame.
- when the frame AI auxiliary data dependency information indicates that the frame AI auxiliary data is the same between the successive previous frame and the current frame, the frame AI auxiliary data of the current frame is determined to be the same as the frame AI auxiliary data of the previous frame.
- according to the embodiments described above, video AI data applied to the entire image data, video segment AI data corresponding to a video segment, frame group AI data applied to a frame group, and frame AI data applied to a frame can be determined.
- the AI data may include channel information indicating a color channel to which AI upscale is applied.
- upscale DNN information may be obtained for the color channel indicated by the channel information.
- the AI data may include at least one of target bitrate information indicating a bit rate of a low resolution image according to image data and resolution information related to a resolution of an AI upscaled high resolution image.
- two or more upscale DNN information for a video, a video segment, or a group of frames may be determined according to at least one of target bitrate information and resolution information.
- upscale DNN information of the current frame may be determined from the two or more upscale DNN information.
- In step 2260, a high-resolution image corresponding to the low-resolution image is generated by AI upscaling the low-resolution image according to the upscale DNN information of the current frame.
- the high-resolution image generation according to step 2260 may be performed by the AI upscaler 234.
- the accuracy of the generated high resolution image may be improved according to an AI upscale enhancement process according to encoding parameters.
- artifacts of the generated high resolution image may be removed according to the artifact removal process.
- the function for AI upscale described in FIGS. 12 to 21 can be applied to the video decoding method of FIG. 22.
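- Steps 2210 to 2260 can be summarized as the following pipeline sketch. The method names on the parser, decoder, and upscaler objects are assumptions standing in for the parsing unit 214, the first decoder 232, and the AI upscaler 234.

```python
def decode_video_file(video_file, parser, first_decoder, ai_upscaler):
    """End-to-end sketch of the decoding method of FIG. 22."""
    ai_data = parser.extract_metadata_box(video_file)        # step 2220
    image_data = parser.extract_media_data_box(video_file)   # step 2220
    high_res_frames = []
    for low_res_frame in first_decoder.decode(image_data):   # step 2230
        dnn_info = ai_upscaler.upscale_dnn_info(ai_data, low_res_frame)  # step 2240
        high_res_frames.append(
            ai_upscaler.upscale(low_res_frame, dnn_info))    # step 2260
    return high_res_frames
```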
- FIG. 23 is a flowchart of an embodiment of an image encoding method according to an AI encoder.
- In step 2310, downscale DNN information for AI downscaling the high-resolution image of the current frame into a low-resolution image is determined.
- the determination of the downscale DNN information according to step 2310 may be performed by the AI downscaler 612.
- two or more downscale DNN information may be determined for a video, a video segment, or a group of frames. And, according to the frame AI data, downscale DNN information of the current frame may be determined from the two or more downscale DNN information.
- In step 2320, the high-resolution image of the current frame is AI downscaled using the downscale DNN information, thereby generating a low-resolution image of the current frame.
- the low-resolution image generation according to step 2320 may be performed by the AI downscaler 612.
- In step 2330, AI data to be used for AI upscale of the low-resolution image of the current frame is generated.
- the AI data generation according to step 2330 may be performed by the AI downscaler 612.
- AI data used for AI upscale may be generated by referring to downscale DNN information generated in step 2310.
- the AI data may be composed of video AI data, frame group AI data, and frame AI data according to the hierarchical structure of AI data in FIG. 12. Additionally, the AI data may include video segment AI data of FIG. 16. AI data according to the hierarchical structure may indicate upscale DNN information applied to a corresponding layer.
- the AI data may include video AI data.
- the video AI data may represent upscale DNN information applied to all frames of the image data from a plurality of default DNN configuration information.
- the AI data may include frame group AI data.
- the frame group AI data may indicate upscale DNN information applied to all frames of the frame group from a plurality of default DNN configuration information.
- the AI data may include frame AI data.
- the frame AI data may represent upscale DNN information applied to the frame from a plurality of default DNN configuration information.
- the AI data may include frame group AI data together with video AI data.
- the video AI data may represent one or more upscale DNN information from a plurality of default DNN configuration information. And, according to the frame group AI data, from the one or more upscale DNN information, upscale DNN information applied to frames of the frame group may be indicated.
- the AI data may include frame AI data together with video AI data.
- the video AI data may represent one or more upscale DNN information from a plurality of default DNN configuration information.
- the frame AI data may indicate upscale DNN information applied to a frame from the one or more upscale DNN information.
- the AI data may include frame AI data together with the frame group AI data.
- the frame group AI data may represent one or more upscale DNN information from a plurality of default DNN configuration information.
- the frame AI data may indicate upscale DNN information applied to a frame from the one or more upscale DNN information.
- the AI data may include video segment AI data.
- the video segment AI data may indicate upscale DNN information applied to all frames of the video segment from a plurality of default DNN configuration information.
- the AI data may include video AI data together with video segment AI data.
- the video AI data may indicate one or more upscale DNN information from a plurality of default DNN configuration information.
- the video segment AI data may indicate upscale DNN information applied to all frames of the video segment from the one or more upscale DNN information.
- the AI data may include frame group AI data or frame AI data together with video segment AI data.
- the video segment AI data may represent one or more upscale DNN information from a plurality of default DNN configuration information.
- the frame group AI data or the frame AI data may indicate upscale DNN information applied to the frame group or frame from the one or more upscale DNN information.
- the AI data may include video AI data, video segment AI data, frame group AI data, and frame AI data.
- the video AI data may indicate one or more pieces of upscale DNN information that can be applied to the video from a plurality of default DNN setting information.
- the video segment AI data may indicate one or more upscale DNN information applicable to the video segment from one or more upscale DNN information applicable to the video.
- the frame group AI data may indicate one or more upscale DNN information applicable to the frame group from one or more upscale DNN information applicable to the video segment.
- the frame AI data may indicate upscale DNN information applied to a frame from one or more upscale DNN information applicable to the frame group.
- one of video AI data, video segment AI data, frame group AI data, and frame AI data may be excluded from the hierarchical AI data structure described above.
- the AI data may include AI target data indicating whether AI upscale is applied.
- the AI data may include AI auxiliary data on the upscale DNN information corresponding to the downscale DNN information used for the AI downscale.
- the video AI data may include video AI target data indicating whether AI upscale is applied to the video data, and video AI auxiliary data regarding one or more pieces of upscale DNN information applicable to AI upscale of frames included in the video data.
- Video AI target data may be determined according to whether AI upscale is applied to the image data.
- video AI auxiliary data may be determined according to one or more upscale DNN information of frames included in the image data.
- the video segment AI data may include video segment AI target data indicating whether AI upscale is applied to the video segment, and video segment AI auxiliary data regarding one or more pieces of upscale DNN information applicable to AI upscale of frames included in the video segment.
- the video segment AI target data may be determined according to whether AI upscale is applied to the video segment.
- the video segment AI auxiliary data may be determined according to one or more pieces of upscale DNN information of frames included in the video segment.
- the frame group AI data may include frame group AI target data indicating whether AI upscale is applied to the frame group, and frame group AI auxiliary data regarding one or more pieces of upscale DNN information applicable to AI upscale of frames included in the frame group.
- Frame group AI target data may be determined according to whether AI upscale is applied to the frame group.
- the frame group AI auxiliary data may be determined according to one or more upscale DNN information that can be applied to the AI upscale of the frames included in the frame group.
- the frame AI data may include frame AI target data indicating whether AI upscale is applied to the frame, and frame AI auxiliary data on upscale DNN information used for AI upscale of the frame.
- Frame AI target data may be determined according to whether AI upscale is applied to the current frame. Also, frame AI auxiliary data may be determined according to upscale DNN information used for AI upscale of the current frame.
- the video segment AI data may include video segment AI auxiliary data dependency information indicating whether video segment AI auxiliary data is identical between successive previous video segments and the current video segment.
- the video segment AI auxiliary data dependency information is determined according to whether the video segment AI auxiliary data is the same between the successive previous video segment and the current video segment.
- the frame group AI data may include frame group AI auxiliary data dependency information indicating whether the frame group AI auxiliary data is the same between the successive previous frame group and the current frame group.
- Frame group AI auxiliary data dependency information is determined according to whether the frame group AI auxiliary data is the same between the successive previous frame group and the current frame group.
- the frame AI data may include frame AI auxiliary data dependency information indicating whether the frame AI auxiliary data is the same between the successive previous frame and the current frame.
- Frame AI auxiliary data dependency information is determined according to whether the frame AI auxiliary data is the same between the successive previous frame and the current frame.
- the AI encoded data may include synchronization data for synchronization of AI data and image data.
- the synchronization data may include data related to synchronization of video data and video AI data, synchronization of video segment and video segment AI data, synchronization of frame group and frame group AI data, and synchronization of current frame and frame AI data.
- the AI data may include channel information indicating a color channel applied to AI upscale of the current frame.
- upscale DNN information for the color channel indicated by the channel information may be determined.
- the AI data may include at least one of target bitrate information indicating a bit rate of a low resolution image and resolution information indicating a resolution of a high resolution image.
- In step 2340, image data is obtained by encoding the low-resolution image of the current frame.
- Image data acquisition according to step 2340 may be performed by the first encoding unit 614.
- In step 2350, AI encoded data including the image data and the AI data is generated.
- AI encoded data generation according to step 2350 may be performed by the data processing unit 632.
- according to an embodiment, the image data and the AI data may be composed of separate files rather than being included in one single file.
- In step 2360, a video file including a media data box into which the image data of the generated AI encoded data is inserted and a metadata box into which the AI data of the AI encoded data is inserted is output.
- the output of the AI encoded data according to step 2360 may be performed by the communication unit 634.
- the function of AI downscale corresponding to the AI upscale described in FIGS. 12 to 21 may be applied to the video encoding method of FIG. 23.
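- The encoding side of FIG. 23 mirrors the decoding sketch given after FIG. 22. Again, the method names on the downscaler, encoder, and data processing objects are assumptions standing in for the AI downscaler 612, the first encoder 614, and the data processing unit 632 with the communication unit 634.

```python
def encode_video(high_res_frames, ai_downscaler, first_encoder, data_processor):
    """End-to-end sketch of the encoding method of FIG. 23."""
    low_res_frames, ai_data = [], []
    for frame in high_res_frames:
        dnn_info = ai_downscaler.select_downscale_dnn(frame)             # step 2310
        low_res_frames.append(ai_downscaler.downscale(frame, dnn_info))  # step 2320
        ai_data.append(ai_downscaler.make_upscale_ai_data(dnn_info))     # step 2330
    image_data = first_encoder.encode(low_res_frames)                    # step 2340
    return data_processor.make_video_file(image_data, ai_data)           # steps 2350-2360
```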
- FIG. 24 is a block diagram showing the configuration of an image decoding apparatus according to an embodiment.
- the image decoding apparatus 2400 may include a communication unit 2410, a processor 2420, and a memory 2430.
- the communication unit 2410 may receive AI-encoded data. Alternatively, the communication unit 2410 may receive AI data and image data from an external device (eg, a server) under the control of the processor 2420.
- the processor 2420 may control the overall operation of the image decoding apparatus 2400.
- the processor 2420 may execute one or more programs stored in the memory 2430.
- the processor 2420 may perform functions of the first decoder 232 and the AI upscaler 234.
- the processor 2420 may be composed of one or more general purpose processors.
- the processor 2420 may include a graphics processor 2422 and an AI-only processor 2424.
- the processor 2420 may be implemented in the form of a system on chip (SoC) incorporating at least one of the graphics processor 2422 and the AI-only processor 2424.
- the processor 2420 controls the overall operation of the image decoding apparatus 2400 and the signal flow between the internal components of the image decoding apparatus 2400, and processes data.
- the graphics processor 2422 is a processor designed to specialize in decoding and post-processing images. Accordingly, the graphic processor 2422 can perform processing on the image data received by the image decoding apparatus 2400 and can efficiently perform the low-resolution image restoration function of the first decoding unit 232.
- the AI-only processor 2424 is a processor designed to specialize in AI computation. Therefore, the AI-only processor 2424 can efficiently perform the AI upscale function of the low-resolution image of the AI upscale unit 234.
- The image data and AI data input to the image decoding apparatus 2400 through the communication unit 2410 are processed by the processor 2420.
- The operation related to decoding of the image data is performed by the graphics processor 2422, so that a low-resolution image can be generated.
- The operation for AI upscale of the low-resolution image may be performed by the AI-dedicated processor 2424. Accordingly, a high-resolution image in which the low-resolution image is AI-upscaled may be generated by the AI-dedicated processor 2424 (a sketch of this two-stage pipeline follows).
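- This division of labor can be sketched as a two-stage pipeline in which decoding runs on the graphics processor and AI upscale runs on the AI-dedicated processor. The function bodies below are placeholders for whatever decoder and upscale DNN the apparatus provides; only the control flow is illustrated:

```python
def decode_on_graphics_processor(image_data: bytes):
    """Placeholder for the first decoder 232 running on the graphics processor."""
    raise NotImplementedError

def ai_upscale_on_dedicated_processor(low_res_image, upscale_dnn_info):
    """Placeholder for the AI upscaler 234 running on the AI-dedicated processor."""
    raise NotImplementedError

def decode_pipeline(image_data: bytes, upscale_dnn_info):
    # Stage 1: reconstruct the low-resolution image from the image data.
    low_res = decode_on_graphics_processor(image_data)
    # Stage 2: AI-upscale the low-resolution image into a high-resolution image.
    return ai_upscale_on_dedicated_processor(low_res, upscale_dnn_info)
```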
- Although the processor 2420 is depicted as including one graphics processor 2422, it may include two or more graphics processors 2422 according to embodiments. Likewise, although the processor 2420 is described as including one AI-dedicated processor 2424, it may include two or more AI-dedicated processors 2424 according to embodiments. The processor 2420 may also include one or more general-purpose processors, and additional operations required for AI upscale may be performed by the one or more general-purpose processors.
- The AI-dedicated processor 2424 may be implemented as a field-programmable gate array (FPGA).
- the memory 2430 may store various data, programs, or applications for driving and controlling the image decoding apparatus 2400.
- the program stored in the memory 2430 may include one or more instructions.
- a program (one or more instructions) or an application stored in the memory 2430 may be executed by the processor 2420.
- The memory 2430 may store data originating from the communication unit 2410 and the processor 2420. Also, the memory 2430 may provide the processor 2420 with the data the processor 2420 requires.
- The image decoding apparatus 2400 may perform at least one of the functions of the image decoding apparatus described with reference to FIG. 2 and the steps of the image decoding method described with reference to FIG. 22.
- FIG. 25 is a block diagram showing the configuration of an image encoding apparatus according to an embodiment.
- The image encoding apparatus 2500 may include a communication unit 2510, a processor 2520, and a memory 2530.
- The processor 2520 may control the overall operation of the image encoding apparatus 2500.
- the processor 2520 may execute one or more programs stored in the memory 2530.
- the processor 2520 may perform the functions of the AI downscaler 612 and the first encoder 614.
- the processor 2520 may be composed of one or more general purpose processors.
- the processor 2520 may include a graphics processor 2522 and an AI-dedicated processor 2524.
- The processor 2520 may be implemented in the form of a system on chip (SoC) incorporating at least one of the graphics processor 2522 and the AI-dedicated processor 2524.
- The processor 2520 controls the overall operation of the image encoding apparatus 2500 and the signal flow between its internal components, and processes data.
- The graphics processor 2522 is a processor specialized for encoding and post-processing images. Accordingly, the graphics processor 2522 can efficiently perform the low-resolution image encoding function of the first encoder 614.
- The AI-dedicated processor 2524 is a processor specialized for AI computation. Accordingly, the AI-dedicated processor 2524 can efficiently perform the AI downscale function of the AI downscaler 612 on the high-resolution image.
- The AI-dedicated processor 2524 may be implemented as a field-programmable gate array (FPGA).
- Within the processor 2520, AI downscale of the high-resolution image and encoding of the resulting low-resolution image are performed.
- The operation for AI downscale of the high-resolution image is performed by the AI-dedicated processor 2524, so that a low-resolution image can be generated, and the AI data required for AI upscale of the low-resolution image can also be generated by the AI-dedicated processor 2524.
- The operation for encoding the low-resolution image is performed by the graphics processor 2522, whereby image data can be generated (a sketch of this two-stage flow follows).
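- Mirroring the decoder-side pipeline, the sketch below runs AI downscale (and AI data generation) on the AI-dedicated processor and encoding on the graphics processor, returning the image data together with the AI data. Function names and the return structure are assumptions for illustration:

```python
def ai_downscale_on_dedicated_processor(high_res_image):
    """Placeholder for the AI downscaler 612: returns the low-resolution image
    and the AI data required for later AI upscale of that image."""
    raise NotImplementedError

def encode_on_graphics_processor(low_res_image) -> bytes:
    """Placeholder for the first encoder 614 running on the graphics processor."""
    raise NotImplementedError

def encode_pipeline(high_res_image):
    # Stage 1: AI downscale and AI data generation on the AI-dedicated processor.
    low_res, ai_data = ai_downscale_on_dedicated_processor(high_res_image)
    # Stage 2: encode the low-resolution image into image data.
    image_data = encode_on_graphics_processor(low_res)
    return image_data, ai_data
```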
- the communication unit 2510 may generate a single file of AI encoded data including AI data and image data.
- the communication unit 2510 may output a single file of AI-encoded data to the outside of the image encoding apparatus 2500 under the control of the processor 2520.
- Alternatively, the communication unit 2510 may generate a file including the AI data and a separate file including the image data under the control of the processor 2520.
- the communication unit 2510 may output a file including AI data and a file including image data to the outside of the image encoding apparatus 2500 under the control of the processor 2520.
- Although the processor 2520 is depicted as including one graphics processor 2522, it may include two or more graphics processors 2522 according to embodiments.
- Likewise, although the processor 2520 is described as including one AI-dedicated processor 2524, it may include two or more AI-dedicated processors 2524 according to embodiments.
- The processor 2520 may also include one or more general-purpose processors, and additional operations required for AI downscale may be performed by the one or more general-purpose processors.
- the memory 2530 may store various data, programs, or applications for driving and controlling the image encoding apparatus 2500.
- the program stored in the memory 2530 may include one or more instructions.
- the program (one or more instructions) or application stored in the memory 2530 may be executed by the processor 2520.
- the memory 2530 may store a high-resolution image, such as the original image 105.
- The memory 2530 may store data originating from the communication unit 2510 and the processor 2520. Also, the memory 2530 may provide the processor 2520 with the data the processor 2520 requires.
- The image encoding apparatus 2500 may perform at least one of the functions of the image encoding apparatus described with reference to FIG. 7 and the steps of the image encoding method described with reference to FIG. 23.
- the above-described embodiments of the present disclosure can be written as a program that can be executed on a computer, and the created program can be stored in a medium.
- The medium may continuously store the computer-executable program, or may temporarily store it for execution or download.
- The medium may be any of various recording means or storage means in the form of single hardware or a combination of several pieces of hardware; it is not limited to a medium directly connected to a computer system and may be distributed over a network.
- Examples of the medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and media configured to store program instructions, including ROM, RAM, and flash memory.
- Examples of other media include recording media or storage media managed by an application store that distributes applications, by a site that supplies or distributes various other software, or by a server.
- the model related to the DNN described above may be implemented as a software module.
- the DNN model may be stored in a computer-readable recording medium.
- the DNN model may be integrated in the form of a hardware chip to be part of the above-described image decoding apparatus 200 or image encoding apparatus 600.
- The DNN model may be manufactured in the form of a dedicated hardware chip for artificial intelligence, or may be manufactured as part of an existing general-purpose processor (e.g., a CPU or an application processor) or a graphics-only processor (e.g., a GPU).
- the DNN model may be provided in the form of downloadable software.
- The computer program product may include a product in the form of a software program (e.g., a downloadable application) that is electronically distributed through a manufacturer or an electronic market. For electronic distribution, at least a portion of the software program may be stored in a storage medium or temporarily generated.
- the storage medium may be a server of a manufacturer or an electronic market, or a storage medium of a relay server.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Disclosed is a computer-recordable recording medium having stored therein a video file containing AI encoding data. The AI encoding data contains: image data including encoding information of a low-resolution image generated by AI-downscaling a high-resolution image; and AI data for AI upscale of the low-resolution image reconstructed from the image data. The AI data contains AI target data indicating whether AI upscale is applied to one or more frames and, when AI upscale is applied to the one or more frames, AI auxiliary data relating to the upscale DNN information used for the AI upscale of the one or more frames from among a plurality of predefined default DNN setting information.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980068908.7A CN112889283A (zh) | 2018-10-19 | 2019-10-11 | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
EP19873762.9A EP3866466A1 (fr) | 2018-10-19 | 2019-10-11 | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
US16/743,613 US10819992B2 (en) | 2018-10-19 | 2020-01-15 | Methods and apparatuses for performing encoding and decoding on image |
US16/860,563 US10819993B2 (en) | 2018-10-19 | 2020-04-28 | Methods and apparatuses for performing encoding and decoding on image |
US17/080,827 US11190782B2 (en) | 2018-10-19 | 2020-10-26 | Methods and apparatuses for performing encoding and decoding on image |
US17/498,859 US11647210B2 (en) | 2018-10-19 | 2021-10-12 | Methods and apparatuses for performing encoding and decoding on image |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2018-0125406 | 2018-10-19 | | |
KR20180125406 | 2018-10-19 | | |
KR20190041111 | 2019-04-08 | | |
KR10-2019-0041111 | 2019-04-08 | | |
KR10-2019-0076569 | 2019-06-26 | | |
KR1020190076569A KR102525578B1 (ko) | 2018-10-19 | 2019-06-26 | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/743,613 Continuation US10819992B2 (en) | 2018-10-19 | 2020-01-15 | Methods and apparatuses for performing encoding and decoding on image |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020080751A1 (fr) | 2020-04-23 |
Family
ID=70283518
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/013344 WO2020080751A1 (fr) | Encoding method and apparatus therefor, and decoding method and apparatus therefor | 2018-10-19 | 2019-10-11 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020080751A1 (fr) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112153387A (zh) * | 2020-08-28 | 2020-12-29 | Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co., Ltd. | AI video decoding system |
WO2022124546A1 (fr) * | 2020-12-09 | 2022-06-16 | Samsung Electronics Co., Ltd. | Artificial intelligence encoding apparatus and operating method thereof, and artificial intelligence decoding apparatus and operating method thereof |
US12073595B2 (en) | 2020-12-09 | 2024-08-27 | Samsung Electronics Co., Ltd. | AI encoding apparatus and operation method of the same, and AI decoding apparatus and operation method of the same |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140177706A1 * | 2012-12-21 | 2014-06-26 | Samsung Electronics Co., Ltd | Method and system for providing super-resolution of quantized images and video |
US20170347061A1 * | 2015-02-19 | 2017-11-30 | Magic Pony Technology Limited | Machine Learning for Visual Processing |
KR101885855B1 (ko) * | 2017-03-30 | 2018-08-07 | Dankook University Industry-Academic Cooperation Foundation | Image signal transmission using a high-resolution estimation technique |
KR20180100976A (ko) * | 2017-03-03 | 2018-09-12 | Electronics and Telecommunications Research Institute | Method and apparatus for image encoding/decoding using deep-neural-network-based learning of blurred images |
KR20180108288A (ko) * | 2017-03-24 | 2018-10-04 | NCSOFT Corporation | Apparatus and method for image compression |
- 2019-10-11: WO application PCT/KR2019/013344 filed (WO2020080751A1), legal status unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140177706A1 * | 2012-12-21 | 2014-06-26 | Samsung Electronics Co., Ltd | Method and system for providing super-resolution of quantized images and video |
US20170347061A1 * | 2015-02-19 | 2017-11-30 | Magic Pony Technology Limited | Machine Learning for Visual Processing |
KR20180100976A (ko) * | 2017-03-03 | 2018-09-12 | Electronics and Telecommunications Research Institute | Method and apparatus for image encoding/decoding using deep-neural-network-based learning of blurred images |
KR20180108288A (ko) * | 2017-03-24 | 2018-10-04 | NCSOFT Corporation | Apparatus and method for image compression |
KR101885855B1 (ko) * | 2017-03-30 | 2018-08-07 | Dankook University Industry-Academic Cooperation Foundation | Image signal transmission using a high-resolution estimation technique |
Non-Patent Citations (1)
Title |
---|
See also references of EP3866466A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112153387A (zh) * | 2020-08-28 | 2020-12-29 | Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co., Ltd. | AI video decoding system |
WO2022124546A1 (fr) * | 2020-12-09 | 2022-06-16 | Samsung Electronics Co., Ltd. | Artificial intelligence encoding apparatus and operating method thereof, and artificial intelligence decoding apparatus and operating method thereof |
US12073595B2 (en) | 2020-12-09 | 2024-08-27 | Samsung Electronics Co., Ltd. | AI encoding apparatus and operation method of the same, and AI decoding apparatus and operation method of the same |
Similar Documents
Publication | Title |
---|---|
WO2021086016A2 | Apparatus and method for performing artificial intelligence (AI) encoding and AI decoding on an image |
WO2020246756A1 | Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on an image |
WO2020080765A1 | Apparatuses and methods for performing artificial intelligence encoding and artificial intelligence decoding on an image |
WO2020080827A1 | AI encoding apparatus and operating method thereof, and AI decoding apparatus and operating method thereof |
WO2020080873A1 | Method and apparatus for streaming data |
EP3868096A1 | Artificial intelligence encoding and artificial intelligence decoding methods and apparatuses using a deep neural network |
WO2020080698A1 | Method and device for evaluating subjective quality of a video |
WO2021033867A1 | Decoding apparatus and operating method thereof, and artificial intelligence (AI) upscaling apparatus and operating method thereof |
WO2020080665A1 | Methods and apparatuses for performing artificial intelligence encoding and artificial intelligence decoding on an image |
WO2021251611A1 | Apparatus and method for performing artificial intelligence encoding and decoding on an image by using a low-complexity neural network |
EP3811618A1 | Method and apparatus for streaming data |
WO2021177652A1 | Image encoding/decoding method and device for performing feature quantization/dequantization, and recording medium storing a bitstream |
WO2020080782A1 | Artificial intelligence (AI) encoding device and operating method thereof, and AI decoding device and operating method thereof |
WO2015133712A1 | Image decoding method and device therefor, and image encoding method and device therefor |
WO2021086032A1 | Image encoding method and apparatus, and image decoding method and apparatus |
WO2020080709A1 | Artificial intelligence encoding and artificial intelligence decoding methods and apparatuses using a deep neural network |
WO2021172834A1 | Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on an image by using preprocessing |
WO2020080751A1 | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
WO2021242066A1 | Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on an image |
WO2016195455A1 | Method and device for processing a video signal by using a graph-based transform |
WO2021091178A1 | Artificial intelligence (AI) encoding apparatus and operating method thereof, and AI decoding device and operating method thereof |
EP3868097A1 | Artificial intelligence (AI) encoding device and operating method thereof, and AI decoding device and operating method thereof |
EP3844962A1 | Methods and apparatuses for performing artificial intelligence encoding and artificial intelligence decoding on an image |
WO2021086022A1 | Image encoding/decoding method and device using adaptive color transform, and method for transmitting a bitstream |
WO2021251659A1 | Method and apparatus for performing artificial intelligence encoding and artificial intelligence decoding |
Legal Events
Code | Title | Description |
---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 19873762; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 2019873762; Country of ref document: EP; Effective date: 2021-05-19 |