EP3811619A1 - AI encoding apparatus and operation method thereof, and AI decoding apparatus and operation method thereof

AI encoding apparatus and operation method thereof, and AI decoding apparatus and operation method thereof

Info

Publication number
EP3811619A1
Authority
EP
European Patent Office
Prior art keywords
dnn
image
values
layer
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19874036.7A
Other languages
German (de)
English (en)
Other versions
EP3811619A4 (fr)
Inventor
Quockhanh DINH
Minseok Choi
Kwangpyo CHOI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority claimed from PCT/KR2019/013595 external-priority patent/WO2020080827A1/fr
Publication of EP3811619A1 publication Critical patent/EP3811619A1/fr
Publication of EP3811619A4 publication Critical patent/EP3811619A4/fr
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4046 Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/60 Memory management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • G06T 9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/184 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/423 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • H04N 19/426 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements using memory downsizing methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/59 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • The disclosure relates to an artificial intelligence (AI) encoding apparatus including a deep neural network (DNN) for AI-downscaling an image, an AI decoding apparatus including a DNN for AI-upscaling an image, and operation methods of both apparatuses. More particularly, the disclosure relates to reducing the amount of memory and computation required to perform a convolution operation in the plurality of convolution layers included in each DNN.
  • AI: artificial intelligence.
  • Provided are an AI decoding apparatus capable of minimizing a transformation error when intermediate result values generated in a second DNN included in the AI decoding apparatus are expressed with low precision, and an operation method of the AI decoding apparatus.
  • An AI encoding apparatus may reduce the amount of necessary memory by expressing, with low precision, intermediate result values generated during down-scaling of an image by using a first DNN, and may improve the performance of the first DNN by minimizing a transformation error during transformation of the intermediate result values into low-precision values.
  • An AI decoding apparatus may reduce the amount of necessary memory by expressing, with low precision, intermediate result values generated during up-scaling of an image by using a second DNN, and may improve the performance of the second DNN by minimizing a transformation error during transformation of the intermediate result values into low-precision values.
  • a first image 115 is obtained by performing AI down-scaling 110 on an original image 105 having high resolution. Then, first encoding 120 and first decoding 130 are performed on the first image 115 having relatively low resolution, and thus a bitrate may be largely reduced compared to when the first encoding 120 and the first decoding 130 are performed on the original image 105.
  • the AI down-scaling 110 is performed on the original image 105 to obtain the first image 115 of certain resolution or certain quality.
  • the AI down-scaling 110 is performed based on AI, and AI for the AI down-scaling 110 is trained jointly with AI for the AI up-scaling 140 of the second image 135.
  • the AI for the AI down-scaling 110 and the AI for the AI up-scaling 140 may be embodied as a DNN.
  • an AI encoding apparatus may provide target information used during joint training of the first DNN and the second DNN to an AI decoding apparatus, and the AI decoding apparatus may perform the AI up-scaling 140 on the second image 135 to target resolution based on the provided target information.
  • Such first encoding 120 may be performed via one of image compression methods using frequency transformation, such as MPEG-2, H.264 Advanced Video Coding (AVC), MPEG-4, High Efficiency Video Coding (HEVC), VC-1, VP8, VP9, and AOMedia Video 1 (AV1).
  • the AI data may be transmitted together with the image data in a form of a bitstream.
  • the AI data may be transmitted separately from the image data, in a form of a frame or a packet.
  • the AI data and the image data obtained as a result of the AI encoding may be transmitted through the same network or through different networks.
  • the AI decoding apparatus 200 may include a receiver 210 and an AI decoder 230.
  • the receiver 210 may include a communication interface 212, a parser 214, and an output interface 216.
  • the AI decoder 230 may include a first decoder 232 and an AI up-scaler 234.
  • the receiver 210 receives and parses AI encoding data obtained as a result of AI encoding, and distinguishably outputs image data and AI data to the AI decoder 230.
  • the receiver 210 and the AI decoder 230 may be configured by a plurality of processors.
  • the receiver 210 and the AI decoder 230 may be implemented through a combination of dedicated processors or through a combination of software and general-purpose processors such as AP, CPU or GPU.
  • the AI up-scaler 234 and the first decoder 232 may be implemented by different processors.
  • As the filter kernel 430 moves along the stride to the last pixel of the second image 135, the convolution operation is performed between the pixel values in the second image 135 and the parameters of the filter kernel 430, and thus the feature map 450 having a certain size may be generated.
  • Convolution layers included in the first DNN and the second DNN may perform processes according to the convolution operation process described with reference to FIG. 4; however, the process described with reference to FIG. 4 is only an example, and the disclosure is not limited thereto (a minimal convolution sketch follows this list).
  • the first activation layer 320 may assign a non-linear feature to each feature map.
  • the first activation layer 320 may include a sigmoid function, a Tanh function, a rectified linear unit (ReLU) function, or the like, but is not limited thereto.
  • the AI up-scaler 234 may perform AI up-scaling on some of the frames t0 through tn, for example, the frames t0 through ta, by using 'A' DNN setting information obtained from AI data, and perform AI up-scaling on the frames ta+1 through tb by using 'B' DNN setting information obtained from the AI data. Also, the AI up-scaler 234 may perform AI up-scaling on the frames tb+1 through tn by using 'C' DNN setting information obtained from the AI data. In other words, the AI up-scaler 234 may independently obtain DNN setting information for each group including a number of frames among the plurality of frames, and perform AI up-scaling on frames included in each group by using the independently obtained DNN setting information.
  • the AI data may include DNN setting information settable in a second DNN.
  • the AI down-scaler 612 may determine the down-scaling target based on the compression ratio, the compression quality, or the like, which is pre-set or input from a user.
  • the first activation layer 720 determines whether to transmit sample values of the feature maps output from the first convolution layer 710 to a second convolution layer 730. For example, some of the sample values of the feature maps are activated by the first activation layer 720 and transmitted to the second convolution layer 730, and some of the sample values are deactivated by the first activation layer 720 and not transmitted to the second convolution layer 730. Information represented by the feature maps output from the first convolution layer 710 is emphasized by the first activation layer 720.
  • An output 725 of the first activation layer 720 is input to a second convolution layer 730.
  • the second convolution layer 730 performs a convolution process on input data by using 32 filter kernels having a size of 5x5.
  • 32 feature maps output as a result of the convolution process are input to a second activation layer 740, and the second activation layer 740 may assign a non-linear feature to the 32 feature maps.
  • The training of the first DNN 700 and the second DNN 300 described with reference to FIG. 9 may be performed by the training apparatus 1000.
  • the training apparatus 1000 includes the first DNN 700 and the second DNN 300.
  • the training apparatus 1000 may be, for example, the AI encoding apparatus 600 or a separate server.
  • the DNN setting information of the second DNN 300 obtained as the training result is stored in the AI decoding apparatus 200.
  • the training apparatus 1000 inputs the original training image 801 into the first DNN 700, in operation S850.
  • the original training image 801 may include a still image or at least one frame included in a moving image.
  • For convenience, the plurality of layers will now be described as including a first layer 910 and a second layer 920, where the second layer 920 is the layer next to the first layer 910.
  • the DNN 900 may include a low-precision transformation unit 950 between the first layer 910 and the second layer 920.
  • Normalization may be performed in which different scale factors are applied to the first result values 1010 and the second result values 1020, which have different distributions, such that the scaled first result values 1010 and second result values 1020 have the same or similar distributions (see the low-precision transformation sketch after this list).
  • Slope a and slope b may have different values (non-linearity), and slope a may be less than 1; however, the disclosure is not limited thereto.
  • Final loss information for training the first DNN is also newly determined, and the first parameters and the scale factors of the first DNN are updated in a direction that minimizes the newly determined final loss information for training the first DNN (see the joint-training sketch after this list).
  • the first parameters and the scale factors of the first DNN and the second parameters and the scale factors of the second DNN are updated in connection with each other, and accordingly the scale factors of a training-completed first DNN and those of a training-completed second DNN have associated values.
  • DNN setting information of the first DNN (for example, the first parameters and the number of filter kernels included in the first DNN) and the scale factors of the first DNN are determined after completion of training.
  • DNN setting information of the second DNN (for example, the second parameters and the number of filter kernels included in the second DNN) and the scale factors of the second DNN are likewise determined after completion of training.
  • When DNN setting information of the first DNN is determined from among a plurality of pieces of DNN setting information of the first DNN, the scale factors of the first DNN corresponding to the determined DNN setting information may also be determined, and the DNN setting information and scale factors of the second DNN corresponding to the determined DNN setting information of the first DNN may be determined.
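
To make the convolution operation described above concrete, here is a minimal NumPy sketch of a single filter kernel sliding over a single-channel image with a given stride. The function name conv2d_single_kernel, the 5x5 input, and the 3x3 averaging kernel are illustrative assumptions, not parameters taken from the disclosure.

```python
import numpy as np

def conv2d_single_kernel(image, kernel, stride=1):
    """Slide one filter kernel over a single-channel image (no padding) and
    return the feature map, multiplying pixel values by the kernel parameters
    and summing the products at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    feature_map = np.zeros((oh, ow), dtype=np.float32)
    for y in range(oh):
        for x in range(ow):
            patch = image[y * stride:y * stride + kh,
                          x * stride:x * stride + kw]
            feature_map[y, x] = np.sum(patch * kernel)
    return feature_map

# Toy input standing in for the "second image" and a 3x3 averaging kernel.
second_image = np.arange(25, dtype=np.float32).reshape(5, 5)
filter_kernel = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)
print(conv2d_single_kernel(second_image, filter_kernel))  # 3x3 feature map
```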
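
The low-precision transformation unit described above can be sketched as follows, assuming one plausible realization: per-kernel scale factors normalize two groups of result values with different distributions, and the scaled values are then rounded and clipped to a signed 8-bit integer range. The function name low_precision_transform and the concrete scale-factor values are hypothetical; in the disclosure the scale factors are determined jointly with the DNN parameters during training.

```python
import numpy as np

def low_precision_transform(first_result_values, second_result_values,
                            scale_a, scale_b):
    """Apply different scale factors to two groups of intermediate result
    values so that their scaled distributions become similar, then map the
    scaled values to integers in a predefined range (signed 8-bit here)."""
    scaled_a = first_result_values * scale_a   # normalization of group A
    scaled_b = second_result_values * scale_b  # normalization of group B
    quantized_a = np.clip(np.round(scaled_a), -128, 127).astype(np.int8)
    quantized_b = np.clip(np.round(scaled_b), -128, 127).astype(np.int8)
    return quantized_a, quantized_b

# Feature maps from two filter kernels of the same layer often have very
# different value ranges; separate scale factors map both onto one integer grid.
rng = np.random.default_rng(0)
first_result_values = rng.normal(0.0, 1.5, size=(4, 4))
second_result_values = rng.normal(0.0, 120.0, size=(4, 4))
qa, qb = low_precision_transform(first_result_values, second_result_values,
                                 scale_a=127.0 / 4.5, scale_b=127.0 / 360.0)
print(qa)
print(qb)
```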
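
The joint training described above, in which the first DNN, the second DNN, and their scale factors are updated so that shared final loss information is minimized, can be illustrated with the toy PyTorch loop below. The two small nn.Sequential models, the learning rate, and the single MSE loss term are placeholders for illustration only; they are not the networks, scale-factor parameters, or loss terms defined in the disclosure.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the first (down-scaling) DNN and the second (up-scaling) DNN.
first_dnn = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(3, 3, kernel_size=3, padding=1))
second_dnn = nn.Sequential(
    nn.Conv2d(3, 3, kernel_size=3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode='bilinear'),
    nn.Conv2d(3, 3, kernel_size=3, padding=1))

# Both networks are updated from the same loss, so their parameters stay associated.
optimizer = torch.optim.Adam(
    list(first_dnn.parameters()) + list(second_dnn.parameters()), lr=1e-4)

original_training_image = torch.rand(1, 3, 64, 64)  # placeholder training input
for step in range(10):
    first_training_image = first_dnn(original_training_image)  # AI down-scaling
    reconstructed = second_dnn(first_training_image)            # AI up-scaling
    # "Final loss information" reduced here to a plain reconstruction error; the
    # disclosure combines several loss terms and also updates the scale factors.
    final_loss = nn.functional.mse_loss(reconstructed, original_training_image)
    optimizer.zero_grad()
    final_loss.backward()
    optimizer.step()
print(float(final_loss))
```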

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Neurology (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

An artificial intelligence (AI) decoding apparatus includes a memory storing one or more instructions, and a processor configured to execute the one or more instructions to, when an image is input to a second deep neural network (DNN) including a plurality of layers: obtain first result values based on an operation between the image and a first filter kernel, and second result values based on an operation between the image and a second filter kernel, from a first layer including the first and second filter kernels among the plurality of layers; perform normalization by transforming the first result values into first values by using a first scale factor and by transforming the second result values into second values by using a second scale factor; and transform the first values and the second values into integer values within a predetermined range.
EP19874036.7A 2018-10-19 2019-10-16 AI encoding apparatus and operation method thereof, and AI decoding apparatus and operation method thereof Pending EP3811619A4 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR20180125406 2018-10-19
KR20180148905 2018-11-27
KR20190041100 2019-04-08
KR1020190078344A KR102312338B1 (ko) 2018-10-19 2019-06-28 AI encoding apparatus and operation method thereof, and AI decoding apparatus and operation method thereof
PCT/KR2019/013595 WO2020080827A1 (fr) 2018-10-19 2019-10-16 AI encoding apparatus and operation method thereof, and AI decoding apparatus and operation method thereof

Publications (2)

Publication Number Publication Date
EP3811619A1 true EP3811619A1 (fr) 2021-04-28
EP3811619A4 EP3811619A4 (fr) 2021-08-18

Family

ID=70466765

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19874036.7A 2018-10-19 2019-10-16 AI encoding apparatus and operation method thereof, and AI decoding apparatus and operation method thereof Pending EP3811619A4 (fr)

Country Status (3)

Country Link
EP (1) EP3811619A4 (fr)
KR (1) KR102312338B1 (fr)
CN (1) CN112715029A (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102554709B1 (ko) * 2020-10-06 2023-07-13 한국전자통신연구원 Apparatus for encoding and decoding a feature map and method using the same
KR20230172914A (ko) * 2022-06-16 2023-12-26 주식회사 유엑스팩토리 Method, system, and non-transitory computer-readable recording medium for generating a derivative image for image analysis

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345939B (zh) * 2017-01-25 2022-05-24 Microsoft Technology Licensing, LLC Neural network based on fixed-point operation

Also Published As

Publication number Publication date
KR102312338B1 (ko) 2021-10-14
EP3811619A4 (fr) 2021-08-18
KR20200044668A (ko) 2020-04-29
CN112715029A (zh) 2021-04-27

Similar Documents

Publication Publication Date Title
WO2020080827A1 AI encoding apparatus and operation method thereof, and AI decoding apparatus and operation method thereof
WO2021086016A2 Apparatus and method for performing artificial intelligence (AI) encoding and AI decoding on an image
WO2020080765A1 Apparatuses and methods for performing artificial intelligence encoding and artificial intelligence decoding on an image
EP3868096A1 Artificial intelligence encoding and artificial intelligence decoding methods and apparatuses using a deep neural network
WO2020080873A1 Method and apparatus for streaming data
WO2020246756A1 Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on an image
WO2020080698A1 Method and device for evaluating the subjective quality of a video
WO2021033867A1 Decoding apparatus and operation method thereof, and artificial intelligence (AI) up-scaling apparatus and operation method thereof
WO2020080665A1 Methods and apparatuses for performing artificial intelligence encoding and artificial intelligence decoding on an image
WO2020080782A1 Artificial intelligence (AI) encoding device and operation method thereof, and AI decoding device and operation method thereof
EP3811618A1 Method and apparatus for streaming data
WO2021251611A1 Apparatus and method for performing artificial intelligence encoding and decoding on an image by using a low-complexity neural network
WO2021091178A1 Artificial intelligence (AI) encoding apparatus and operation method thereof, and AI decoding device and operation method thereof
WO2012044105A2 Method and device for interpolating images by using a smoothing interpolation filter
WO2015133712A1 Image decoding method and device therefor, and image encoding method and device therefor
WO2018047995A1 Intra-prediction mode-based image processing method and apparatus therefor
EP3868097A1 Artificial intelligence (AI) encoding device and operation method thereof, and AI decoding device and operation method thereof
WO2020080709A1 Artificial intelligence encoding and artificial intelligence decoding methods and apparatuses using a deep neural network
WO2021172834A1 Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on an image by means of pre-processing
WO2016195455A1 Method and device for processing a video signal by using a graph-based transform
WO2015137785A1 Image encoding method for sample value compensation and apparatus therefor, and image decoding method for sample value compensation and apparatus therefor
WO2021086032A1 Image encoding method and apparatus, and image decoding method and apparatus
WO2021242066A1 Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on an image
WO2021162446A1 Method and apparatus for streaming a VR image
EP3844962A1 Methods and apparatuses for performing artificial intelligence encoding and artificial intelligence decoding on an image

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210119

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20210720

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 19/132 20140101AFI20210714BHEP

Ipc: H04N 19/85 20140101ALI20210714BHEP

Ipc: H04N 19/50 20140101ALI20210714BHEP

Ipc: H04N 19/184 20140101ALI20210714BHEP

Ipc: G06N 3/08 20060101ALI20210714BHEP

Ipc: G06T 1/20 20060101ALI20210714BHEP

Ipc: G06T 1/60 20060101ALI20210714BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)