WO2019132091A1 - Method for removing noise from an upscaled video using machine-learning-based dynamic parameters - Google Patents

Method for removing noise from an upscaled video using machine-learning-based dynamic parameters

Info

Publication number
WO2019132091A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame image
image
noise
current frame
cnn
Prior art date
Application number
PCT/KR2018/000086
Other languages
English (en)
Korean (ko)
Inventor
이진학
황부군
김성수
Original Assignee
주식회사 케이블티비브이오디
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 케이블티비브이오디
Publication of WO2019132091A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • H04N 5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • The present invention relates to a method and apparatus for removing noise from a video image, and more particularly to a noise removal method, and an apparatus therefor, that can be optimized for upscaled video by dynamically determining, on a machine learning basis, the weight parameters required by the time domain filter.
  • Since UHD content requires considerable investment compared with existing HD content, natively produced UHD content is not yet widespread. Terrestrial broadcasters, the IPTV and cable TV industries, and others therefore generate UHD video content from existing sources by applying image processing techniques such as upscaling.
  • Accordingly, noise removal in the upscaling process can be even more important than noise removal in ordinary images.
  • Conventionally, a temporal moving average filter, a kind of time domain filter that blends the current frame with pixel values of the previous frame according to a conversion equation, is mainly used.
  • A method of removing noise using this conversion equation is disclosed in Korean Unexamined Patent Application Publication No. 10-2007-0089485 (Aug. 31, 2007).
  • The parameter a is a weight parameter representing the temporal correlation coefficient of the image. As this coefficient increases, the influence of the previous frame grows and the noise removal effect becomes stronger; however, a new problem arises in that many residual images (ghosting) are generated around moving objects.
  • Because its method of determining the parameter is simplistic, this technique exhibits only a marginal effect, and its noise removal efficiency drops sharply for high-quality images such as UHD.
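For illustration, the conventional recursive temporal moving-average filter described above can be sketched as follows. The exact conversion equation appears only as an image in the original publication, so the recursive form and all names here are assumptions.

```python
def temporal_filter(prev_out, curr, a):
    """One step of a recursive temporal moving-average filter.

    a is the weight parameter (temporal correlation coefficient):
    the larger a is, the stronger the influence of the previous
    frame and the stronger the noise suppression, at the cost of
    residual images (ghosting) around moving objects.
    """
    return [a * p + (1.0 - a) * c for p, c in zip(prev_out, curr)]

# A static pixel whose true value is 10.0, observed with alternating noise:
frames = [[10.0], [14.0], [6.0], [14.0], [6.0]]
out = frames[0]
for f in frames[1:]:
    out = temporal_filter(out, f, a=0.8)
# The filtered value stays near 10.0 while the raw frames swing by +/-4.
```

With a fixed a, the same smoothing-versus-ghosting trade-off applies to every block of every frame; the point of the invention is to choose the parameter per block by machine learning instead.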
  • Patent Document 1 Korean Laid-Open Patent Publication No. 10-2009-0039830 (Apr. 22, 2009)
  • Patent Document 2 Korean Published Patent Application No. 10-2007-0089485 (Aug. 31, 2007)
  • Patent Document 3 Korean Patent Registration No. 10-1558532 (Oct. 20, 2015)
  • The present invention has been made to solve the above problems. Its object is to provide a method for removing noise from video upscaled from a high-definition (HD) image to an ultra-high-definition (UHD) image by dynamically determining, based on a machine learning technique, the weight parameter of the time domain filter so that it is optimized for each frame image, and to provide a recording medium on which the method is implemented.
  • A method of removing noise from an upscaled video using machine-learning-based dynamic parameters comprises: an input step of inputting an object image from which noise is to be removed; an object selection step of selecting, from the frame images constituting the object image, a current frame image to be denoised, P (P being a natural number of 1 or more) preceding frame images that precede the current frame image, and Q (Q being a natural number of 1 or more) following frame images that follow it; a matching step of performing motion estimation based on a common object and, according to its result, generating compensated frame images in which the P preceding and Q following frame images are motion-matched to the current frame image; a parameter calculation step of calculating a weight parameter (a_i) for each frame using the compensated frame images and the current frame image; and an arithmetic processing step of removing the noise of the current frame image according to the following equation.
  • In the equation, the symbols denote, respectively, the current frame image from which noise has been removed, the current frame image including noise, and the compensated frame image.
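The equation referenced by the arithmetic processing step appears only as an image in the source. Based on its legend, it blends the noisy current frame with the motion-compensated neighbour frames, which can be sketched as follows; the blending form and all names are assumptions, not the patent's own formula.

```python
def denoise_frame(current, compensated, a_i):
    """Blend the noisy current frame with the mean of its
    motion-compensated neighbour frames using the per-frame
    weight parameter a_i (assumed form of the patent's equation)."""
    k = len(compensated)
    mean_comp = [sum(pixels) / k for pixels in zip(*compensated)]
    return [(1.0 - a_i) * c + a_i * m for c, m in zip(current, mean_comp)]

# One-pixel frames: current = 12.0, three compensated neighbours.
denoised = denoise_frame([12.0], [[10.0], [8.0], [12.0]], a_i=0.5)
# mean of compensated frames is 10.0, so the result is 0.5*12 + 0.5*10 = 11.0
```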
  • The present invention may further include a block division step of dividing each of the current frame image and the compensated frame images into units of a first reference block.
  • In that case, the current frame image and the compensated frame images, each divided into first reference block units, are used to calculate the weight parameter (a_i) for each frame,
  • and the arithmetic processing step removes the noise of the target area forming part of a first reference block of the current frame image.
  • The parameter calculation step of the present invention may include: a first CNN processing step, based on a convolutional neural network (CNN) trained on pairs of noisy and noise-free images, of calculating, for the current frame image and the compensated frame images each divided into first reference block units, a feature value block smaller than the first reference block;
  • a subsequent CNN processing step of hierarchically calculating feature value blocks from the block produced in the first CNN processing step until a final 1x1 feature value block is output; and a result calculation step of taking the value of the final 1x1 feature value block as the weight parameter (a_i) for the frame.
  • The present invention may further comprise an upscaling step of upscaling the noise-removed image output from the arithmetic processing step and outputting the upscaled image.
  • The present invention may further include a post-processing step of removing residual noise from the upscaled image. The post-processing step includes: a second input step of inputting the upscaled image; a second object selection step of selecting, from the frame images constituting the upscaled image, a second current frame image to be denoised, P (P being a natural number of 1 or more) second preceding frame images that precede the second current frame image, and Q (Q being a natural number of 1 or more) second following frame images that follow it; a second matching step of performing motion estimation based on the common object and, according to its result, generating second compensated frame images in which the P second preceding and Q second following frame images are motion-matched to the second current frame image; a second parameter calculation step of calculating a second weight parameter for each frame using the second compensated frame images and the second current frame image; and a second arithmetic processing step of removing the noise of the second current frame image according to the following equation.
  • In the equation, the symbols denote, respectively, the second current frame image from which noise has been removed, the second current frame image including noise, and the second compensated frame image.
  • The present invention may further include a second block division step of dividing each of the second current frame image and the second compensated frame images into first reference block units.
  • In that case, the second parameter calculation step calculates the second weight parameter for each frame using the second current frame image and the second compensated frame images, each divided into first reference block units,
  • and the second arithmetic processing step removes the noise of the target area forming part of a first reference block of the second current frame image.
  • The second parameter calculation step of the present invention may include: a second CNN processing step, based on a convolutional neural network (CNN) trained on pairs of noisy and noise-free images, of calculating, for the second current frame image and the second compensated frame images each divided into first reference block units, a feature value block smaller than the first reference block;
  • a second subsequent CNN processing step of hierarchically calculating feature value blocks from the block produced in the second CNN processing step until a final 1x1 feature value block is output; and a second result calculation step of taking the value of the final 1x1 feature value block as the second weight parameter for the frame.
  • The second matching step of the present invention is configured to perform motion estimation using the motion estimation result obtained in the (preprocessing) matching step together with an estimation area determined by the upscaling factor used in the upscaling step.
  • The noise removal method abandons the fixed-value determination of the time domain filter's weight parameter; by hierarchically applying a machine learning technique and applying the dynamically determined weight parameter differentially to each image, it can optimize the noise removal of upscaled video.
  • Moreover, by cyclically reusing in the post-processing step the motion estimation and motion compensation (registration) results obtained in the preprocessing step, the processing efficiency can be further maximized.
  • FIG. 1 is a block diagram showing the configuration of a noise removal device, centered on the preprocessing unit, according to a preferred embodiment of the present invention.
  • FIG. 2 is a block diagram showing the configuration of the noise removal apparatus, centered on the post-processing unit, according to a preferred embodiment of the present invention.
  • FIG. 3 is a view for explaining the motion estimation and motion compensation (registration) used in the noise removal method of the present invention.
  • FIG. 4 is a block diagram showing a detailed configuration of a parameter calculating unit according to an embodiment of the present invention shown in FIG. 1;
  • FIG. 5 is a flowchart illustrating processing for implementing a noise removal method in a preprocessing process according to an exemplary embodiment of the present invention
  • FIG. 6 is a flowchart illustrating a processing procedure for dynamically determining weight parameters using machine learning according to a preferred embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating a process of removing noise based on a post-process according to an exemplary embodiment of the present invention.
  • FIG. 8 is a diagram showing the basic structure of machine learning.
  • FIG. 1 is a block diagram showing the detailed configuration of a noise removal apparatus 1000 implementing the method of removing noise from an upscaled video using machine-learning-based dynamic parameters (hereinafter, the 'noise removal method').
  • The noise removal method according to the present invention can be implemented as software installed and run on a hardware device, such as a computer, provided with hardware resources such as ROM, RAM, input/output modules, and communication modules; needless to say, it can also be implemented as a dedicated terminal device combining a system on chip (SoC) implementing the method with other hardware.
  • The noise removal apparatus 1000 denotes a terminal apparatus or system that implements the noise removal method according to the present invention using such hardware resources.
  • It should be understood that the present invention is not limited by the name of the apparatus, terminal, or system by which the noise removal method is implemented.
  • the noise removing apparatus 1000 of the present invention includes a pre-processing unit 100, an up-scaling unit 200, and a post-processing unit 300.
  • The preprocessing unit 100 of the present invention includes an input unit 110, an object selection unit 120, a matching unit 130, a block division unit 140, a parameter calculation unit 150, and an arithmetic processing unit 160.
  • The components of the noise removal apparatus 1000, the preprocessing unit 100, and the post-processing unit 300 shown in FIGS. 1 and 2 should be understood as logically, not necessarily physically, separate components.
  • Each component is a logical constituent element for realizing the technical idea of the present invention; whether the components are implemented integrally or separately, any component that performs the same or a similar function falls within the scope of the present invention, regardless of the consistency of its name.
  • The input unit 110 of the present invention is configured to receive the image from which noise is to be removed (hereinafter, the 'object image') and provides the interface between external apparatus and the noise removal apparatus 1000.
  • the noise removal method basically employs a time domain filter (time domain function).
  • When the object image is input through the input unit 110, the object selection unit 120 of the present invention selects, from the plural frame images constituting the object image, the current frame image to be denoised, P preceding frame images that temporally precede the current frame image, and Q following frame images that follow it (S500 in FIG. 5).
  • Here, the current frame, the preceding frames, and the following frames have sequential timestamps, so the current frame of one processing pass can be a following or a preceding frame in an earlier or later pass.
  • The numbers P of preceding frames and Q of following frames selected by the object selection unit 120 are natural numbers of 1 or more that may be equal to or different from each other, and may be determined adaptively in view of the number of pixels per frame, computational efficiency, and the noise characteristics and distribution.
  • When the current frame image, the P preceding frame images, and the Q following frame images have been selected as described above, the matching unit 130 of the present invention performs motion estimation, computing motion vectors based on a common object (S510), and then motion-compensates the P preceding and Q following frame images so that they are motion-matched to the current frame image (S520).
  • Hereinafter, for descriptive efficiency, each preceding or following frame image motion-matched to the current frame is referred to as a compensated frame image.
  • The motion-estimated image g(x, y) can be defined as the sum of the noise-free image f(x, y) and the noise η(x, y): g(x, y) = f(x, y) + η(x, y).
  • As in Equation 2, summing the K motion-matched images g(x, y) (including the current frame image) and dividing by K averages out the noise component η(x, y), so that the result approaches the noise-free image f(x, y).
  • The variance of the averaged image represents the remaining noise power: since f(x, y) is assumed identical in all K frames, only η(x, y) contributes to the variance, and dividing by K reduces the noise power to 1/K of that of a single frame.
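The 1/K noise reduction from frame averaging can be checked numerically. The sketch below is illustrative only; Gaussian noise and all names are assumptions.

```python
import random

random.seed(0)
f_true = 100.0                      # noise-free pixel value f(x, y)
K = 400                             # number of motion-matched frames
frames = [f_true + random.gauss(0.0, 10.0) for _ in range(K)]

g_bar = sum(frames) / K             # the average approaches f(x, y)
var_single = sum((g - g_bar) ** 2 for g in frames) / K
var_of_mean = var_single / K        # noise power reduced to 1/K
```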
  • The parameter calculation unit 150 of the present invention calculates the weight parameter a_i for the current frame using the compensated frame images and the current frame image (S540).
  • The subscript i of the weight parameter a_i denotes the order of the current frame subject to noise removal. To overcome the conventional method of using a fixed value, the parameter is determined dynamically by machine learning; the specific processing for calculating the weight parameter by machine learning is described later.
  • Equation (3), the time domain filter proposed by the present invention, performs time-domain filtering using the current (n-th) frame, the P preceding frames, and the Q following frames.
  • To further improve the accuracy of noise removal, the noise removal method according to the present invention may be configured to divide the current frame image into a plurality of reference block units and perform the noise removal processing per divided block.
  • The block division unit 140 of the present invention divides each of the current frame image and the compensated frame images (the P preceding and Q following compensated frames) into first reference block units (S530).
  • The smaller the noise removal unit, the higher the noise removal accuracy but the slower the processing; the size of the first reference block can therefore be varied according to the required computational efficiency.
  • In the following, the first reference block is assumed to have an 8x8 size as an example.
  • The parameter calculation unit 150 calculates the weight parameter using the current frame image and the compensated frame images, each divided into first reference blocks (8x8).
  • The arithmetic processing unit 160 removes the noise of the target area forming part of the 8x8 reference block subject to noise removal in the current frame image (S550).
  • The arithmetic processing unit 160 of the present invention shifts the first reference block and cyclically performs the processing of S540 to S560 until the noise of all target areas has been removed (S560).
  • Because the weight parameter is determined per reference block of the K frames on a machine learning basis, the blur caused by temporal filtering of moving regions can be minimized.
  • The upscaling effect is further improved by performing noise removal both before and after upscaling.
  • Calculation of the weight parameter a_i is described in detail below.
  • the parameter calculation unit 150 of the present invention may include a first CNN processing unit 151, one or more subsequent CNN processing units 153, and one or more ReLU units 152.
  • Let the frames of the object image (low-resolution image) input through the input unit 110 be numbered 1, 2, ..., n, n+1, ...; with the n-th frame as the current frame, motion estimation and motion compensation (matching) are performed on the P preceding frames and the Q following frames as described above.
  • Suppose the frame currently undergoing motion estimation and motion compensation is the (n-m)-th frame.
  • The subsequent processing is performed in the same manner for each of the P preceding frames and Q following frames.
  • The motion between the n-th frame (current frame) and the (n-m)-th frame is estimated to calculate motion information for each pixel.
  • Using the calculated per-pixel motion information, the (n-m)-th frame is compensated by moving the corresponding pixels by their motion vectors, yielding the (n-m)-th compensated frame image.
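The per-pixel compensation described above can be sketched as a nearest-pixel warp. Real motion compensation also handles sub-pixel motion and occlusions, so this is a simplification with assumed names.

```python
def motion_compensate(frame, mv):
    """Warp a neighbour frame toward the current frame by moving each
    pixel according to its estimated motion vector mv[y][x] = (dy, dx);
    source coordinates are clamped at the frame border."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = mv[y][x]
            sy = min(max(y + dy, 0), h - 1)
            sx = min(max(x + dx, 0), w - 1)
            out[y][x] = frame[sy][sx]
    return out

frame = [[1.0, 2.0], [3.0, 4.0]]
mv = [[(0, 1), (0, 0)], [(0, 1), (0, 0)]]      # left column sourced from the right
compensated = motion_compensate(frame, mv)     # -> [[2.0, 2.0], [4.0, 4.0]]
```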
  • Each frame is divided into 8x8 reference block units, and each 8x8 reference block is used as the unit for determining a corresponding weight parameter a_i.
  • Using that weight parameter, noise is removed from the 4x4 target area block forming the inner region of the 8x8 block.
  • The noise removal unit may be 2x2 or 1x1 instead of 4x4;
  • its size can be varied according to the trade-off between noise removal accuracy and computation speed.
  • A difference may occur between the weight parameters of neighboring target area blocks, and when the difference is large the block boundary may become prominent. To suppress block boundaries effectively, each 4x4 target area block is therefore processed so as to overlap the next target area block.
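The overlap of 8x8 analysis blocks whose 4x4 target areas tile the frame can be sketched as follows. The block sizes come from the text; the enumeration scheme is an assumption.

```python
def target_windows(width, height, block=8, target=4):
    """Yield (x, y) top-left corners of 8x8 analysis blocks stepped by
    the 4x4 target size, so consecutive blocks overlap by 4 pixels and
    differences between neighboring weight parameters are smoothed."""
    for y in range(0, height - block + 1, target):
        for x in range(0, width - block + 1, target):
            yield (x, y)

windows = list(target_windows(16, 16))   # 3 x 3 = 9 overlapping blocks
```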
  • The parameter calculation unit 150 of the present invention performs training by machine learning (deep learning) and calculates the weight parameter from the training result; it comprises a convolutional neural network (CNN) based first CNN unit 151, subsequent CNN units 153, and a ReLU unit 152.
  • the number of the CNN unit 151 and the subsequent CNN unit 153 may be variously set.
  • As an example, the structure has three hidden layers: the CNN units 151 and 153 comprise two 3x3 CNN layers, one 4x4 CNN layer, and one 1x1 CNN layer, and the hidden layers have C1, C2, and C3 feature maps, respectively.
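With "valid" convolutions, the stated layer sizes reduce an 8x8 block to a single value, which can be checked arithmetically (the valid-convolution assumption is ours):

```python
def out_size(n, k):
    """Output side length of a 'valid' k x k convolution on an n x n input."""
    return n - k + 1

sizes = [8]                      # 8x8 first reference block
for k in (3, 3, 4, 1):           # two 3x3 CNNs, one 4x4 CNN, one 1x1 CNN
    sizes.append(out_size(sizes[-1], k))
# sizes -> [8, 6, 4, 1, 1]: the final 1x1 value becomes the weight parameter
```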
  • A CNN is an application of the structure of the human brain: just as the brain is composed of neurons as its minimum unit, the network is modeled as a cluster of single units.
  • A neuron, the basic structure of deep learning, has an affine transform part that multiplies the inputs by weights and sums them, as shown in FIG. 8, and an activation function part.
  • Here, a CNN layer is used as the affine transform part and a ReLU (Rectified Linear Unit) as the activation function part.
  • Through learning on pairs of noisy and noise-free images, the first CNN unit 151 of the present invention applies a 3x3 CNN to the current frame image and the compensated frame images, each divided into first reference blocks (8x8), to calculate a feature value block (6x6) smaller than the first reference block (S600).
  • That is, a 3x3 CNN is performed over the 8x8 basic block, sequentially computing and mapping one feature value per position, and finally outputting a 6x6 block.
  • Concretely, an affine transform is applied to the nine pixels of each 3x3 window within the input 8x8 block: each pixel is multiplied by its weight and the products are summed, and the result is placed at the corresponding (first, i.e. leftmost-top) position of the output. Moving the window one pixel at a time finally fills all values of the 6x6 block (S600).
  • After the first CNN is performed, ReLU is applied (S610): ReLU outputs 0 when the input is less than 0, and outputs the input unchanged when it is 0 or more.
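The first CNN stage and ReLU described above can be sketched as follows (single channel, unit stride; trained weights are not available, so the kernel here is a placeholder):

```python
def conv3x3_valid(block8, kernel):
    """'Valid' 3x3 convolution over an 8x8 block, producing the 6x6
    feature value block: each output value is the weighted sum (affine
    transform) of a 3x3 window, moved one pixel at a time."""
    out = []
    for i in range(6):
        row = []
        for j in range(6):
            s = sum(block8[i + di][j + dj] * kernel[di][dj]
                    for di in range(3) for dj in range(3))
            row.append(s)
        out.append(row)
    return out

def relu(x):
    """ReLU: 0 for inputs below 0, the input itself otherwise."""
    return x if x >= 0 else 0

block8 = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
center = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
feature = [[relu(v) for v in row] for row in conv3x3_valid(block8, center)]
```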
  • The last subsequent CNN unit 153 performs a 1x1 CNN on the C3 inputs (S660), finally calculating the weight parameter.
  • That is, the subsequent CNN units 153 hierarchically calculate feature value blocks from the block produced by the first CNN unit 151 until the final 1x1 feature value block is output,
  • and the result calculation unit 155 of the present invention takes that value as the weight parameter a_i for the noise removal target area of the corresponding frame.
  • The deep learning structure of the present invention can be trained as follows.
  • The inserted noise can be white Gaussian noise, Poisson noise, salt-and-pepper noise, coding noise, ringing artifacts, alias artifacts, or jaggy artifacts.
  • The difference between the 4x4 block of the denoised n-th frame and the 4x4 block of the noise-free n-th frame is measured by the mean square error (MSE), as follows.
  • In Equation (4), y_i is a pixel of the 4x4 block of the noise-free n-th frame, ŷ_i is the corresponding pixel of the 4x4 block of the denoised n-th frame, and d is the number of pixels in the block, which is 16 in this embodiment.
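Equation (4) is the standard mean square error over the d = 16 pixels of a 4x4 block, which can be written directly:

```python
def mse(clean_block, denoised_block):
    """Mean square error between the noise-free 4x4 block (y_i) and the
    denoised 4x4 block (y-hat_i); d is the number of pixels, 16 here."""
    d = len(clean_block)
    return sum((y - y_hat) ** 2
               for y, y_hat in zip(clean_block, denoised_block)) / d

clean = [1.0] * 16
denoised = [1.5] * 16
error = mse(clean, denoised)      # (0.5**2 * 16) / 16 = 0.25
```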
  • The training process is repeated until the MSE reaches a minimum and no longer decreases.
  • The hyperparameters for the deep learning training are as follows: Adam is used for optimization, the learning rate is 0.1, the momentum is 0.9, the weight decay is 0.0001, the learning rate policy is "fixed", and the "msra" method is used for weight initialization.
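The stated hyperparameters can be collected into a configuration sketch. Only the values come from the source; the dictionary layout and key names are ours, and wiring them into an actual training framework is left out.

```python
# Hyperparameters as stated in the text; only the values are from the source.
hyperparams = {
    "optimizer": "Adam",
    "learning_rate": 0.1,
    "momentum": 0.9,
    "weight_decay": 0.0001,
    "lr_policy": "fixed",      # learning rate kept constant during training
    "weight_init": "msra",     # also known as He initialization
}
```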
  • The weight values obtained through the above machine learning (deep learning) training process are then used in the deployed network to determine the weight parameter of the time domain filter.
  • The upscaling unit 200 of the present invention performs upscaling of the noise-removed image (S710) and finally outputs the upscaled image.
  • The post-processing unit 300 of the present invention includes a second input unit 310, a second object selection unit 320, a second matching unit 330, a second block division unit 340, a second parameter calculation unit 350, and a second arithmetic processing unit 360.
  • When the upscaled image is input from the upscaling unit 200 through the second input unit 310 of the post-processing unit 300, the second object selection unit 320 of the present invention selects, from the frame images constituting the upscaled image, the second current frame image subject to the subsequent noise removal, P (P being a natural number of 1 or more) second preceding frame images that precede it, and Q (Q being a natural number of 1 or more) second following frame images that follow it (S721).
  • The second matching unit 330 performs motion estimation based on the common object and generates second compensated frame images in which the P second preceding and Q second following frame images are motion-matched to the second current frame image (S723).
  • Since the motion of the common object corresponds before and after upscaling, the second matching unit 330 of the present invention preferably does not perform motion estimation anew on the upscaled image; rather, it performs motion estimation and compensation using the estimation area determined from the motion estimation result (motion vectors, etc.) obtained by the matching unit 130 of the preprocessing unit 100 and the upscaling factor.
  • For example, the motion estimation may be performed within a determined range of three pixels above, below, left, and right.
  • The second block division unit 340 of the post-processing unit 300 divides each of the second current frame image and the second compensated frame images into first reference block units (S723).
  • The second parameter calculation unit 350 of the present invention uses the second compensated frame images and the second current frame image, each divided into first reference block units, through the same processing as the parameter calculation unit 150 described above,
  • and calculates the second weight parameter for each frame (S725).
  • The second weight parameter is thus calculated in the post-process using the upscaled image.
  • Using Equation (5) below and processing corresponding to that described above, the second arithmetic processing unit 360 of the present invention removes the noise of the second current frame image (S727), cyclically shifting the reference unit block (S729) until all target areas have been processed (S728).
  • In Equation (5), the symbols denote, respectively, the second current frame image from which noise has been removed, the second current frame image including noise, and the second compensated frame image.
  • Since the first CNN unit 351, the ReLU unit 352, and the subsequent CNN unit 353 constituting the second parameter calculation unit 350 of the post-processing unit 300 operate in the same manner as the parameter calculation unit 150 of the preprocessing unit 100, a detailed description is omitted.
  • For training of the second weight parameter, it is preferable to use images downscaled by the inverse of the upscaling ratio of the original image: for example, downscaling by 1/2 is paired with 2x upscaling, and downscaling by 1/3 with 3x upscaling.
  • The deep learning training described above is then performed using the noise-free video before downscaling and the noisy video produced by adding various noise to the downscaled video and upscaling it.
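Producing training pairs by downscaling with the inverse of the upscaling ratio can be sketched as follows. Plain decimation is an assumed stand-in for whatever resampling the implementation actually uses.

```python
def downscale(frame, factor):
    """Downscale by an integer factor via decimation (assumed method);
    a 1/2 downscale pairs with 2x upscaling, 1/3 with 3x, and so on."""
    return [row[::factor] for row in frame[::factor]]

frame = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
half = downscale(frame, 2)        # 4x4 -> 2x2: [[0.0, 2.0], [8.0, 10.0]]
```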
  • The above-described method of removing noise from an upscaled video using machine-learning-based dynamic parameters according to the present invention can be implemented as computer-readable code on a computer-readable recording medium.
  • The computer-readable recording medium includes all kinds of recording devices in which computer-readable data is stored (CD-ROM, RAM, ROM, floppy disk, magnetic disk, hard disk, magneto-optical disk, etc.), and also includes a server for transmission.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Picture Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for removing noise from an upscaled video using machine-learning-based dynamic parameters, comprising: an input step of inputting an object image from which noise is to be removed; a target selection step of selecting, from among the frame images constituting the object image, a current frame image from which noise is to be removed, P (where P is a natural number greater than or equal to 1) preceding frame images that precede the current frame image, and Q (where Q is a natural number greater than or equal to 1) following frame images that follow the current frame image; a matching step of performing motion estimation processing on the basis of a common object and, according to the result thereof, creating compensated frame images in which the P preceding frame images and the Q following frame images are motion-matched on the basis of the current frame image; a parameter calculation step of calculating a weight parameter (a_i) for each frame using the compensated frame images and the current frame image; and an operation processing step of removing the noise of the current frame image.
PCT/KR2018/000086 2017-12-28 2018-01-03 Method for removing noise from an upscaled video using machine-learning-based dynamic parameters WO2019132091A1 (fr)
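The abstract describes a temporal denoising pipeline: align neighboring frames to the current frame via motion estimation, derive a per-frame weighting parameter from how well each compensated frame matches the current frame, and combine the frames to suppress noise. The NumPy sketch below illustrates that flow under simplifying assumptions: a global integer-shift search stands in for the patent's object-based motion estimation, and a heuristic exponential similarity weight `exp(-MSE/h)` stands in for the machine-learned dynamic parameter (ai); the names `estimate_global_shift`, `denoise_frame`, and the bandwidth `h` are illustrative, not from the patent.

```python
import numpy as np

def estimate_global_shift(ref, tgt, max_shift=3):
    """Exhaustively search integer shifts (dy, dx) that best align tgt to ref.

    A stand-in for the patent's motion estimation; np.roll wraps at the
    borders, which is acceptable for small shifts in this sketch.
    """
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((ref - np.roll(tgt, (dy, dx), axis=(0, 1))) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def denoise_frame(current, neighbors, h=0.05):
    """Motion-compensate the neighboring frames onto the current frame,
    weight each frame by its similarity to the current frame, and return
    the weighted average as the denoised current frame."""
    compensated = [np.roll(f, estimate_global_shift(current, f), axis=(0, 1))
                   for f in neighbors]
    frames = np.stack([current] + compensated)
    # Heuristic weights a_i = exp(-MSE_i / h); the patent instead derives
    # the weights from a machine-learned dynamic model, not shown here.
    mse = np.mean((frames - current) ** 2, axis=(1, 2))
    weights = np.exp(-mse / h)
    weights /= weights.sum()
    return np.tensordot(weights, frames, axes=1)
```

With P = Q = 2, the call would be `denoise_frame(frames[t], [frames[t-2], frames[t-1], frames[t+1], frames[t+2]])`. Because well-aligned frames share content but carry independent noise, the weighted average attenuates the noise, while the similarity weights down-weight badly aligned or dissimilar neighbors.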

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2017-0181812 2017-12-28
KR1020170181812A KR101987079B1 (ko) 2017-12-28 2017-12-28 Method for removing noise from an upscaled video using machine-learning-based dynamic parameters

Publications (1)

Publication Number Publication Date
WO2019132091A1 true WO2019132091A1 (fr) 2019-07-04

Family

ID=66848225

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/000086 WO2019132091A1 (fr) 2017-12-28 2018-01-03 Method for removing noise from an upscaled video using machine-learning-based dynamic parameters

Country Status (2)

Country Link
KR (1) KR101987079B1 (fr)
WO (1) WO2019132091A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349110A (zh) * 2019-07-16 2019-10-18 Tianjin Normal University Blurred image enhancement method based on accumulated-frame fusion, and application thereof
CN111523513A (zh) * 2020-05-09 2020-08-11 Chen Zhenggang Working method for household-entry personnel security verification through big data screening

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022131655A1 (fr) * 2020-12-18 2022-06-23 Samsung Electronics Co., Ltd. Image processing device and multi-frame image processing method using same
KR20230082514A (ko) 2021-12-01 2023-06-08 Korea University Industry-Academic Cooperation Foundation Noise removal method using adaptive self-supervised learning
KR20240007420A (ko) * 2022-07-08 2024-01-16 Hanwha Vision Co., Ltd. Image noise learning server and image noise reduction apparatus using machine learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR19980067578A (ko) * 1997-02-06 1998-10-15 Kim Kwang-ho Filtering method and apparatus for noise reduction in a video encoding system
JP2003284076A (ja) * 2001-12-31 2003-10-03 Pentamicro Inc Motion detection apparatus and method in a digital video storage device using MPEG video compression technology
KR20040024888A (ko) * 2002-09-17 2004-03-24 Pentamicro Inc. Apparatus and method for motion detection in image noise removal using temporal filtering
KR20100020068A (ko) * 2008-08-12 2010-02-22 LG Electronics Inc. Noise removal apparatus using motion estimation
KR20170058277A (ko) * 2015-11-09 2017-05-26 Thomson Licensing Method for upscaling noisy images, and apparatus for upscaling noisy images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4462823B2 (ja) * 2002-11-20 2010-05-12 Sony Corporation Image signal processing apparatus and processing method, coefficient data generation apparatus and generation method used therefor, and programs for executing each method
KR100809687B1 (ko) 2006-02-28 2008-03-06 Samsung Electronics Co., Ltd. Video signal processing apparatus and method capable of removing noise contained in a video signal
US9013511B2 (en) 2006-08-09 2015-04-21 Qualcomm Incorporated Adaptive spatial variant interpolation for image upscaling
KR101558532B1 (ko) 2014-06-09 2015-10-12 Astel Inc. Apparatus for removing noise from an image


Also Published As

Publication number Publication date
KR101987079B1 (ko) 2019-06-10

Similar Documents

Publication Publication Date Title
WO2019132091A1 (fr) Method for removing noise from an upscaled video using machine-learning-based dynamic parameters
Su et al. Spatially adaptive block-based super-resolution
Shah et al. Resolution enhancement of color video sequences
US7412107B2 (en) System and method for robust multi-frame demosaicing and color super-resolution
EP2164040B1 (fr) Système et procédé pour une grande qualité d'image et l'interpolation vidéo
WO2010038941A2 (fr) Appareil et procédé pour obtenir une image à haute résolution
KR20100139030A (ko) 이미지들의 수퍼 해상도를 위한 방법 및 장치
WO2020017871A1 (fr) Appareil de traitement d'image et son procédé de fonctionnement
JP2009545794A (ja) 動き解析への用途を有するスパース積分画像記述子
Gal et al. Progress in the restoration of image sequences degraded by atmospheric turbulence
TWI827771B (zh) 圖像處理設備和方法
Choi et al. Motion-blur-free camera system splitting exposure time
JP2000152250A (ja) 画像処理装置、方法及びコンピュータ読み取り可能な記憶媒体
WO2022270854A1 (fr) Procédé de lissage l0 à base d'informations d'avance par gradient de profondeur pour améliorer la netteté
JP4173705B2 (ja) 動画像合成方法および装置並びにプログラム
Schultz et al. Video resolution enhancement
Patanavijit et al. An iterative super-resolution reconstruction of image sequences using a Bayesian approach with BTV prior and affine block-based registration
CN108681988B (zh) 一种基于多幅图像的鲁棒的图像分辨率增强方法
Anagün et al. Super resolution using variable size block-matching motion estimation with rotation
KR102340942B1 (ko) 영상 처리 방법 및 이를 이용한 표시장치
JP4104937B2 (ja) 動画像合成方法および装置並びにプログラム
JP6854629B2 (ja) 画像処理装置、画像処理方法
WO2022250372A1 (fr) Procédé et dispositif d'interpolation de trame à base d'ia
Peng et al. Image restoration for interlaced scan CCD image with space-variant motion blurs
Xu et al. Interlaced scan CCD image motion deblur for space-variant motion blurs

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18896010

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18896010

Country of ref document: EP

Kind code of ref document: A1