WO2014010817A1 - Adaptive noise reduction system for digital images and noise removal method - Google Patents

Adaptive noise reduction system for digital images and noise removal method

Info

Publication number
WO2014010817A1
WO2014010817A1 (PCT/KR2013/003390)
Authority
WO
WIPO (PCT)
Prior art keywords
image
noise
image information
noise reduction
motion
Prior art date
Application number
PCT/KR2013/003390
Other languages
English (en)
Korean (ko)
Inventor
송지호
Original Assignee
매크로영상기술(주)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 매크로영상기술(주)
Publication of WO2014010817A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering

Definitions

  • The present invention relates to an adaptive noise reduction system for digital images and a method of removing noise therewith, and more particularly to an adaptive noise reduction system for digital images that can obtain clear image quality by adaptively reducing noise through temporal and spatial noise reduction of image information containing noise, and to a method of removing the noise.
  • Image noise suppression methods include a temporal method, which uses data along the time axis, and a spatial method, which uses data in the spatial domain.
  • The temporal image noise suppression method is an image processing method that operates on pixels at the same position across a plurality of frames.
  • Here, the important information is the degree of similarity between a pixel of the previous frame and the corresponding pixel of the current frame; this similarity information is also called motion information.
  • The temporal image noise suppression method becomes adaptive by adjusting the noise suppression strength according to this motion information.
  • The spatial image noise suppression method is an image processing method that uses information about each pixel and its surrounding pixels within a frame.
  • Here, the important information is the noise level within the frame and the edge information.
  • Edge information plays an important role in the performance evaluation of spatial image noise suppression filters.
  • The spatial method preserves edges and becomes adaptive by adjusting the suppression strength according to the noise level.
  • The present invention aims to solve the above technical problem, and its purpose is to provide an adaptive noise reduction system for digital images capable of outputting a clear image by simultaneously reducing both temporal and spatial noise, together with a corresponding noise reduction method.
  • To this end, the system includes a temporal noise reduction unit for reducing temporal noise in the image information;
  • a spatial noise reduction unit for reducing spatial noise in the image information;
  • an image combiner for combining a first output image from the temporal noise reduction unit with a second output image from the spatial noise reduction unit; and
  • a motion degree calculator for calculating the degree of motion of the image information input over time.
  • The motion degree calculated by the motion degree calculator may include a motion accumulation information value accumulated up to the image information of the current step, and a motion intensity information value that measures the intensity of motion by comparing the image information of the current step with the image information one step before the current step.
  • The adaptive noise reduction system for digital images of the present invention preferably further includes: a pixel noise degree calculator for calculating the noise of each pixel; and an image noise calculator which, using the pixel noise calculated by the pixel noise degree calculator, computes the ratio of noise-free pixels among the pixels of one frame excluding those determined to be edges, and from this ratio calculates the intensity control coefficient of the noise reduction performed by the temporal noise reduction unit when the image information of the next step is input.
  • The pixel noise degree calculator includes: a first noise calculator which calculates the standard deviation between a center pixel and its neighboring pixels; a second noise calculator which calculates the average value of the neighboring pixels, then the absolute value of the difference between that average and the center pixel and each neighboring pixel, and finally the median of those absolute values; and a third noise calculator which calculates the absolute value of the difference between the center pixel and each neighboring pixel, takes the logarithm of each value, and then calculates the average of the log values.
  • The pixel noise degree calculator preferably compensates the noise values calculated by the first and second noise calculators according to a predetermined compensation pattern that depends on the standard of the image information signal, so that signals of image information of different standards yield the same noise values, and outputs the compensated values.
  • The motion degree calculator of the present invention includes: a first low pass filter which low-pass filters the image information of the current step; a second low pass filter which low-pass filters the image information one step before the current step; and a third low pass filter which low-pass filters the image information obtained by mixing the image information of the current step with the image information one step before.
  • When the input image information is a Bayer pattern signal, the masks used in these low pass filters have an R:G:B pixel weight ratio of 1:2:1.
  • The motion degree calculator may further include a motion intensity calculator which calculates the intensity of motion of the image information of the current step relative to the image information one step before, using the outputs of the first, second and third low pass filters, the image information of the current step, the image information obtained by mixing the image information of the current step with the image information one step before, and the image information one step before.
  • Preferably, the input image information is a Bayer pattern signal.
  • The input of the adaptive noise reduction system for digital images of the present invention comprises the image information of the current step, the accumulated motion information value, and the image information one step before the current step.
  • The image combiner may select the respective weights for combining the first output image and the second output image using the values calculated by the motion degree calculator.
  • Specifically, the motion degree calculator provides a motion accumulation information value accumulated up to the current-step image information and a motion intensity information value obtained by comparing the image information of the current step with the image information one step before to measure the intensity of the motion, and the image combiner uses these two values as the weights for combining the first output image with the second output image.
  • A clear image can be output by simultaneously reducing not only temporal noise but also spatial noise.
  • FIG. 1 is a block diagram of an adaptive noise reduction system for a digital image according to an embodiment of the present invention.
  • FIGS. 2A to 2C are illustrations of the processing of input signals by the image interface unit of the present invention.
  • FIG. 3 is a block diagram of a motion degree calculator according to an embodiment of the present invention.
  • FIG. 4 is an illustration of the masks used in the first, second and third low pass filters in accordance with a preferred embodiment of the present invention.
  • FIG. 5 is a block diagram of a pixel noise degree calculator according to an exemplary embodiment of the present invention.
  • FIG. 6 is a flowchart of an adaptive noise reduction method for a digital image according to an exemplary embodiment of the present invention.
  • FIG. 1 is a block diagram of an adaptive noise reduction system 100 for a digital image according to an embodiment of the present invention.
  • Referring to FIG. 1, the adaptive noise reduction system 100 for a digital image according to an exemplary embodiment of the present invention includes an image interface unit 10, a motion degree calculator 20, a pixel noise degree calculator 30, a temporal noise reduction unit 40, an image noise calculator 50, a spatial noise reduction unit 60, an image combiner 70, and a storage unit 80.
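For orientation only, the sketch below wires together the blocks of FIG. 1 in Python. The function names, signatures, and the exact set of values passed between the blocks are assumptions made for illustration; the individual blocks are discussed, with further sketches, in the paragraphs that follow.

```python
def denoise_frame(I_k, I_k_1, Mh_k_1,
                  motion_degree, pixel_noise, image_noise,
                  temporal_nr, spatial_nr, combine):
    """One pass through the FIG. 1 pipeline (hypothetical wiring).

    I_k, I_k_1 : current and previous frames (e.g. 2-D numpy arrays)
    Mh_k_1     : motion accumulation map fed back from the previous pass
    The remaining arguments are callables standing in for blocks 20-70.
    """
    # Motion degree calculator (20): accumulation map and motion intensity
    Mh_k, Mo_k = motion_degree(I_k, I_k_1, Mh_k_1)

    # Pixel noise degree calculator (30) and image noise calculator (50)
    noise_map = pixel_noise(I_k)            # per-pixel noise measures
    k_coeff = image_noise(noise_map)        # intensity control coefficient

    # Temporal (40) and spatial (60) noise reduction
    OT_k = temporal_nr(I_k, I_k_1, Mo_k, k_coeff)
    OS_k = spatial_nr(I_k, noise_map)

    # Image combiner (70): weighted mix of the two outputs
    O_k = combine(OT_k, OS_k, Mh_k, Mo_k)

    # The storage unit (80) would hold O_k and Mh_k for the next pass
    return O_k, Mh_k
```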
  • When the input image information is a Bayer pattern signal, the image interface unit 10 determines whether the index of the center pixel is R, G, or B, skips pixels of a different color, and reconstructs the image information from the surrounding pixels of the same color as the center pixel.
  • The input from the image input device may be a Y/C signal or a Bayer pattern signal, and the image interface unit 10 serves to allow these two types of input signal to be processed in the same way by the stages that follow it.
  • The video signal input to the image interface unit 10 from an image input device at the front end, such as an image sensor, includes the motion accumulation information value Mh_{k-1}(p) accumulated up to the image information one step before in time, the current image information I_k(p), and the image information of the previous step, I_{k-1}(p).
  • FIGS. 2A to 2C show exemplary diagrams of the processing of input signals by the image interface unit 10 of the present invention.
  • FIG. 2A illustrates the case in which a Y/C signal is input. Assuming a 10x9 data space, the image interface unit 10 according to the present invention uses the upper 5x9 region as the Y data and the lower 5x9 region as the C data; 35 pixel data of the Y data and 25 pixel data of the C data are transferred to the following stage.
  • FIG. 2B illustrates the processing when a Bayer pattern signal whose center pixel is G is input to the image interface unit 10. Assuming a 10x9 data space, only a 9x9 data space is used for the Bayer pattern signal, and 37 pixels are transferred to the following stage when the center pixel is a G pixel. In this case, the C data is deactivated.
  • FIG. 2C illustrates the processing when a Bayer pattern signal whose center pixel is B or R is input to the image interface unit 10. Assuming a 10x9 data space, only a 9x9 data space is used for the Bayer pattern signal, and 25 pixels are transferred to the following stage when the center pixel is a B or R pixel. In this case, the C data is deactivated.
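As an illustration of the same-color reconstruction performed by the image interface unit, the sketch below collects, for a given center pixel of an RGGB mosaic, only the surrounding samples of the same color inside a 9x9 window. The RGGB phase, the window handling and the return format are assumptions; in particular, the simple count below does not reproduce the 37 pixels delivered for a G center in FIG. 2B, which implies an additional selection not modeled here.

```python
import numpy as np

def bayer_color(row, col):
    """Color of a site in an RGGB Bayer mosaic (assumed phase)."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def same_color_neighbors(mosaic, row, col, radius=4):
    """Collect the samples of the same color as the center pixel
    from a (2*radius+1) x (2*radius+1) window (9x9 by default)."""
    center_color = bayer_color(row, col)
    h, w = mosaic.shape
    samples = []
    for r in range(max(0, row - radius), min(h, row + radius + 1)):
        for c in range(max(0, col - radius), min(w, col + radius + 1)):
            if bayer_color(r, c) == center_color:  # skip other colors
                samples.append(mosaic[r, c])
    return np.array(samples)

mosaic = np.arange(81, dtype=float).reshape(9, 9)
# Center (4, 4) is an R site: 25 same-color samples, matching FIG. 2C
print(same_color_neighbors(mosaic, 4, 4).size)
```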
  • Using the motion accumulation information value Mh_{k-1}(p) accumulated up to the image information one step before, the current image information I_k(p), and the previous-step image information I_{k-1}(p) received through the image interface unit 10, the motion degree calculator 20 calculates the degree of motion of the image information input over time.
  • The motion degree calculator 20 of the present invention calculates the motion accumulation information value Mh_k(p) accumulated up to the current-step image information I_k(p), and the motion intensity information value Mo_k(p), which measures the intensity of motion by comparing the current-step image information I_k(p) with the image information I_{k-1}(p) one step before.
  • The motion accumulation information value Mh_{k-1}(p) accumulated up to the image information I_{k-1}(p) one step before is transmitted to the image combiner 70.
  • The motion accumulation information value Mh_k(p) is a cumulative value indicating for how many frames, up to the image information of the current step, no motion has been detected in comparison with the image of the previous step.
  • The motion accumulation information value Mh_k(p) may be set to '2'.
  • When the motion accumulation information value Mh_k(p) reaches a certain value, that value is maintained until motion is detected, and a predetermined value is subtracted when motion is detected.
  • The motion intensity information value Mo_k(p) indicates how much motion the image information I_k(p) of the current step contains relative to the image information I_{k-1}(p) one step before, that is, the intensity of the motion, and may be expressed as a predetermined value.
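A minimal sketch of the accumulation behavior just described (count up while no motion is detected, hold at a ceiling, subtract a fixed amount when motion appears). The threshold, ceiling, increment and penalty values are illustrative assumptions, not parameters taken from the patent.

```python
import numpy as np

def update_motion_accumulation(Mh_prev, Mo_k, motion_threshold=0.25,
                               increment=2, ceiling=64, penalty=8):
    """Per-pixel update of the motion accumulation value Mh_k(p).

    Mh_prev : accumulation map from the previous step, Mh_{k-1}(p)
    Mo_k    : motion intensity map for the current step, assumed in [0, 1]
    Where motion intensity stays below the threshold the count rises
    (clamped at `ceiling`); detected motion subtracts a fixed penalty.
    """
    no_motion = Mo_k < motion_threshold
    return np.where(no_motion,
                    np.minimum(Mh_prev + increment, ceiling),
                    np.maximum(Mh_prev - penalty, 0))
```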
  • FIG. 3 shows a block diagram of the motion degree calculator 20 according to an embodiment of the present invention.
  • Referring to FIG. 3, the motion degree calculator 20 of the present invention includes a fixed temporal image mixer 21, a first low pass filter 22, a second low pass filter 23, a third low pass filter 24, and a motion intensity calculator 25.
  • The fixed temporal image mixer 21 mixes the current image information I_k(p) with the image information I_{k-1}(p) of the previous step and outputs the mixed image information I_mk(p).
  • An example of a blending technique is Alpha Blending.
  • As noted above, the motion degree calculator 20 of the present invention includes three low pass filters 22, 23 and 24: the first low pass filter 22 low-pass filters the image information I_k(p) of the current step; the second low pass filter 23 low-pass filters the image information I_{k-1}(p) of the previous step; and the third low pass filter 24 low-pass filters the image information I_mk(p) obtained by mixing the image information I_k(p) of the current step with the image information I_{k-1}(p) of the previous step.
  • When Bayer pattern image information is input, the masks used for the first, second and third low pass filters 22, 23 and 24 have an R:G:B pixel weight ratio of 1:2:1.
  • Since the luminance (Y) component is composed of R, G and B in a ratio of approximately 1:2:1, applying a filter whose R:G:B weight ratio is 1:2:1 to the Bayer pattern image information yields a low-pass Y component for each pixel.
  • FIG. 4 is an exemplary diagram of the masks used in the first, second and third low pass filters 22, 23 and 24 according to a preferred embodiment of the present invention. Viewing the Bayer pattern image information as a 5x5 region, it can be seen that every mask in FIG. 4 has an R:G:B pixel weight ratio of 1:2:1.
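FIG. 4 is not reproduced in this text, but the 1:2:1 property is easy to check for any separable binomial mask: centered anywhere on an RGGB mosaic, a 5x5 binomial kernel places exactly 1/4 of its weight on R sites, 1/2 on G sites and 1/4 on B sites. The kernel below is therefore only a plausible stand-in for the masks of FIG. 4, not a copy of them.

```python
import numpy as np

# Separable 5x5 binomial kernel: one mask that satisfies the 1:2:1 property
b = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
kernel = np.outer(b, b) / 256.0

def rgb_weight_totals(center_row, center_col):
    """Sum the kernel weights falling on R, G and B sites of an RGGB mosaic."""
    totals = {'R': 0.0, 'G': 0.0, 'B': 0.0}
    for dr in range(-2, 3):
        for dc in range(-2, 3):
            r, c = center_row + dr, center_col + dc
            if r % 2 == 0:
                color = 'R' if c % 2 == 0 else 'G'
            else:
                color = 'G' if c % 2 == 0 else 'B'
            totals[color] += kernel[dr + 2, dc + 2]
    return totals

# Prints 0.25 / 0.5 / 0.25 (R:G:B = 1:2:1) for every Bayer phase
for center in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    print(center, rgb_weight_totals(*center))
```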
  • The motion degree calculator 20 also includes a motion intensity calculator 25, which calculates how much motion is present in the image information I_k(p) of the current step relative to the image information I_{k-1}(p) one step before, that is, the intensity of the motion, using the output I_gk(p) of the first low pass filter 22, the output I_{gk-1}(p) of the second low pass filter 23, the output I_gmk(p) of the third low pass filter 24, the current-step image information I_k(p), the mixed image information I_mk(p) obtained from the current-step image information I_k(p) and the previous-step image information I_{k-1}(p), and the previous-step image information I_{k-1}(p).
  • The motion intensity calculated by the motion intensity calculator 25 may be computed through a normalization method using a threshold value.
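The normalization itself is not spelled out in this text, so the sketch below is only an assumed construction: the low-passed current, previous and mixed images are compared, and the largest difference is normalized by a threshold so that Mo_k(p) falls in [0, 1].

```python
import numpy as np

def motion_intensity(I_gk, I_gk_1, I_gmk, threshold=16.0):
    """Hypothetical motion intensity Mo_k(p) in [0, 1].

    I_gk, I_gk_1, I_gmk : low-passed current, previous and mixed frames
    The larger of the two inter-frame differences is normalized by a
    threshold and clipped, so values near 1 indicate strong motion.
    """
    diff = np.maximum(np.abs(I_gk - I_gk_1), np.abs(I_gk - I_gmk))
    return np.clip(diff / threshold, 0.0, 1.0)
```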
  • The pixel noise degree calculator 30 uses the current-step image information I'_k(p) reconstructed by the image interface unit 10 to calculate the noise for each pixel.
  • Referring to FIG. 5, the pixel noise degree calculator 30 includes a first noise calculator 31, a second noise calculator 32, a third noise calculator 33, a first noise figure compensator 34, a second noise figure compensator 35, and a third noise figure compensator 36.
  • The first noise calculator 31 calculates the standard deviation N_1'k(p) between the center pixel and its surrounding pixels.
  • The second noise calculator 32 calculates the average value of the surrounding pixels, then the absolute value of the difference between that average and the center pixel and each of the surrounding pixels, and finally the median N_2'k(p) of those absolute values.
  • The third noise calculator 33 calculates the absolute value of the difference between the center pixel and each surrounding pixel, takes the logarithm, and then calculates the average N_3'k(p) of the log values. In other words, the first noise calculator 31 expresses numerically the relationship between the center pixel and the surrounding pixels, and the second noise calculator 32 expresses numerically the relationship among the surrounding pixels excluding the center pixel.
  • The third noise calculator 33 expresses numerically whether a pixel is an impulse noise pixel.
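A minimal sketch of the three per-pixel measures attributed to the calculators 31, 32 and 33, written for a single center pixel and a small window of same-color samples. The 3x3 window size and the use of log1p (to avoid log(0) on flat regions) are assumptions, and the compensators 34 to 36 are omitted.

```python
import numpy as np

def pixel_noise_measures(window):
    """window: square array of same-color samples with the center included."""
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    neighbors = np.delete(window.ravel(), window.size // 2)

    # First noise calculator (31): standard deviation of center + neighbors
    n1 = np.std(window)

    # Second noise calculator (32): median of |sample - neighborhood mean|
    mean_nb = neighbors.mean()
    deviations = np.abs(np.append(neighbors, center) - mean_nb)
    n2 = np.median(deviations)

    # Third noise calculator (33): mean of log(|center - neighbor|);
    # a large value flags a likely impulse-noise pixel
    n3 = np.mean(np.log1p(np.abs(center - neighbors)))

    return n1, n2, n3

# Example: a flat patch with an impulsive center sample
patch = np.full((3, 3), 100.0)
patch[1, 1] = 255.0
print(pixel_noise_measures(patch))
```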
  • The first noise figure compensator 34 and the second noise figure compensator 35 compensate the noise values calculated by the first and second noise calculators 31 and 32 according to a predetermined compensation pattern that depends on the standard of the image information signal, so that signals of image information of different standards yield the same noise values. That is, the same value is output whether the video signal input to the image interface unit 10 is a Y/C signal or a Bayer pattern signal.
  • The third noise figure compensator 36 compensates the output of the third noise calculator 33 so that pixels which should not be recognized as noise, for example pixels belonging to an edge portion of an image, are not incorrectly reported as impulse noise pixels.
  • The image noise calculator 50 of the present invention uses the pixel noise calculated by the pixel noise degree calculator 30 to compute, over all the image information forming one frame, the ratio of noise-free pixels among the pixels excluding those determined to be edges, and, when the image information of the next step is input, it calculates from this ratio the intensity control coefficient k of the noise reduction performed by the temporal noise reduction unit 40. In other words, the intensity control coefficient k determines how strongly the temporal noise reduction unit 40 reduces the noise.
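The mapping from the noise-free ratio to the coefficient k is not given in this text; the linear mapping below is only an assumption that illustrates the role of the image noise calculator 50 (a mostly clean frame yields gentle temporal filtering, a noisy frame yields stronger filtering).

```python
import numpy as np

def intensity_control_coefficient(noise_map, edge_mask,
                                  noise_threshold=2.0,
                                  k_min=0.2, k_max=1.0):
    """Map one frame's noise statistics to a temporal-NR strength k.

    noise_map : per-pixel noise measure from the pixel noise degree calculator
    edge_mask : True where a pixel was classified as an edge (excluded)
    """
    non_edge = ~edge_mask
    noise_free = (noise_map < noise_threshold) & non_edge
    ratio = noise_free.sum() / max(int(non_edge.sum()), 1)
    return k_min + (k_max - k_min) * (1.0 - ratio)
```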
  • The temporal noise reduction unit 40 of the present invention reduces noise in the time-axis data of the image, that is, the temporal noise of the image information. Specifically, the temporal noise reduction unit 40 uses the current-step image information I_k(p) and the previous-step image information I_{k-1}(p) to output the noise-reduced image information OT_k(p) for the current step.
  • The spatial noise reduction unit 60 of the present invention reduces noise in the spatial-axis data of the image, that is, the spatial noise of the image information. Specifically, the spatial noise reduction unit 60 uses the current-step image information I'_k(p) reconstructed by the image interface unit 10 and the output of the pixel noise degree calculator 30 to output the noise-reduced image information OS'_k(p).
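The filter equations for OT_k(p) and OS'_k(p) are not given in this text, so the recursive temporal blend and the noise-adaptive spatial average below are generic stand-ins that merely show how I_k(p), I_{k-1}(p), the coefficient k and the pixel noise measures could be consumed; they are not the claimed filters.

```python
import numpy as np

def temporal_nr(I_k, I_k_1, Mo_k, k_coeff):
    """OT_k(p): blend toward the previous frame where little motion is seen."""
    alpha = k_coeff * (1.0 - Mo_k)            # Mo_k assumed in [0, 1]
    return (1.0 - alpha) * I_k + alpha * I_k_1

def spatial_nr(I_k, noise_map, noise_scale=4.0):
    """OS'_k(p): 3x3 box average, applied more strongly on noisier pixels."""
    padded = np.pad(I_k, 1, mode='edge')
    h, w = I_k.shape
    box = sum(padded[r:r + h, c:c + w]
              for r in range(3) for c in range(3)) / 9.0
    weight = np.clip(noise_map / noise_scale, 0.0, 1.0)
    return (1.0 - weight) * I_k + weight * box
```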
  • The image combiner 70 of the present invention combines the first output image from the temporal noise reduction unit 40 with the second output image from the spatial noise reduction unit 60.
  • The image combiner 70 combines the first output image and the second output image using the motion accumulation information value Mh_k(p) to make a first combined image, and combines the first output image and the second output image using the motion intensity information value Mo_k(p) to make a second combined image. That is, the motion accumulation information value Mh_k(p) and the motion intensity information value Mo_k(p) serve as the weights for combining the first output image with the second output image when generating the first and second combined images.
  • The final combined image O_mix'k(p) is then output by combining the first combined image and the second combined image, the two images being combined using an appropriate weight.
  • As the image combining method in the image combiner 70, an alpha blending technique may be used.
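As a final sketch, one way an alpha-blending combiner could use the two motion values as weights, following the two-stage combination described above; the normalization of Mh_k and the equal-weight final mix are assumptions.

```python
import numpy as np

def combine_outputs(OT_k, OS_k, Mh_k, Mo_k, Mh_ceiling=64.0):
    """Alpha-blend the temporal (OT_k) and spatial (OS_k) outputs.

    A long no-motion history (large Mh_k) and a low motion intensity
    (small Mo_k) both push the result toward the temporal output.
    """
    w1 = np.clip(Mh_k / Mh_ceiling, 0.0, 1.0)   # weight from accumulation
    w2 = 1.0 - np.clip(Mo_k, 0.0, 1.0)          # weight from motion intensity

    first_combined = w1 * OT_k + (1.0 - w1) * OS_k
    second_combined = w2 * OT_k + (1.0 - w2) * OS_k

    # Final image O_mix'k(p): equal-weight mix of the two (an assumption)
    return 0.5 * (first_combined + second_combined)
```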
  • The storage unit 80 of the present invention stores the signal output from the image combiner 70 and either transmits it to the image output device or feeds it back to the image interface unit 10, so that the adaptive noise reduction system 100 for digital images of the present invention can reduce noise adaptively. That is, Mh_k(p) and O_feedback'k(p) shown in FIG. 1 become the input signals Mh_{k-1}(p) and I_{k-1}(p) of the image interface unit 10 during the image processing of the next step.
  • The temporal noise reduction unit 40, the spatial noise reduction unit 60, and the image combiner 70 use both the Y signal and the C signal.
  • The motion degree calculator 20, the pixel noise degree calculator 30, and the image noise calculator 50 use only the Y signal.
  • Apart from the reconstructed current-step image information I'_k(p), the motion accumulation information value Mh_{k-1}(p), the current image information I_k(p), and the previous-step image information I_{k-1}(p) are used as signals that are not reconstructed by the image interface unit 10.
  • FIG. 6 is a flowchart of an adaptive noise reduction method for a digital image according to an exemplary embodiment of the present invention.
  • The adaptive noise reduction method for a digital image according to an embodiment of the present invention includes: an image interface step (S10) of, when the input image information is a Bayer pattern signal, determining whether the index of the center pixel is R, G, or B, skipping pixels of a different color from the center pixel, and reconstructing the image information from the neighboring pixels of the same color as the center pixel; a motion degree calculation step of calculating the degree of motion of the image information input over time; a temporal noise reduction step (S30); a pixel noise degree calculation step (S40); and a spatial noise reduction step (S50).
  • The adaptive noise reduction method for the digital image of the present invention preferably further includes: an image noise calculation step (S60) of using the pixel noise calculated in the pixel noise degree calculation step (S40) to compute the ratio of noise-free pixels among the pixels of one frame excluding those determined to be edges and, when the next image information is input, calculating the intensity control coefficient of the noise reduction performed in the temporal noise reduction step (S30); and an image combining step (S70) of combining the first output image from the temporal noise reduction step (S30) with the second output image from the spatial noise reduction step (S50).
  • An adaptive noise reduction system and a noise reduction method for a digital image according to an exemplary embodiment of the present invention can be applied to various image processing fields.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Picture Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The adaptive noise reduction system for digital images according to a preferred embodiment of the present invention comprises: a temporal noise reduction unit designed to reduce temporal noise in image information; a spatial noise reduction unit designed to reduce spatial noise in image information; and an image combining unit designed to combine a first output image from the temporal noise reduction unit and a second output image from the spatial noise reduction unit. The adaptive noise reduction system for digital images and the noise removal method according to a preferred embodiment of the present invention make it possible to reduce temporal and spatial noise simultaneously so as to output a clear image.
PCT/KR2013/003390 2012-07-12 2013-04-22 Adaptive noise reduction system for digital images and noise removal method WO2014010817A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2012-0076270 2012-07-12
KR1020120076270A KR101361114B1 (ko) 2012-07-12 2012-07-12 Adaptive noise reduction system for digital images and method therefor

Publications (1)

Publication Number Publication Date
WO2014010817A1 true WO2014010817A1 (fr) 2014-01-16

Family

ID=49916231

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/003390 WO2014010817A1 (fr) 2012-07-12 2013-04-22 Adaptive noise reduction system for digital images and noise removal method

Country Status (2)

Country Link
KR (1) KR101361114B1 (fr)
WO (1) WO2014010817A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102390408B1 (ko) * 2017-11-08 2022-04-25 한화테크윈 주식회사 Apparatus and method for removing image noise
KR102240054B1 (ko) * 2019-08-30 2021-04-14 (주)미래컴퍼니 Image processing apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100429804B1 (ko) * 2001-12-29 2004-05-03 삼성전자주식회사 Apparatus and method for adaptive image noise attenuation
US7199838B2 (en) * 2004-06-17 2007-04-03 Samsung Electronics Co., Ltd. Motion adaptive noise reduction apparatus and method for video signals
KR101665137B1 (ko) * 2010-04-07 2016-10-12 삼성전자주식회사 Apparatus and method for removing noise generated in an image sensor

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009002675A1 (fr) * 2007-06-25 2008-12-31 The Hong Kong University Of Science And Technology Rate-distortion optimization for video denoising
KR101119268B1 (ko) * 2010-03-10 2012-03-20 삼성전자주식회사 Method and apparatus for removing color noise from an image
KR20120032170A (ko) * 2010-09-28 2012-04-05 엘지전자 주식회사 Method for removing noise from a video signal and video signal processing apparatus using the same
KR20120070961A (ko) * 2010-12-22 2012-07-02 중앙대학교 산학협력단 베이글릿-웨이블릿 분해에 의한 실시간 영상 복원 장치 및 방법
KR101133520B1 (ko) * 2011-01-03 2012-04-04 엠텍비젼 주식회사 Method and apparatus for removing color noise
KR20120078832A (ko) * 2011-01-03 2012-07-11 엠텍비젼 주식회사 Adaptive noise removal method and apparatus

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3308534A4 (fr) * 2015-06-12 2019-02-27 GoPro, Inc. Color filter array scaling module
US10530995B2 (en) 2015-06-12 2020-01-07 Gopro, Inc. Global tone mapping
US11218630B2 (en) 2015-06-12 2022-01-04 Gopro, Inc. Global tone mapping
US11849224B2 (en) 2015-06-12 2023-12-19 Gopro, Inc. Global tone mapping
WO2018021850A1 (fr) * 2016-07-28 2018-02-01 삼성전자 주식회사 Method and device for processing image, and recording medium
US10854010B2 (en) 2016-07-28 2020-12-01 Samsung Electronics Co., Ltd. Method and device for processing image, and recording medium

Also Published As

Publication number Publication date
KR20140009726A (ko) 2014-01-23
KR101361114B1 (ko) 2014-02-13

Similar Documents

Publication Publication Date Title
WO2014010817A1 (fr) Adaptive noise reduction system for digital images and noise removal method
KR100314097B1 (ko) Method and apparatus for obtaining a white signal component and adjusting image brightness
KR101433952B1 (ko) Test pattern signal generating apparatus, test pattern signal generating method, color measurement system, and display device
EP1335584A2 (fr) Procédé et appareil de changer la brilliance d'une image
WO2015002423A1 (fr) Image processing method and apparatus for a curved display device
US8160369B2 (en) Image processing apparatus and method
JP5414691B2 (ja) Image processing apparatus and image processing method
WO2018079877A1 (fr) Image processing device and dynamic range compression method for wide dynamic range images
US8111308B2 (en) Signal processing apparatus, signal processing method, and image pickup apparatus
AU2017336406A1 (en) ISP bias-compensating noise reduction systems and methods
WO2017213335A1 (fr) Method for combining images in real time
WO2014193021A1 (fr) Method and system for processing medical images
WO2014133270A1 (fr) Apparatus and method for processing a video signal
WO2010074386A1 (fr) Method for detecting and correcting corrupted pixels in an image sensor
US10970822B2 (en) Image processing method and electronic device thereof
US20110193877A1 (en) Method for adjusting the color of images
US20080298710A1 (en) Image signal processing apparatus for generating bright signal of high reproducibility
US10645337B1 (en) Video line inversion for reducing impact of periodic interference signals on analog video transmission
US20100085486A1 (en) Image processing apparatus and method
CN113129389A (zh) 判断摩尔纹的方法、抑制摩尔纹的方法与电路系统
DE102004064129B3 (de) Cable length detection device and method for a keyboard-video-mouse switch
WO2017142364A1 (fr) Method and apparatus for image processing in a virtual reality system
JP4194859B2 (ja) Video signal monitoring apparatus
WO2021096167A1 (fr) Underwater camera image correction system and ship hull inspection method
US20130038762A1 (en) Image processing apparatus and control method for the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13816352

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13816352

Country of ref document: EP

Kind code of ref document: A1