WO2015129250A1 - Image processing device, and image processing method - Google Patents

Image processing device, and image processing method

Info

Publication number
WO2015129250A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
input
correction
unit
size
Prior art date
Application number
PCT/JP2015/000897
Other languages
French (fr)
Japanese (ja)
Inventor
松山 好幸
Original Assignee
パナソニックIpマネジメント株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パナソニックIpマネジメント株式会社
Publication of WO2015129250A1

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/14 - Picture signal circuitry for video frequency region
    • H04N 5/21 - Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/20 - Image enhancement or restoration using local operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/188 - Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20024 - Filtering details
    • G06T 2207/20032 - Median filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30236 - Traffic on road, railway or crossing

Definitions

  • The present invention relates to an image processing apparatus and an image processing method for removing snowfall noise that appears in an image captured during snowfall (hereinafter referred to as "performing snowfall correction").
  • Conventionally, as a technique for removing falling snow from an image, an image processing apparatus is known that removes snowfall by performing median filter processing, for a plurality of time-series frame images, on the pixels at the same coordinates of each frame image (see, for example, Patent Document 1).
  • This image processing apparatus stores images of a plurality of frames (2k + 1 acquired frames, where k is a positive integer) captured by a fixed surveillance camera, arranges the pixel values (luminance values) of each pixel at the same coordinates in descending order of luminance, and sets the (k + 1)th luminance value as the output luminance at the current time.
  • Therefore, with this image processing apparatus, even if a snowflake is captured at the pixel of interest in a frame image at a certain point in time, snow is statistically almost never present at that pixel in the past or future frame images, so adopting the luminance value of the intermediate frame removes the snowflakes from the image.
  • An object of the present invention is to provide an image processing apparatus and an image processing method that, when performing snowfall correction by applying median filter processing to an image, remove snowfall noise while reducing the afterimage generated on a relatively large moving object such as a monitoring target.
  • The present invention provides an image processing apparatus including: an image input unit that inputs an image; a median processing unit that performs median filter processing on the input image input by the image input unit; a moving region detection unit that detects a moving region in which the input image moves; an input operation unit that receives an input of a snowfall correction parameter or a rainfall correction parameter for the input image; a specified size setting unit that sets a specified size of the moving region corresponding to the snowfall correction parameter or the rainfall correction parameter; a moving region size detection unit that detects a moving region equal to or larger than the specified size set by the specified size setting unit; and an image generation unit that generates a snowfall-corrected image or a rainfall-corrected image by using the median-filtered image as the image of the moving region when the size of the moving region is less than the specified size, and using the image input by the image input unit as the image of the moving region when the size is equal to or larger than the specified size.
  • The present invention also provides an image processing method for an image processing apparatus that performs snowfall correction or rainfall correction, including the steps of: inputting an image; performing median filter processing on the input image; detecting a moving region of the input image; receiving an input of a snowfall correction parameter or a rainfall correction parameter for the input image; setting a specified size of the moving region corresponding to the snowfall correction parameter or the rainfall correction parameter; detecting a moving region equal to or larger than the specified size; and generating a snowfall-corrected image or a rainfall-corrected image by using the median-filtered image as the image of the moving region when the size of the moving region is less than the specified size, and using the input image as the image of the moving region when the size is equal to or larger than the specified size.
  • According to the present invention, snowfall noise can be removed, and afterimages that occur on relatively large moving objects such as monitoring targets can be reduced.
  • FIG. 1 is a block diagram showing the configuration of the image processing apparatus according to this embodiment.
  • FIG. 2A is a diagram for explaining an operation of detecting the size of the moving area in the moving area size detection unit.
  • FIG. 2B is a diagram for explaining a motion region size detection operation in the motion region size detection unit.
  • FIG. 2C is a diagram for explaining a motion region size detection operation in the motion region size detection unit.
  • FIG. 3 is a flowchart for explaining the snowfall correction image generation processing procedure.
  • FIG. 4 is a diagram illustrating an example of a UI screen of the screen display unit when the removal intensity is set to the minimum value.
  • FIG. 5 is a diagram illustrating an example of a UI screen of the screen display unit when the removal intensity is set to an intermediate value.
  • FIG. 6 is a diagram illustrating an example of a UI screen of the screen display unit when the removal intensity is set to the maximum value.
  • FIG. 7 is a diagram illustrating a display example of a moving image immediately before reproduction, in which the moving image viewer display area and the image processing menu display area are displayed on the same screen.
  • FIG. 8 is a diagram illustrating a display example of a moving image immediately after reproduction, in which the moving image viewer display area and the image processing menu display area are displayed on the same screen.
  • FIG. 9 is a diagram illustrating a display example of a moving image after the snowfall and rain removal are performed, in which the moving image viewer display area and the image processing menu display area are displayed on the same screen.
  • FIG. 10 is a diagram showing an example in which the display area of the moving image viewer and the display area of the image processing menu are displayed on the same screen and the details of the snowfall/rain removal menu are shown.
  • FIG. 11 is a diagram showing an example in which the display area of the moving image viewer and the display area of the image processing menu are displayed on the same screen and the details of the snowfall/rain removal menu are shown.
  • FIG. 12 is a diagram illustrating an example in which the display area of the moving image viewer and the display area of the image processing menu are displayed on the same screen, and the details of the snowfall / rain removal menu are shown.
  • FIG. 13 is a diagram showing an example in which the moving image viewer display area and each image processing menu display area are displayed on the same screen, together with the moving image after each of the image processing operations of spatial gradation correction, multi-image composition NR, and snowfall/rain removal.
  • Embodiments of an image processing apparatus and an image processing method according to the present invention will be described with reference to the drawings.
  • the image processing apparatus of this embodiment is applied to an image processing apparatus that processes video output from a camera, video, recorder, or the like.
  • FIG. 1 is a block diagram showing the configuration of the image processing apparatus 1 in the present embodiment.
  • The image processing apparatus 1 includes a processing control unit 11, a video input unit 12, a median processing unit 13, a moving region detection unit 14, a moving region size detection unit 15, a snowfall correction image generation unit 16, a screen display unit 17, an input image storage unit 21, a median image storage unit 22, and a moving region processed image storage unit 23.
  • the video input unit 12 as an example of the image input unit inputs video output from a camera, video, recorder, or the like, and temporarily stores the image data in the input image storage unit 21.
  • the median processing unit 13 executes median filter processing in the time axis direction using a plurality of images (input images) stored in the input image storage unit 21 to generate a median image.
  • In the median filter processing, the median processing unit 13 calculates, for example, the luminance value of each pixel at the same coordinates over a plurality of frames accumulated in time series (here, 2k + 1 frames, where k is a positive integer), arranges the pixel values in descending order of luminance, and adopts the (k + 1)th luminance value as the output luminance.
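  • As an illustration only (not part of the patent disclosure), the temporal median operation described above can be sketched in Python roughly as follows; the function name, the use of NumPy, and the grayscale frame format are assumptions made for this sketch.

```python
import numpy as np

def temporal_median(frames):
    """Temporal median over 2k + 1 grayscale frames.

    frames: sequence of 2k + 1 arrays of shape (H, W) holding luminance
    values. For every coordinate the 2k + 1 samples are effectively
    sorted and the (k + 1)-th value is returned as the output luminance.
    """
    stack = np.stack(frames, axis=0)              # shape (2k + 1, H, W)
    return np.median(stack, axis=0).astype(frames[0].dtype)

# Example with k = 2: accumulate 5 frames, then filter.
# median_image = temporal_median(list_of_five_frames)
```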
  • The moving region detection unit 14 detects moving regions, including snow and moving objects, in units of pixels using the images stored in the input image storage unit 21.
  • For detecting the moving regions, a method such as calculating the difference between the pixel values of the previous frame and the current frame (inter-frame difference method) or the difference between the pixel values of a background image and the current frame (background difference method) is used.
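  • A minimal sketch of the inter-frame difference method mentioned above is shown below; the threshold value and the function interface are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def detect_moving_pixels(prev_frame, curr_frame, threshold=10):
    """Inter-frame difference method: a pixel is treated as a moving
    region pixel when the absolute luminance difference between the
    previous frame and the current frame exceeds a threshold.

    prev_frame, curr_frame: (H, W) luminance arrays.
    threshold: illustrative value, not taken from the patent.
    Returns a boolean mask (True = moving region pixel).
    """
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return diff > threshold
```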
  • The moving region size detection unit 15, as an example of the specified size setting unit, performs processing for detecting, from the moving regions detected by the moving region detection unit 14, moving regions of the specified size or more using the contraction/expansion processing described later, and generates an image (moving region processed image) containing the moving regions obtained by this detection processing.
  • the input image storage unit 21 is a memory that stores images (input images) input from the video input unit 12 for several frames.
  • the median image storage unit 22 is a memory for storing the image (median image) generated by the median processing unit 13. This median image includes the luminance value for each pixel calculated by the median processing unit 13.
  • the moving area processed image accumulating unit 23 is a memory for accumulating images (moving area processed images) processed by the moving area size detecting unit 15.
  • The snowfall correction image generation unit 16 uses the input image, the median image, and the moving region processed image stored in the input image storage unit 21, the median image storage unit 22, and the moving region processed image storage unit 23, respectively, to separate snow from other moving objects, and generates and outputs an image from which snow has been removed (snowfall-corrected image).
  • The screen display unit 17 includes a display unit that displays the input image and the snowfall-corrected image, and an input operation unit serving as a user interface (UI) for specifying the parameter value (removal strength) corresponding to the moving region size used by the moving region size detection unit 15.
  • the screen display unit 17 is configured using a touch panel.
  • The processing control unit 11 controls the operation of each unit of the image processing apparatus 1 described above, acquires the parameter value (removal strength) from the UI of the screen display unit 17, and passes the set value to the moving region size detection unit 15.
  • FIG. 2A to FIG. 2C are diagrams for explaining the motion region size detection operation in the motion region size detection unit 15.
  • In the figures, white pixels 41 are pixels included in the moving region (moving region pixels), and black pixels 42 are pixels not included in the moving region.
  • Contraction/expansion processing is used to detect the size of the moving region. In this processing, the white pixels 41 of the moving region image A are first contracted in the horizontal direction and in the vertical direction by the specified contraction size m (pixels), respectively. The logical sum of the two images B and C contracted in the horizontal and vertical directions is then obtained. Finally, the white pixels of the resulting image D are expanded in the horizontal and vertical directions by the same amount as the contraction size m (pixels).
  • When this processing is performed, a moving region having a size of S = 2m + 1 pixels in the vertical or horizontal direction (one direction), for the specified contraction size m, is detected as a moving region of the specified size or more.
  • Specifically, consider the case where the contraction size m is 2. FIG. 2A shows the case where the moving region is 3 × 3 pixels. For the image A containing the 3 × 3 pixel moving region, images B and C contracted by two pixels in the horizontal and vertical directions, respectively, are generated; in both contracted images the white pixels 41 disappear, so the 3 × 3 moving region is not detected as being of the specified size or more.
  • FIG. 2B shows the case where the moving region is 5 × 5 pixels. Similarly, images B and C contracted by two pixels in the horizontal and vertical directions are generated for the image A containing the 5 × 5 pixel moving region. In image B, contracted by two pixels in the horizontal direction, the five pixels of the single vertical column at the center remain as white pixels 41; in image C, contracted by two pixels in the vertical direction, the five pixels of the single horizontal row at the center remain as white pixels 41. In image D, the logical sum of images B and C, the white pixels 41 form a cross of one vertical and one horizontal line at the center, and after expansion the region is detected as a moving region of the specified size or more.
  • FIG. 2C shows the case where the moving region is 5 × 3 pixels. Similarly, images B and C contracted by two pixels in the horizontal and vertical directions are generated for the image A containing the 5 × 3 pixel moving region. In image B, contracted by two pixels in the horizontal direction, the three pixels of the single vertical column at the center remain white, whereas in image C, contracted by two pixels in the vertical direction, the white pixels disappear. In image D, the logical sum of images B and C, the central vertical column remains white, and after expansion by two pixels in each direction the white region spreads to 5 × 7 pixels and the moving region is detected.
  • In this way, a moving region with an elongated shape in one direction is easily detected, while moving region pixels whose extent is less than the specified size (2m + 1) in both the horizontal and vertical directions disappear. Since snowflakes generally have a roughly constant size regardless of direction, the image processing apparatus 1 can improve the separation of snow from other moving objects by combining the horizontal and vertical contraction processing with the expansion processing.
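  • The contraction/expansion (erosion/dilation) procedure of FIGS. 2A to 2C can be sketched as follows, assuming SciPy's binary morphology routines and a boolean motion mask; this is an illustrative reading of the procedure, not the patent's implementation.

```python
import numpy as np
from scipy import ndimage

def detect_large_moving_regions(motion_mask, m):
    """Keep only moving regions whose extent is at least S = 2m + 1
    pixels in the horizontal or the vertical direction (image E).

    motion_mask: boolean (H, W) array, True = moving region pixel
    (the white pixels 41 of FIG. 2); m: specified contraction size.
    """
    horiz = np.ones((1, 2 * m + 1), dtype=bool)   # contract m pixels left/right
    vert = np.ones((2 * m + 1, 1), dtype=bool)    # contract m pixels up/down
    image_b = ndimage.binary_erosion(motion_mask, structure=horiz)
    image_c = ndimage.binary_erosion(motion_mask, structure=vert)
    image_d = image_b | image_c                   # logical sum of images B and C
    image_e = ndimage.binary_dilation(image_d, structure=horiz)   # expand back
    image_e = ndimage.binary_dilation(image_e, structure=vert)    # by m pixels
    return image_e
```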
  • FIG. 3 is a flowchart for explaining the snowfall correction image generation processing procedure.
  • the snowfall correction image generation process shown in FIG. 3 is executed on a pixel-by-pixel basis for the input image after the moving area size detection process is executed and the moving area image is stored in the moving area processed image storage unit 23.
  • the snowfall correction image generation unit 16 determines whether or not the pixel to be determined is a moving region pixel (S1). If the pixel is a moving region pixel, the snowfall correction image generation unit 16 determines whether or not the luminance value of the pixel to be determined has increased (S2). The determination of the increase in the luminance value is performed by comparing the luminance values of the previous frame and the current frame.
  • If the luminance value has increased, the snowfall correction image generation unit 16 determines whether or not the size of the moving region containing the pixel to be determined is equal to or larger than S pixels, the value detected by the moving region size detection unit 15 in accordance with the parameter value specified via the UI of the screen display unit 17 (S3). S pixels is the specified value, expressed as 2m + 1 for the contraction size m, as described above. The determination in S3 refers to the pixel values of the moving region processed image (image E) described with reference to FIG. 2.
  • If the size of the moving region is equal to or larger than S pixels, the snowfall correction image generation unit 16 determines that the pixel belongs to a moving object and uses the pixel value of the input image as the pixel value of the output image (S4). If the luminance value of the pixel to be determined has not increased in step S2, the snowfall correction image generation unit 16 determines that the pixel is not bright like white snow and likewise uses the pixel value of the input image as the pixel value of the output image in step S4.
  • If the size of the moving region is less than S pixels, the snowfall correction image generation unit 16 determines that the pixel is snow and uses the pixel value of the median image as the pixel value of the output image (S5).
  • If the pixel to be determined is not a moving region pixel, the snowfall correction image generation unit 16 determines that the pixel is background and uses the pixel value of the median image as the pixel value of the output image (S6).
  • an input image may be used instead of the median image, but using the median image has an effect of removing noise.
  • After steps S4 to S6, the snowfall correction image generation unit 16 determines whether or not the determination has been completed for all pixels in the input image (S7). If not all pixels have been determined, the process returns to step S1. When the determination has been completed for all pixels, the snowfall correction image generation unit 16 generates a snowfall-corrected image based on the determination results for all pixels, displays it on the screen display unit 17 (S8), and then ends this processing.
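  • The per-pixel decision of FIG. 3 (steps S1 to S6) can be sketched in vectorized form as follows; the array-based interface and variable names are assumptions introduced for illustration.

```python
import numpy as np

def generate_corrected_image(input_img, median_img, prev_img,
                             motion_mask, large_region_mask):
    """Vectorized sketch of the per-pixel decision of FIG. 3.

    input_img, median_img, prev_img: (H, W) luminance arrays for the
    current input frame, the median image, and the previous frame.
    motion_mask: moving region pixels (moving region detection).
    large_region_mask: moving regions of the specified size or more
    (image E from the contraction/expansion processing).
    """
    brighter = input_img > prev_img                       # S2: luminance increased?
    snow = motion_mask & brighter & ~large_region_mask    # small bright mover -> snow
    keep_input = motion_mask & ((brighter & large_region_mask) | ~brighter)  # S4 cases
    background = ~motion_mask                             # S6: background

    output = np.empty_like(input_img)
    output[keep_input] = input_img[keep_input]            # use input pixel (S4)
    output[snow] = median_img[snow]                       # use median pixel (S5)
    output[background] = median_img[background]           # use median pixel (S6)
    return output
```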
  • FIG. 4 is a diagram showing a UI screen of the screen display unit 17 when the removal intensity that is the parameter value is set to the minimum value.
  • The UI screen of the screen display unit 17 displays a window 31 in which the input video is shown, a window 32 in which the corrected video is shown, radio buttons 33 for selecting whether or not to perform afterimage reduction processing, a slide bar 35 for adjusting the removal strength, and a numerical value display section 37.
  • When "with afterimage reduction processing" is selected with the radio button 33, the afterimage reduction processing described in the present embodiment is performed, whereas when "without afterimage reduction processing" is selected, the afterimage reduction processing is not performed.
  • the slide bar 35 is input in the horizontal direction by the user's finger or the like, and indicates the removal intensity in units of pixels.
  • the removal intensity is represented by 5 bits (0 to 32) in pixel units.
  • The removal strength is a parameter value for performing snowfall correction, and corresponds to the specified value of S pixels expressed as 2m + 1 described above. Therefore, a snowflake whose moving region size is less than the S pixels corresponding to the removal strength indicated by the slide bar 35 is removed.
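  • As a small illustrative note (an assumption about how the values relate, consistent with S = 2m + 1 as stated above), the slider value could be converted to the contraction size m as follows:

```python
def contraction_size_from_strength(s_pixels):
    """Removal strength S (pixels) from the slide bar corresponds to the
    specified size S = 2m + 1, so the contraction size m used by the
    contraction/expansion processing would be (S - 1) // 2.
    Purely illustrative; the patent does not give this conversion code."""
    return max((s_pixels - 1) // 2, 0)
```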
  • When the removal strength is set to a small value, large snowflakes are not removed, but the afterimage (multiplexing or disappearance) of moving objects is reduced. When the removal strength is set to a large value, snowflakes from small to large are removed, but the afterimage (multiplexing or disappearance) of moving objects increases.
  • Accordingly, the user can use the slide bar 35 to adjust whether to give priority to removing snow or to avoiding afterimages, for example even in a case where the large snowflakes in the foreground cannot be removed but the user wants to erase the many small snowflakes in the background.
  • In FIG. 4, the slide bar 35 operated by the user is at the left end, and the removal strength is 0. In this case, the corrected video (snowfall-corrected image) shown in the window 32 is substantially the same as the input video (input image) shown in the window 31, and the snow 51 is not removed at all.
  • FIG. 5 is a diagram showing a UI screen of the screen display unit 17 when the removal intensity is set to an intermediate value.
  • In FIG. 5, the slide bar 35 operated by the user is in the middle, and the removal strength has a value of 16. In this case, the small snow 51a in the background is removed from the corrected video (snowfall-corrected image) displayed in the window 32, but the large snow 51b in the foreground is not removed.
  • FIG. 6 is a diagram showing a UI screen of the screen display unit 17 when the removal intensity is set to the maximum value.
  • In FIG. 6, the slide bar 35 operated by the user is at the right end, and the removal strength has a value of 32. In this case, the snow has almost completely disappeared from the corrected video (snowfall-corrected image) displayed in the window 32.
  • As described above, when median filter processing is applied to an image to perform snowfall correction, the image processing apparatus 1 of the present embodiment uses the input image for moving regions of the specified size or more and uses the median image for moving regions smaller than the specified size. It can therefore reduce the afterimages that occur on moving regions larger than the specified size, such as monitoring targets, while removing snowflake noise smaller than the specified size from the image.
  • The image processing apparatus 1 performs image processing on snowflakes in the input image, but the target is not limited to snowflakes and may be, for example, raindrops.
  • That is, the image processing apparatus 1 can generate not only an image after snowfall correction processing (snowfall-corrected image) but also an image after rainfall correction processing (rainfall-corrected image).
  • In the case of rainfall correction, "snowfall" in the above description is read as "rainfall," "snowflake" as "raindrop," and "snowflake size" as "raindrop size."
  • The processing control unit 11 of the image processing apparatus 1 of the present embodiment may read out the moving image data stored in the input image storage unit 21 and display, on the same screen of the screen display unit 17, the menu screen of the correction processing for removing snow or rain in the input image (hereinafter referred to as "snowfall/rain removal") together with the image data of the moving image read from the input image storage unit 21 (see FIGS. 7 to 12).
  • Screen examples displayed on the same screens WD1 to WD6 of the screen display unit 17 will be described with reference to FIGS. 7 to 12.
  • FIG. 7 is a diagram showing a display example of a moving image immediately before reproduction, in which the moving image viewer display area VIW and the image processing menu display area MNU are displayed on the same screen WD1.
  • FIG. 8 is a diagram showing a display example of a moving image immediately after reproduction, with the moving image viewer display area VIW and the image processing menu display area MNU displayed on the same screen WD2.
  • FIG. 9 is a diagram showing a display example of the moving image after snowfall/rain removal, with the moving image viewer display area VIW and the image processing menu display area MNU displayed on the same screen WD3.
  • FIG. 7 shows, as the same screen WD1, the display area VIW of the viewer for the moving image data in the state immediately before reproduction, at the time of reading from the input image storage unit 21, and the display area MNU of the menus (image processing menus) of the image processing that can be executed by the image processing apparatus 1 (specifically, spatial gradation correction, multi-image composition NR (Noise Reduction), and snowfall/rain removal).
  • In FIGS. 7 and 8, the detailed contents of each image processing menu are not displayed; only the names of the image processing menus are shown, so that the relationship between the image data of the moving image to be processed by the image processing apparatus 1 and the list of image processing menus that the image processing apparatus 1 can execute is visible to the user at a glance. The user can therefore check the image data of the moving image to be processed and the image processing menus side by side.
  • An operation button set BST relating to operations such as playback, pause, stop, fast forward, rewind, and recording of the moving image is also displayed.
  • When the snowfall/rain removal menu bar MNU3 is pressed with the cursor CSR, the snowfall correction image generation unit 16 of the image processing apparatus 1 performs the correction processing for removing snow and rain described above on the image data of the moving image displayed in the display area VIW. When the menu bar MNU3 of the snowfall/rain removal correction processing is pressed again with the cursor CSR, the snowfall correction image generation unit 16 of the image processing apparatus 1 cancels the snowfall/rain removal correction processing for the moving image data displayed in the display area VIW.
  • In this way, with the moving image data displayed in the display area VIW being reproduced, the image processing apparatus 1 can reliably execute, or cancel, the image processing corresponding to any of the image processing menus displayed in the display area MNU through a simple user operation (that is, whether or not the menu bar MNU3 is pressed), and the user can easily check the processing results before and after the image processing.
  • Note that the processing control unit 11 of the image processing apparatus 1 may display a cursor CSR1 of a different shape when the cursor is on or near any menu bar of the image processing menus, and may display the normal cursor CSR otherwise.
  • FIGS. 10 to 12 are diagrams showing examples in which the moving image viewer display area VIW and the image processing menu display area MNU3D are displayed on the same screens WD4 to WD6, respectively, with the details of the snowfall/rain removal menu shown.
  • The processing control unit 11 of the image processing apparatus 1 displays, in the display area MNU3D, a detailed operation screen SRC (see FIGS. 4 to 6) for setting the input parameters (for example, the correction strength) relating to the snowfall/rain removal correction processing.
  • In the detailed operation screen, a check box ATC (see FIG. 12) for automatic setting is added and displayed; when it is used, the correction mode and the correction strength, which are the input parameters relating to the snowfall/rain removal correction processing, are set to a predetermined mode and a predetermined value, respectively. The correction mode and correction strength used for automatic setting may be initial values, or may be set dynamically based on, for example, the detection result of the moving region size detection unit 15.
  • The detailed operation screen may also include the processing selection options shown in FIGS. 4 to 6, that is, a selection box for choosing between applying and not applying the afterimage reduction processing.
  • When the user moves the cursor CSR left or right along the seek bar provided for the correction strength (see the slide bar 35 shown in FIGS. 4 to 6), the image processing apparatus 1 performs the snowfall/rain removal correction processing on the image data of the moving image being reproduced in the display area VIW using the input parameter (that is, the correction strength) obtained after the move operation.
  • Likewise, when the correction strength is entered directly, the image processing apparatus 1 performs the snowfall/rain removal correction processing on the image data of the moving image being reproduced in the display area VIW using the entered correction strength.
  • Whereas the correction strength is "16" in FIG. 10, it is changed to "30" in FIG. 11; compared with the moving image data shown in FIG. 10, more snowflakes during snowfall or raindrops during rainfall are therefore removed from the moving image data shown in FIG. 11, and the user can grasp the contents of the moving image more easily.
  • The image processing apparatus 1 can also perform the snowfall/rain removal correction processing on the image data of the moving image being reproduced in the display area VIW using default values of the correction mode and the correction strength.
  • Thus, for example for a user who does not know what values to enter as the correction mode and correction strength for snowfall/rain removal, the image processing apparatus 1 can easily perform the snowfall/rain removal correction processing on the moving image data by using typical initial values of the correction processing as the default values.
  • The image processing apparatus 1 of the present embodiment may also display the moving image data read from the input image storage unit 21 and the menu screens of a plurality of image processing operations, including the snowfall/rain removal correction processing, on the same screen of the screen display unit 17 (see FIG. 13).
  • a screen example displayed on the same screen WD7 of the screen display unit 17 will be described with reference to FIG.
  • FIG. 13 is a diagram showing a display example in which the moving image viewer display area VIW and the image processing menu display areas MNU1, MNU2, and MNU3 are displayed on the same screen WD7, together with the moving image after each of the image processing operations of spatial gradation correction, multi-image composition NR, and snowfall/rain removal.
  • When the spatial gradation correction menu bar MNU1, the multi-image composition NR menu bar MNU2, and the snowfall/rain removal menu bar MNU3 are each pressed with the cursor CSR1, the image processing apparatus 1 displays, in the display area VIW, the image data of the moving image that has been subjected to the image processing corresponding to each menu bar.
  • In this way, the image processing apparatus 1 is not limited to the single image processing operation (for example, the snowfall/rain removal correction processing) described with reference to FIGS. 7 to 12; it can execute a plurality of image processing operations in accordance with simple user operations while the moving image is displayed, and can show the results of the plurality of image processing operations to the user intuitively and visually.
  • Spatial gradation correction is image processing that, in response to the input or specification of predetermined parameters (for example, correction method, correction strength, degree of color enhancement, brightness, and correction range), converts them into other parameters (for example, weighting coefficient, weighting range, histogram upper-limit clip amount, histogram lower-limit clip amount, histogram distribution coefficient setting value, distribution start/end positions, image blend ratio, and color gain), generates and shapes a local histogram of the input image using the parameters obtained after conversion, and further generates tone curves to perform tone conversion and color enhancement.
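  • The following is a greatly simplified sketch of the general idea of a local histogram turned into a tone curve; the clip limit, the 8-bit range, and the omission of the patent's weighting, blending, and color-gain parameters are all assumptions of this sketch.

```python
import numpy as np

def local_tone_curve_block(block, clip_limit=0.01):
    """Build a clipped local histogram for one image block and apply the
    resulting tone curve (a simplified stand-in for the local-histogram
    and tone-curve steps of spatial gradation correction).

    block: (h, w) array of 8-bit luminance values.
    clip_limit: fraction of the block size used as the histogram
    upper-limit clip amount (illustrative value).
    """
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    limit = max(int(clip_limit * block.size), 1)
    hist = np.minimum(hist, limit)                 # histogram upper-limit clip
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                 # normalize cumulative histogram
    tone_curve = (cdf * 255.0).astype(np.uint8)    # 256-entry tone curve (LUT)
    return tone_curve[block.astype(np.uint8)]      # tone conversion of the block
```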
  • Multi-image composition NR is image processing that calculates a composition ratio between the image of the current frame and the image of the immediately preceding input frame (previous frame) in accordance with the contrast of the image of the input frame (current frame) and a preset motion detection level, and composites the images in accordance with that ratio, thereby reducing the noise components appearing in the image of the current frame.
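  • A minimal sketch of the frame-blending idea behind multi-image composition NR is shown below; the motion threshold, the blend ratio, and the per-pixel weighting rule are illustrative assumptions and do not reproduce the patent's synthesis-ratio calculation.

```python
import numpy as np

def blend_frames_nr(curr_frame, prev_output, motion_level=10, still_ratio=0.75):
    """Blend the current frame with the previously output frame, using a
    high composition ratio for the previous frame only where little
    motion is detected, to suppress noise in still areas.

    motion_level, still_ratio: illustrative values; the patent computes
    its own synthesis ratio from contrast and a motion detection level.
    """
    diff = np.abs(curr_frame.astype(np.float64) - prev_output.astype(np.float64))
    ratio = np.where(diff < motion_level, still_ratio, 0.0)   # per-pixel ratio
    blended = ratio * prev_output + (1.0 - ratio) * curr_frame
    return blended.astype(curr_frame.dtype)
```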
  • When the spatial tone correction menu bar MNU1, the multi-image composition NR menu bar MNU2, and the snowfall/rain removal menu bar MNU3 corresponding to the marker DT1 shown in FIG. 13 are operated, the processing control unit 11 of the image processing apparatus 1 expands and displays on the screen the detailed operation screen display areas for setting the plurality of input parameters relating to spatial gradation correction, multi-image composition NR, and snowfall/rain removal (for snowfall/rain removal, the display area MNU3D). Note that the detailed operation screen display areas for setting the plurality of input parameters relating to spatial gradation correction and multi-image composition NR are not shown.
  • In these detailed operation screens, the above-described correction method, correction strength, degree of color enhancement, brightness, and correction range for spatial gradation correction, a parameter specifying the NR level for multi-image composition NR, and parameters specifying the motion detection level (for example, the detection accuracy and detection range when the image processing apparatus 1 is a camera) are input or specified, respectively.
  • In this way, while the moving image data displayed in the display area VIW is being reproduced, the image processing apparatus 1 can reliably execute, or cancel, the image processing corresponding to the pressed menu bar MNU1, MNU2, or MNU3 through simple user operations (that is, whether or not the menu bars relating to the plurality of image processing operations have been pressed), and the user can easily check the processing results before and after the image processing.
  • The image processing apparatus 1 can also execute, or cancel, image processing as appropriate in response to an operation that changes any of the parameters displayed in the detailed operation screen display areas MNU1D, MNU2D, and MNU3D for setting the parameters of the plurality of image processing operations, and the user can easily check the processing results before and after the image processing.
  • As described above, the present invention is useful as an image processing apparatus and an image processing method capable of removing snowfall noise and reducing the afterimages generated on relatively large moving objects such as monitoring targets when performing snowfall correction by applying median filter processing to an image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

 This image processing device is provided with: an image input unit for inputting an image; a median processing unit for applying median filtering to the input image inputted by the image input unit; a movement area detection unit for detecting an area of the input image in which there is movement; an input operation unit for accepting the input of a snowfall correction parameter for the input image; a designated size setting unit for setting the designated size of the movement area in correspondence to the snow correction parameter; a movement area size detection unit for detecting a movement area greater than or equal to the size set by the designated size setting unit; and an image generation unit for generating a snowfall-corrected image by using, as the image of the movement area, the image to which median filtering was applied when the size of the movement area is less than the designated size, and by using, as the image of the movement area, the image inputted by the image input unit when the size of the movement area is greater than or equal to the designated size.

Description

Image processing apparatus and image processing method
 The present invention relates to an image processing apparatus and an image processing method for removing snowfall noise that appears in an image captured during snowfall (hereinafter referred to as "performing snowfall correction").
 Conventionally, as a technique for removing falling snow from an image, an image processing apparatus is known that removes snowfall by performing median filter processing, for a plurality of time-series frame images, on the pixels at the same coordinates of each frame image (see, for example, Patent Document 1).
 This image processing apparatus stores images of a plurality of frames (2k + 1 acquired frames, where k is a positive integer) captured by a fixed surveillance camera, arranges the pixel values (luminance values) of each pixel at the same coordinates in descending order of luminance, and sets the (k + 1)th luminance value as the output luminance at the current time.
 Therefore, according to this image processing apparatus, even if a snowflake is captured at the pixel of interest in a frame image at a certain point in time, snow is statistically almost never present at that pixel in the past or future frame images, so adopting the luminance value of the intermediate frame removes the snowflakes from the image.
Japanese Patent No. 3439669
 However, if median filter processing is simply applied to time-series frame images, snowflakes in the image can be removed, but when the image contains a relatively large moving object such as a monitoring target, the moving region in the image becomes large and a noticeable afterimage appears in the median-filtered image.
 Also, when the surveillance camera moves, for example by panning or tilting, the image moves relatively and the moving region in the image becomes large, so a noticeable afterimage likewise appears in the median-filtered image.
 In order to solve the above-described conventional problems, an object of the present invention is to provide an image processing apparatus and an image processing method that, when performing snowfall correction by applying median filter processing to an image, remove snowfall noise while reducing the afterimage generated on a relatively large moving object such as a monitoring target.
 The present invention provides an image processing apparatus including: an image input unit that inputs an image; a median processing unit that performs median filter processing on the input image input by the image input unit; a moving region detection unit that detects a moving region in which the input image moves; an input operation unit that receives an input of a snowfall correction parameter or a rainfall correction parameter for the input image; a specified size setting unit that sets a specified size of the moving region corresponding to the snowfall correction parameter or the rainfall correction parameter; a moving region size detection unit that detects a moving region equal to or larger than the specified size set by the specified size setting unit; and an image generation unit that generates a snowfall-corrected image or a rainfall-corrected image by using the median-filtered image as the image of the moving region when the size of the moving region is less than the specified size, and using the image input by the image input unit as the image of the moving region when the size is equal to or larger than the specified size.
 The present invention also provides an image processing method for an image processing apparatus that performs snowfall correction or rainfall correction, including the steps of: inputting an image; performing median filter processing on the input image; detecting a moving region of the input image; receiving an input of a snowfall correction parameter or a rainfall correction parameter for the input image; setting a specified size of the moving region corresponding to the snowfall correction parameter or the rainfall correction parameter; detecting a moving region equal to or larger than the specified size; and generating a snowfall-corrected image or a rainfall-corrected image by using the median-filtered image as the image of the moving region when the size of the moving region is less than the specified size, and using the input image as the image of the moving region when the size is equal to or larger than the specified size.
 According to the present invention, snowfall noise can be removed, and afterimages that occur on relatively large moving objects such as monitoring targets can be reduced.
FIG. 1 is a block diagram showing the configuration of the image processing apparatus according to the present embodiment.
FIG. 2A is a diagram for explaining the moving region size detection operation in the moving region size detection unit.
FIG. 2B is a diagram for explaining the moving region size detection operation in the moving region size detection unit.
FIG. 2C is a diagram for explaining the moving region size detection operation in the moving region size detection unit.
FIG. 3 is a flowchart for explaining the snowfall correction image generation processing procedure.
FIG. 4 is a diagram showing an example of the UI screen of the screen display unit when the removal strength is set to the minimum value.
FIG. 5 is a diagram showing an example of the UI screen of the screen display unit when the removal strength is set to an intermediate value.
FIG. 6 is a diagram showing an example of the UI screen of the screen display unit when the removal strength is set to the maximum value.
FIG. 7 is a diagram showing a display example of a moving image immediately before reproduction, with the moving image viewer display area and the image processing menu display area displayed on the same screen.
FIG. 8 is a diagram showing a display example of a moving image immediately after reproduction, with the moving image viewer display area and the image processing menu display area displayed on the same screen.
FIG. 9 is a diagram showing a display example of a moving image after snowfall/rain removal, with the moving image viewer display area and the image processing menu display area displayed on the same screen.
FIG. 10 is a diagram showing an example in which the moving image viewer display area and the image processing menu display area are displayed on the same screen and the details of the snowfall/rain removal menu are shown.
FIG. 11 is a diagram showing an example in which the moving image viewer display area and the image processing menu display area are displayed on the same screen and the details of the snowfall/rain removal menu are shown.
FIG. 12 is a diagram showing an example in which the moving image viewer display area and the image processing menu display area are displayed on the same screen and the details of the snowfall/rain removal menu are shown.
FIG. 13 is a diagram showing a display example of the moving image after each of the image processing operations of spatial gradation correction, multi-image composition NR, and snowfall/rain removal, with the moving image viewer display area and each image processing menu display area displayed on the same screen.
 Embodiments of an image processing apparatus and an image processing method according to the present invention (hereinafter referred to as "the present embodiment") will be described with reference to the drawings. The image processing apparatus of the present embodiment is applied to an image processing apparatus that processes video output from a camera, a video device, a recorder, or the like.
 FIG. 1 is a block diagram showing the configuration of the image processing apparatus 1 in the present embodiment. The image processing apparatus 1 includes a processing control unit 11, a video input unit 12, a median processing unit 13, a moving region detection unit 14, a moving region size detection unit 15, a snowfall correction image generation unit 16, a screen display unit 17, an input image storage unit 21, a median image storage unit 22, and a moving region processed image storage unit 23.
 The video input unit 12, as an example of the image input unit, inputs video output from a camera, a video device, a recorder, or the like, and temporarily stores the image data in the input image storage unit 21. The median processing unit 13 executes median filter processing in the time axis direction using a plurality of images (input images) stored in the input image storage unit 21 to generate a median image. In the median filter processing, the median processing unit 13 calculates, for example, the luminance value of each pixel at the same coordinates over a plurality of frames accumulated in time series (here, 2k + 1 frames, where k is a positive integer), arranges the pixel values in descending order of luminance, and adopts the (k + 1)th luminance value as the output luminance.
 The moving region detection unit 14 detects moving regions, including snow and moving objects, in units of pixels using the images stored in the input image storage unit 21. For detecting the moving regions, a method such as calculating the difference between the pixel values of the previous frame and the current frame (inter-frame difference method) or the difference between the pixel values of a background image and the current frame (background difference method) is used.
 The moving region size detection unit 15, as an example of the specified size setting unit, performs processing for detecting, from the moving regions detected by the moving region detection unit 14, moving regions of the specified size or more using the contraction/expansion processing described later, and generates an image (moving region processed image) containing the moving regions obtained by this detection processing.
 The input image storage unit 21 is a memory that stores several frames of the images (input images) input from the video input unit 12.
 The median image storage unit 22 is a memory that stores the images (median images) generated by the median processing unit 13. A median image contains the luminance value calculated for each pixel by the median processing unit 13.
 The moving region processed image storage unit 23 is a memory that stores the images (moving region processed images) processed by the moving region size detection unit 15.
 The snowfall correction image generation unit 16 uses the input image, the median image, and the moving region processed image stored in the input image storage unit 21, the median image storage unit 22, and the moving region processed image storage unit 23, respectively, to separate snow from other moving objects, and generates and outputs an image from which snow has been removed (snowfall-corrected image).
 The screen display unit 17 includes a display unit that displays the input image and the snowfall-corrected image, and an input operation unit serving as a user interface (UI) for specifying the parameter value (removal strength) corresponding to the moving region size used by the moving region size detection unit 15. For example, the screen display unit 17 is configured using a touch panel.
 The processing control unit 11 controls the operation of each unit of the image processing apparatus 1 described above, acquires the parameter value (removal strength) from the UI of the screen display unit 17, and passes the set value to the moving region size detection unit 15.
 図2A~図2Cは、動領域サイズ検出部15における動領域のサイズの検出動作を説明する図である。図中、白い画素41が動領域に含まれる画素(動領域画素)であり、黒い画素42は動領域に含まれない画素である。動領域のサイズの検出には、収縮・膨張処理が用いられる。この収縮・膨張処理では、動領域画像Aに対し、指定された収縮サイズm(画素)分、白い画素41を水平方向及び垂直方向にそれぞれ収縮させる。そして、水平方向及び垂直方向に収縮させた2つの画像B、Cの論理和をとる。その後、論理和がとられた画像Dの白い画素に対し、収縮サイズm(画素)と同じサイズの画素分、水平方向及び垂直方向に膨張させることが行われる。 FIG. 2A to FIG. 2C are diagrams for explaining the motion region size detection operation in the motion region size detection unit 15. In the figure, white pixels 41 are pixels (moving region pixels) included in the moving region, and black pixels 42 are pixels not included in the moving region. Shrinkage / expansion processing is used to detect the size of the moving region. In the contraction / expansion process, the white pixel 41 is contracted in the horizontal direction and the vertical direction by the specified contraction size m (pixel) with respect to the moving area image A, respectively. Then, the logical sum of the two images B and C contracted in the horizontal direction and the vertical direction is obtained. Thereafter, the white pixels of the image D obtained by the logical sum are expanded in the horizontal direction and the vertical direction by the same size as the contraction size m (pixels).
 この処理が行われると、指定された収縮サイズmに対し、垂直方向又は水平方向(一方向)に2m+1の画素サイズ(S画素)を持つ動領域が指定サイズ以上の動領域として検出される。 When this processing is performed, a moving area having a pixel size (S pixel) of 2m + 1 in the vertical direction or the horizontal direction (one direction) with respect to the specified contraction size m is detected as a moving area that is equal to or larger than the specified size.
 具体的に、収縮サイズmが値2である場合を示す。図2Aは、動領域画素が3×3画素である場合を示す。3×3画素の動領域を含む画像Aに対し、水平方向及び垂直方向にそれぞれ2画素収縮した画像B及び画像Cを生成する。2画素収縮した画像B及び画像Cでは、白の画素41は消滅している。 Specifically, the case where the shrinkage size m is 2 is shown. FIG. 2A shows a case where the moving area pixels are 3 × 3 pixels. An image B and an image C that are contracted by two pixels in the horizontal direction and the vertical direction are generated for the image A including a 3 × 3 pixel moving area. In the image B and the image C contracted by two pixels, the white pixel 41 disappears.
 画像Bと画像Cの論理和である画像Dは、白い画素を含まない。従って、画像Dを2画素分膨張させる処理を行っても、膨張処理後の画像Eは、動領域を表す白い画素を含まないことになる。即ち、3×3画素の動領域画素は、一方向に2m(m=2)+1である5画素を含まず、検出されないことになる。 Image D, which is the logical sum of image B and image C, does not include white pixels. Therefore, even if the process of expanding the image D by two pixels is performed, the image E after the expansion process does not include white pixels representing the moving area. That is, the 3 × 3 moving area pixels do not include 5 pixels that are 2m (m = 2) +1 in one direction and are not detected.
 FIG. 2B shows the case where the moving region is 5 × 5 pixels. From image A containing the 5 × 5-pixel moving region, images B and C are likewise generated by eroding two pixels in the horizontal direction and in the vertical direction, respectively. In image B, eroded by two pixels horizontally, the single vertical column of five pixels at the center remains as white pixels 41. In image C, eroded by two pixels vertically, the single horizontal row of five pixels at the center remains as white pixels 41.
 In image D, the logical OR of images B and C, the white pixels 41 form a cross consisting of the central vertical column and the central horizontal row. In image E, dilated by two pixels in the horizontal and vertical directions, the white-pixel area expands and the region is detected. That is, the eroded-and-dilated 5 × 5-pixel moving region spans the 5 pixels (2m + 1 with m = 2) in one direction and is therefore detected.
 FIG. 2C shows the case where the moving region is 5 × 3 pixels. From image A containing the 5 × 3-pixel moving region, images B and C are likewise generated by eroding two pixels in the horizontal direction and in the vertical direction, respectively. In image B, eroded by two pixels horizontally, the single vertical column of three pixels at the center remains white. In image C, eroded by two pixels vertically, the white pixels disappear.
 In image D, the logical OR of images B and C, the central vertical column remains white. In image E, dilated by two pixels in the horizontal and vertical directions, the white-pixel area expands to 5 × 7 pixels and the region is detected. That is, the eroded-and-dilated 5 × 3-pixel moving region spans the 5 pixels (2m + 1 with m = 2) in one direction and is therefore detected. In this way, a moving region elongated in one direction is easily detected, while a moving region smaller than the specified size (2m + 1) in both the horizontal and vertical directions disappears. Since snow particles are generally of roughly uniform size regardless of direction, combining the horizontal and vertical erosion with the subsequent dilation allows the image processing apparatus 1 to better separate snow from other moving objects.
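 The erode-OR-dilate sequence above can be summarized in a short sketch. The following is a minimal illustration only, not the implementation of the moving-region size detection unit 15; the function names, the use of NumPy boolean masks, and the handling of the image border are assumptions made for this example.

```python
import numpy as np

def erode_1d(mask, m, axis):
    # A pixel stays white only if every pixel within +/- m along the axis
    # is white (erosion by m pixels in one direction).
    pad = [(m, m) if a == axis else (0, 0) for a in range(mask.ndim)]
    padded = np.pad(mask, pad, constant_values=False)
    out = np.ones_like(mask)
    for d in range(2 * m + 1):
        sl = [slice(None)] * mask.ndim
        sl[axis] = slice(d, d + mask.shape[axis])
        out &= padded[tuple(sl)]
    return out

def dilate_1d(mask, m, axis):
    # A pixel becomes white if any pixel within +/- m along the axis is white.
    pad = [(m, m) if a == axis else (0, 0) for a in range(mask.ndim)]
    padded = np.pad(mask, pad, constant_values=False)
    out = np.zeros_like(mask)
    for d in range(2 * m + 1):
        sl = [slice(None)] * mask.ndim
        sl[axis] = slice(d, d + mask.shape[axis])
        out |= padded[tuple(sl)]
    return out

def detect_large_moving_regions(moving_mask, m):
    # Keep only moving regions spanning at least S = 2m + 1 pixels in the
    # horizontal or vertical direction (image E in FIGS. 2A-2C).
    b = erode_1d(moving_mask, m, axis=1)                   # image B: horizontal erosion
    c = erode_1d(moving_mask, m, axis=0)                   # image C: vertical erosion
    d = b | c                                              # image D: logical OR
    return dilate_1d(dilate_1d(d, m, axis=1), m, axis=0)   # image E
```

 Applied to the examples of FIGS. 2A to 2C with m = 2, a 3 × 3 block vanishes, a 5 × 5 block survives, and a 5 × 3 block survives as a 5 × 7 area, matching the behavior described above.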
 Next, the operation of the image processing apparatus 1 of the present embodiment is described with reference to FIG. 3. FIG. 3 is a flowchart of the procedure for generating the snowfall-corrected image. The processing shown in FIG. 3 is executed pixel by pixel on the input image after the moving-region size detection has been performed and the moving-region image has been stored in the moving-region processed-image storage unit 23.
 The snowfall correction image generation unit 16 determines whether the pixel under consideration is a moving-region pixel (S1). If it is, the snowfall correction image generation unit 16 determines whether the luminance value of that pixel has increased (S2). The increase is judged by comparing the luminance values of the previous frame and the current frame.
 If the luminance value has increased, the snowfall correction image generation unit 16 determines whether the size of the moving region containing the pixel is at least the S pixels detected by the moving-region size detection unit 15 for the parameter value specified through the UI of the screen display unit 17 (S3). Here, S is the specified value and, as described above, is expressed as 2m + 1 for the erosion size m. The determination in S3 refers to the pixel value, at the pixel of interest, of the moving-region processed image E described with reference to FIG. 2.
 If the size of the moving region is S pixels or more (the corresponding pixel of image E in FIG. 2 is white), the snowfall correction image generation unit 16 judges the pixel to belong to a moving object and uses the pixel value of the input image as the pixel value of the output image (S4). If the luminance value of the pixel has not increased in step S2, the snowfall correction image generation unit 16 judges that the pixel is not a bright object such as white snow and likewise uses the pixel value of the input image as the pixel value of the output image in step S4.
 On the other hand, if the size of the moving region is less than S pixels (the corresponding pixel of image E in FIG. 2 is black), the snowfall correction image generation unit 16 judges the pixel to be snow and uses the pixel value of the median image as the pixel value of the output image (S5).
 If the pixel under consideration in step S1 is not a moving-region pixel, the snowfall correction image generation unit 16 judges the pixel to be background and uses the pixel value of the median image as the pixel value of the output image (S6). The input image could be used here instead of the median image, but using the median image has the additional effect of removing noise.
 After step S4, S5, or S6, the snowfall correction image generation unit 16 determines whether the judgment has been completed for all pixels in the input image (S7). If not, it returns to step S1.
 When all pixels have been judged, the snowfall correction image generation unit 16 generates the snowfall-corrected image from the judgment results for all pixels and displays it on the screen display unit 17 (S8). The snowfall correction image generation unit 16 then ends this processing.
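 The per-pixel selection of steps S1 to S6 can be expressed compactly as follows. This is only a hedged sketch, not the code of the snowfall correction image generation unit 16; the array names (input_img, prev_img, median_img, moving_mask, large_mask) and the use of NumPy are assumptions introduced for the example, with large_mask standing for image E produced by the erosion/dilation step.

```python
import numpy as np

def generate_corrected_frame(input_img, prev_img, median_img,
                             moving_mask, large_mask):
    """Per-pixel selection following steps S1-S6 of FIG. 3.

    input_img   : current frame
    prev_img    : previous frame, used for the luminance-increase test (S2)
    median_img  : temporal median image from the median processing unit
    moving_mask : True where the pixel belongs to a moving region (S1)
    large_mask  : True where the moving region spans at least S = 2m + 1
                  pixels (result of the erosion/dilation step, S3)
    """
    out = np.empty_like(input_img)

    brighter = input_img > prev_img                        # S2: luminance increased?

    keep_input = moving_mask & (large_mask | ~brighter)    # S4: moving object, or not brighter
    snow = moving_mask & brighter & ~large_mask            # S5: small bright mover -> snow
    background = ~moving_mask                               # S6: background

    out[keep_input] = input_img[keep_input]
    out[snow] = median_img[snow]
    out[background] = median_img[background]
    return out
```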
 FIG. 4 shows the UI screen of the screen display unit 17 when the removal strength, i.e. the parameter value, is set to its minimum. The UI screen of the screen display unit 17 contains a window 31 in which the input video is displayed, a window 32 in which the corrected video is displayed, radio buttons 33 for selecting whether to perform afterimage reduction processing, a slide bar 35 for adjusting the removal strength, and a numerical display section 37.
 When the radio button 33 is set to "with afterimage reduction processing", the afterimage reduction processing described in the present embodiment is performed; when it is set to "without afterimage reduction processing", the afterimage reduction processing is not performed.
 The slide bar 35 is operated left and right by the user's finger or the like and specifies the removal strength in units of pixels. In FIG. 4, the removal strength is expressed per pixel as a 5-bit value (0 to 32). The removal strength is the parameter value used for the snowfall correction and corresponds to the specified value S expressed as 2m + 1 described above. Accordingly, a snowflake whose moving-region size is less than the S pixels corresponding to the removal strength indicated by the slide bar 35 is removed.
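 How the slider value maps onto the erosion size is not spelled out beyond the relation S = 2m + 1, so the following one-liner is only a hypothetical mapping for illustration; the function name and the exact rounding are assumptions.

```python
def erosion_size_from_strength(strength):
    # Hypothetical: treat the removal strength (in pixels) as the size
    # threshold S = 2m + 1 and derive the erosion size m used by the
    # moving-region size detection unit.
    return max(0, (strength - 1) // 2)
```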
 For example, when the removal strength is set to a small value, large snowflakes are no longer removed, but afterimages of moving objects (doubling or disappearance) are reduced. Conversely, when the removal strength is set to a large value, snowflakes from small to large are removed, but afterimages of moving objects (doubling or disappearance) increase. The slide bar 35 thus lets the user choose which to prioritize: erasing the snow or avoiding afterimages. It also lets the user handle the case where the large snowflakes in the foreground cannot be erased but the many small snowflakes in the background should be.
 In FIG. 4, the slide bar 35 operated by the user is at the left end and the removal strength is 0. In this case, the corrected video (snowfall-corrected image) shown in window 32 is substantially the same as the input video (input image) shown in window 31, and the snow 51 is not removed at all.
 FIG. 5 shows the UI screen of the screen display unit 17 when the removal strength is set to an intermediate value. In FIG. 5, the slide bar 35 operated by the user is at the middle and the removal strength is 16. In this case, compared with the input video, the small snow 51a in the background has been removed from the corrected video (snowfall-corrected image) shown in window 32, but the large snow 51b in the foreground has not.
 FIG. 6 shows the UI screen of the screen display unit 17 when the removal strength is set to its maximum. In FIG. 6, the slide bar 35 operated by the user is at the right end and the removal strength is 32. In this case, compared with the input video, the snow has almost completely disappeared from the corrected video (snowfall-corrected image) shown in window 32.
 As described above, when performing snowfall correction by applying median filter processing to an image, the image processing apparatus 1 of the present embodiment uses the input image for moving regions of at least the specified size in the input image and uses the median image for moving regions smaller than the specified size. Afterimages occurring on moving regions of at least the specified size, such as objects under surveillance, can therefore be reduced, and the image processing apparatus 1 can remove snow-particle noise smaller than the specified size from the image.
 Various embodiments have been described above with reference to the drawings, but it goes without saying that the present invention is not limited to these examples. It will be apparent to those skilled in the art that various changes and modifications can be conceived within the scope of the claims, and these naturally belong to the technical scope of the present invention.
 In the embodiment described above, the objects to be processed by the image processing apparatus 1 are snow particles in the input image, but the processing is not limited to snow particles and may be applied to raindrops, for example. In other words, the image processing apparatus 1 can generate an image after snowfall correction processing (a snowfall-corrected image) and can also generate an image after rainfall correction processing (a rainfall-corrected image). In that case, "snowfall" in the embodiment described above may be read as "rainfall", "snow particle" as "raindrop", and "snow particle size" as "raindrop size".
 For example, the processing control unit 11 of the image processing apparatus 1 of the present embodiment may read out the moving-image data stored in the input image storage unit 21 and display, on the same screen of the screen display unit 17, the menu screen of the correction processing for removing snow or rain from the input image of the present embodiment (hereinafter "snow/rain removal") together with the moving-image data read from the input image storage unit 21 (see FIGS. 7 to 12). Screen examples displayed on the screens WD1 to WD6 of the screen display unit 17 are described below with reference to FIGS. 7 to 12.
 FIG. 7 shows a display example in which the viewer display area VIW for the moving image and the display area MNU for the image processing menu appear on the same screen WD1, with the moving image just before playback. FIG. 8 shows a display example in which the display areas VIW and MNU appear on the same screen WD2, with the moving image just after playback starts. FIG. 9 shows a display example in which the display areas VIW and MNU appear on the same screen WD3, with the moving image after snow/rain removal.
 FIG. 7 shows, as a single screen WD1, the viewer display area VIW for the moving-image data in the state just before playback, as read from the input image storage unit 21, and the display area MNU for the menu of image processing that the image processing apparatus 1 can execute (specifically, spatial gradation correction, multi-frame composite NR (noise reduction), and snow/rain removal). In FIGS. 7 and 8, the detailed contents of these image processing menus are not shown; only their names are displayed, so the relationship between the moving-image data to be processed and the list of image processing menus the image processing apparatus 1 can execute is visible to the user at a glance. The user can therefore compare the moving-image data to be processed with the image processing menu.
 In the lower part of the screens WD1 to WD7 shown in FIGS. 7 to 13, the following are displayed in common: an operation button set BST for playback, pause, stop, fast-forward, rewind, recording, and return operations; a playback bar BAR that visually indicates the playback position of the moving image; a moving-image time area STE that indicates the start time and end time of the moving image; and a playback time area CRT that directly indicates the actual playback time (that is, the elapsed time from the start time) shown by the indicator IND in the playback bar BAR. When the playback button RP is pressed with the cursor CSR by a user operation, the processing control unit 11 of the image processing apparatus 1 plays back the moving-image data read from the input image storage unit 21 (see FIGS. 8 to 13). In this case, the processing control unit 11 of the image processing apparatus 1 replaces the playback button RP with a pause button HL on the screens WD2 to WD7.
 Here, when the user presses, for example, the snow/rain removal menu bar MNU3 in the image processing menu display area MNU once with the cursor CSR, the snowfall correction image generation unit 16 of the image processing apparatus 1 performs the snow/rain removal correction processing on the moving-image data displayed in the display area VIW, in accordance with the method of the present embodiment described above (see, for example, FIG. 3). When the snow/rain removal menu bar MNU3 is pressed once more with the cursor CSR, the snowfall correction image generation unit 16 of the image processing apparatus 1 suspends the snow/rain removal correction processing on the moving-image data displayed in the display area VIW. In this way, while the moving-image data displayed in the display area VIW is being played back, the image processing apparatus 1 can reliably start, or suspend, the image processing corresponding to any of the image processing menus in the display area MNU with a simple user operation (that is, whether the menu bar MNU3 is pressed), allowing the user to easily compare the results before and after the image processing. When the cursor CSR is on or near one of the menu bars of the image processing menu, the processing control unit 11 of the image processing apparatus 1 may display a cursor CSR1 of a different shape, or may keep displaying the original cursor CSR.
 When the user presses, with the cursor CSR1, the marker DT1 displayed at the right end of the snow/rain removal menu bar MNU3 shown in FIG. 9, the processing control unit 11 of the image processing apparatus 1 expands the display area MNU3D of a detailed operation screen for setting the input parameters (for example, the correction strength) of the snow/rain removal correction processing and displays it on the screens WD4 to WD6 (see FIGS. 10 to 12). FIGS. 10 to 12 show examples in which the viewer display area VIW for the moving image and the image processing menu display area MNU3D appear on the same screens WD4 to WD6, respectively, with the details of the snow/rain removal menu shown.
 As shown in FIGS. 10 to 12, the processing control unit 11 of the image processing apparatus 1 displays, in the display area MNU3D, a screen similar to the detailed operation screen SRC (see FIGS. 4 to 6) for setting the input parameters (for example, the correction strength) of the snow/rain removal correction processing. In the display area MNU3D shown in FIGS. 10 to 12, a check box ATC for automatic setting (see FIG. 12) is additionally displayed; it sets the correction mode and the correction strength, the input parameters of the snow/rain removal correction processing, to a predetermined mode and a predetermined value. The correction mode and correction strength used for automatic setting may be initial values, or may be set dynamically based on, for example, the detection result of the moving-region size detection unit 15; the same applies below. The processing selection options shown in FIGS. 4 to 6 (that is, the selection box for choosing between "without afterimage reduction processing" and "with afterimage reduction processing") correspond to the correction mode shown in FIGS. 10 to 12.
 Similarly, in FIGS. 10 and 11, when the user moves the cursor CSR left or right on the seek bar provided for the correction strength (see the slide bar 35 shown in FIGS. 4 to 6), the image processing apparatus 1 performs the snow/rain removal correction processing on the moving-image data being played back in the display area VIW, using the input parameter (that is, the correction strength) resulting from the move operation.
 When the user enters a numerical value (for example, a value from 0 to 32) in the input field provided for the correction strength (see the numerical display section 37 shown in FIGS. 4 to 6), the image processing apparatus 1 performs the snow/rain removal correction processing on the moving-image data being played back in the display area VIW, using the entered correction strength.
 In FIG. 10 the correction strength is "16", whereas in FIG. 11 it has been changed to "30". Consequently, more snow particles during snowfall, or raindrops during rainfall, are removed from the moving-image data shown in FIG. 11 than from that shown in FIG. 10, and the user can fully grasp the content of the moving image.
 When the user presses, with the cursor CSR, the check box ATC shown in FIG. 12 for automatically setting the correction mode and correction strength, the input parameters of the snow/rain removal, to default values (for example, initial values), the image processing apparatus 1 performs the snow/rain removal correction processing on the moving-image data being played back in the display area VIW using the default correction mode and correction strength. As a result, even for a user who does not know what values to enter for the correction mode and correction strength of the snow/rain removal, the image processing apparatus 1 can easily perform the snow/rain removal correction processing on the moving-image data by using, for example, typical initial values of the snow/rain removal correction processing as the defaults.
 The image processing apparatus 1 of the present embodiment may also display, on the same screen of the screen display unit 17, the moving-image data read from the input image storage unit 21 together with the menu screens of a plurality of image processing operations including the snow/rain removal correction processing (see FIG. 13). A screen example displayed on the screen WD7 of the screen display unit 17 is described below with reference to FIG. 13.
 FIG. 13 shows a display example in which the viewer display area VIW for the moving image and the display areas MNU1, MNU2, and MNU3 for the individual image processing menus appear on the same screen WD7, with the moving image after spatial gradation correction, multi-frame composite NR, and snow/rain removal have each been applied. In FIG. 13, the spatial gradation correction menu bar MNU1, the multi-frame composite NR menu bar MNU2, and the snow/rain removal menu bar MNU3 have each been pressed with the cursor CSR1 by a user operation, so the moving-image data to which the image processing apparatus 1 has applied the processing corresponding to each menu bar is displayed in the display area VIW. In other words, the image processing apparatus 1 of the present embodiment is not limited to the single image processing operation (for example, the snow/rain removal correction processing) described with reference to FIGS. 7 to 12; it can execute a plurality of image processing operations in response to simple user operations while displaying the moving-image data in the display area VIW, and can present the results of these operations to the user intuitively and visually.
 Spatial gradation correction is image processing that, in response to the input or designation of predetermined parameters (for example, correction method, correction strength, color enhancement degree, brightness, and correction range), converts them into corresponding internal parameters (for example, weighting coefficient, weighting range, histogram upper-limit clip amount, histogram lower-limit clip amount, histogram distribution coefficient setting value, distribution start/end positions, image blend ratio, and color gain), generates and shapes a local histogram for the input frame using the converted parameters, and then generates a tone curve to perform gradation conversion and color enhancement. Multi-frame composite NR is image processing that reduces the noise components appearing in the image of the current frame by calculating a blend ratio between the current frame and the immediately preceding frame (the previous frame) according to the contrast of the input frame (current frame) and a preset motion detection level, and then blending the two images according to that ratio.
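 As a rough, hedged sketch of the multi-frame composite NR idea (not the actual algorithm; the way the blend ratio is derived from the frame content and the motion detection level is assumed here purely for illustration):

```python
import numpy as np

def composite_nr(current, previous, motion_level, eps=1e-6):
    # Blend the current frame with the previous frame to suppress noise.
    # Where the frames differ strongly (likely motion), the weight shifts
    # toward the current frame so moving objects are not smeared.
    cur = current.astype(np.float32)
    prev = previous.astype(np.float32)
    diff = np.abs(cur - prev)
    alpha = np.clip(diff / (float(motion_level) + eps), 0.0, 1.0)
    blended = alpha * cur + (1.0 - alpha) * prev
    return blended.astype(current.dtype)
```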
 In FIG. 13, as in FIG. 10, when the user presses with the cursor CSR1 the markers, corresponding to the marker DT1 shown in FIG. 9, on the spatial gradation correction menu bar MNU1, the multi-frame composite NR menu bar MNU2, and the snow/rain removal menu bar MNU3, the processing control unit 11 of the image processing apparatus 1 expands and displays on the screen the display areas of the detailed operation screens (for example, MNU3D) for setting the respective input parameters of spatial gradation correction, multi-frame composite NR, and snow/rain removal. Illustration of the detailed operation screen display areas for the input parameters of spatial gradation correction and multi-frame composite NR is omitted. For spatial gradation correction, for example, the correction method, correction strength, color enhancement degree, brightness, and correction range described above are each entered or designated by a user operation. For multi-frame composite NR, a parameter specifying the degree to which noise is reduced by blending the input-frame image with the previous-frame image (that is, the NR level), and parameters specifying the preset motion detection level, for example the detection accuracy and detection range of the camera when the image processing apparatus 1 is a camera (that is, detection accuracy and detection range), are each entered or designated.
 In this way, while the moving-image data displayed in the display area VIW is being played back, the image processing apparatus 1 can reliably start, or suspend, the image processing corresponding to the pressed menu bar with a simple user operation (that is, whether the menu bars MNU1, MNU2, and MNU3 of the respective image processing operations are pressed), allowing the user to easily compare the results before and after the image processing. The image processing apparatus 1 can likewise start or suspend image processing in response to a change of any of the parameters displayed in the display areas MNU1D, MNU2D, and MNU3D of the detailed operation screens for setting the parameters of the individual image processing operations, again allowing the user to easily compare the results before and after the image processing.
 The present invention is useful as an image processing apparatus and an image processing method that, when performing snowfall correction by applying median filter processing to an image, can remove snowfall noise and can further reduce afterimages occurring on relatively large moving objects such as objects under surveillance.
 1  image processing apparatus
 11  processing control unit
 12  video input unit
 13  median processing unit
 14  moving-region detection unit
 15  moving-region size detection unit
 16  snowfall correction image generation unit
 17  screen display unit
 21  input image storage unit
 22  median image storage unit
 23  moving-region processed-image storage unit
 31, 32  window
 33  radio button
 35  slide bar
 37  numerical display section
 WD1, WD2, WD3, WD4, WD5, WD6, WD7  screen

Claims (7)

  1.  An image processing apparatus comprising:
     an image input unit that inputs an image;
     a median processing unit that applies median filter processing to the input image input by the image input unit;
     a moving-region detection unit that detects a moving region of the input image in which there is motion;
     an input operation unit that receives input of a snowfall correction parameter or a rainfall correction parameter for the input image;
     a specified-size setting unit that sets a specified size of the moving region corresponding to the snowfall correction parameter or the rainfall correction parameter;
     a moving-region size detection unit that detects a moving region equal to or larger than the specified size set by the specified-size setting unit; and
     an image generation unit that generates a snowfall-corrected image or a rainfall-corrected image by using, as the image of the moving region, the image to which the median filter processing has been applied when the size of the moving region is less than the specified size, and the image input by the image input unit when the size of the moving region is equal to or larger than the specified size.
  2.  The image processing apparatus according to claim 1, further comprising:
     a display unit that displays, on the same screen, the snowfall-corrected image or the rainfall-corrected image and a menu screen of snow/rain correction processing including an operation screen for the snowfall correction parameter or the rainfall correction parameter.
  3.  The image processing apparatus according to claim 1, further comprising:
     a display unit that displays, on the same screen, the snowfall-corrected image or the rainfall-corrected image and menu screens of a plurality of image processing operations including snow/rain correction processing having an operation screen for the snowfall correction parameter or the rainfall correction parameter.
  4.  The image processing apparatus according to claim 2 or 3, wherein
     the specified-size setting unit sets the specified size in response to an operation to change the snowfall correction parameter or the rainfall correction parameter on the operation screen displayed on the display unit, and
     the image generation unit generates the snowfall-corrected image or the rainfall-corrected image in accordance with the specified size set by the specified-size setting unit.
  5.  The image processing apparatus according to claim 1, wherein
     the input operation unit specifies, in units of pixels, the snow particle size or the raindrop size to be removed.
  6.  The image processing apparatus according to claim 1 or 5, wherein
     the moving-region size detection unit uses an image generated by forming a logical OR image of two images obtained by applying erosion processing to the moving-region image detected by the moving-region detection unit in the horizontal direction and in the vertical direction, respectively, and then applying dilation processing in the horizontal direction and in the vertical direction.
  7.  An image processing method of an image processing apparatus that performs snowfall correction or rainfall correction, the method comprising:
     inputting an image;
     applying median filter processing to the input image;
     detecting a moving region of the input image;
     receiving input of a snowfall correction parameter or a rainfall correction parameter for the input image;
     setting a specified size of the moving region corresponding to the snowfall correction parameter or the rainfall correction parameter;
     detecting a moving region equal to or larger than the specified size; and
     generating a snowfall-corrected image or a rainfall-corrected image by using, as the image of the moving region, the image to which the median filter processing has been applied when the size of the moving region is less than the specified size, and the input image when the size of the moving region is equal to or larger than the specified size.