WO2015005196A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2015005196A1
WO2015005196A1 (PCT/JP2014/067688, JP2014067688W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
correction
comparison target
index
block size
Prior art date
Application number
PCT/JP2014/067688
Other languages
English (en)
Japanese (ja)
Inventor
佑一郎 小宮
山口 宗明
晴洋 古藤
Original Assignee
株式会社日立国際電気
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立国際電気 filed Critical 株式会社日立国際電気
Priority to US14/902,626 priority Critical patent/US9547890B2/en
Priority to JP2015526283A priority patent/JP5908174B2/ja
Priority to EP14822122.9A priority patent/EP3021575B1/fr
Publication of WO2015005196A1 publication Critical patent/WO2015005196A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20182Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to an image processing apparatus and an image processing method, and more particularly to an image processing apparatus and an image processing method for reducing image quality deterioration due to heat haze and the like.
  • Heat haze is a phenomenon in which light is refracted by the local mixing of air of different densities caused by temperature differences in the air.
  • When light passing through such air is imaged, the subject is observed deformed. Therefore, when a captured image is reproduced, specific areas in the image appear to shake greatly. In this way, distortion is generated in the image by a phenomenon in which the air appears to fluctuate due to heat haze and the like (air fluctuation), and the visibility of the subject is lowered.
  • In Patent Document 1, it is determined whether or not distortion in an image due to air fluctuation (hereinafter simply referred to as “distortion”) has occurred.
  • A plurality of images are generated by photographing, and the plurality of images are added (averaged) to generate one image in which the distortion is corrected.
  • Patent Document 2 corrects heat haze and similar distortion by referring to the image one frame before.
  • An object of the present invention is to provide an image processing technique capable of reducing distortion in an image that includes both a stationary region and a moving object.
  • The image processing apparatus obtains the gradation distribution (histogram) of each of the correction target frame (correction target image) and the correction frame (correction image), and corrects the correction target frame using the correction frame according to the similarity between the two. Further, a plurality of indices with different robustness against deformation of the subject are calculated from the comparison target pixel blocks of the correction target image and the correction image; from the relationship between two indices of different robustness, the relationship between the state of the subject in the comparison target pixel block and the comparison target pixel block size is estimated, and an appropriate comparison target pixel block size is determined.
  • the image processing apparatus can provide a high-quality image in which distortion in the image due to air fluctuation is reduced in an image including both a stationary region and a moving object.
  • FIG. 1 is a block diagram of the image processing apparatus 4 according to Embodiment 1.
  • A flowchart of a procedure for optimizing the comparison target pixel block size by the parameter control device 2 of the first embodiment.
  • A block diagram illustrating the configuration of the image processing device according to the second embodiment.
  • A functional block diagram of the fluctuation correction apparatus 1 according to the first or second embodiment.
  • A diagram explaining an operation example of the image smoothing unit 11.
  • A diagram showing how the gradation distribution calculation unit 12 calculates a gradation distribution using pixels in the peripheral region of a target pixel.
  • The kernel of the sharpening filter used by the high resolution unit 15, and a flowchart showing the processing operations of the fluctuation correction apparatus 1.
  • A functional block diagram of an imaging apparatus 1500 according to the third embodiment.
  • A functional block diagram of a monitoring system 1600 according to the fourth embodiment.
  • A functional block diagram of a code decoding system 1700 according to the fifth embodiment.
  • The applicant previously filed Japanese Patent Application No. 2012-107470 (referred to as prior application 1) and Japanese Patent Application No. 2013-057774 (referred to as prior application 2), which avoid the problem of image quality degradation of a moving subject.
  • The technology disclosed in these applications detects, in a plurality of images taken along a time series, regions where a moving subject is present, and adjusts the strength of the time-smoothing effect according to that likelihood.
  • The background area where no moving body exists is corrected by time smoothing, while the area where a moving body exists is corrected by spatial processing, with the effect of time smoothing weakened.
  • A histogram is used to detect motion caused by a moving object. Since the histogram of a pixel block centered on the pixel of interest is robust to deformation of the subject, a moving object can be detected even under the influence of fluctuation by comparing the histograms of two temporally different images.
  • The image processing apparatus (fluctuation correction apparatus 1) according to prior application 2 (described in the third embodiment below) creates a time-smoothed image M from a plurality of images taken in time series (for example, past, present (image N), and future images), creates histograms H1 and H2 for each pixel of image N and time-smoothed image M, and then changes the pixel correction amount for each pixel according to the similarity R between histograms H1 and H2.
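As an illustrative sketch only (not the patented implementation), the per-pixel correction weighted by the histogram similarity R described above could look like the following; the bin count and the linear blending rule are assumptions introduced here for illustration.

```python
import numpy as np

def histogram_similarity(block_a, block_b, bins=32):
    """Normalized histogram intersection of two pixel blocks (in [0, 1])."""
    h1, _ = np.histogram(block_a, bins=bins, range=(0, 255))
    h2, _ = np.histogram(block_b, bins=bins, range=(0, 255))
    return np.minimum(h1, h2).sum() / max(h1.sum(), 1)

def correct_pixel(n_val, m_val, similarity):
    """Blend the input pixel (image N) with the time-smoothed pixel (image M).

    High similarity R suggests background or mere air fluctuation, so lean on
    the smoothed value; low R suggests a moving body, so keep the input value.
    """
    return similarity * m_val + (1.0 - similarity) * n_val
```

Under this sketch, a pixel inside a purely fluctuating region (R near 1) is replaced almost entirely by its time-smoothed value, while a pixel on a moving subject (R near 0) is left untouched.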
  • a moving object is basically detected from a histogram of pixel blocks having a fixed size between images.
  • An appropriate comparison target pixel block size is determined by the amount of fluctuation of the image to be processed.
  • If the comparison target pixel block size is small relative to the air fluctuation, the characteristics of the histogram that are robust to changes in the shape of the subject may be lost.
  • FIG. 1 is a diagram for explaining the effect of temporal smoothing of pixel values on an image.
  • a plurality of input images 101 to 103 taken at a predetermined time interval are shown in order of time flowing from left to right.
  • the input images 101 to 103 are sequentially input to the image processing apparatus and become correction target images.
  • the smoothed image 104 is obtained by averaging a plurality of input images (in this example, the input images 101 to 103) in the time domain.
  • In the smoothed image 104, in an area 800 where the subject of the correction target image is stationary, distortion due to the influence of air fluctuation is corrected by time smoothing. This is the basic principle of air fluctuation removal.
  • When a region 801 where the subject is moving is smoothed in the time domain, not only the displacement of pixels due to distortion but also the displacement of pixels due to the movement of the subject is smoothed, and blur occurs around the moving subject. That is, temporal smoothing of pixel values degrades the image when a moving subject exists in it. It is therefore important to accurately estimate whether the actual subject is stationary or moving.
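The temporal averaging that underlies this principle can be sketched minimally as follows; the simple arithmetic mean is an assumption for illustration, and the patent does not prescribe this exact computation.

```python
import numpy as np

def temporal_smooth(frames):
    """Pixel-wise average of grayscale frames (H x W uint8 arrays).

    For a stationary subject, the random pixel displacements caused by air
    fluctuation average out over time; around a moving subject the same
    averaging produces motion blur, which is why moving regions need
    separate handling.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.rint(stack.mean(axis=0)).astype(np.uint8)
```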
  • the comparison target pixel block size must be larger than the size of the distortion in the image. If the comparison target pixel block size is too large with respect to the magnitude of the distortion, it becomes difficult to separately detect the distortion and the movement of the subject based on the histogram.
  • FIG. 2 is a diagram showing the relationship between the distortion in the image and the histogram.
  • The input image 901 shown on the upper side of FIG. 2 is an enlarged view of a part of the input image (correction target image). The smoothed image 902, also shown on the upper side of FIG. 2, is an image obtained by smoothing the same area of the input image 901 (an image used for correction); its contours are blurred.
  • Two areas A and B in the input image 901 are comparison target pixel blocks, and the size of the broken-line frame indicates the comparison target pixel block size. Region A and region B have the same center position.
  • two regions A and B at the same pixel position are also shown in the upper smoothed image 902 of FIG.
  • the block sizes of the region A and the region B are different, and the block size of the region A is larger than the block size of the region B.
  • a part of the pixel area of the input image 901 in FIG. 2 is a subject image 96 that is affected by air fluctuations (that is, distortion appears at the edge of the image).
  • the subject image 96 is smoothed to become a subject image 97 with reduced distortion in the smoothed image 902.
  • the histogram 903 to be compared on the lower side of FIG. 2 uses the region A as the comparison target pixel block, and the histogram 904 to be compared uses the region B as the comparison target pixel block.
  • the horizontal axis represents pixel values (luminance), and the vertical axis represents the number of pixels belonging to each bin.
  • a histogram 906 is a partial histogram of the correction target image when the image correction unit 14 sets the block size of the region A as the comparison target pixel block size for the input image 901.
  • a histogram 908 is a histogram of a part of the correction target image when the image correction unit 14 sets the block size of the region B as the comparison target pixel block size for the input image 901.
  • the histogram 907 is a histogram of a part of the image for correction when the image correction unit 14 sets the block size of the region A as the comparison target pixel block size for the smoothed image 902.
  • a histogram 909 is a histogram of a part of the image for correction when the image correction unit 14 sets the block size of the region B as the comparison target pixel block size for the smoothed image 902.
  • Comparing the correction image histogram 909 with the correction target image histogram 908, the change in shape is large. That is, if the block size is small, the characteristics of the histogram that are robust to shape changes of the subject are lost.
  • FIG. 3 is a diagram for explaining that a moving subject cannot be detected when the comparison target pixel block size is large.
  • FIG. 3 corresponds to FIG. 2 except that the images in the comparison target pixel block are different.
  • the upper input image 1001 is an enlarged view of a part of the input image.
  • the upper smoothed image 1002 is an image obtained by smoothing the area of the input image 1001. Two regions A and B in the input image 1001 are comparison target pixel blocks, and the same two regions A and B are set in corresponding locations in the smoothed image 1002.
  • the lower histogram diagram 1003 in FIG. 3 uses the region A as the comparison target pixel block, and the histogram diagram 1004 uses the region B as the comparison target pixel block.
  • a histogram 1006 is a histogram calculated by the image correction unit 14 when the region A of the input image 1001 is a comparison target pixel block.
  • a histogram 1007 is a histogram when the region A of the smoothed image 1002 is a comparison target pixel block.
  • a histogram 1008 is a histogram when the region B of the input image 1001 is a comparison target pixel block.
  • a histogram 1009 is a histogram when the region B of the smoothed image 1002 is set as the comparison target pixel block size.
  • In the histogram diagram 1003, where the comparison target pixel block size is large (region A), the shape change due to the presence or absence of distortion is small: the correction image histogram 1007 differs little from the correction target image histogram 1006.
  • In the histogram diagram 1004, where the comparison target pixel block size is small (region B), the correction image histogram 1009 differs greatly from the correction target image histogram 1008.
  • Several methods are known for calculating the difference or similarity between two histograms; for example, the normalized histogram intersection or the Bhattacharyya coefficient can be used.
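Both metrics named here can be computed from two gradation histograms roughly as follows; the normalization details (dividing by the larger total count, and by the distribution sums) are assumptions chosen so that identical histograms score 1.0.

```python
import numpy as np

def normalized_intersection(h1, h2):
    """Histogram intersection, normalized so identical histograms give 1.0."""
    h1 = np.asarray(h1, dtype=np.float64)
    h2 = np.asarray(h2, dtype=np.float64)
    return np.minimum(h1, h2).sum() / max(h1.sum(), h2.sum(), 1e-12)

def bhattacharyya_coefficient(h1, h2):
    """Bhattacharyya coefficient of two histograms; 1.0 for identical shapes."""
    p = np.asarray(h1, dtype=np.float64)
    q = np.asarray(h2, dtype=np.float64)
    p /= max(p.sum(), 1e-12)
    q /= max(q.sum(), 1e-12)
    return float(np.sqrt(p * q).sum())
```

Either value can serve as the histogram-based similarity between the correction target image block and the correction image block.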
  • The histograms shown in FIG. 3 are characteristic of a moving subject. However, when the block size is increased, the difference between the histograms shrinks in the same way as for the non-moving subject in FIG. 2, and the moving subject cannot be detected.
  • the moving subject 1068 is certainly shown in the input image 1001 of FIG.
  • In the smoothed image 1002, the subject 1068 is smoothed and, as shown by the subject 1079, appears as an image exhibiting blur (mainly the effect of multiple exposure), which is a type of distortion.
  • the moving subject 1068 exists only at one place in each frame (image), but when a plurality of frames are smoothed, it appears as a blurred image like the subject 1079 of the smoothed image 1002. That is, as the comparison target pixel block size becomes relatively larger with respect to the subject 1068, the influence of the change in the pixel value due to the movement of the subject on the histogram shape becomes smaller, and it becomes difficult to distinguish from the influence of the air fluctuation.
  • False detection and oversight of moving objects are thus in a trade-off relationship with the comparison target pixel block size.
  • The comparison target pixel block for analysis must be larger than the amount of fluctuation (the magnitude of distortion), but beyond that it should be as small as possible in order to suppress blur due to moving objects.
  • The block size of the comparison target pixel block for analysis is therefore an important parameter that affects the effect of distortion correction in moving object detection using a histogram in an air fluctuation environment.
  • the appropriate parameter is determined by the amount of distortion in the image.
  • the correction intensity is automatically adjusted, and the influence of image deterioration due to distortion correction is minimized.
  • FIG. 4 is a block diagram illustrating a schematic configuration of the image processing apparatus 4 according to the first embodiment.
  • The image processing device 4 is obtained by adding a parameter control device 2 and an image memory 3 to the fluctuation correction device 1 described with FIG. 8 (Embodiment 3 below).
  • the parameter control device 2 automatically sets parameters suitable for the input image. This parameter includes the above-described comparison target pixel block size.
  • a plurality of input images photographed in time series are input to the input unit 4i.
  • the input unit 4 i sequentially outputs a plurality of input images to the parameter control device 2, the image memory 3, and the fluctuation correction device 1.
  • the image memory 3 stores the input image and provides the parameter control device 2 with a predetermined number of frames among the stored images.
  • the predetermined number of frames is, for example, a number used by the parameter control device 2 and the fluctuation correction device 1 for time smoothing processing for creating a correction image.
  • the parameter control device 2 generates a time-smoothed image from past input images stored in the image memory. Thereafter, parameters are obtained from the input image (the image to be corrected by the fluctuation correction apparatus 1) and the generated time-smoothed image, and are output to the fluctuation correction apparatus 1.
  • This parameter is, for example, the size of a pixel block to be compared such as a histogram in the fluctuation correction apparatus 1.
  • The fluctuation correction device 1 performs distortion correction processing on the input image based on the parameters input from the parameter control device 2, creates a high-resolution image, stores it, and outputs it to the outside via the output unit 4o.
  • the image memory 3 may be a memory (90) built in the fluctuation correction apparatus 1. If the parameter control device 2 uses the time-smoothed image generated in the fluctuation correction device 1, the image memory 3 is not necessary.
  • the parameter control device 2 calculates a plurality of indices having different robustness against deformation of the subject due to air fluctuation from the comparison target pixel block of the correction target image and the correction image.
  • any index is affected by a change in pixel value due to movement of the subject.
  • An index having a characteristic robust to deformation of a subject is, for example, a kind of index in which a plurality of pixels are aggregated and compared, and one of them is the above-mentioned histogram similarity.
  • An index that is not robust to the shape of the subject is, for example, a type that directly compares individual pixel values, one of which is SAD (Sum of Absolute Difference).
  • FIG. 5 shows how to classify the distribution of two different indices.
  • the parameter control device 2 estimates the relationship between the state of the subject in the comparison target pixel block and the comparison target pixel block size based on the relationship between the two different indexes shown in FIG.
  • In FIG. 5, the horizontal axis is the index I1 and the vertical axis is the index I2.
  • The index I1 reflects changes in pixel value due to movement of the subject, while the index I2 is a histogram-based index that is robust to deformation of the subject.
  • The index I1 is divided into a state P1 and other states by a threshold Th1, and the index I2 is divided into a state P2 and a state P3 by a threshold Th2.
  • If I1 is below Th1, the state is P1; if I1 is at or above Th1 and I2 is at or above Th2, the state is P2; if I1 is at or above Th1 and I2 is below Th2, the state is P3. Under appropriate thresholds, state P1 is a region considered to be background, state P2 is a region considered to contain air fluctuation or a fine moving body, and state P3 is a region considered to contain a moving body.
  • The parameter control device 2 of this example searches for an appropriate comparison target pixel block size based on this relationship. However, since the thresholds Th1 and Th2 depend on the comparison target pixel block size from which the indices are obtained, it is desirable to determine them appropriately for each comparison target pixel block size.
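The two-index classification of FIG. 5 can be sketched as below. Taking I1 as a mean-absolute-difference motion index and I2 as a histogram-intersection similarity is an assumption consistent with the description above (pixel change with similar histograms suggests fluctuation, state P2; pixel change with dissimilar histograms suggests a moving body, state P3).

```python
import numpy as np

def classify_block(block_n, block_m, th1, th2, bins=32):
    """Classify a comparison target pixel block into state P1, P2, or P3.

    I1: mean absolute pixel difference (sensitive to any pixel change).
    I2: normalized histogram intersection (robust to fluctuation-type
        deformation, low only for genuine subject motion).
    """
    i1 = np.abs(block_n.astype(np.float32) - block_m.astype(np.float32)).mean()
    h1, _ = np.histogram(block_n, bins=bins, range=(0, 255))
    h2, _ = np.histogram(block_m, bins=bins, range=(0, 255))
    i2 = np.minimum(h1, h2).sum() / max(h1.sum(), 1)
    if i1 < th1:
        return "P1"  # background: pixels barely change
    return "P2" if i2 >= th2 else "P3"  # fluctuation vs. moving body
```

The thresholds `th1` and `th2` are free parameters here and, as the text notes, would in practice have to be tuned per block size.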
  • FIG. 6 is a flowchart of processing for optimizing the comparison target pixel block size.
  • the parameter control device 2 makes it possible to minimize oversight of a moving subject while achieving correction of fluctuations less than a given maximum width.
  • a plurality of states to be classified and threshold values of each index for classification are defined in advance.
  • In region dividing step S1201, the maximum width of air fluctuation is roughly estimated from the amount of pixel displacement due to fluctuation, and the image is divided into a grid of image blocks of a predetermined width (block size) larger than the estimated maximum width. That is, the region is first divided virtually (logically) with a coarse granularity equal to or larger than the width of the fluctuation.
  • Virtual division means that the image need only be logically handled, and easily accessed, in units of image blocks.
  • the moving subject and the background may be classified at this point. At that time, by reducing the image beyond the width of the fluctuation, it is possible to roughly detect the moving body while mitigating the influence of the fluctuation.
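Region dividing step S1201 amounts to tiling the image with blocks no smaller than the estimated maximum fluctuation width. A minimal sketch follows; clipping edge tiles to the image border is an assumption, since the patent does not specify how leftover edge pixels are handled.

```python
def divide_into_blocks(height, width, block_size):
    """Return (top, left, bottom, right) tiles covering an image.

    block_size should already exceed the estimated maximum fluctuation
    width; tiles at the right and bottom edges are clipped to the border.
    """
    tiles = []
    for top in range(0, height, block_size):
        for left in range(0, width, block_size):
            tiles.append((top, left,
                          min(top + block_size, height),
                          min(left + block_size, width)))
    return tiles
```

Step S1202 would then sample some of these tiles as comparison target regions rather than evaluating all of them.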
  • In comparison target pixel region determination step S1202, some image blocks to be compared in the state evaluation described later are selected from among the image blocks produced in region division step S1201.
  • the target image blocks may be extracted uniformly from the entire image, or a large number may be extracted from the subject and background near their boundaries. As described above, by limiting the target region, it is possible to expect an improvement in the accuracy of determining the comparison target pixel block size and a reduction in the amount of calculation.
  • In state evaluation step S1203, the given pixel blocks (initially, the image blocks selected in S1202) are set as comparison target regions, and each comparison target region is evaluated, as described with FIG. 5, to determine which state it is in based on the predefined indices and thresholds.
  • All pixels in a given pixel block are used as the comparison target pixel block, but the pixels may be thinned out.
  • The comparison target pixel block size is evaluated starting from one that can correct fluctuation of the maximum width, and is reduced on each loop (from step S1203 through step S1205 and step S1207 back to step S1203). That is, in the second and subsequent loops, the comparison target pixel block size reduced in step S1207 is applied.
  • In state information integration step S1204, information on each pixel block is integrated to calculate a value (integrated information) for determining whether the block size is appropriate.
  • For example, the rate of change of the ratio (P2 ratio) that regions in state P2 of FIG. 5 occupy in the image is calculated as the integrated information.
  • When the comparison target pixel block size is at its initial maximum value, both regions with air fluctuation and regions with fine motion are determined to be in state P2, so the P2 ratio is high.
  • In integrated information determination step S1205, when the integrated information (for example, the change rate) calculated in state information integration step S1204 has changed by a predetermined rate or more compared with the previous integrated information, the process branches to block size determination step S1206.
  • Otherwise, including in the pass through step S1205 immediately after the integrated information is calculated for the first time, the process branches to block size reduction step S1207.
  • In block size reduction step S1207, the comparison target pixel block size is reduced by a predetermined ratio from the previous block size, and the process returns to state evaluation step S1203 with the new comparison target pixel block size.
  • As the loop of steps S1203, S1204, S1205, and S1207 repeats, the comparison target pixel block size decreases; when the ratio of regions in state P2 changes rapidly, it is determined that the comparison target pixel block size has fallen below the fluctuation width, the loop is exited, and the process proceeds to block size determination step S1206. At this timing, part of the region that had until then stably belonged to state P2 (fluctuating but not a moving body) comes to belong to state P3, so the P2 ratio is considered to decrease rapidly.
  • In block size determination step S1206, the comparison target pixel block size at this point is adopted as the final comparison target pixel block size.
  • The block size at this time is slightly smaller than the width of the fluctuation, and part of the moving body and the air fluctuation is classified into state P3.
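The shrink-and-check loop of FIG. 6 can be sketched as follows. The shrink factor, the change threshold, and the `p2_ratio` callback are illustrative assumptions; the procedure above only requires detecting an abrupt change of the P2 ratio between successive block sizes.

```python
def optimize_block_size(p2_ratio, initial_size, shrink=0.8,
                        change_threshold=0.3, min_size=4):
    """Shrink the comparison target pixel block size until the P2 ratio jumps.

    p2_ratio: callable mapping a block size to the fraction of sampled
    blocks classified as state P2 (fluctuation) at that size.
    Returns the size in use when the abrupt change is detected.
    """
    size = initial_size
    prev = p2_ratio(size)                      # S1203/S1204 at the initial size
    while size * shrink >= min_size:
        size = max(int(size * shrink), min_size)  # S1207: reduce block size
        cur = p2_ratio(size)                   # S1203/S1204 at the new size
        if prev > 0 and abs(cur - prev) / prev >= change_threshold:
            return size                        # S1205 -> S1206: ratio jumped
        prev = cur
    return size
```

With a `p2_ratio` that drops sharply once the block size falls below the fluctuation width, the loop stops at the first size past that drop, mirroring the S1205/S1206 branch described above.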
  • the above processing may be performed only once when the image processing apparatus is activated, or may be executed at predetermined time intervals and the comparison target pixel block size may be updated at predetermined time intervals.
  • One pixel block size to be compared may be determined for the entire screen, or may be determined for each divided region by dividing the screen. Alternatively, a special size may be determined for an area designated by the user (particularly, an area where a video is desired to be clear).
  • the block size may be updated by an external trigger such as a change in the zoom magnification of the camera that is shooting.
  • The image processing apparatus includes an input unit that inputs input images taken in time series, an image memory that stores the input images, a fluctuation correction device, and a parameter control device.
  • The image memory outputs a predetermined number of frames of the stored input images to the parameter control device, and the parameter control device obtains the comparison target pixel block size by comparing the input image with the images input from the image memory.
  • The obtained comparison target pixel block size is output to the fluctuation correction device, and the fluctuation correction device creates a time-smoothed image using the input images taken along the time series.
  • an appropriate comparison target pixel block size can be automatically determined, it is possible to generate an image in which the distortion in the image is further reduced and the resolution is improved as compared with the prior application 2.
  • FIG. 7 is a block diagram illustrating a configuration of the image processing apparatus 7 according to the second embodiment.
  • the image processing device 7 is obtained by combining the parameter control device 5 with the same fluctuation correction device 1 as in the first embodiment.
  • the parameter control device 5 performs automatic parameter setting according to the flowchart of FIG. 6 using two types of indicators as in the first embodiment.
  • In FIG. 7, a plurality of input images photographed in time series are input to the input unit 4i.
  • the input unit 4 i guides the input images to the parameter control device 5 and the fluctuation correction device 1.
  • the parameter control device 5 performs delay processing as necessary in order to match the timing for inputting and outputting images.
  • The fluctuation correction device 1 performs distortion correction processing on the input image based on the parameter information input from the parameter control device 5, creates a high-resolution corrected image, and outputs it to the output unit 4o and to the parameter control device 5.
  • the fluctuation correction device 1 feeds back the correction information 6 when this distortion correction processing is executed to the parameter control device 5.
  • the correction information 6 may include past input images or correction images generated by smoothing them.
  • the parameter control device 5 obtains a parameter (comparison target pixel block size) from the input image that is the correction target image and the corrected image output from the fluctuation correction device 1, and outputs it to the fluctuation correction device 1.
  • The parameter control device 5 uses the corrected image as an alternative to the averaged image, and receives, as the correction information 6, the regions where the fluctuation correction device 1 has detected moving bodies, using them for parameter control. For example, when determining the comparison target pixel regions in S1202 of FIG. 6, sampling a certain number of regions from areas other than those where moving objects were detected makes the determination robust against changes in the proportion of moving objects in the image.
  • FIG. 8 is a functional block diagram of the image processing apparatus according to the third embodiment of the present invention.
  • The image processing apparatus includes an input unit 10, an image smoothing unit 11, a gradation distribution calculation unit 12, a dissimilarity calculation unit 13, an image correction unit 14, a high-resolution unit 15, a recording unit 16, and a control unit 91.
  • the input unit 10 receives moving image data taken in time series by an imaging unit (not shown).
  • the input unit 10 includes an image input terminal, a network connection terminal, and the like, or may be a TV broadcast tuner.
  • the input unit 10 may continuously acquire still image data, such as JPEG-format data captured in time series at predetermined intervals by an imaging unit such as a monitoring camera, and store a plurality of past, present, and future images in the memory 90 as input images.
  • the input unit 10 may extract frames at predetermined intervals from compressed or uncompressed moving image data in MPEG, H.264, HD-SDI, or other formats, and store a plurality of past, present, and future images in the memory 90 as input images.
  • the input unit 10 can acquire a plurality of past, present, and future input images from the memory 90 using DMA (Direct Memory Access) or the like. Further, as will be described later, the input unit 10 may store a plurality of past, present, and future input images already stored in a removable recording medium in the memory 90.
  • the input unit 10 performs delay processing as necessary so that the image at time t, which is the correction target image, is treated as the current image, older images as past images, and newer images as future images. That is, the image at time t is actually older than the current time, and the "future images" include the current (latest) image.
  • the input unit 10 outputs the current image and the previous and subsequent images at the same pixel position to the image smoothing unit 11.
  • the image smoothing unit 11 synthesizes, in time series, images at the same pixel position among a plurality of past, present (time t), and future input images, and creates a smoothed image corresponding to the image at time t. An example of creating a smoothed image will be described later with reference to FIG. 9.
  • the gradation distribution calculation unit 12 calculates the gradation distribution for each image area centered on the pixel of each image for each of the input image and the smoothed image at time t. This process is performed for each pixel of the input image and the smoothed image at time t at the same pixel position.
  • An example of the tone distribution will be described later with reference to FIG.
  • the dissimilarity calculation unit 13 calculates the similarity or dissimilarity between the gradation distribution of the input image and the gradation distribution of the smoothed image calculated by the gradation distribution calculation unit 12 at time t.
  • An example of the degree of difference between the gradation distributions will be described later with reference to FIG.
  • the image correction unit 14 corrects the input image at time t by combining the input image at time t and the smoothed image.
  • the ratio of combining both is changed according to the difference calculated by the difference calculation unit 13.
  • the high-resolution unit 15 creates a high-resolution image at time t from the corrected image at time t. An example of increasing the resolution will be described later with reference to FIG. 13.
  • the recording unit 16 stores the high-resolution image at time t, in which the heat-haze distortion has been reduced, in the memory 90. When the high-resolution image at time t is not stored, the input image is stored instead.
  • the recording unit 16 can switch the image stored in the memory 90 according to the mode. For example, when the mode is 1, the high-resolution image S obtained by the high-resolution unit 15 is stored in the memory 90, or the high-resolution image S is output to the outside of the fluctuation correction apparatus 1. When the mode is 0, the high-resolution image S is not stored in the memory 90 and the input image is output to the outside of the fluctuation correction apparatus 1.
  • the control unit 91 is connected to each element in the fluctuation correction apparatus 1. Each element of the fluctuation correction apparatus 1 operates autonomously or according to an instruction from the control unit 91.
  • FIG. 9 is a diagram for explaining an operation example of the image smoothing unit 11.
  • the image smoothing unit 11 creates a smoothed image from the past and future input images (image N-2, image N-1, image N+1, image N+2) and the current (time t) input image N, with the image N as a base point (N is a natural number).
  • the images N-2, N-1, N, N+1, and N+2 are displayed side by side in time series; time flows from past to future, from the left to the right of the page.
  • the image N is the current (time t) image
  • the image N-1 on the left side is a past image
  • the image N-2 is a past image.
  • the image N + 1 on the right side is a future image
  • the image N + 2 is a future image.
  • the pixel value of the image N-2 at the coordinates (i, j) is q1 (i, j), the pixel value of the image N-1 is q2 (i, j), the pixel value of the image N is q3 (i, j), The pixel value of the image N + 1 is q4 (i, j), the pixel value of the image N + 2 is q5 (i, j), the pixel value of the smoothed image M at the coordinates (i, j) is m (i, j), Let p1 to p5 be weighting factors.
  • the image smoothing unit 11 generates a smoothed image M obtained by smoothing each image in time series by synthesizing each image using Equation 1 below.
  • i indicates the pixel position in the horizontal direction on the image
  • j indicates the pixel position in the vertical direction on the image.
  • D is the number of images used for image composition.
  • D in Equation 1 is 5.
  • the pixel values in Equation 1 may be in any format, for example, the value of each channel in an arbitrary color space: R (red), G (green), B (blue) in the RGB color space, Y (luminance), Cb, Cr (color differences) in the YCbCr color space, or each component value of the HSV color space.
  • the amount of displacement when the subject appears deformed under the influence of heat haze statistically follows a Gaussian distribution. For this reason, an image close to the original shape of the subject can be obtained by smoothing each pixel using a plurality of past and future images.
  • since the base image is reconstructed using a limited range of past and future images, being strongly influenced by past images can be suppressed, unlike the case where many past images are repeatedly synthesized and smoothed over a long period.
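Equation 1 itself is not reproduced in this text. Assuming it denotes a normalized weighted temporal average, m(i, j) = Σk pk·qk(i, j) / Σk pk over D frames, a minimal sketch is:

```python
import numpy as np

def smooth_temporal(frames, weights):
    """Per-pixel weighted average of D frames (assumed form of Equation 1):
    m(i, j) = sum_k p_k * q_k(i, j) / sum_k p_k."""
    frames = np.asarray(frames, dtype=np.float64)    # shape (D, H, W)
    weights = np.asarray(weights, dtype=np.float64)  # shape (D,)
    # Weighted sum over the frame axis, then normalize by the weight total.
    return np.tensordot(weights, frames, axes=1) / weights.sum()
```

With D = 5 and equal weights this reduces to a plain five-frame average; unequal weights would let the base frame N dominate the result.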
  • FIG. 10 is a diagram illustrating a state in which the gradation distribution calculation unit 12 calculates the gradation distribution using the pixels in the peripheral region of the target pixel.
  • the gradation distribution calculation unit 12 obtains a histogram H1 representing the gradation distribution of the image area E1 (for example, 32 pixels ⁇ 32 pixels) including the target pixel 310 of the input image N corresponding to the time t.
  • the gradation distribution calculation unit 12 obtains a histogram H2 representing the gradation distribution of the image area E2 corresponding to the image area E1 in the smoothed image M.
  • the image region E2 includes the pixel 320 of the smoothed image M corresponding to the target pixel 310.
  • the horizontal axis indicates the gradation value
  • the vertical axis indicates the frequency (number of appearances) of the gradation values. Note that the horizontal and vertical axes of the other histograms described below are the same as those of the histogram of FIG. 10.
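The local histograms H1 and H2 above can be computed, for one target pixel, roughly as follows; the border clipping and the 8-bit gradation range are assumptions of this sketch:

```python
import numpy as np

def local_histogram(img, cx, cy, size=32, bins=32):
    """Histogram of the size x size region centred on (cx, cy), clipped
    at image borders; gradation values assumed to lie in [0, 256)."""
    h = size // 2
    region = img[max(0, cy - h):cy + h, max(0, cx - h):cx + h]
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    return hist
```

Running this once on the input image N (giving H1) and once on the smoothed image M (giving H2) at the same pixel position yields the pair of distributions compared by the dissimilarity calculation unit 13.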
  • FIG. 11 is a diagram for explaining that the histogram H2 changes due to the movement of the moving object included in the input image.
  • the histogram of the image area E3 including the target pixel 410 of the image N is set to H1.
  • the histogram H2 of the image region E4 of the target pixel 411 of the smoothed image M has a different shape according to the speed of the moving object included in the past, current, and future images.
  • when the movement of the moving object is stationary to low speed, the histogram H2 has a shape similar to the histogram H1, as shown in FIG. 11(1). When the movement of the moving object is low to medium speed, pattern information slightly different from the moving object is included; therefore, as shown in FIG. 11(2), the shape of the histogram H2 differs slightly from the histogram H1. When the movement of the moving object is medium to high speed, many patterns significantly different from the moving object are included; therefore, as shown in FIG. 11(3), the histogram H2 has a shape significantly different from the histogram H1. Thus, by comparing the histogram H1 of the image N with the histogram of the smoothed image M, the movement of the moving object in the image can be simply estimated.
  • the dissimilarity calculation unit 13 calculates the degree of difference between the gradation distributions obtained by the gradation distribution calculation unit 12, for example between the histograms described with reference to FIGS. 10 and 11.
  • the following Equation 2 is used to obtain the distance B32(c) between the histograms using the Bhattacharyya distance for each RGB component, and the square root of the sum of their squares is taken as the dissimilarity L. Since the dissimilarity L corresponds to a distance between histograms, the smaller the value of L, the higher the similarity. Note that c in Equation 2 represents one of the RGB components.
  • HA(I) represents the frequency of the gradation value I in the histogram A, and the number of gradations is the number of bins in the histogram.
  • the gradation distribution calculation unit 12 can obtain a more reliable degree of difference by creating histogram versions with different numbers of gradations and summing the differences between them. That is, after calculating the first-version histograms H1 and H2 (with 32 gradations), adjacent gradations are merged to generate histogram versions with fewer gradations. This processing is performed recursively until the number of gradations becomes 2, and the differences are accumulated at each step. Since distortion within a bin width falls into the same bin, the dissimilarity of image areas deformed by heat haze is kept low over various amplitudes of the haze, so such areas can be more accurately distinguished from image areas deformed by a moving object. A description will be given below together with specific examples.
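A sketch of this recursive accumulation for a single channel, assuming the common form of the Bhattacharyya distance for Equation 2 (B = sqrt(1 − Σ sqrt(hA·hB)) over normalized histograms); the exact formula in the patent figure is not reproduced here:

```python
import math

def bhattacharyya(h_a, h_b):
    """Bhattacharyya distance between two histograms (assumed form of Eq. 2)."""
    sa, sb = sum(h_a), sum(h_b)
    bc = sum(math.sqrt((a / sa) * (b / sb)) for a, b in zip(h_a, h_b))
    return math.sqrt(max(0.0, 1.0 - bc))

def merge_bins(hist):
    """Halve the number of gradations by summing adjacent bin pairs."""
    return [hist[i] + hist[i + 1] for i in range(0, len(hist), 2)]

def multiscale_dissimilarity(h1, h2):
    """Accumulate distances while recursively halving from 32 down to 2 bins."""
    total = 0.0
    while len(h1) >= 2:
        total += bhattacharyya(h1, h2)
        if len(h1) == 2:
            break
        h1, h2 = merge_bins(h1), merge_bins(h2)
    return total
```

Identical histograms accumulate a total of zero at every scale, while histograms shifted by more than a bin width at some scale contribute a positive distance there, which is the behavior the multi-version comparison above relies on.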
  • the gradation distribution calculation unit 12 calculates a histogram for all pixels, and the dissimilarity calculation unit 13 calculates the degree of difference for all pixels. However, if the amount of calculation needs to be reduced, the target pixels may be thinned out; for example, the number of target pixels may be one per image area E1.
  • FIG. 12 is a diagram for explaining the operation of the image correction unit 14.
  • the image correction unit 14 uses the input image N and the smoothed image M to determine the pixel values of the corrected image. Specifically, using the dissimilarity L obtained by the dissimilarity calculation unit 13 and two threshold values T1 and T2 (T1 < T2), the blend ratio of the input image N and the smoothed image M is changed for each pixel, yielding a corrected image.
  • L < T1 (the similarity between the histograms is high) corresponds to the case where the motion of the object in the image is stationary to slow.
  • the region 600 and the region 601 in FIG. 12 correspond to this. Region 601 may actually be moving at high speed, but a car body with little texture is treated as stationary because its motion hardly changes the image.
  • the pixel value of the smoothed image M is used as the correction value as it is.
  • T2 < L corresponds to the case where the movement of the moving object is medium to high speed.
  • this corresponds, for example, to a case where a moving object existed in the past but no object is passing in the present or future.
  • the pixel value of the input image N is used as the correction value without using the smoothed image M.
  • T1 ≤ L ≤ T2 corresponds to the case where the movement of the moving object in the image is low to medium speed.
  • the vicinity of the boundary line of the region 610 in FIG. 12 corresponds to this; since it is an intermediate case with respect to the above two, a value obtained by blending the pixel values of the input image N and the smoothed image M is used as the correction value.
  • the pixel value of the image N at the coordinates (i, j) is n (i, j)
  • the pixel value of the smoothed image M at the coordinates (i, j) is m (i, j)
  • the blend ratio R of each pixel is smoothed by bilinear interpolation or by a low-pass filter implemented as the convolution of the discrete R values with a sinc function.
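Equation 3 is not reproduced in this text; one plausible per-pixel rule consistent with the three cases above, using a linear blend ratio between the thresholds, is sketched below (the linear form of R is an assumption):

```python
def correct_pixel(n_val, m_val, L, T1, T2):
    """Pick the correction value from input pixel n and smoothed pixel m
    based on dissimilarity L (assumed linear form of Equation 3)."""
    if L < T1:                 # static / slow motion: trust the smoothed image
        return m_val
    if L > T2:                 # fast motion: keep the input pixel as-is
        return n_val
    r = (T2 - L) / (T2 - T1)   # blend ratio R: 1 at L = T1, down to 0 at L = T2
    return r * m_val + (1.0 - r) * n_val
```

The ratio falls continuously from the smoothed image to the input image as L grows, so the correction degrades gracefully around the boundary of a moving object instead of switching abruptly.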
  • FIG. 13 shows the kernel of the sharpening filter used by the high-resolution unit 15.
  • the center cell corresponds to the target pixel, and the value in each cell represents the coefficient by which the pixel at the corresponding position is multiplied.
  • the high resolution unit 15 sharpens each pixel of the corrected image using the filter kernel, and creates a high resolution image S with improved resolution.
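The coefficients of FIG. 13 are not reproduced in this text; the sketch below therefore substitutes a commonly used 3x3 sharpening kernel to illustrate how the high-resolution unit 15 might apply a kernel of this kind:

```python
import numpy as np

# A commonly used 3x3 sharpening kernel. The actual FIG. 13 coefficients
# are not available in this text, so this kernel is an assumption.
KERNEL = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float64)

def sharpen(img):
    """Apply the kernel to each interior pixel; edge pixels are left unchanged."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.sum(patch * KERNEL)
    return out
```

Because the kernel's coefficients sum to 1, flat areas pass through unchanged while local contrast around edges is amplified.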
  • FIG. 14 is a flowchart showing the operation of the fluctuation correction apparatus 1. Hereinafter, each step of FIG. 14 will be described.
  • Step S701 The input unit 10 acquires past, current, and future image frames and outputs them to the image smoothing unit 11.
  • the image smoothing unit 11 calculates a smoothed image M.
  • the subscript u in step S701 is 2; that is, two frames before and two frames after the current frame are used.
  • Step S702 The gradation distribution calculation unit 12 calculates histograms H1 and H2 for each pixel of the image N and the smoothed image M.
  • (Step S703) The dissimilarity calculation unit 13 calculates the dissimilarity L between the histograms H1 and H2.
  • (Step S704) The image correction unit 14 compares the dissimilarity L with the threshold T1. If L < T1, the process proceeds to step S705; if T1 ≤ L, the process proceeds to step S706. (Step S705) The image correction unit 14 sets the pixel value of the smoothed image M as the correction value. (Step S706) The image correction unit 14 determines whether the dissimilarity L satisfies T1 ≤ L ≤ T2 with respect to the thresholds T1 and T2. If the condition is satisfied, the process proceeds to step S707; if not (T2 < L), the process proceeds to step S708.
  • steps S704 and S706 together merely determine whether the relationship between the dissimilarity L and the thresholds T1 and T2 satisfies T1 ≤ L ≤ T2. For example, in a single processing step combining steps S704 and S706, the process proceeds to step S705 if L < T1, to step S707 if T1 ≤ L ≤ T2, and to step S708 if T2 < L.
  • (Step S707) When the condition in step S706 is satisfied, the image correction unit 14 sets, as the correction value, a value obtained by blending the pixel values of the image N and the smoothed image M based on Equation 3.
  • (Step S708) When the condition is not satisfied, the image correction unit 14 sets the pixel value of the image N as the correction value.
  • Step S709 The resolution increasing unit 15 creates the resolution-enhanced image S in which the resolution of the corrected image N ′ obtained by the image correcting unit 14 is improved.
  • Step S710 The image correcting unit 14 and the resolution increasing unit 15 repeat the processes in steps S702 to S709 until the correction value and the pixel value having the increased resolution are obtained for all the pixels in the target image.
  • the dissimilarity calculation unit 13 of this example estimates the speed of the moving object using only the histogram-based dissimilarity L and classifies the pixels accordingly; however, classification using two types of indices, as in the parameter control device 5, may also be performed.
  • FIG. 15 is a functional block diagram of the imaging apparatus 1500 according to the third embodiment of the present invention.
  • the imaging device 1500 includes an imaging unit 1501, an image processing device 1503, and an image display unit 1502.
  • the imaging unit 1501 is an imaging device that receives light emitted from a subject and converts the received optical image into image data.
  • the image processing device 1503 is the image processing device 4 or 7 according to the first or second embodiment; it receives the image data captured by the imaging unit 1501 and corrects the distortion caused by air fluctuation due to heat haze.
  • the image display unit 1502 is a device such as a display that displays a corrected image output from the image processing apparatus 1503.
  • the image display unit 1502 switches the image to be displayed according to the operation mode. For example, when the mode is 1, a corrected image in which the distortion caused by air fluctuation due to heat haze is reduced is displayed. When the mode is 0, the input image is displayed as it is.
  • as described above, an imaging apparatus can be provided that displays to the photographer a corrected image in which the distortion caused by air fluctuation due to heat haze is reduced over the entire image, including both stationary regions and moving objects.
  • FIG. 16 is a functional block diagram of the monitoring system 1600 according to the fourth embodiment of the present invention.
  • the monitoring system 1600 includes an imaging device 1601, an image processing device 1604, a server 1602, and a display device 1603.
  • the imaging device 1601 is an imaging device such as one or more surveillance cameras that capture image data.
  • the image processing device 1604 is the image processing device 4 or 7 according to the first or second embodiment; it receives the image data captured by the imaging device 1601 and corrects the heat-haze distortion.
  • a server 1602 is a computer on which an image processing apparatus 1604 is installed.
  • the display device 1603 is a device such as a display that displays a corrected image output from the image processing device 1604.
  • the imaging device 1601 and the server 1602, and the server 1602 and the display device 1603, can each be connected via a network such as the Internet, depending on the physical arrangement of the monitored location and the monitoring operator.
  • as described above, a monitoring system can be provided that displays to the monitoring operator a corrected image in which the distortion caused by air fluctuation due to heat haze is reduced over the entire image, including both stationary regions and moving objects.
  • FIG. 17 is a functional block diagram of a code decoding system 1700 according to the fifth embodiment of the present invention.
  • the code decoding system 1700 includes an encoding device 1710, a decoder 1721, and a display device 1722.
  • the encoding device 1710 includes an imaging device 1711, an image processing device 1704, and an encoder 1712.
  • the imaging device 1711 is an imaging device such as a surveillance camera that captures image data.
  • the image processing device 1704 is the image processing device 4 or 7 according to the first or second embodiment; it receives the image data captured by the imaging device 1711 and corrects the heat-haze distortion.
  • the encoder 1712 encodes the corrected image data output from the image processing apparatus 1704 and transmits the encoded image data to the decoder 1721 via the network.
  • the decoder 1721 decodes the corrected image data that has been transmitted.
  • the display device 1722 displays the image decoded by the decoder 1721.
  • as described above, a code decoding system can be provided that displays a decoded image in which the distortion caused by air fluctuation due to heat haze is reduced over the entire image, including both stationary regions and moving objects. Furthermore, reducing this distortion decreases the differences between the images to be transmitted by the encoder 1712, which improves the encoding efficiency.
  • the image correction method of this embodiment can correct a wide range of phenomena besides heat haze, for example water-surface fluctuation, swaying trees, atmospheric haze, and image distortion during underwater photography, and can be widely used to improve visibility by stabilizing irregularly swaying subjects.
  • the moving object separation method by area division according to the present embodiment can be used for purposes other than image correction.
  • for example, in a system that automatically detects an intruder from surveillance video containing heat-haze distortion, or that automatically recognizes the license plate number of an approaching vehicle, the amount of processing can be reduced by analyzing only the regions identified as moving objects.
  • each configuration, function, processing unit, processing means, and the like may be realized by hardware by designing a part or all of them with, for example, an integrated circuit.
  • Each of the above-described configurations, functions, and the like may be realized by software by interpreting and executing a program that realizes each function by the processor.
  • information such as programs, tables, and files for realizing each function can be stored in a non-transitory recording medium such as a memory, a hard disk, a recording device such as an SSD (Solid State Drive), an IC card, an SD card, or a DVD.


Abstract

The present invention relates to an image processing device that reduces heat-haze disturbances in images containing both stationary regions and moving objects. The device obtains histograms of the gradation distributions of a correction target image and of a correction frame, and corrects the target image using the correction frame while varying the blend ratio of the target image and the correction frame according to the degree of similarity between the histograms. To adaptively control the size of the image blocks for which the histograms are computed, the device uses image blocks of the target image and of a corrected image to compute two indices with different degrees of robustness to subject deformation, and determines the appropriateness of the image block size from the relationship between these indices. The difference between histograms can be used as one of the indices, and the sum of absolute differences (SAD) between pixel values can be used as the other.
PCT/JP2014/067688 2013-07-09 2014-07-02 Dispositif de traitement d'image et procédé de traitement d'image WO2015005196A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US14/902,626 US9547890B2 (en) 2013-07-09 2014-07-02 Image processing apparatus and image processing method
JP2015526283A JP5908174B2 (ja) 2013-07-09 2014-07-02 画像処理装置及び画像処理方法
EP14822122.9A EP3021575B1 (fr) 2013-07-09 2014-07-02 Dispositif de traitement d'image et procédé de traitement d'image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-143702 2013-07-09
JP2013143702 2013-07-09

Publications (1)

Publication Number Publication Date
WO2015005196A1 true WO2015005196A1 (fr) 2015-01-15

Family

ID=52279881

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/067688 WO2015005196A1 (fr) 2013-07-09 2014-07-02 Dispositif de traitement d'image et procédé de traitement d'image

Country Status (4)

Country Link
US (1) US9547890B2 (fr)
EP (1) EP3021575B1 (fr)
JP (1) JP5908174B2 (fr)
WO (1) WO2015005196A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016136151A1 (fr) * 2015-02-25 2016-09-01 パナソニックIpマネジメント株式会社 Dispositif de traitement d'images, procédé de traitement d'images, et programme d'exécution correspondant
WO2016147957A1 (fr) * 2015-03-13 2016-09-22 リコーイメージング株式会社 Dispositif d'imagerie et procédé d'imagerie
CN107018315A (zh) * 2015-11-26 2017-08-04 佳能株式会社 摄像设备、运动矢量检测装置及其控制方法
WO2018011870A1 (fr) * 2016-07-11 2018-01-18 三菱電機株式会社 Procédé, dispositif et programme de traitement d'images en mouvement
CN111062870A (zh) * 2019-12-16 2020-04-24 联想(北京)有限公司 一种处理方法及装置
CN111292268A (zh) * 2020-02-07 2020-06-16 北京字节跳动网络技术有限公司 图像处理方法、装置、电子设备及计算机可读存储介质

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6505237B2 (ja) * 2015-09-18 2019-04-24 株式会社日立国際電気 画像処理装置
US9710911B2 (en) * 2015-11-30 2017-07-18 Raytheon Company System and method for generating a background reference image from a series of images to facilitate moving object identification
JP6772000B2 (ja) * 2016-08-26 2020-10-21 キヤノン株式会社 画像処理装置、画像処理方法およびプログラム
US10176557B2 (en) * 2016-09-07 2019-01-08 The Boeing Company Apparatus, system, and method for enhancing image video data
CN108228679B (zh) * 2016-12-22 2022-02-18 阿里巴巴集团控股有限公司 时序数据计量方法和时序数据计量装置
KR101905128B1 (ko) 2017-03-13 2018-10-05 김창민 빛의 불규칙성을 기반으로 하는 동작영역 검출 제어 방법 및 그 방법을 이용한 동작영역 검출 장치
US10549853B2 (en) 2017-05-26 2020-02-04 The Boeing Company Apparatus, system, and method for determining an object's location in image video data
US10789682B2 (en) 2017-06-16 2020-09-29 The Boeing Company Apparatus, system, and method for enhancing an image
WO2019157717A1 (fr) * 2018-02-14 2019-08-22 北京大学 Procédé et dispositif de compensation de mouvement, et système informatique
CN108681990B (zh) * 2018-04-04 2022-05-24 高明合 一种实时雾霾预警方法及系统
CN110533666B (zh) * 2018-05-25 2022-09-23 杭州海康威视数字技术股份有限公司 一种获取数据块尺寸的方法、处理数据的方法及装置
US11544873B2 (en) * 2019-03-05 2023-01-03 Artilux, Inc. High resolution 3D image processing apparatus and method thereof
JP7421273B2 (ja) 2019-04-25 2024-01-24 キヤノン株式会社 画像処理装置及びその制御方法及びプログラム
EP3935601A4 (fr) * 2019-08-06 2022-04-27 Samsung Electronics Co., Ltd. Mise en correspondance d'histogramme local avec régularisation globale et exclusion de mouvement pour fusion d'image à expositions multiples
CN111401406B (zh) * 2020-02-21 2023-07-18 华为技术有限公司 一种神经网络训练方法、视频帧处理方法以及相关设备
CN111479115B (zh) * 2020-04-14 2022-09-27 腾讯科技(深圳)有限公司 一种视频图像处理方法、装置及计算机可读存储介质
KR102374840B1 (ko) * 2020-10-20 2022-03-15 두산중공업 주식회사 딥러닝 학습용 결함 이미지 생성 방법 및 이를 위한 시스템

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006302115A (ja) * 2005-04-22 2006-11-02 Fujitsu Ltd 物体検出方法および物体検出装置
JP2008160733A (ja) * 2006-12-26 2008-07-10 Sony Corp 撮像装置、撮像信号処理方法及びプログラム
JP2011215695A (ja) * 2010-03-31 2011-10-27 Sony Corp 移動物体検出装置及び方法、並びにプログラム
JP2011229030A (ja) 2010-04-21 2011-11-10 Sony Corp 画像処理装置および方法、記録媒体、並びにプログラム
JP2012107470A (ja) 2010-11-19 2012-06-07 Kmew Co Ltd 建築用板材
JP2012182625A (ja) 2011-03-01 2012-09-20 Nikon Corp 撮像装置
JP2013057774A (ja) 2011-09-08 2013-03-28 Kyocera Document Solutions Inc 定着装置及びそれを備えた画像形成装置
JP2013122639A (ja) * 2011-12-09 2013-06-20 Hitachi Kokusai Electric Inc 画像処理装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4541316B2 (ja) * 2006-04-06 2010-09-08 三菱電機株式会社 映像監視検索システム
JP4650560B2 (ja) * 2008-11-27 2011-03-16 ソニー株式会社 画像処理装置、画像処理方法、及びプログラム
JP5362878B2 (ja) * 2012-05-09 2013-12-11 株式会社日立国際電気 画像処理装置及び画像処理方法
US9292934B2 (en) * 2012-10-29 2016-03-22 Hitachi Kokusai Electric Inc. Image processing device
JP6104680B2 (ja) * 2013-03-21 2017-03-29 株式会社日立国際電気 画像処理装置、撮像装置、監視システム、符号化装置、画像処理方法


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3021575A4

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016136151A1 (fr) * 2015-02-25 2016-09-01 パナソニックIpマネジメント株式会社 Dispositif de traitement d'images, procédé de traitement d'images, et programme d'exécution correspondant
JPWO2016136151A1 (ja) * 2015-02-25 2017-04-27 パナソニックIpマネジメント株式会社 画像処理装置、画像処理方法及びそれを実行させるためのプログラム
EP3131283A4 (fr) * 2015-02-25 2017-06-21 Panasonic Intellectual Property Management Co., Ltd. Dispositif de traitement d'images, procédé de traitement d'images, et programme d'exécution correspondant
WO2016147957A1 (fr) * 2015-03-13 2016-09-22 リコーイメージング株式会社 Dispositif d'imagerie et procédé d'imagerie
JP2016171511A (ja) * 2015-03-13 2016-09-23 リコーイメージング株式会社 撮像装置および撮像方法
CN107018315A (zh) * 2015-11-26 2017-08-04 佳能株式会社 摄像设备、运动矢量检测装置及其控制方法
WO2018011870A1 (fr) * 2016-07-11 2018-01-18 三菱電機株式会社 Procédé, dispositif et programme de traitement d'images en mouvement
JPWO2018011870A1 (ja) * 2016-07-11 2018-10-25 三菱電機株式会社 動画像処理装置、動画像処理方法及び動画像処理プログラム
CN109478319A (zh) * 2016-07-11 2019-03-15 三菱电机株式会社 动态图像处理装置、动态图像处理方法及动态图像处理程序
CN111062870A (zh) * 2019-12-16 2020-04-24 联想(北京)有限公司 一种处理方法及装置
CN111292268A (zh) * 2020-02-07 2020-06-16 北京字节跳动网络技术有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN111292268B (zh) * 2020-02-07 2023-07-25 抖音视界有限公司 图像处理方法、装置、电子设备及计算机可读存储介质

Also Published As

Publication number Publication date
US9547890B2 (en) 2017-01-17
JPWO2015005196A1 (ja) 2017-03-02
EP3021575A4 (fr) 2017-06-07
EP3021575B1 (fr) 2020-06-17
EP3021575A1 (fr) 2016-05-18
US20160171664A1 (en) 2016-06-16
JP5908174B2 (ja) 2016-04-26

Similar Documents

Publication Publication Date Title
JP5908174B2 (ja) 画像処理装置及び画像処理方法
US10521885B2 (en) Image processing device and image processing method
Shin et al. Radiance–reflectance combined optimization and structure-guided $\ell _0 $-Norm for single image dehazing
Rao et al. A Survey of Video Enhancement Techniques.
US9247155B2 (en) Method and system for robust scene modelling in an image sequence
JP6505237B2 (ja) 画像処理装置
US9202263B2 (en) System and method for spatio video image enhancement
JP5144202B2 (ja) 画像処理装置およびプログラム
JP6104680B2 (ja) 画像処理装置、撮像装置、監視システム、符号化装置、画像処理方法
US20190199898A1 (en) Image capturing apparatus, image processing apparatus, control method, and storage medium
US20140177960A1 (en) Apparatus and method of processing image
US10013772B2 (en) Method of controlling a quality measure and system thereof
US9165345B2 (en) Method and system for noise reduction in video systems
CN104335565A (zh) 采用具有自适应滤芯的细节增强滤波器的图像处理方法
JP2018520531A (ja) 画像に対する深度マップを決定するための方法及び装置
US9355435B2 (en) Method and system for adaptive pixel replacement
Fuh et al. Mcpa: A fast single image haze removal method based on the minimum channel and patchless approach
US20230274398A1 (en) Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium
KR102136716B1 (ko) 관심영역 기반의 화질개선 장치와 그를 위한 컴퓨터로 읽을 수 있는 기록 매체
JP6938282B2 (ja) 画像処理装置、画像処理方法及びプログラム
CN107292853B (zh) 图像处理方法、装置、计算机可读存储介质和移动终端
US9686449B1 (en) Methods and systems for detection of blur artifact in digital video due to high quantization
CN111091136B (zh) 一种视频场景变换检测方法和系统
Rakhshanfar Automated Estimation, Reduction, and Quality Assessment of Video Noise from Different Sources
WO2015040731A1 (fr) Dispositif de traitement d'image et procédé de traitement d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14822122

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015526283

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 14902626

Country of ref document: US

Ref document number: 2014822122

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE