US20150215544A1 - Image processing apparatus, image capturing apparatus, control method and recording medium - Google Patents
- Publication number
- US20150215544A1 (application US 14/605,561; also published as US 2015/0215544 A1)
- Authority
- US
- United States
- Prior art keywords
- image
- display
- frequency range
- unit
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
- H04N5/23293—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/815—Camera processing pipelines; Components thereof for controlling the resolution by using a single image
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
- H04N5/23235—
Definitions
- The present invention relates to an image processing apparatus, an image capturing apparatus, a control method, and a recording medium, and particularly to a display technique regarding an electronic viewfinder.
- Some image capturing apparatuses, such as digital cameras, implement a viewfinder function (electronic viewfinder) by displaying a captured image on the display device of the image capturing apparatus.
- Japanese Patent Laid-Open No. 2005-142707 discloses a technique of reducing the display delay of an image regarding the electronic viewfinder during continuous shooting.
- An image capturing apparatus in Japanese Patent Laid-Open No. 2005-142707 alternately performs readout of a signal for saving from an image sensor and readout of a signal for the electronic viewfinder, and preferentially performs processing of the latter signal, thereby reducing the display delay of an image regarding the electronic viewfinder.
- However, because the image for saving and the image for display regarding the electronic viewfinder are generated by reading out two types of signals, their image capturing timings are not coincident, and synchronism between the two images is not ensured.
- the present invention was made in view of such problems in the conventional technique.
- the present invention provides an image processing apparatus, image capturing apparatus, control method, and recording medium for ensuring the synchronism between an image saved during image capturing/saving and an image displayed as the electronic viewfinder.
- the present invention in its first aspect provides an image processing apparatus comprising: an obtaining unit configured to obtain an image signal output by image capturing; a separation unit configured to generate a plurality of frequency range images by separating an image based on the image signal obtained by the obtaining unit into predetermined frequency ranges; an image processing unit configured to apply predetermined image processing to each of the plurality of frequency range images generated by the separation unit; a first generation unit configured to generate an image for saving based on the plurality of frequency range images to which the image processing unit has applied the predetermined image processing; a second generation unit configured to generate an image for display based on some frequency range images, out of the plurality of frequency range images to which the image processing unit has applied the predetermined image processing; and an output unit configured to output one of the image for saving and the image for display.
- the present invention in its second aspect provides a method of controlling an image processing apparatus, comprising: an obtaining step of obtaining an image signal output by image capturing; a separation step of generating a plurality of frequency range images by separating an image based on the image signal obtained in the obtaining step into predetermined frequency ranges; an image processing step of applying predetermined image processing to the frequency range images generated in the separation step; a first generation step of generating an image for saving based on the plurality of frequency range images to which the predetermined image processing has been applied in the image processing step; a second generation step of generating an image for display based on some frequency range images, out of the plurality of frequency range images to which the predetermined image processing has been applied in the image processing step; and an output step of outputting one of the image for saving and the image for display.
- FIG. 1 is a block diagram showing the functional arrangement of a digital camera 100 according to an embodiment of the present invention
- FIG. 2 is a block diagram exemplifying the arrangement of a signal processing unit 104 regarding layer-specific processing according to the embodiment of the present invention
- FIG. 3 is a flowchart exemplifying display/saving processing to be executed by the digital camera 100 according to the embodiment of the present invention
- FIGS. 4A and 4B are timing charts exemplifying display/saving processes by a conventional method and a method according to this embodiment.
- FIG. 5 is a timing chart exemplifying display/saving processing by a method according to a modification.
- layer-specific processing is processing of performing noise reduction processing that differs between images separated into a plurality of layers in accordance with the frequency range and selectively extracting, from a plurality of images (frequency range images) after noise reduction, the respective pixels of an image to be output, thereby generating the image.
- FIG. 1 is a block diagram showing the functional arrangement of a digital camera 100 according to the embodiment of the present invention.
- a CPU 108 controls the operation of each block of the digital camera 100 .
- the CPU 108 will be explained as a programmable processor including a ROM and RAM (neither is shown).
- the CPU 108 controls the operation of each block by reading out the operation program of the block that is stored in the ROM, and expanding and executing it in the RAM.
- An image sensing unit 102 is an image sensor such as a CCD or CMOS sensor.
- the image sensing unit 102 photoelectrically converts an optical image formed on the light receiving surface of the image sensor by an imaging optical system 101 , thereby outputting an analog image signal.
- the imaging optical system 101 includes a lens and a stop.
- a driving unit (not shown) that performs driving in accordance with a control signal output from the CPU 108 performs operation control of the imaging optical system 101 regarding focus adjustment and exposure adjustment.
- An A/D conversion unit 103 applies A/D conversion processing to the analog image signal output from the image sensing unit 102 , thereby generating digital image data.
- the digital image data generated by the A/D conversion unit 103 is so-called RAW data obtained by simply applying A/D conversion to an analog image signal.
- the image sensor of the image sensing unit 102 is constituted by arranging photoelectric conversion elements in accordance with a so-called Bayer array, and the color components of the respective pixels of RAW data obtained by the A/D conversion unit 103 comply with the Bayer array. That is, when color filters applied to the respective pixels of the image sensor are primary colors R, G, and B, RAW data is so-called dot-sequential data in which each pixel has only information about one of R, G, and B.
- a signal processing unit 104 applies various signal processes to the RAW data output from the A/D conversion unit 103 .
- the signal processing unit 104 applies processes such as noise removal processing, gamma processing, interpolation processing, and matrix transformation to the RAW data.
- The signal processing unit 104 also performs the above-mentioned layer-specific processing.
- the signal processing unit 104 performs development processing (synchronization processing) of converting RAW data in which each pixel has only the pixel value of one of the R, G, and B color components, into normal image data in which each pixel has the pixel values of all the R, G, and B color components.
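As an illustration of this development (synchronization) step, a nearest-neighbor demosaic of a Bayer mosaic is sketched below. Real development processing uses more sophisticated interpolation; the RGGB site layout and the nearest-neighbor fill rule are assumptions made purely for illustration, not the apparatus's actual method.

```python
import numpy as np

def demosaic_nearest(raw):
    """Nearest-neighbor demosaic of an RGGB Bayer mosaic (even dimensions):
    each output pixel copies the nearest sample of each color plane."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    # R samples sit at even rows/even cols, B at odd rows/odd cols.
    rgb[..., 0] = np.repeat(np.repeat(raw[0::2, 0::2], 2, axis=0), 2, axis=1)
    rgb[..., 2] = np.repeat(np.repeat(raw[1::2, 1::2], 2, axis=0), 2, axis=1)
    # G samples sit on the other diagonal; fill R and B sites from a
    # horizontally neighboring G sample.
    g = raw.astype(float).copy()
    g[0::2, 0::2] = raw[0::2, 1::2]   # R sites take the G to their right
    g[1::2, 1::2] = raw[1::2, 0::2]   # B sites take the G to their left
    rgb[..., 1] = g
    return rgb
```

After this step, each pixel carries all three of the R, G, and B values, as the description requires of "normal image data".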
- the image data obtained by performing various signal processes by the signal processing unit 104 is stored in a memory 107 via a memory I/F 106 .
- An image processing unit 105 performs, on the image data stored in the memory 107 , necessary processes such as image processing and encoding processing corresponding to an application purpose.
- the image processing unit 105 performs image processing regarding color conversion and enlargement/reduction of image data, and encoding processing complying with a predetermined recording format.
- A display unit 109 is a display device in the digital camera 100, for example, an LCD.
- the display unit 109 functions as an electronic viewfinder by displaying image data corresponding to an image signal obtained by image capturing.
- a recording medium 110 is, for example, the built-in memory of the digital camera 100 , or a recording device detachably connected to the digital camera 100 , such as a memory card or HDD.
- image data for saving that has been generated by the image processing unit 105 is saved on the recording medium 110 .
- In this embodiment, each block of the digital camera 100 is assumed to be implemented as hardware. However, the processing of each block may instead be implemented by a program that performs the same processing as that of the block.
- FIG. 2 is a block diagram showing the internal arrangement of the signal processing unit 104 regarding layer-specific processing.
- A separation unit 200, a processing memory 210, a noise reduction unit 220, and a composition unit 230 perform the layer-specific processing, and can generate an image in which noise is suitably reduced for each frequency range.
- the signal processing unit 104 performs processing of reducing noise respectively in the high frequency range and two types of low frequency ranges of image data, and outputs an image.
- the separation unit 200 generates image data (frequency range images) of the two types of low frequency ranges from input RAW data, and stores these data and the RAW data in the processing memory 210 . More specifically, the separation unit 200 stores the input RAW data in the processing memory 210 without any change, and also outputs it to a low-pass filter (LPF) 201 .
- The LPF 201 is a filter that passes only a predetermined low frequency range of the input RAW data. More specifically, the LPF 201 outputs RAW data of the low frequency range by applying a low-pass filter to, for example, image data of each of the R, G, and B components obtained from RAW data, or to data obtained by converting RAW data into a YUV space.
- the output RAW data is input to a down sampling (DS) unit 202 .
- the DS unit 202 performs down sampling processing (conversion processing for decreasing the resolution) on the input RAW data of the low frequency range, thereby generating low-resolution image data. Since the high frequency component of image data is removed from image data of the low frequency range, the influence of an information loss by this down sampling processing is small, unlike thinning processing. Since down sampling processing decreases the resolution of image data, that is, the number of pixels, the circuit scale in noise removal processing (to be described later) can be reduced, and the calculation amount regarding noise removal processing can be decreased.
- the level of down sampling processing is decided in accordance with a transfer function applied by the LPF 201 .
- the DS unit 202 performs processing of halving the numbers of pixels in the horizontal and vertical directions.
- the DS unit 202 stores the low-resolution image data (first low-frequency-range image data) in the processing memory 210 , and outputs the data to an LPF 203 .
- the LPF 203 and a DS unit 204 are the same components as the LPF 201 and the DS unit 202 .
- the LPF 203 and the DS unit 204 further perform processing of extracting image data of the low frequency range from the input first low-frequency-range image data and decreasing the resolution.
- the DS unit 204 stores, in the processing memory 210 , low-resolution image data (second low-frequency-range image data) obtained by down sampling processing.
- image data stored in the processing memory 210 from the separation unit 200 are RAW data, the first low-frequency-range image data in which the numbers of pixels in the horizontal and vertical directions are 1 ⁇ 2 of those of the RAW data, and the second low-frequency-range image data in which the numbers of pixels in the horizontal and vertical directions are 1 ⁇ 4.
- Because the DS unit 202 and the DS unit 204 each generate one pixel from four pixels (two pixels in the horizontal direction × two pixels in the vertical direction), the first low-frequency-range image data and the second low-frequency-range image data undergo synchronization processing for convenience and are then output.
- the noise reduction unit 220 executes noise removal processing on RAW data or image data stored in the processing memory 210 .
- In the noise reduction unit 220, three lines (first, second, and third) are arranged and illustrated sequentially from the top of the drawing. These lines explicitly indicate the data input to the respective lines when outputting image data for saving.
- That is, when outputting image data for saving, RAW data is input to the first line of the noise reduction unit 220, the first low-frequency-range image data is input to the second line, and the second low-frequency-range image data is input to the third line.
- A high-pass filter (HPF) 221 is a filter that passes only a predetermined high frequency range, such as an edge component, of input image data. More specifically, when RAW data is input, the HPF 221 outputs RAW data of the high frequency range by applying a high-pass filter to, for example, image data of each of the R, G, and B components. The output RAW data is input to a noise removal unit 222.
- the noise removal unit 222 removes noise from the image data input from the HPF 221 .
- Noise removal may adopt a method of detecting, for example, the representative direction of an edge component included in image data, and applying a low-pass filter along the detected representative direction to perform smoothing.
- An HPF 223 and a noise removal unit 224 on the second line are the same as the HPF 221 and the noise removal unit 222 on the first line.
- a noise removal unit 225 on the third line is identical to the noise removal units 222 and 224 .
- In the noise reduction unit 220, data are synchronized at the time of input to the respective noise removal units. This is because synchronized data lose less image information than dot-sequential image data, so noise can be suppressed with high precision in noise removal processing.
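The per-line processing sketched below extracts the high frequency range as the residual after low-pass filtering, suppresses noise in it, and recombines it with the low band. The soft-thresholding in `suppress_noise` is only an illustrative stand-in for the edge-direction-aware smoothing mentioned above, not the apparatus's actual noise removal method.

```python
import numpy as np

def box_lpf(img):
    """Simple 3x3 box low-pass filter used to split off the high frequencies."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def suppress_noise(detail, threshold):
    """Soft-threshold small detail values (illustrative stand-in for the
    edge-direction-aware smoothing of the noise removal units)."""
    return np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)

def denoise_line(img, threshold=2.0):
    """One line of the noise reduction unit: HPF (residual after the LPF),
    noise removal on the high-frequency component, then recombination."""
    low = box_lpf(img)
    high = img - low          # high-pass filter realized as a low-pass residual
    return low + suppress_noise(high, threshold)
```

Small-amplitude detail (likely noise) is attenuated while strong edges survive, which mirrors the intent of applying noise removal selectively to the high frequency range.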
- the composition unit 230 composites image data of a plurality of frequency ranges when outputting image data. More specifically, image data input to the respective three lines have different resolutions.
- an up sampling (US) unit 231 or 234 makes the resolutions of composition target image data match each other.
- LPFs 232 and 235 are arranged at the subsequent stages of the US units 231 and 234 , and perform processing of returning the frequency range to an original one.
- the image data of different frequency ranges having the same resolution are composited into one image data by a pixel composition unit 233 or 236 .
- the composition unit 230 composites the noise-reduced high-frequency-range image data, first low-frequency-range image data, and second low-frequency-range image data.
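Assuming the composition is roughly the inverse of the separation (upsample the coarser level, smooth it with the LPF, then composite it pixel-wise with the next level), the data flow through the US units 231/234, LPFs 232/235, and pixel composition units 233/236 could be sketched as below. The specific composition rule (take low frequencies from the coarse level, keep the fine level's detail) is an assumption for illustration.

```python
import numpy as np

def box_lpf(img):
    """Simple 3x3 box low-pass filter (stand-in for LPF 232/235)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def upsample2(img):
    """Double resolution by pixel repetition (US unit 231/234); the
    following LPF smooths the blocky result."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def composite_pair(fine, coarse):
    """Composite one pair of levels: keep the fine level's high-frequency
    detail and take the low frequencies from the upsampled coarse level
    (assumed rule for pixel composition units 233/236)."""
    coarse_up = box_lpf(upsample2(coarse))
    detail = fine - box_lpf(fine)
    return coarse_up + detail

def compose_all(level0, level1, level2):
    """Cascade composition from the coarsest level up to full resolution."""
    merged1 = composite_pair(level1, level2)
    return composite_pair(level0, merged1)
```

Matching the resolutions before composition, as the US units do, is what allows image data of different frequency ranges to be merged into one image.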
- The display/saving processing is processing representing an operation of saving an image during image capturing/saving, such as continuous shooting in which image capturing is sequentially performed, while also displaying the image as the electronic viewfinder on the display unit 109.
- Processing corresponding to this flowchart can be implemented when, for example, the CPU 108 reads out a corresponding processing program stored in the ROM, and loads and executes it in the RAM to control the signal processing unit 104 and the image processing unit 105 .
- the display/saving processing starts when, for example, RAW data sequentially obtained by image capturing are input to the signal processing unit 104 .
- In step S301, the signal processing unit 104 causes the separation unit 200 to generate first low-frequency-range image data and second low-frequency-range image data from input RAW data.
- the signal processing unit 104 stores the respective data in the processing memory 210 .
- In step S302, the signal processing unit 104 selects, from the data stored in the processing memory 210, data used to generate image data for display on the display unit 109. More specifically, the signal processing unit 104 selects data used to generate image data for display, from the RAW data, the first low-frequency-range image data, and the second low-frequency-range image data in accordance with the display resolution (number of pixels) of the display unit 109. For example, a case will be examined in which the display resolution of the display unit 109 is 640 pixels in the horizontal direction × 480 pixels in the vertical direction, and the resolution of RAW data is 2,560 pixels in the horizontal direction × 1,920 pixels in the vertical direction.
- the resolution of the first low-frequency-range image data is 1,280 pixels in the horizontal direction ⁇ 960 pixels in the vertical direction
- the resolution of the second low-frequency-range image data is 640 pixels in the horizontal direction ⁇ 480 pixels in the vertical direction.
- In this case, the signal processing unit 104 determines that the second low-frequency-range image data matches image data for display, and selects it as data to be used. As another example, a case will be examined in which the display resolution of the display unit 109 is 640 pixels in the horizontal direction × 480 pixels in the vertical direction, and the resolution of RAW data is 1,280 pixels in the horizontal direction × 960 pixels in the vertical direction.
- the resolution of the first low-frequency-range image data is 640 pixels in the horizontal direction ⁇ 480 pixels in the vertical direction
- the resolution of the second low-frequency-range image data is 320 pixels in the horizontal direction ⁇ 240 pixels in the vertical direction.
- the signal processing unit 104 determines that the first low-frequency-range image data and second low-frequency-range image data having resolutions equal to or lower than that of the display unit 109 match image data for display, and selects the first low-frequency-range image data and second low-frequency-range image data as data to be used. That is, in this step, the signal processing unit 104 selects data having a resolution lower than the display resolution, as data used to generate image data for display.
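The selection rule described above (step S302: use stored data whose resolution does not exceed the display resolution) can be sketched as follows, representing each image as an array with shape (vertical pixels, horizontal pixels):

```python
import numpy as np

def select_for_display(levels, display_h, display_w):
    """From the stored levels (RAW data, first and second low-frequency-range
    image data, finest first), pick those whose resolution is equal to or
    lower than the display resolution of the display unit."""
    return [img for img in levels
            if img.shape[0] <= display_h and img.shape[1] <= display_w]
```

Running this against the two examples above: with RAW data at 2,560 × 1,920, only the second low-frequency-range image (640 × 480) is selected; with RAW data at 1,280 × 960, both low-frequency-range images are selected.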
- In step S303, the signal processing unit 104 causes the noise reduction unit 220 to perform noise removal processing on the data used to generate image data for display. More specifically, the signal processing unit 104 inputs the data to be used to the first to third lines of the noise reduction unit 220 in descending order of resolution, and performs noise removal processing. For example, when the data to be used is only the second low-frequency-range image data, the signal processing unit 104 inputs the second low-frequency-range image data to the first line, and causes the noise reduction unit 220 to output image data from which noise of the high frequency range has been removed.
- the signal processing unit 104 inputs the first low-frequency-range image data to the first line and the second low-frequency-range image data to the second line. Then, the signal processing unit 104 causes the noise reduction unit 220 to output image data of two types of resolutions from which noise of the high frequency range has been removed.
- In step S304, the signal processing unit 104 causes the composition unit 230 to perform up sampling processing, application of an LPF, and composition processing, as needed, and outputs image data for display. More specifically, when image data are input to a plurality of lines in step S303, the signal processing unit 104 performs composition processing of the image data in step S304. When image data is input to only the first line in step S303, the signal processing unit 104 directly outputs, as image data for display, the noise-reduced image data without performing composition processing by the composition unit 230 in step S304.
- In step S305, the signal processing unit 104 performs other signal processes (for example, gamma processing, interpolation processing, and matrix transformation) on the image data for display.
- the processing of the signal processing unit 104 regarding the image data for display is completed.
- Thereafter, the CPU 108 performs, in parallel, display processing of the image data for display and saving processing of image data for saving.
- In step S306, the image processing unit 105 applies, to the image data for display, image processing such as color adjustment necessary for display on the display unit 109, thereby generating an image signal for display.
- the image processing unit 105 outputs the image signal for display to the display unit 109 to display an image serving as the electronic viewfinder. After that, the processing regarding display, out of the display/saving processing, is completed.
- In step S307, the signal processing unit 104 causes the noise reduction unit 220 to perform noise removal processing on all the data stored in the processing memory 210. More specifically, the signal processing unit 104 inputs the RAW data to the first line, the first low-frequency-range image data to the second line, and the second low-frequency-range image data to the third line, and performs noise removal processing.
- In step S308, the signal processing unit 104 causes the composition unit 230 to perform up sampling processing, application of an LPF, and composition processing, and outputs image data for saving.
- In step S309, the signal processing unit 104 performs other signal processes (for example, gamma processing, interpolation processing, and matrix transformation) on the image data for saving.
- In step S310, the image processing unit 105 applies, to the image data for saving, image processes such as color adjustment and enlargement/reduction necessary for saving on the recording medium 110. Further, the image processing unit 105 applies necessary processing such as encoding processing, thereby completing the image data for saving.
- In step S311, the CPU 108 transmits the generated image data for saving to the recording medium 110 to save it, and then completes the processing regarding saving, out of the display/saving processing.
- the display unit 109 can display an image having synchronism with image data to be saved.
- The method according to this embodiment can decrease the readout count for one image recording and display operation, compared to the method described in Japanese Patent Laid-Open No. 2005-142707, and can thus reduce the display delay. More specifically, as shown in FIG. 4A, the method in Japanese Patent Laid-Open No. 2005-142707 performs processing regarding image data for display (monitor image) after performing sensor readout twice, and then presents an image on the display unit 109. In contrast, as shown in FIG. 4B, the method according to this embodiment performs processing regarding image data for display after performing sensor readout once, and then presents an image on the display unit 109.
- the method according to this embodiment can therefore shorten the delay time from shooting to display.
- Generally, the display resolution of the display unit 109 is lower than the resolution of image data for saving. Therefore, the method according to this embodiment uses not all frequency range images but only some frequency range images to generate image data for display. That is, the time taken for processing regarding display is shorter than the time taken for processing regarding saving, so the start timing of the processing regarding saving can be advanced in comparison with the conventional method.
- Furthermore, the image data for display is generated by the low-pass filter and down sampling, unlike an image generated by conventional pixel thinning, so image quality is improved.
- the image processing apparatus can ensure the synchronism between an image saved during image capturing/saving and an image displayed as the electronic viewfinder. More specifically, the image processing apparatus generates a plurality of frequency range images by separating an image based on an obtained image signal into predetermined frequency ranges. Then, the image processing apparatus generates an image for saving based on a plurality of images by applying predetermined image processing to a plurality of frequency range images. Also, the image processing apparatus generates an image for display based on some of the plurality of frequency range images to which the predetermined image processing has been applied.
- In the above-described embodiment, the data used to generate image data for display is selected based only on the display resolution of the display unit 109. However, the practice of the present invention is not limited to this.
- the signal processing unit 104 may select data having a resolution higher than the display resolution in display/saving processing of step S 302 .
- the signal processing unit 104 may select data having a resolution higher than the display resolution and one or more data having a resolution lower than this resolution. For example, a case will be examined, in which the display resolution of the display unit 109 is 640 pixels in the horizontal direction ⁇ 480 pixels in the vertical direction, and the resolution of RAW data is 2,000 pixels in the horizontal direction ⁇ 1,500 pixels in the vertical direction.
- the resolution of the first low-frequency-range image data is 1,000 pixels in the horizontal direction ⁇ 750 pixels in the vertical direction
- the resolution of the second low-frequency-range image data is 500 pixels in the horizontal direction ⁇ 375 pixels in the vertical direction.
- the signal processing unit 104 may determine that the first low-frequency-range image data matches image data for display, and select the first low-frequency-range image data as data to be used. Since the image quality is improved by compositing image data of a plurality of frequency ranges, the signal processing unit 104 may determine that the first low-frequency-range image data and the second low-frequency-range image data match image data for display, and select them as data to be used.
- the signal processing unit 104 may select only one data in display/saving processing of step S 302 regardless of the display resolution. For example, as in the above-described example, even when the resolution of the second low-frequency-range image data is 500 pixels in the horizontal direction ⁇ 375 pixels in the vertical direction and is lower than the display resolution of 640 pixels in the horizontal direction ⁇ 480 pixels in the vertical direction, the signal processing unit 104 may select the second low-frequency-range image data as data to be used. In this case, the image processing unit 105 applies processing of enlargement up to the display resolution.
- the calculation amount of noise removal processing in the noise reduction unit 220 can be reduced, and composition processing in the composition unit 230 can be skipped.
- the display delay amount can be reduced, as shown in FIG. 5 , compared to the method of the above-described embodiment. Note that it is also possible to select, as data to be used, one data having a resolution higher than the display resolution, like the first low-frequency-range image data, and reduce the image data by the image processing unit 105 at the time of display. However, in terms of reducing the calculation amount regarding noise reduction, it is preferable to use the smallest number of data, as a matter of course.
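Under this single-level modification, the one selected level is enlarged up to the display resolution by the image processing unit 105. A nearest-neighbor enlargement, an assumed stand-in for the unit's actual scaler, is sketched below:

```python
import numpy as np

def enlarge_to(img, display_h, display_w):
    """Nearest-neighbor enlargement of a single low-frequency level up to the
    display resolution (stand-in for the image processing unit's scaler)."""
    rows = np.arange(display_h) * img.shape[0] // display_h
    cols = np.arange(display_w) * img.shape[1] // display_w
    return img[np.ix_(rows, cols)]
```

For example, the 500 × 375 second low-frequency-range image from the case above would be enlarged to the 640 × 480 display resolution; choosing the smallest level this way minimizes the noise-reduction workload at the cost of the scaling step.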
- Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
- computer executable instructions e.g., one or more programs
- a storage medium which may also be referred to more fully as a
- the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
- the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
- the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)TM), a flash memory device, a memory card, and the like.
Abstract
An image processing apparatus generates a plurality of frequency range images by separating an image based on an obtained image signal into predetermined frequency ranges. The apparatus applies predetermined image processing to the plurality of frequency range images, and generates an image for saving based on the plurality of images. The apparatus generates an image for display based on some of the plurality of frequency range images to which the predetermined image processing has been applied.
Description
- 1. Field of the Invention
- The present invention relates to an image processing apparatus, an image capturing apparatus, a control method, and a recording medium, and particularly to a display technique regarding an electronic viewfinder.
- 2. Description of the Related Art
- Some image capturing apparatuses, such as digital cameras, implement a viewfinder function (electronic viewfinder) by displaying a captured image on the display device of the image capturing apparatus. Recently, there are image capturing apparatuses that are not equipped with an optical viewfinder and use only the electronic viewfinder. In such an apparatus, the image presented on the electronic viewfinder is of high importance.
- Japanese Patent Laid-Open No. 2005-142707 discloses a technique of reducing the display delay of an image regarding the electronic viewfinder during continuous shooting.
- An image capturing apparatus in Japanese Patent Laid-Open No. 2005-142707 alternately performs readout of a signal for saving from an image sensor and readout of a signal for the electronic viewfinder, and preferentially processes the latter signal, thereby reducing the display delay of an image regarding the electronic viewfinder. However, because the image for saving and the image for display regarding the electronic viewfinder are generated from two separately read-out signals, their image capturing timings do not coincide, and synchronism between them is not ensured.
- The present invention was made in view of such problems in the conventional technique. The present invention provides an image processing apparatus, image capturing apparatus, control method, and recording medium for ensuring the synchronism between an image saved during image capturing/saving and an image displayed as the electronic viewfinder.
- The present invention in its first aspect provides an image processing apparatus comprising: an obtaining unit configured to obtain an image signal output by image capturing; a separation unit configured to generate a plurality of frequency range images by separating an image based on the image signal obtained by the obtaining unit into predetermined frequency ranges; an image processing unit configured to apply predetermined image processing to each of the plurality of frequency range images generated by the separation unit; a first generation unit configured to generate an image for saving based on the plurality of frequency range images to which the image processing unit has applied the predetermined image processing; a second generation unit configured to generate an image for display based on some frequency range images, out of the plurality of frequency range images to which the image processing unit has applied the predetermined image processing; and an output unit configured to output one of the image for saving and the image for display.
- The present invention in its second aspect provides a method of controlling an image processing apparatus, comprising: an obtaining step of obtaining an image signal output by image capturing; a separation step of generating a plurality of frequency range images by separating an image based on the image signal obtained in the obtaining step into predetermined frequency ranges; an image processing step of applying predetermined image processing to the frequency range images generated in the separation step; a first generation step of generating an image for saving based on the plurality of frequency range images to which the predetermined image processing has been applied in the image processing step; a second generation step of generating an image for display based on some frequency range images, out of the plurality of frequency range images to which the predetermined image processing has been applied in the image processing step; and an output step of outputting one of the image for saving and the image for display.
- Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
- FIG. 1 is a block diagram showing the functional arrangement of a digital camera 100 according to an embodiment of the present invention;
- FIG. 2 is a block diagram exemplifying the arrangement of a signal processing unit 104 regarding layer-specific processing according to the embodiment of the present invention;
- FIG. 3 is a flowchart exemplifying display/saving processing to be executed by the digital camera 100 according to the embodiment of the present invention;
- FIGS. 4A and 4B are timing charts exemplifying display/saving processes by a conventional method and a method according to this embodiment; and
- FIG. 5 is a timing chart exemplifying display/saving processing by a method according to a modification.
- An exemplary embodiment of the present invention will now be described in detail with reference to the accompanying drawings. The embodiment described below explains an example of applying the present invention to a digital camera, as an example of an image processing apparatus, capable of both saving an image during image capturing/saving and displaying an image regarding the electronic viewfinder. However, the present invention is applicable to an arbitrary device capable of generating an image for saving and an image for display from an image obtained by image capturing. In this embodiment, "layer-specific processing" is processing of separating an image into a plurality of layers in accordance with the frequency range, performing noise reduction processing that differs between the layers, and selectively extracting, from the plurality of images (frequency range images) after noise reduction, the respective pixels of an image to be output, thereby generating the image.
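The separation into frequency range layers just described can be sketched in a few lines. This is an illustrative sketch, not the patented implementation: it assumes the simple averaging low-pass transfer function H(z) = ½(1 + z⁻¹) given later in the description, applied separably in both directions with wrap-around boundaries, followed by 2× down sampling at each level; all names are invented for illustration.

```python
def lpf_half(img):
    # Separable low-pass H(z) = 1/2 (1 + z^-1) in both directions;
    # index i - 1 wraps around at the border (a simplification).
    n, m = len(img), len(img[0])
    v = [[0.5 * (img[i][j] + img[i - 1][j]) for j in range(m)] for i in range(n)]
    return [[0.5 * (v[i][j] + v[i][j - 1]) for j in range(m)] for i in range(n)]

def downsample(img):
    # Keep every second sample in both directions (resolution halved).
    return [row[::2] for row in img[::2]]

def separate(raw, levels=2):
    # Returns [raw, first low-range (1/2 size), second low-range (1/4 size)].
    layers, cur = [raw], raw
    for _ in range(levels):
        cur = downsample(lpf_half(cur))
        layers.append(cur)
    return layers

raw = [[float(8 * i + j) for j in range(8)] for i in range(8)]
print([(len(l), len(l[0])) for l in separate(raw)])  # [(8, 8), (4, 4), (2, 2)]
```

Because each low-range layer is low-pass filtered before decimation, little information is lost by the down sampling, which is the property the description relies on when contrasting this with simple pixel thinning.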
- <<Arrangement of Digital Camera 100>>
-
FIG. 1 is a block diagram showing the functional arrangement of a digital camera 100 according to the embodiment of the present invention. - A
CPU 108 controls the operation of each block of the digital camera 100. In this embodiment, the CPU 108 will be explained as a programmable processor including a ROM and RAM (neither is shown). The CPU 108 controls the operation of each block by reading out the operation program of the block that is stored in the ROM, and expanding and executing it in the RAM. - An
image sensing unit 102 is an image sensor such as a CCD or CMOS sensor. The image sensing unit 102 photoelectrically converts an optical image formed on the light receiving surface of the image sensor by an imaging optical system 101, thereby outputting an analog image signal. The imaging optical system 101 includes a lens and a stop. A driving unit (not shown) that performs driving in accordance with a control signal output from the CPU 108 performs operation control of the imaging optical system 101 regarding focus adjustment and exposure adjustment. - An A/
D conversion unit 103 applies A/D conversion processing to the analog image signal output from the image sensing unit 102, thereby generating digital image data. The digital image data generated by the A/D conversion unit 103 is so-called RAW data obtained by simply applying A/D conversion to an analog image signal. This embodiment assumes that the image sensor of the image sensing unit 102 is constituted by arranging photoelectric conversion elements in accordance with a so-called Bayer array, and the color components of the respective pixels of RAW data obtained by the A/D conversion unit 103 comply with the Bayer array. That is, when the color filters applied to the respective pixels of the image sensor are primary colors R, G, and B, RAW data is so-called dot-sequential data in which each pixel has only information about one of R, G, and B. - A
signal processing unit 104 applies various signal processes to the RAW data output from the A/D conversion unit 103. The signal processing unit 104 applies processes such as noise removal processing, gamma processing, interpolation processing, and matrix transformation to the RAW data. In this embodiment, the signal processing unit 104 also performs the above-mentioned layer-specific processing. In addition, the signal processing unit 104 performs development processing (synchronization processing) of converting RAW data, in which each pixel has only the pixel value of one of the R, G, and B color components, into normal image data in which each pixel has the pixel values of all the R, G, and B color components. The image data obtained through the various signal processes by the signal processing unit 104 is stored in a memory 107 via a memory I/F 106. - An
image processing unit 105 performs, on the image data stored in the memory 107, necessary processes such as image processing and encoding processing corresponding to an application purpose. In this embodiment, the image processing unit 105 performs image processing regarding color conversion and enlargement/reduction of image data, and encoding processing complying with a predetermined recording format. - A
display unit 109 is a display device in the digital camera 100, for example, an LCD. The display unit 109 functions as an electronic viewfinder by displaying image data corresponding to an image signal obtained by image capturing. - A
recording medium 110 is, for example, the built-in memory of the digital camera 100, or a recording device detachably connected to the digital camera 100, such as a memory card or HDD. When the digital camera 100 performs shooting, image data for saving that has been generated by the image processing unit 105 is saved on the recording medium 110. - This embodiment will explain that each block serving as hardware in the
digital camera 100 implements processing. However, the practice of the present invention is not limited to this, and processing of each block may be implemented by a program that performs the same processing as that of the block. - <<Layer-Specific Processing>>
- Layer-specific processing to be performed by the
signal processing unit 104 according to this embodiment will be described with reference to the drawings. FIG. 2 is a block diagram showing the internal arrangement of the signal processing unit 104 regarding layer-specific processing. - As shown in
FIG. 2, a separation unit 200, a processing memory 210, a noise reduction unit 220, and a composition unit 230 perform the layer-specific processing and can generate an image in which noise is suitably reduced for each frequency range. In this embodiment, the signal processing unit 104 reduces noise separately in the high frequency range and in two types of low frequency ranges of the image data, and outputs an image. - The
separation unit 200 generates image data (frequency range images) of the two types of low frequency ranges from the input RAW data, and stores these data and the RAW data in the processing memory 210. More specifically, the separation unit 200 stores the input RAW data in the processing memory 210 without any change, and also outputs it to a low-pass filter (LPF) 201. - The
LPF 201 is a filter that passes only a predetermined low frequency range of the input RAW data. More specifically, the LPF 201 outputs RAW data of the low frequency range by applying a low-pass filter to, for example, image data of each of the R, G, and B components obtained from the RAW data, or to data obtained by converting the RAW data into a YUV space. The output RAW data is input to a down sampling (DS) unit 202. - The
DS unit 202 performs down sampling processing (conversion processing for decreasing the resolution) on the input RAW data of the low frequency range, thereby generating low-resolution image data. Since the high frequency component has been removed from the image data of the low frequency range, the influence of information loss by this down sampling processing is small, unlike thinning processing. Since down sampling processing decreases the resolution of the image data, that is, the number of pixels, the circuit scale of the noise removal processing (to be described later) can be reduced, and the calculation amount of the noise removal processing can be decreased. The level of down sampling processing is decided in accordance with the transfer function applied by the LPF 201. For example, when the transfer function of the LPF 201 is expressed by H(z) = ½(1 + z⁻¹), the DS unit 202 performs processing of halving the numbers of pixels in the horizontal and vertical directions. The DS unit 202 stores the low-resolution image data (first low-frequency-range image data) in the processing memory 210, and outputs the data to an LPF 203. - The
LPF 203 and a DS unit 204 are the same components as the LPF 201 and the DS unit 202. The LPF 203 and the DS unit 204 further extract image data of the low frequency range from the input first low-frequency-range image data and decrease its resolution. The DS unit 204 stores, in the processing memory 210, the low-resolution image data (second low-frequency-range image data) obtained by the down sampling processing. - The following description assumes that image data stored in the
processing memory 210 from the separation unit 200 according to this embodiment are the RAW data, the first low-frequency-range image data in which the numbers of pixels in the horizontal and vertical directions are ½ of those of the RAW data, and the second low-frequency-range image data in which they are ¼. Also assume that, because the DS unit 202 or the DS unit 204 generates one pixel from four pixels (two pixels in the horizontal direction × two pixels in the vertical direction), the first low-frequency-range image data and the second low-frequency-range image data are output having undergone synchronization processing, for convenience. - The
noise reduction unit 220 executes noise removal processing on the RAW data or image data stored in the processing memory 210. In FIG. 2, three lines (first, second, and third) are arranged and illustrated sequentially from the top of the drawing. These lines explicitly indicate the data input to the respective lines when outputting image data for saving. That is, the RAW data is stored in the processing memory 210 from the separation unit 200 via the first line, the first low-frequency-range image data is stored via the second line, and the second low-frequency-range image data is stored via the third line. When outputting image data for saving, the RAW data is input to the first line of the noise reduction unit 220, the first low-frequency-range image data is input to the second line, and the second low-frequency-range image data is input to the third line. - A high-pass filter (HPF) 221 is a filter that passes only a predetermined high frequency range, such as an edge component, of the input image data. More specifically, when RAW data is input, the
HPF 221 outputs RAW data of the high frequency range by applying a high-pass filter to, for example, image data of each of the R, G, and B components. The output RAW data is input to a noise removal unit 222. The transfer function of the HPF 221 may be, for example, H(z) = ½(1 − z⁻¹). Assume that synchronization processing is also performed when the HPF 221 applies the high-pass filter to the RAW data. - The
noise removal unit 222 removes noise from the image data input from the HPF 221. Noise removal may adopt a method of detecting, for example, the representative direction of an edge component included in the image data, and applying a low-pass filter along the detected representative direction to perform smoothing. - The arrangements of an
HPF 223 and noise removal unit 224 on the second line are the same as those of the HPF 221 and noise removal unit 222 on the first line. A noise removal unit 225 on the third line is identical to these noise removal units. - In the
noise reduction unit 220 according to this embodiment, the data are synchronized at the time of input to the respective noise removal units. This is because less image information is lost than with dot-sequential image data, and noise can be suppressed with high precision in the noise removal processing. - The
composition unit 230 composites image data of a plurality of frequency ranges when outputting image data. More specifically, the image data input to the respective three lines have different resolutions. Up sampling (US) units and LPFs therefore first bring the data to a common resolution, and the composition unit 230 then composites, for each pixel, the noise-reduced high-frequency-range image data, first low-frequency-range image data, and second low-frequency-range image data. - <<Display/Saving Processing>>
- Practical processing will be explained with reference to the flowchart of
FIG. 3 as display/saving processing to be performed by the digital camera 100 having the above arrangement according to this embodiment. The display/saving processing represents an operation of saving an image during image capturing/saving, such as continuous shooting in which image capturing is performed sequentially, and in addition displaying the image as the electronic viewfinder on the display unit 109. Processing corresponding to this flowchart can be implemented when, for example, the CPU 108 reads out a corresponding processing program stored in the ROM, and loads and executes it in the RAM to control the signal processing unit 104 and the image processing unit 105. In the following description, the display/saving processing starts when, for example, RAW data sequentially obtained by image capturing are input to the signal processing unit 104. - In step S301, the
signal processing unit 104 causes the separation unit 200 to generate the first low-frequency-range image data and the second low-frequency-range image data from the input RAW data. The signal processing unit 104 stores the respective data in the processing memory 210. - In step S302, the
signal processing unit 104 selects, from the data stored in the processing memory 210, the data used to generate image data for display on the display unit 109. More specifically, the signal processing unit 104 selects the data used to generate image data for display, from the RAW data, the first low-frequency-range image data, and the second low-frequency-range image data, in accordance with the display resolution (number of pixels) of the display unit 109. For example, a case will be examined in which the display resolution of the display unit 109 is 640 pixels in the horizontal direction × 480 pixels in the vertical direction, and the resolution of the RAW data is 2,560 pixels in the horizontal direction × 1,920 pixels in the vertical direction. At this time, the resolution of the first low-frequency-range image data is 1,280 pixels in the horizontal direction × 960 pixels in the vertical direction, and the resolution of the second low-frequency-range image data is 640 pixels in the horizontal direction × 480 pixels in the vertical direction. The signal processing unit 104 determines that the second low-frequency-range image data matches the image data for display, and selects it as the data to be used. As another example, consider a case in which the display resolution of the display unit 109 is 640 pixels in the horizontal direction × 480 pixels in the vertical direction, and the resolution of the RAW data is 1,280 pixels in the horizontal direction × 960 pixels in the vertical direction. At this time, the resolution of the first low-frequency-range image data is 640 pixels in the horizontal direction × 480 pixels in the vertical direction, and the resolution of the second low-frequency-range image data is 320 pixels in the horizontal direction × 240 pixels in the vertical direction.
The signal processing unit 104 determines that the first low-frequency-range image data and the second low-frequency-range image data, whose resolutions are equal to or lower than that of the display unit 109, match the image data for display, and selects both as the data to be used. That is, in this step, the signal processing unit 104 selects data having a resolution not higher than the display resolution as the data used to generate image data for display. - In step S303, the
signal processing unit 104 causes the noise reduction unit 220 to perform noise removal processing on the data used to generate image data for display. More specifically, the signal processing unit 104 inputs the data to be used to the first to third lines of the noise reduction unit 220 in descending order of resolution, and performs the noise removal processing. For example, when the data to be used is only the second low-frequency-range image data, the signal processing unit 104 inputs the second low-frequency-range image data to the first line, and causes the noise reduction unit 220 to output image data from which noise of the high frequency range has been removed. When the data to be used are the first low-frequency-range image data and the second low-frequency-range image data, the signal processing unit 104 inputs the first low-frequency-range image data to the first line and the second low-frequency-range image data to the second line. Then, the signal processing unit 104 causes the noise reduction unit 220 to output image data of the two resolutions from which noise of the high frequency range has been removed. - In step S304, the
signal processing unit 104 causes the composition unit 230 to perform up sampling processing, application of an LPF, and composition processing, as needed, and outputs image data for display. More specifically, when image data are input to a plurality of lines in step S303, the signal processing unit 104 performs composition processing of the image data in step S304. When image data is input to only the first line in step S303, the signal processing unit 104 directly outputs the noise-reduced image data as image data for display, without performing composition processing by the composition unit 230 in step S304. - In step S305, the
signal processing unit 104 performs other signal processes (for example, gamma processing, interpolation processing, and matrix transformation) on the image data for display. - At this time, the processing of the
signal processing unit 104 regarding the image data for display is completed. Subsequently, the CPU 108 performs, in parallel, display processing of the image data for display and saving processing of the image data for saving. - In step S306, the
image processing unit 105 applies, to the image data for display, image processing such as color adjustment necessary for display on the display unit 109, thereby generating an image signal for display. The image processing unit 105 outputs the image signal for display to the display unit 109 to display an image serving as the electronic viewfinder. After that, the processing regarding display, out of the display/saving processing, is completed. - In step S307, the
signal processing unit 104 causes the noise reduction unit 220 to perform noise removal processing on all the data stored in the processing memory 210. More specifically, the signal processing unit 104 inputs the RAW data to the first line, the first low-frequency-range image data to the second line, and the second low-frequency-range image data to the third line, and performs the noise removal processing. - In step S308, the
signal processing unit 104 causes the composition unit 230 to perform up sampling processing, application of an LPF, and composition processing, and outputs image data for saving. - In step S309, the signal processing unit 104 performs other signal processes (for example, gamma processing, interpolation processing, and matrix transformation) on the image data for saving. - In step S310, the
image processing unit 105 applies, to the image data for saving, image processes such as color adjustment and enlargement/reduction necessary for saving on the recording medium 110. Further, the image processing unit 105 applies necessary processing such as encoding processing, completing the image data for saving. - In step S311, the
CPU 108 transmits the generated image data for saving to the recording medium 110 to save it, and then completes the processing regarding saving, out of the display/saving processing. - Accordingly, the
display unit 109 can display an image having synchronism with the image data to be saved. The method according to this embodiment can decrease the readout count for one image recording and display cycle, compared to the method described in Japanese Patent Laid-Open No. 2005-142707, and thus can reduce the display delay. More specifically, as shown in FIG. 4A, the method in Japanese Patent Laid-Open No. 2005-142707 performs processing regarding image data for display (monitor image) after performing sensor readout twice, and then presents an image on the display unit 109. By contrast, as shown in FIG. 4B, the method according to this embodiment performs processing regarding image data for display after performing sensor readout once, and then presents an image on the display unit 109. The method according to this embodiment can therefore shorten the delay time from shooting to display. In general, the display resolution of the display unit 109 is lower than the resolution of the image data for saving. Therefore, the method according to this embodiment uses not all of the frequency range images but only some of them to generate the image data for display. That is, the time taken for the processing regarding display is shorter than the time taken for the processing regarding saving, so the start timing of the processing regarding saving can be advanced in comparison with the conventional method. - In the method according to this embodiment, the image data for display is generated by the low-pass filter and down sampling, unlike an image generated by conventional pixel thinning. The image quality is therefore also improved.
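The example transfer functions given in the description, H(z) = ½(1 + z⁻¹) for the LPF and H(z) = ½(1 − z⁻¹) for the HPF, are complementary: their outputs sum back to the input, sample for sample. This is one way to see why compositing the separated frequency ranges can reproduce the original image. A one-dimensional sketch, with a wrap-around boundary for the one-sample delay, for illustration only:

```python
x = [3.0, 7.0, 1.0, 4.0, 9.0, 2.0]              # arbitrary sample values
xd = x[-1:] + x[:-1]                            # z^-1: one-sample delay (wrap-around)
low = [0.5 * (a + b) for a, b in zip(x, xd)]    # H(z) = 1/2 (1 + z^-1)
high = [0.5 * (a - b) for a, b in zip(x, xd)]   # H(z) = 1/2 (1 - z^-1)
print([l + h for l, h in zip(low, high)] == x)  # True: the two ranges sum to the input
```

The down sampling of the low range discards information that this identity does not account for, which is why the composition unit also applies up sampling and an LPF before compositing.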
- As described above, the image processing apparatus according to this embodiment can ensure the synchronism between an image saved during image capturing/saving and an image displayed as the electronic viewfinder. More specifically, the image processing apparatus generates a plurality of frequency range images by separating an image based on an obtained image signal into predetermined frequency ranges. The image processing apparatus then applies predetermined image processing to the plurality of frequency range images, and generates an image for saving based on the plurality of images. The image processing apparatus also generates an image for display based on only some of the plurality of frequency range images to which the predetermined image processing has been applied.
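The layer selection of step S302 amounts to: use every generated layer whose resolution does not exceed the display resolution. A sketch reproducing the two worked examples from the description (the function and variable names are illustrative, not from the patent):

```python
def select_for_display(layer_sizes, display_size):
    # Keep the layers (given as (width, height)) whose resolution is at or
    # below the display resolution; the full-size RAW layer is thus normally excluded.
    dw, dh = display_size
    return [(w, h) for (w, h) in layer_sizes if w <= dw and h <= dh]

# RAW 2560x1920: only the second low-frequency-range layer (640x480) matches.
print(select_for_display([(2560, 1920), (1280, 960), (640, 480)], (640, 480)))
# [(640, 480)]

# RAW 1280x960: both low-frequency-range layers qualify.
print(select_for_display([(1280, 960), (640, 480), (320, 240)], (640, 480)))
# [(640, 480), (320, 240)]
```

When only one layer is selected, composition is skipped and the noise-reduced layer is output directly, matching steps S303 and S304.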
- [Modification]
- In the above-described embodiment, the data used to generate image data for display is selected based only on the display resolution of the
display unit 109. However, the practice of the present invention is not limited to this. - For example, when it is set to give priority to improvement of the image quality or resolution of an image to be displayed on the
display unit 109, the signal processing unit 104 may select data having a resolution higher than the display resolution in the display/saving processing of step S302. Alternatively, the signal processing unit 104 may select data having a resolution higher than the display resolution together with one or more data having a resolution lower than it. For example, consider a case in which the display resolution of the display unit 109 is 640 pixels in the horizontal direction × 480 pixels in the vertical direction, and the resolution of the RAW data is 2,000 pixels in the horizontal direction × 1,500 pixels in the vertical direction. At this time, the resolution of the first low-frequency-range image data is 1,000 pixels in the horizontal direction × 750 pixels in the vertical direction, and the resolution of the second low-frequency-range image data is 500 pixels in the horizontal direction × 375 pixels in the vertical direction. The signal processing unit 104 may determine that the first low-frequency-range image data matches the image data for display, and select it as the data to be used. Alternatively, since the image quality is improved by compositing image data of a plurality of frequency ranges, the signal processing unit 104 may determine that both the first low-frequency-range image data and the second low-frequency-range image data match the image data for display, and select them as the data to be used. - For example, when it is set to give priority to the response performance so as to shorten the delay time until display on the
display unit 109 after image capturing, the signal processing unit 104 may select only one data in the display/saving processing of step S302, regardless of the display resolution. For example, as in the above-described example, even when the resolution of the second low-frequency-range image data is 500 pixels in the horizontal direction × 375 pixels in the vertical direction, lower than the display resolution of 640 pixels in the horizontal direction × 480 pixels in the vertical direction, the signal processing unit 104 may select the second low-frequency-range image data as the data to be used. In this case, the image processing unit 105 applies enlargement processing up to the display resolution. With this processing, the calculation amount of the noise removal processing in the noise reduction unit 220 can be reduced, and the composition processing in the composition unit 230 can be skipped. Hence, the display delay amount can be reduced compared to the method of the above-described embodiment, as shown in FIG. 5. Note that it is also possible to select, as the data to be used, one data having a resolution higher than the display resolution, such as the first low-frequency-range image data, and reduce the image data by the image processing unit 105 at the time of display. However, in terms of reducing the calculation amount of the noise reduction, it is of course preferable to use the smallest data.
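The two priorities discussed in this modification can be expressed as a small selection policy. The mode names below are illustrative, not from the patent; the sizes follow the 2,000 × 1,500 RAW example above:

```python
def select_layers(layer_sizes, display_size, priority):
    # layer_sizes is ordered from the largest layer down to the smallest.
    dw, dh = display_size
    if priority == "response":
        # Shortest delay: a single layer, the smallest one, regardless of the
        # display resolution (it is enlarged later for display).
        return [min(layer_sizes, key=lambda s: s[0] * s[1])]
    # Image-quality priority: the smallest layer at or above the display
    # resolution, optionally composited with every smaller layer.
    above = [s for s in layer_sizes if s[0] >= dw and s[1] >= dh]
    below = [s for s in layer_sizes if s[0] < dw or s[1] < dh]
    if above:
        return [min(above, key=lambda s: s[0] * s[1])] + below
    return below

sizes = [(2000, 1500), (1000, 750), (500, 375)]
print(select_layers(sizes, (640, 480), "quality"))   # [(1000, 750), (500, 375)]
print(select_layers(sizes, (640, 480), "response"))  # [(500, 375)]
```

The quality mode mirrors the first alternative above (select the first low-frequency-range data, possibly together with the second); the response mode mirrors the second (a single smallest data, enlarged by the image processing unit 105 at display time).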
- Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
- While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
- This application claims the benefit of Japanese Patent Application No. 2014-014743, filed Jan. 29, 2014, which is hereby incorporated by reference herein in its entirety.
Claims (10)
1. An image processing apparatus comprising:
an obtaining unit configured to obtain an image signal output by image capturing;
a separation unit configured to generate a plurality of frequency range images by separating an image based on the image signal obtained by said obtaining unit into predetermined frequency ranges;
an image processing unit configured to apply predetermined image processing to each of the plurality of frequency range images generated by said separation unit;
a first generation unit configured to generate an image for saving based on the plurality of frequency range images to which said image processing unit has applied the predetermined image processing;
a second generation unit configured to generate an image for display based on some frequency range images, out of the plurality of frequency range images to which said image processing unit has applied the predetermined image processing; and
an output unit configured to output one of the image for saving and the image for display.
2. The apparatus according to claim 1, wherein the predetermined image processing by said image processing unit includes noise reduction processing.
3. The apparatus according to claim 1, wherein
said image processing unit applies conversion processing for decreasing a resolution of a frequency range image of a low frequency range, out of the plurality of frequency range images, and
said second generation unit selects, in accordance with a resolution of a display unit configured to display the image for display, the some frequency range images from the frequency range images to which the conversion processing has been applied.
4. The apparatus according to claim 3, wherein in a case where a setting to give priority to image quality of the image for display is made, said second generation unit selects, as the some frequency range images, at least one of a frequency range image having a resolution higher than the resolution of the display unit and not less than one frequency range image having a resolution not higher than the resolution after the conversion processing is applied.
5. The apparatus according to claim 3, wherein in a case where a setting to shorten a display delay until the image for display is displayed after shooting of the image signal obtained by said obtaining unit is made, said second generation unit selects, as the some frequency range images, one frequency range image having a resolution corresponding to the resolution of the display unit after the conversion processing is applied.
6. The apparatus according to claim 3, wherein the image for display has a resolution lower than a resolution of the image for saving.
7. The apparatus according to claim 1, wherein
said obtaining unit sequentially obtains image signals output by image capturing performed sequentially, and
said first generation unit is started after generation of the image for display by said second generation unit.
8. An image capturing apparatus comprising:
an image capturing unit;
an image processing apparatus defined in claim 1;
a display unit configured to display an image for display output from said image processing apparatus; and
a saving unit configured to save, on a recording medium, an image for saving output from said image processing apparatus.
9. A method of controlling an image processing apparatus, comprising:
an obtaining step of obtaining an image signal output by image capturing;
a separation step of generating a plurality of frequency range images by separating an image based on the image signal obtained in the obtaining step into predetermined frequency ranges;
an image processing step of applying predetermined image processing to the frequency range images generated in the separation step;
a first generation step of generating an image for saving based on the plurality of frequency range images to which the predetermined image processing has been applied in the image processing step;
a second generation step of generating an image for display based on some frequency range images, out of the plurality of frequency range images to which the predetermined image processing has been applied in the image processing step; and
an output step of outputting one of the image for saving and the image for display.
10. A computer-readable recording medium recording a program for causing a computer to function as each unit of an image processing apparatus defined in claim 1.
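The processing flow recited in claim 1 (separation into frequency-range images, per-band processing, a full-resolution image for saving, and a display image built from only some bands) can be illustrated with a toy 1-D pyramid. This is a hypothetical sketch under stated assumptions: the claims do not specify the separation method, so simple pairwise averaging with a detail residual stands in for the separation unit, and all function names are invented for illustration.

```python
# Toy 1-D frequency-range separation: each level keeps a half-resolution
# low-frequency part plus the detail residual needed for exact reconstruction.

def separate(signal, levels=2):
    """'Separation unit': split a signal (even length) into detail bands
    plus a coarsest low-frequency band, returned highest-frequency first."""
    bands = []
    current = signal
    for _ in range(levels):
        low = [(current[i] + current[i + 1]) / 2
               for i in range(0, len(current) - 1, 2)]
        up = [v for v in low for _ in (0, 1)]          # nearest-neighbour upsample
        detail = [c - u for c, u in zip(current, up)]  # high-frequency residual
        bands.append(detail)
        current = low
    bands.append(current)  # coarsest low-frequency band last
    return bands

def reconstruct(bands):
    """'First generation unit': combine all bands into the image for saving."""
    current = bands[-1]
    for detail in reversed(bands[:-1]):
        up = [v for v in current for _ in (0, 1)]
        current = [u + d for u, d in zip(up, detail)]
    return current

def display_image(bands):
    """'Second generation unit': use only the low-frequency band for display."""
    return bands[-1]
```

Per-band processing (e.g. noise reduction, claim 2) would be applied to each element of `bands` before `reconstruct` or `display_image` is called; the display path touches only the coarsest band, which is why it can run with less computation and lower delay.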
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014-014743 | 2014-01-29 | ||
JP2014014743A JP6327869B2 (en) | 2014-01-29 | 2014-01-29 | Image processing apparatus, imaging apparatus, control method, and program |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150215544A1 true US20150215544A1 (en) | 2015-07-30 |
US9648232B2 US9648232B2 (en) | 2017-05-09 |
Family
ID=53680308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/605,561 Active 2035-03-22 US9648232B2 (en) | 2014-01-29 | 2015-01-26 | Image processing apparatus, image capturing apparatus, control method and recording medium |
Country Status (2)
Country | Link |
---|---|
US (1) | US9648232B2 (en) |
JP (1) | JP6327869B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9648232B2 (en) * | 2014-01-29 | 2017-05-09 | Canon Kabushiki Kaisha | Image processing apparatus, image capturing apparatus, control method and recording medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040028271A1 (en) * | 2001-07-27 | 2004-02-12 | Pollard Stephen Bernard | Colour correction of images |
US6829385B2 (en) * | 2000-09-01 | 2004-12-07 | Kabushiki Kaisha Toshiba | Apparatus and method for processing images, and a computer-readable medium |
US6833868B1 (en) * | 1998-12-10 | 2004-12-21 | Imec Vzw | Method and device for determining corrected color aspects of a pixel in an imaging device |
US20060192860A1 (en) * | 2003-06-25 | 2006-08-31 | Nokia Corporation | Digital photographic device for controlling compression parameter of image data and method of deciding compression parameter value of image data |
US20070097236A1 (en) * | 2005-11-02 | 2007-05-03 | Samsung Electronics Co., Ltd | Method and apparatus for reducing noise of image sensor |
US20070229710A1 (en) * | 2006-02-09 | 2007-10-04 | Sung-Cheol Park | Post-processing circuit for processing an image signal according to frequency components of the image signal |
US20080122953A1 (en) * | 2006-07-05 | 2008-05-29 | Konica Minolta Holdings, Inc. | Image processing device, image processing method, and image sensing apparatus |
US20080175510A1 (en) * | 2007-01-18 | 2008-07-24 | Sony Corporation | Imaging apparatus, noise removing device, noise removing method, program for noise removing method, and recording medium for recording the same |
US7834917B2 (en) * | 2005-08-15 | 2010-11-16 | Sony Corporation | Imaging apparatus, noise reduction apparatus, noise reduction method, and noise reduction program |
US20120127216A1 (en) * | 2010-11-22 | 2012-05-24 | Canon Kabushiki Kaisha | Image display apparatus and control method therefor |
US8194160B2 (en) * | 2006-09-12 | 2012-06-05 | Olympus Corporation | Image gradation processing apparatus and recording |
US20120147226A1 (en) * | 2010-12-10 | 2012-06-14 | Sony Corporation | Image processing device, image processing method, and program |
US20120328152A1 (en) * | 2011-06-22 | 2012-12-27 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3695140B2 (en) * | 1998-04-20 | 2005-09-14 | カシオ計算機株式会社 | Electronic still camera and control method thereof |
JP2004320365A (en) * | 2003-04-15 | 2004-11-11 | Matsushita Electric Ind Co Ltd | Image processor, imaging device, and image compression display unit |
JP2005101865A (en) * | 2003-09-24 | 2005-04-14 | Sony Corp | Image pickup device and video image output method |
JP2005142707A (en) | 2003-11-05 | 2005-06-02 | Matsushita Electric Ind Co Ltd | Imaging apparatus |
JP5719148B2 (en) * | 2010-11-10 | 2015-05-13 | キヤノン株式会社 | Imaging apparatus, control method therefor, and program |
JP5882576B2 (en) * | 2010-12-01 | 2016-03-09 | キヤノン株式会社 | Image processing apparatus, image processing apparatus control method, and program |
JP6327869B2 (en) * | 2014-01-29 | 2018-05-23 | キヤノン株式会社 | Image processing apparatus, imaging apparatus, control method, and program |
- 2014-01-29: JP application JP2014014743A, granted as patent JP6327869B2 (status: not active, Expired - Fee Related)
- 2015-01-26: US application US14/605,561, granted as patent US9648232B2 (status: Active)
Also Published As
Publication number | Publication date |
---|---|
US9648232B2 (en) | 2017-05-09 |
JP2015142286A (en) | 2015-08-03 |
JP6327869B2 (en) | 2018-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10082723B2 (en) | Image capturing apparatus for generating a high dynamic range video frame from multiple image frames with different characteristics | |
US9832382B2 (en) | Imaging apparatus and imaging method for outputting image based on motion | |
US9113024B2 (en) | Apparatus and method for image processing using color image data and low frequency image data | |
JP5541205B2 (en) | Image processing apparatus, imaging apparatus, image processing program, and image processing method | |
JP2017188760A (en) | Image processing apparatus, image processing method, computer program, and electronic apparatus | |
US20180270448A1 (en) | Image processing system | |
US10091415B2 (en) | Image processing apparatus, method for controlling image processing apparatus, image pickup apparatus, method for controlling image pickup apparatus, and recording medium | |
US20150161771A1 (en) | Image processing method, image processing apparatus, image capturing apparatus and non-transitory computer-readable storage medium | |
JP5829122B2 (en) | Imaging apparatus and evaluation value generation apparatus | |
EP2645702B1 (en) | Image capturing apparatus, control method therefor, and program | |
JP5996418B2 (en) | Imaging apparatus and imaging method | |
US9648232B2 (en) | Image processing apparatus, image capturing apparatus, control method and recording medium | |
JP2013225779A (en) | Image processing device, imaging device, and image processing program | |
JP2012095341A (en) | Imaging apparatus | |
JP6473049B2 (en) | Display control apparatus and imaging apparatus | |
US11303869B2 (en) | Image processing apparatus and image processing method | |
JP2015095890A (en) | Image processing apparatus and control method for the same | |
JP4339671B2 (en) | Imaging device | |
JP2008072428A (en) | Image processor, electronic camera, and image processing program | |
KR102014444B1 (en) | Photographing apparatus, method for controlling the same, and computer-readable recording medium | |
JP5158167B2 (en) | Image processing apparatus, imaging apparatus, and image processing program | |
JP6019587B2 (en) | Image processing device | |
JP5234123B2 (en) | Image processing apparatus, imaging apparatus, and image processing program | |
JP2008079301A (en) | Image capture device | |
JP2017220821A (en) | Image processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CANON KABUSHIKI KAISHA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KUNIEDA, SHUTARO;REEL/FRAME:035798/0760 Effective date: 20150116 |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |