JP2014239396A - Imaging apparatus and control method for imaging apparatus - Google Patents


Info

Publication number
JP2014239396A
Authority
JP
Japan
Prior art keywords
image signal
imaging
image
brightness
composite image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2013122101A
Other languages
Japanese (ja)
Other versions
JP6214229B2 (en)
Inventor
Hiroshi Kondo (近藤 浩)
Original Assignee
Canon Inc (キヤノン株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc (キヤノン株式会社)
Priority to JP2013122101A
Publication of JP2014239396A
Application granted
Publication of JP6214229B2
Active legal status
Anticipated expiration legal status

Abstract

PROBLEM TO BE SOLVED: In night portrait photography, to achieve good image quality for both the person and the night view by avoiding the image-quality degradation caused by high-sensitivity settings and the subject blur and camera shake caused by slow-sync settings.

SOLUTION: An imaging apparatus comprises: imaging means; imaging control means configured to output a first image signal by causing the imaging means to capture an image while light emitting means emits light, and to output a plurality of image signals by causing the imaging means to capture images successively while the light emitting means does not emit light; first combining means configured to add and combine the plurality of image signals to generate a first composite image signal; correcting means configured to correct the brightness of the first composite image signal; and second combining means configured to generate a second composite image signal by selecting, from the first image signal and the composite image signal output by the correcting means, whichever has the larger value. The correcting means corrects the first composite image signal so that its brightness does not exceed the brightness of the first image signal.

Description

  The present invention relates to an imaging apparatus that has a strobe for illuminating a subject to be photographed and generates a single image by combining continuously photographed images.

  Conventionally, when taking a portrait with a digital camera at night, the subject can be given appropriate brightness by firing the flash. In full-auto mode, however, the shutter speed is automatically limited to prevent camera shake, so the background often comes out dark.

  In order to solve such problems, there has been proposed an imaging apparatus that combines strobe light emission and time-division exposure / image synthesis.

  In the imaging apparatus of Patent Document 1, it is possible to perform electronic camera shake correction by adding and synthesizing a plurality of images photographed by time-division exposure.

JP 2007-281547 A

  However, since the time-division-exposed image still has a long total exposure time, the resulting image is affected if the subject moves during the exposure. For example, a divided-exposure image captured with first-curtain-sync flash shows almost no subject blur, but if the person moves during the subsequent divided exposures, the background hidden behind the person becomes exposed. As a result, part of the background may be mixed into the person portion of the final composite image.

  The present invention has been made in view of the above problems, and aims to provide an imaging apparatus and a control method therefor capable of capturing a main subject and a background with an appropriate luminance balance while suppressing the influence of blur due to the exposure time.

  In order to solve the above problem, an imaging apparatus according to the present invention comprises: imaging control means for causing imaging means to capture an image while light emitting means emits light and output a first image signal, and for causing the imaging means to capture images continuously while the light emitting means does not emit light and output a plurality of image signals; first combining means for adding and combining the plurality of image signals to generate a first composite image signal; correcting means for correcting the brightness of the first composite image signal; and second combining means for generating a second composite image signal by selecting, from the first image signal and the composite image signal output from the correcting means, whichever has the larger value. The correcting means corrects the first composite image signal so that its brightness does not exceed the brightness of the first image signal.

  A control method for an imaging apparatus according to the present invention, the apparatus comprising imaging means and imaging control means for causing the imaging means to capture an image while light emitting means emits light and output a first image signal, and to capture images continuously while the light emitting means does not emit light and output a plurality of image signals, comprises: a first combining step of adding and combining the plurality of image signals to generate a first composite image signal; a correcting step of correcting the brightness of the first composite image signal; and a second combining step of generating a second composite image signal by selecting, from the first image signal and the composite image signal output in the correcting step, whichever has the larger value. In the correcting step, the first composite image signal is corrected so that its brightness does not exceed the brightness of the first image signal.

  According to the present invention, it is possible to photograph the main subject and the background with an appropriate luminance balance while suppressing the influence of blur due to the exposure time.

FIG. 1 is a block diagram showing the overall configuration of a digital camera system according to an embodiment of the present invention.
FIG. 2 is a flowchart of the mode-specific shooting process in the first embodiment.
FIG. 3 is a flowchart of the image level correction process in the first embodiment.
FIG. 4 is a diagram showing the divided-exposure synthesis timing in the first embodiment.
FIG. 5 shows a photographing example in the embodiment of the present invention.
FIG. 6 is a flowchart of the mode-specific shooting process in the second embodiment.
FIG. 7 is a flowchart of the image level correction process in the second embodiment.

  Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.

(First embodiment)
FIG. 1 is a block diagram illustrating a configuration of a digital camera 100 that is an example of an imaging apparatus according to the first embodiment.

  The lens 101 is a lens for forming a subject image, and here also includes a diaphragm for adjusting the amount of light.

  The mechanical shutter 102 is a mechanical shutter disposed on the optical path of the lens.

  The imaging unit 103 is a sensor for converting an image formed by the lens 101 into electrical information and outputting it as an image signal. In the present invention, a CMOS (Complementary Metal Oxide Semiconductor) image sensor or a CCD (Charge Coupled Device) image sensor is used as an imaging device included in the imaging unit. The imaging unit 103 is treated as including a CDS circuit (correlated double sampling circuit) and an AGC circuit (auto gain control circuit) in addition to the imaging element.

  A CPU (Central Processing Unit) 104 is an arithmetic circuit for controlling the entire camera, and transmits an instruction signal to each unit in the digital camera 100 to control each unit. The CPU 104 controls the lens 101, the mechanical shutter 102, the imaging unit 103, the strobe control circuit 110, and the like to perform imaging control including exposure control.

  The image processing circuit 105 performs various image processing based on the image signal output from the imaging unit 103 and the image signal recorded in the buffer memory 106. In the present embodiment, in particular, it performs the process of adding the image signal output from the imaging unit 103 to the image signal recorded in the buffer memory 106, or of combining them by selecting the larger value, to generate a composite image signal (first combining means, second combining means).

  The buffer memory 106 is a memory that temporarily stores image data from the imaging unit 103 or image data processed by the image processing circuit 105.

  The external memory 107 is a memory such as an SD card for storing images stored in the buffer memory 106.

  The display unit 108 displays various camera information such as shooting conditions and through images.

  The operation unit 109 includes a power button for switching power on / off, a release switch serving as a shooting trigger, a shooting interruption button for interrupting shooting, a mode dial for determining a shooting mode, and the like. A user input from the operation unit 109 is detected, and various operations are performed according to the determination and instruction of the CPU 104.

  Further, the shape of the operation unit 109 is not limited to a button as long as an operation intended by the user is possible, and may be a ring, a surface pressure sensor, or the like.

  Other general members provided in the camera are not directly related to the present case, and are therefore omitted.

  The strobe control circuit 110 is controlled by the CPU 104 to control the presence or absence of strobe light emission, the light emission timing, the charging of the light emitting capacitor, and the like for the strobe light emitting unit 111 (light emitting means).

  FIG. 2 is a flowchart showing the mode-specific shooting process at the time of shooting in the first embodiment of the digital camera. The operation according to the shooting mode will be described using the most characteristic mode, the night view portrait mode, as the example; detailed description of the other modes is omitted. In the flowchart of FIG. 2, it is assumed that an image equivalent to first-curtain sync, in which the flash fires at the start of the exposure, is acquired.

  First, it is determined whether the shooting mode set with the mode dial of the operation unit 109 is the night view portrait mode (S201). If the shooting mode is anything other than night view portrait, the processing corresponding to that shooting mode is performed (S217), and the process ends.

  If the mode dial is set to the night view portrait mode in S201, the process proceeds to S202. In step S202, photometry / exposure control is performed by adjusting exposure conditions such as electronic shutter speed and gain so that an image signal obtained through the image sensor has an appropriate luminance level.

  Of course, since the night view portrait mode is set in this case, exposure control specific to that mode is performed. In such a scene, exposure is generally controlled toward underexposure relative to the (general-purpose) normal shooting mode for ordinary subjects. In S202, AF (Auto Focus) control is also performed in parallel, moving the lens 101 based on image contrast information.

  Next, the state of the release switch in the operation unit 109 is checked (S203). If the release switch has not been pressed, the process returns to S202; if it has, the process proceeds to S204. The release switch may be divided into two stages: a shooting preparation instruction for determining shooting conditions such as photometry, exposure, and AF as in S202, and a shooting execution instruction for controlling the timing of the actual shooting.

In S204, dimming control is performed. Various methods have been proposed for dimming control. In the pre-flash method, for example, the strobe control circuit 110 pre-fires the flash at a predetermined light amount before the main flash for shooting, and the emission level (light amount) for the main flash is calculated from the resulting image information. At this time, the exposure time t0 of the image to be captured with the flash firing is also calculated. That is, the exposure conditions for the strobe emission image (first image signal) are set so that the subject has appropriate brightness when imaged with the strobe firing.
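The pre-flash computation can be illustrated with a minimal sketch. The linear reflectance model, the function name, and all parameter values below are assumptions for illustration; the patent itself does not fix a dimming formula:

```python
def main_emission_level(pre_level, measured_luma, target_luma):
    """Hypothetical pre-flash dimming: fire the strobe at a known
    pre-flash level, measure the subject luminance in the resulting
    image, and scale the main emission so the subject reaches the
    target luminance (assumes subject response is linear in flash
    output)."""
    return pre_level * (target_luma / measured_luma)

# If a pre-flash at level 1.0 yields subject luminance 40 and the
# target is 160, the main flash fires at 4x the pre-flash level.
level = main_emission_level(1.0, 40.0, 160.0)
```

Real dimming control also accounts for ambient light and sensor nonlinearity; this sketch only shows the proportional core of the idea.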

  Next, in accordance with the latest photometric result, the total exposure time T of the images to be acquired by continuous, time-series shooting in the non-emission state, with the flash not firing, is determined (S205). That is, the exposure conditions for the plurality of image signals are set so that the subject has appropriate brightness when the plurality of image signals captured without strobe emission are combined. Next, the divided exposure time t and the number of divided exposures N are calculated from the total exposure time T and other shooting conditions (S206). A variable i counting the current number of divided exposures is initialized to zero, and the area Buffer1 in the buffer memory 106 is also cleared to zero (S207). It is then determined whether the current count i has reached N (S208). If it is less than N, it is determined whether this is the first exposure, i.e. whether i = 0 (S209). If i = 0, exposure and imaging/readout control are performed for exposure time t with the flash firing at the emission level calculated in S204, and the image data is stored in area Buffer0 of the buffer memory 106 (S210).

  Then, the divided-exposure count i is incremented by 1 (S212), and the process returns to S208. If in S209 the current count i is not zero, the i-th divided exposure and imaging/readout control are performed without flash, the read image data is added to the image data in Buffer1 and stored there (S211), and the process proceeds to S212.
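The S207–S212 loop above can be sketched as follows (a NumPy illustration; `capture_frame` stands in for the sensor exposure/readout and is an assumption, not part of the patent):

```python
import numpy as np

def capture_divided(capture_frame, shape, n):
    """Sketch of the divided-exposure loop: frame 0 is exposed with
    the strobe firing and kept separately (Buffer0, S210); frames
    1..N-1 are exposed without the strobe and accumulated by addition
    into Buffer1 (S211)."""
    buffer0 = np.zeros(shape)
    buffer1 = np.zeros(shape)          # S207: Buffer1 initialized to zero
    for i in range(n):                 # S208/S212: loop over N exposures
        if i == 0:
            buffer0 = capture_frame(flash=True)
        else:
            buffer1 = buffer1 + capture_frame(flash=False)
    return buffer0, buffer1

# Fake sensor: the flash frame is bright, non-flash frames are dim.
fake = lambda flash: np.full((2, 2), 100.0 if flash else 10.0)
b0, b1 = capture_divided(fake, (2, 2), 4)
# b0 holds the flash frame; b1 holds the sum of the 3 non-flash frames
```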

In S208, when the current number of divided exposures reaches N, the image captured with the strobe firing, held in Buffer0, is analyzed, and based on the result the level of the image captured without strobe emission, held in Buffer1, is corrected (S213). Details of this processing will be described later. Next, the image data stored in Buffer0 and Buffer1 are combined (S214). At this point Buffer0 holds the divided-exposure image captured with flash (first image signal), and Buffer1 holds the composite image signal (first composite image signal) obtained by adding the (N-1) divided-exposure images captured without flash. When the images in Buffer0 and Buffer1 are combined, so-called peak-hold addition is performed, which selects the maximum value at the same pixel address in the two images. Specifically:
I(x, y) = max(I_1(x, y), I_2(x, y), ..., I_N(x, y))    (1)

Here, I_i(x, y) (i = 1 to N; x, y are coordinates in the screen) denotes the pixel value of the i-th image before composition, and I(x, y) the pixel value after composition. In this embodiment, the captured images are combined sequentially by peak-hold addition:

J_2(x, y) = max(I_1(x, y), I_2(x, y)),
J_3(x, y) = max(J_2(x, y), I_3(x, y)),
...
I(x, y) = J_N(x, y) = max(J_{N-1}(x, y), I_N(x, y))    (2)

The synthesis method is not limited to this; the N images may be stored in memory and combined at once, as in equation (1) above.
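Equations (1) and (2) can be sketched in NumPy as follows (the `peak_hold` helper name and the tiny array shapes are assumptions for illustration):

```python
import numpy as np

def peak_hold(images):
    """Peak-hold addition: keep, per pixel address, the maximum value
    across the input images -- equation (1), applied sequentially as
    in equation (2)."""
    result = np.asarray(images[0], dtype=np.float64)
    for img in images[1:]:
        result = np.maximum(result, img)   # J_i = max(J_{i-1}, I_i)
    return result

# Two tiny 2x2 "images": each output pixel takes the larger input.
i1 = np.array([[10.0, 200.0], [30.0, 40.0]])
i2 = np.array([[50.0, 100.0], [20.0, 60.0]])
out = peak_hold([i1, i2])
# out == [[50, 200], [30, 60]]
```

The sequential form uses constant memory per frame, which is why the text prefers it to buffering all N images and applying equation (1) at once.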

  After that, the image processing circuit 105 performs development processing for recording and encoding/compression into a predetermined format such as JPEG on the synthesized image (S215), records the result in the external memory 107 (S216), and the process ends.

  FIG. 3 is a flowchart showing details of the image analysis process and the image level correction process performed in S213 of FIG.

  The main purpose here is to balance the luminance between the image P0, in which the person is flash-lit, and the composite of images P1 to P3, which expose the background.

  First, the strobe illumination area, i.e., the area of image P0 that the strobe light is considered to have reached, is extracted (S301). As an extraction method, a luminance histogram may be obtained from the image and the region of pixels whose luminance falls within a predetermined range taken as the target, or another method may be used. Alternatively, the strobe illumination area may be extracted from the brightness difference with an image captured without strobe emission, associating portions with a large difference with the illumination area. Next, the minimum luminance value Ylow is detected among the pixels of the extracted strobe illumination area (S302). Finally, level correction is applied to the pixels of image P4 in the region corresponding to the strobe illumination area so that the maximum luminance value in that region does not exceed Ylow.
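The extraction and level correction of FIG. 3 can be sketched as follows. The simple threshold used to extract the illumination area and the uniform gain used for the correction are assumptions; the patent only requires that the corrected maximum not exceed Ylow:

```python
import numpy as np

def level_correct(p0_luma, p4_luma, thresh):
    """Sketch of FIG. 3: extract the strobe illumination area of P0 by
    a luminance threshold (S301), find its minimum luminance Ylow
    (S302), then scale the corresponding pixels of P4 so their maximum
    does not exceed Ylow."""
    mask = p0_luma >= thresh                    # strobe illumination area
    ylow = p0_luma[mask].min()                  # minimum luminance Ylow
    out = p4_luma.copy()
    peak = out[mask].max()
    if peak > ylow:
        out[mask] = out[mask] * (ylow / peak)   # uniform-gain correction
    return out, ylow

p0 = np.array([[200.0, 220.0], [30.0, 10.0]])  # flash-lit person: top row
p4 = np.array([[180.0, 240.0], [90.0, 80.0]])  # background composite
corrected, ylow = level_correct(p0, p4, thresh=100.0)
# ylow == 200; the lit-region pixels of P4 are scaled by 200/240,
# while pixels outside the illumination area are left untouched
```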

  FIG. 4 shows an example in which the state of divided exposure / combination is expressed in time series.

  In FIG. 4, the total exposure time is assumed to be T = 37.5 ms, with t0 = 12.5 ms, t = 12.5 ms, and N = 4 divided exposures.

  After the release switch is pressed, the CPU 104 controls the strobe control circuit 110 so that the strobe fires during the exposure of the first image P0 of the four divided exposures. The remaining three divided-exposure images (P1, P2, and P3) undergo a known alignment process based on image feature points to reduce camera shake, and are then added and combined to create an image equivalent to a long exposure. Let P4 be the image after addition. Finally, image P4 and image P0 are aligned and then combined by peak-hold addition to create the final image P5.

  The reason peak-hold composition is used when combining images P0 and P4 is to prevent the background hidden behind the person from appearing within the person even if the person, the main subject, moves during shooting. The processing described in FIG. 3 is performed for the same purpose.

  FIG. 5A shows a shooting example in night view portrait mode: a person in front, with a lit Christmas tree as the background. Assume that only the person receives the strobe light. This is the divided-exposure image P0. In reality the background tree would be darker, but it is shown brighter here for clarity. If the person moves slightly to the left during the continuous exposures P1 to P3, as in FIG. 5B, part of a Christmas-tree light that was hidden behind the person becomes visible. For this reason, if images P0 and P4 are simply added and combined, the tree light shows through the person's body, as in FIG. 5C.

  After the level correction described with reference to FIG. 3 is performed, combining image P0 and image P4 by peak-hold synthesis produces a final image free of background show-through, as shown in FIG. 5.

  In the embodiment described above, the total exposure time T and the number of divided exposures N for the non-flash images are calculated from the latest photometric result. Alternatively, the minimum luminance value Ylow obtained in S302 may be determined at the timing of S210, and the number of divided exposures may be varied based on this Ylow. In this way, the exposure can be ended once the background approaches the luminance level of the person, and no time is wasted.

  As described above, in the present embodiment, by performing peak-hold addition on the strobe image and the brightness-corrected divided-exposure composite, a photograph in which the main subject and the background are appropriately balanced can be obtained while suppressing the influence of blur due to the exposure time.

(Second Embodiment)
In the present embodiment, a processing procedure at the time of trailing curtain sync shooting in which strobe light is emitted immediately before completion of exposure will be described.

  FIG. 6 is a flowchart showing a mode-specific shooting process at the time of shooting in the second embodiment of the present digital camera. The same parts as those in FIG. 2 are denoted by the same step numbers and the description thereof is omitted.

  First, a variable i counting the current number of divided exposures is initialized to 1 (S601). The current count i is then compared with the number of divided exposures N (S208). While i is less than N, exposure is performed for exposure time t without strobe emission, imaging/readout control is performed, and the image data is stored in area Buffer(i) of the buffer memory 106 (S211).

  Then, the divided-exposure count i is incremented by 1 (S212), and the process returns to S208. In S208, when the current count i equals the number of divided exposures N, it is the last divided exposure, so divided exposure and imaging/readout control are performed with the strobe firing at the emission level calculated in S204, and the result is stored in Buffer0 (S602). Next, the strobe illumination area is extracted from the strobe emission image P0 stored in Buffer0, and the non-flash divided-exposure images are corrected (S603). Here, as in the first embodiment, the minimum luminance value Ylow is detected among the pixels of the strobe illumination area extracted from the strobe emission image P0. Then, it is compared with the maximum luminance value of the corresponding area in P4, and the luminance level of the pixels in the strobe illumination area of the composite image P4 of the non-flash images is adjusted downward so that it does not exceed the minimum luminance value Ylow.

  Next, the image data stored in Buffer 0 and Buffer 1 are combined (S214). The subsequent processing is the same as in FIG.

  FIG. 7 is a flowchart showing details of the process in S603 of FIG.

  First, a variable j counting the number of additions of non-flash images is initialized to N-2 (S701). Next, the maximum luminance value of the image in Buffer1 is checked and compared with the minimum luminance value Ylow of the strobe emission state (S702). If Ylow is larger, the variable j is compared with 1 (S703). If j is greater than 1, the image in Buffer(j) is added to the image in Buffer1 and the result stored in Buffer1 (S704). Next, j is decremented by 1 (S705), and the process returns to S702. If in S702 the maximum luminance value of the image in Buffer1 is equal to or greater than Ylow, the image in Buffer(j+1) is subtracted from the image in Buffer1 and the result stored in Buffer1 (S706). If j is 1 or less in S703, the process ends.

  As a result, Buffer1 holds the result of adding and combining the non-flash images within a range that does not exceed the minimum luminance value Ylow of the strobe-lit portion. Since the frames are added in order starting from the one closest to the final flash frame, the stored result is a temporally continuous, natural image.
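The FIG. 7 loop can be sketched as follows (a NumPy illustration; the frame list and indexing are simplified relative to the patent's Buffer(j) notation, and the add-then-back-out ordering is an assumption that yields the same result as the check-then-add flow of S702–S706):

```python
import numpy as np

def bounded_addition(frames, ylow):
    """Sketch of FIG. 7: add the non-flash frames into Buffer1,
    starting from the frame nearest the final (flash-lit) frame, and
    stop once the maximum luminance reaches Ylow of the strobe-lit
    area, backing out the last addition (S706)."""
    buffer1 = frames[-1].copy()            # latest non-flash frame
    for frame in reversed(frames[:-1]):    # add earlier frames, newest first
        buffer1 = buffer1 + frame
        if buffer1.max() >= ylow:
            buffer1 = buffer1 - frame      # S706: subtract the last frame
            break
    return buffer1

# Four dim frames of constant luminance 30; Ylow = 100.
frames = [np.full((2, 2), 30.0) for _ in range(4)]
out = bounded_addition(frames, ylow=100.0)
# 30+30+30 = 90 < 100, but adding the 4th frame would reach 120,
# so it is backed out and the result stays at 90.
```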

  As described above, in the present embodiment too, by performing peak-hold addition on the strobe image and the brightness-corrected divided-exposure composite, a photograph in which the main subject and the background are appropriately balanced can be obtained while suppressing the influence of blur due to the exposure time.

  Although the present invention has been described in detail based on preferred embodiments, it is not limited to these specific embodiments, and various forms within the scope of the gist of the invention are also included in the present invention. Parts of the above embodiments may be combined as appropriate.

(Other embodiments)
The object of the present invention can also be achieved as follows. A storage medium recording the program code of software that describes the procedure for realizing the functions of the above embodiments is supplied to a system or apparatus, and the computer (CPU, MPU, or the like) of that system or apparatus reads out and executes the program code stored in the storage medium.

  In this case, the program code itself read from the storage medium realizes the novel function of the present invention, and the storage medium and program storing the program code constitute the present invention.

  Examples of the storage medium for supplying the program code include a flexible disk, a hard disk, an optical disk, and a magneto-optical disk. Further, a CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD-R, magnetic tape, nonvolatile memory card, ROM, or the like can also be used.

  Further, the functions of the above embodiments are realized when the computer executes the program code it has read. Furthermore, the case where an OS (operating system) running on the computer performs part or all of the actual processing based on the instructions of the program code, and the functions of the above embodiments are realized by that processing, is also included.

  Furthermore, the following case is also included. The program code read from the storage medium is written to a memory provided on a function expansion board inserted into the computer or in a function expansion unit connected to the computer. Thereafter, based on the instructions of the program code, a CPU or the like provided on the board or unit performs part or all of the actual processing.

  In addition, the present invention is not limited to devices such as digital cameras, and can be applied to any device having a built-in or externally connected imaging device, such as a mobile phone, a personal computer (laptop, desktop, tablet, etc.), or a game machine. Accordingly, the term "imaging apparatus" in this specification is intended to encompass any electronic device having an imaging function.

101 Lens
102 Mechanical shutter
103 Imaging unit
104 CPU
105 Image processing circuit
106 Buffer memory
107 External memory
108 Display unit
109 Operation unit
110 Strobe control circuit
111 Strobe light emitting unit

Claims (8)

  1. An imaging apparatus comprising:
    imaging means;
    imaging control means for causing the imaging means to capture an image while light emitting means emits light and output a first image signal, and for causing the imaging means to capture images continuously while the light emitting means does not emit light and output a plurality of image signals;
    first combining means for adding and combining the plurality of image signals to generate a first composite image signal;
    correcting means for correcting the brightness of the first composite image signal; and
    second combining means for generating a second composite image signal by selecting, from the first image signal and the composite image signal output from the correcting means, whichever has the larger value,
    wherein the correcting means corrects the first composite image signal so that the brightness of the first composite image signal does not exceed the brightness of the first image signal.
  2. The imaging apparatus according to claim 1, further comprising extraction means for extracting, from the first image signal, an illumination area reached by the light emitted by the light emitting means,
    wherein the correcting means corrects the first composite image signal so that the maximum brightness value of the area corresponding to the illumination area in the first composite image signal does not exceed the minimum brightness value of the illumination area in the first image signal.
  3. The imaging apparatus according to claim 1 or 2, further comprising exposure control means for controlling exposure of imaging by the imaging means,
    wherein the exposure control means sets the exposure condition for outputting the first image signal so that the subject has appropriate brightness when imaged in the light emitting state of the light emitting means, and sets the exposure condition for outputting the plurality of image signals so that the subject has appropriate brightness when the plurality of image signals are combined.
  4. The imaging apparatus according to any one of claims 1 to 3, wherein the correcting means corrects the brightness of the first composite image signal by controlling the number of image signals used in the first composite image signal combined by the first combining means.
  5. The imaging apparatus according to claim 4, wherein, when the plurality of image signals are added and combined, the addition is performed sequentially starting from the image signal captured latest in the time series.
  6. A control method for an imaging apparatus comprising imaging means and imaging control means for causing the imaging means to capture an image while light emitting means emits light and output a first image signal, and for causing the imaging means to capture images continuously while the light emitting means does not emit light and output a plurality of image signals, the method comprising:
    a first combining step of adding and combining the plurality of image signals to generate a first composite image signal;
    a correcting step of correcting the brightness of the first composite image signal; and
    a second combining step of generating a second composite image signal by selecting, from the first image signal and the composite image signal output in the correcting step, whichever has the larger value,
    wherein in the correcting step the first composite image signal is corrected so that the brightness of the first composite image signal does not exceed the brightness of the first image signal.
  7. A computer-executable program describing the procedure of the control method for an imaging apparatus according to claim 6.
  8.   A computer-readable storage medium storing a program for causing a computer to execute each step of the control method of the imaging apparatus according to claim 6.
JP2013122101A 2013-06-10 2013-06-10 Imaging apparatus, imaging apparatus control method, image processing apparatus, and image processing method Active JP6214229B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013122101A JP6214229B2 (en) 2013-06-10 2013-06-10 Imaging apparatus, imaging apparatus control method, image processing apparatus, and image processing method


Publications (2)

Publication Number Publication Date
JP2014239396A true JP2014239396A (en) 2014-12-18
JP6214229B2 JP6214229B2 (en) 2017-10-18

Family

ID=52136214

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2013122101A Active JP6214229B2 (en) 2013-06-10 2013-06-10 Imaging apparatus, imaging apparatus control method, image processing apparatus, and image processing method

Country Status (1)

Country Link
JP (1) JP6214229B2 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007288235A (en) * 2006-04-12 2007-11-01 Sony Corp Imaging apparatus and imaging method
JP2012119858A (en) * 2010-11-30 2012-06-21 Aof Imaging Technology Ltd Imaging device, imaging method, and program
JP2012195660A (en) * 2011-03-15 2012-10-11 Nikon Corp Image processing apparatus and electronic camera, and image processing program


Also Published As

Publication number Publication date
JP6214229B2 (en) 2017-10-18

Similar Documents

Publication Publication Date Title
US9912875B2 (en) Imaging device and imaging method capable of generating a bulb exposure image derived from relatively bright image combination data and relatively dark image combination data
US8401378B2 (en) Flash control for electronic rolling shutter
JP5898466B2 (en) Imaging device, control method thereof, and program
US8723974B2 (en) Image pickup apparatus, image pickup method and recording device recording image processing program
JP4217698B2 (en) Imaging apparatus and image processing method
US7362370B2 (en) Image capturing apparatus, image capturing method, and computer-readable medium storing program using a distance measure for image correction
JP5108093B2 (en) Imaging apparatus and imaging method
JP4840688B2 (en) Imaging apparatus and program thereof
CN102348066B (en) Camera head
TWI293846B (en) Image pickup device with brightness correcting function and method of correcting brightness of image
JP4021716B2 (en) Camera with strobe adjustment function
JP5639140B2 (en) Camera
EP2220863B1 (en) Camera flash module and method for controlling the same
JP5474653B2 (en) Imaging apparatus and imaging method
JP5319078B2 (en) Camera, camera image processing method, program, and recording medium
JP5096017B2 (en) Imaging device
KR101900097B1 (en) Image capturing method and image capturing apparatus
US7509042B2 (en) Digital camera, image capture method, and image capture control program
JP4350616B2 (en) Imaging apparatus and control method thereof
US7813637B2 (en) Digital camera
JP5610762B2 (en) Imaging apparatus and control method
JP5787648B2 (en) Image processing apparatus and image processing apparatus control method
US20040095472A1 (en) Electronic still imaging apparatus and method having function for acquiring synthesis image having wide-dynamic range
US9307163B2 (en) Imaging device and imaging method
JP3758655B2 (en) Digital camera

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20160602

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20161220

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20161227

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20170224

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20170822

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20170919

R151 Written notification of patent or utility model registration

Ref document number: 6214229

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151