EP3474537A1 - Imaging control device, imaging control method, and program - Google Patents

Imaging control device, imaging control method, and program

Info

Publication number
EP3474537A1
Authority
EP
European Patent Office
Prior art keywords
flickering
timing
imaging
component
peak
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP17813042.3A
Other languages
German (de)
French (fr)
Other versions
EP3474537A4 (en)
Inventor
Satoko Suzuki
Yutaro Honda
Syouei Hirasawa
Osamu Izuta
Daisuke Kasai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Group Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of EP3474537A1
Publication of EP3474537A4


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/72: Combination of two or more compensation controls
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/745: Detection of flicker frequency or suppression of flicker wherein the flicker is caused by illumination, e.g. due to fluorescent tube illumination or pulsed LED illumination
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50: Control of the SSIS exposure
    • H04N25/53: Control of the integration time

Definitions

  • the present technology relates to an imaging control device, an imaging control method, and a program.
  • Patent Literature 1 JP 2014-220763A
  • In Patent Literature 1, a sensor different from the image sensor (imager) is used to detect flickering. Therefore, there is a problem in that it is difficult to miniaturize the device.
  • the present technology is devised in view of such a problem and one object of the present technology is to provide an imaging control device, an imaging control method, and a program capable of preventing deterioration in image quality due to flickering.
  • an imaging control device including: a control unit configured to perform control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.
  • the present technology is, for example, an imaging control method including: performing, by a control unit, control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.
  • the present technology is, for example, a program causing a computer to perform an imaging control method of performing, by a control unit, control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.
  • the present technology it is possible to prevent deterioration in image quality due to flickering.
  • The effect described above is not necessarily limitative, and any of the effects described in the present technology may be achieved. The content of the present technology is not to be construed as limited by the exemplified effects.
  • FIG. 1 is a block diagram illustrating a system configuration example of an imaging device (an imaging device 100) according to an embodiment of the present technology.
  • light from a subject is incident on a complementary metal oxide semiconductor (CMOS) image sensor 12 via an imaging optical system 11, the light is photoelectrically converted in the CMOS image sensor 12, and an analog image signal is obtained from the CMOS image sensor 12.
  • the imaging optical system 11 and the CMOS image sensor 12 are included in an imaging unit.
  • In the CMOS image sensor 12, a plurality of pixels, each including a photodiode (photo-gate), a transmission gate (shutter transistor), a switching transistor (address transistor), an amplification transistor, a reset transistor (reset gate), and the like, are arrayed two-dimensionally, and a vertical scanning circuit, a horizontal scanning circuit, and a video signal output circuit are formed on a CMOS substrate.
  • the CMOS image sensor 12 may be one of a primary color system and a complementary color system, as will be described below.
  • An analog image signal obtained from the CMOS image sensor 12 is a primary color signal of each color of RGB or a color signal of a complementary color system.
  • the analog image signal from the CMOS image sensor 12 is sampled and held for each color signal in the analog signal processing unit 13 configured as an integrated circuit (IC), a gain of the analog image signal is controlled through automatic gain control (AGC), and the analog image signal is converted into a digital signal through analog-to-digital (A/D) conversion.
  • the digital image signal from the analog signal processing unit 13 is processed, as will be described below, in a digital signal processing unit 20 that is configured as an IC and functions as a detection unit. Then, in a flickering reduction unit 25 in the digital signal processing unit 20, a flickering component is reduced for each signal component, as will be described below, and then the signal is finally converted into color difference signals R-Y and B-Y between red and blue and a luminance signal Y to be output from the digital signal processing unit 20.
  • a system controller 14 which is an example of a control unit includes a microcomputer or the like and controls each unit of the imaging device 100.
  • a lens driving control signal is supplied from the system controller 14 to the lens driving driver 15 including an IC and a lens or an iris of the imaging optical system 11 is driven by the lens driving driver 15.
  • a timing control signal is supplied from the system controller 14 to a timing generator 16 and various timing signals are supplied from the timing generator 16 to the CMOS image sensor 12 so that the CMOS image sensor 12 is driven.
  • a shutter speed of the CMOS image sensor 12 is also controlled with a timing control signal from the system controller 14. Specifically, a shutter control unit 14c in the system controller 14 sets a shutter speed.
  • a detection signal of each signal component is captured from the digital signal processing unit 20 to the system controller 14.
  • the analog signal processing unit 13 controls a gain of each color signal, as described above, with an AGC signal from the system controller 14 and the system controller 14 controls signal processing in the digital signal processing unit 20.
  • a camera-shake sensor 17 is connected to the system controller 14 and camera-shake information obtained from the camera-shake sensor 17 is used to correct camera shake.
  • a manipulation unit 18a and a display unit 18b included in a user interface 18 are connected to the system controller 14 via an interface 19 including a microcomputer or the like.
  • the system controller 14 detects a setting manipulation, a selection manipulation, or the like in the manipulation unit 18a and the system controller 14 displays a setting state, a control state, or the like of a camera on the display unit 18b. For example, setting regarding whether or not to perform flickerless photographing to be described below can be performed using the manipulation unit 18a.
  • the imaging device 100 may include a storage device.
  • the storage device may be a device such as a hard disk contained in the imaging device 100 or may be a memory such as a Universal Serial Bus (USB) memory which is detachably mounted on the imaging device 100.
  • the imaging device 100 may include a communication device. Image data, various kinds of setting data, and the like may be transmitted to and received from an external device via the Internet or the like using the communication device. Communication may be performed as wired communication or may be performed as wireless communication.
  • FIG. 2 is a block diagram illustrating a configuration example of the digital signal processing unit 20 in the case of a primary color system.
  • The primary color system is a three-plate system in which the imaging optical system 11 in FIG. 1 includes a separation optical system that separates light from a subject into color light of the respective colors of RGB, and a CMOS image sensor for each color of RGB is included as the CMOS image sensor 12, or a one-plate system in which one CMOS image sensor in which color filters of the respective colors of RGB are arranged sequentially and repeatedly pixel by pixel in the screen-horizontal direction on a light incidence surface is included as the CMOS image sensor 12.
  • RGB primary color signals are read in parallel from the CMOS image sensor 12.
  • a clamp circuit 21 clamps a black level of the input RGB primary color signals to a predetermined level
  • a gain adjustment circuit 22 adjusts gains of the RGB primary color signals after the clamping in accordance with an exposure amount
  • flickering reduction units 25R, 25G, and 25B reduce flickering components in the RGB primary color signals after the gain adjustment in accordance with a method to be described below.
  • a process is performed to perform flickerless photographing.
  • the flickerless photographing means photographing capable of preventing an influence of flickering occurring from a flickering light source on image quality (reduction in image quality).
  • a white balance adjustment circuit 27 adjusts white balance of the RGB primary color signals after the reduction of the flickering
  • a gamma correction circuit 28 converts the gray scales of the RGB primary color signals after the adjustment of the white balance
  • a synthesis matrix circuit 29 generates the luminance signal Y and the color difference signals R-Y and B-Y of the output from the RGB primary color signals after the gamma correction.
  • In the case of the primary color system, the luminance signal Y is generated after the processing of the RGB primary color signals has fully ended, as in FIG. 2. Therefore, by reducing the flickering components in the RGB primary color signals during the process on the RGB primary color signals as in FIG. 2, it is possible to sufficiently reduce the flickering components of the luminance component and each color component.
  • a flickering reduction unit 25 may be provided on an output side of the luminance signal Y of the synthesis matrix circuit 29 to detect a flickering component in the luminance signal Y and reduce the flickering component.
  • the complementary color system is a one-plate system that includes one CMOS image sensor in which color filters of a complementary system are formed on a light incidence surface as the CMOS image sensor 12 in FIG. 1 .
  • the digital signal processing unit 20 clamps a black level of a complementary signal (a synthesized signal) at a predetermined level, adjusts a gain of the complementary signal after the clamping in accordance with an exposure amount, and further generates a luminance signal and RGB primary color signals from the complementary signal after adjustment of the gain.
  • The flickering component in the luminance signal and the flickering components in the RGB primary color signals are reduced by the flickering reduction unit 25. The gray scale of the luminance signal after the reduction of the flickering is further corrected to obtain the output luminance signal Y, the white balance of the RGB primary color signals after the reduction of the flickering is adjusted, the gray scales of the RGB primary color signals after the adjustment of the white balance are converted, and the color difference signals R-Y and B-Y are generated from the RGB primary color signals after the gamma correction.
  • an operation example of the imaging device 100 will be described.
  • an image (a through image) in a moving image aspect is displayed on the display unit 18b (live-view display) at the time of deciding a composition of (framing) a subject before photographing.
  • the preparation manipulation is a manipulation of preparation to perform photographing and is a manipulation performed immediately before photographing.
  • the preparation manipulation is, for example, a half push manipulation of pushing a shutter button included in the manipulation unit 18a partially (halfway).
  • a preparation operation of capturing a still image of the subject is performed.
  • As the preparation operation, a detection operation of detecting focus, setting of an exposure control value, light emission of an auxiliary lighting unit, and the like can be exemplified. Note that when the pushing of the shutter button in the half push state is released, the preparation operation ends.
  • When the shutter button is further pushed from the half push state and is fully pushed, the imaging device 100 is instructed to perform photographing, and an exposure operation is performed on a subject image (an optical image of the subject) using the CMOS image sensor 12.
  • the analog signal processing unit 13 or the digital signal processing unit 20 performs predetermined signal processing on image data obtained in response to the exposure operation to obtain a still image.
  • the image data corresponding to the obtained still image is appropriately stored in a storage device (not illustrated).
  • the imaging device 100 may photograph a moving image.
  • For example, when the shutter button is pushed, photographing and recording of the moving image are performed; when the shutter button is pushed again, the photographing of the moving image is stopped.
  • the flickering reduction process is, for example, a process performed on a through image in live-view display.
  • a flickering component occurring due to a fluorescent lamp or the like in an NTSC system will be described to facilitate understanding. Note that in this example, a case in which a frame rate is set to 60 frames per second (fps) and a commercial power frequency is set to 50 hertz (Hz) will be described. Characteristics of the flickering component occurring in this case are as follows:
  • a flickering component is generated, as illustrated in FIG. 3 , when a flickering phenomenon occurs.
  • scanning is assumed to be performed from the upper side (the upper portion of the screen) to the lower side (the lower portion of the screen) in FIG. 3 .
  • In the CMOS image sensor 12, since the exposure timing differs for each horizontal line, the amount of received light changes in accordance with the horizontal line. Accordingly, although the fluorescent lamp or the like illuminates spatially uniformly, there are horizontal lines with a video signal value greater than the average value and horizontal lines with a video signal value less than the average value, as in FIG. 3. For example, in the frames of FIG. 3, the flickering component (the amplitude of the flickering component) in the highest horizontal line in the image, that is, the head line, has the highest peak. Further, in a horizontal line shifted from the head line by lines equivalent to 3/5 of the total number of lines included in one screen, the flickering component is also the highest.
  • the flickering component can be expressed as a sine function (sinusoidal wave) that has an amplitude, a period, and an initial phase, as illustrated in FIG. 3 . Note that in this example, the initial phase means a phase in the head line.
  • the phase of each horizontal line changes in accordance with a frame. That is, a horizontal line with a value of a video signal greater than the average value and a horizontal line with a value of a video signal less than the average value are changed for each frame.
  • a sinusoidal wave with a different initial phase is formed. For example, when flickering in the fluorescent lamp is generated at 100 Hz and a frame rate is 60 fps, 5 periods of the flickering in the fluorescent lamp are a time equivalent to 3 frames. Accordingly, the initial phase is the same phase every 3 frames. In this way, the flickering component is changed in accordance with the horizontal line and the frames.
  • the flickering component can be expressed as a sinusoidal wave that has a period of 5 frames.
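The figures above can be checked numerically with a minimal sinusoidal model of the per-line flicker phase (a sketch assuming a 100-Hz flickering component, a 60-fps frame rate, and sequential line exposure; all names and the default amplitude are illustrative):

```python
import math

def flicker_coefficient(frame, line, total_lines, amplitude=0.1,
                        frame_rate=60.0, flicker_freq=100.0, initial_phase=0.0):
    """Flickering component modeled as a sinusoid over lines and frames.

    With a rolling shutter, the exposure time of a line advances by one
    frame period per frame and by 1/total_lines of a frame period per line.
    """
    t = (frame + line / total_lines) / frame_rate
    return amplitude * math.sin(2.0 * math.pi * flicker_freq * t + initial_phase)
```

With these numbers, a line 3/5 of the screen below the head line is exposed exactly one flicker period later (0.6/60 s = 1/100 s), and the head-line phase repeats every 3 frames (3/60 s = 5 flicker periods), matching the description above.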
  • An example of a process (an operation) of reducing the flickering component that has the foregoing properties will be described.
  • FIG. 4 is a block diagram illustrating a detailed configuration example of a flickering reduction unit 25.
  • input image signals mean RGB primary color signals or a luminance signal before the flickering reduction process input to the flickering reduction unit 25 and output image signals mean the RGB primary color signals or the luminance signal after the flickering reduction process output from the flickering reduction unit 25.
  • the flickering reduction unit 25 includes, for example, a normalized integrated value calculation block 30, an arithmetic block 40, a discrete Fourier transform (DFT) block 50, a flickering generation block 55, and a frequency estimation/peak detection block 60.
  • the normalized integrated value calculation block 30 includes an integration block 31, an integrated value retention block 32, an average value calculation block 33, a difference calculation block 34, and a normalization block 35.
  • the integration block 31 calculates an integrated value Fn(y) by integrating one line of an input image signal In'(x, y) in the horizontal direction of the screen.
  • the calculated integrated value Fn(y) is stored and retained for flickering detection in subsequent frames in the integrated value retention block 32.
  • the integrated value retention block 32 may have a configuration capable of retaining an integrated value equivalent to at least 2 frames.
  • The average value calculation block 33 calculates an average AVE[Fn(y)] of three integrated values Fn(y), Fn_1(y), and Fn_2(y). Note that Fn_1(y) is the integrated value of the same line one frame before, Fn_2(y) is the integrated value of the same line two frames before, and these integrated values are values read from the integrated value retention block 32.
  • the difference calculation block 34 calculates a difference between the integrated value Fn(y) supplied from the integration block 31 and the integrated value Fn_1(y) before one frame supplied from the integrated value retention block 32.
  • In the difference Fn(y)-Fn_1(y), the influence of the subject is sufficiently removed. Therefore, the form of the flickering component (flickering coefficient) appears more clearly than in the integrated value Fn(y).
  • the normalization block 35 performs a normalization process of dividing the difference Fn(y)-Fn_1(y) from the difference calculation block 34 by an average value AVE[Fn(y)] from the average value calculation block 33 to calculate a difference gn(y) after the normalization.
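The chain from the integration block 31 through the normalization block 35 can be sketched as follows (a simplified sketch assuming three consecutive grayscale frames as NumPy arrays; the function name is illustrative):

```python
import numpy as np

def normalized_difference(frame_n, frame_n1, frame_n2):
    """Blocks 31-35: integrate each line horizontally, average over
    3 frames, difference against the previous frame, and normalize."""
    F_n = frame_n.sum(axis=1).astype(float)    # integrated value Fn(y)
    F_n1 = frame_n1.sum(axis=1).astype(float)  # Fn_1(y), one frame before
    F_n2 = frame_n2.sum(axis=1).astype(float)  # Fn_2(y), two frames before
    avg = (F_n + F_n1 + F_n2) / 3.0            # AVE[Fn(y)]
    return (F_n - F_n1) / avg                  # gn(y)
```

For a static subject without flickering, the three integrated values coincide and gn(y) is zero everywhere; the subject term cancels in the difference while the frame-to-frame flicker term survives.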
  • the DFT block 50 performs a discrete Fourier transform on data equivalent to one wavelength (equivalent to L lines) of flickering in a difference gn(y) after the normalization from the normalization block 35.
  • Thereby, the amplitude γm and the initial phase Φmn of each order of the flickering component are estimated.
  • The initial phase Φmn is retained in association with a counter that is generated in the imaging device 100 at each predetermined time interval (for example, every 0.5 microseconds (μs)).
  • The flickering generation block 55 calculates a flickering coefficient Γn(y) from the estimated values of γm and Φmn from the DFT block 50. Then, the arithmetic block 40 performs a process of adding 1 to the flickering coefficient Γn(y) from the flickering generation block 55 and dividing the input image signal In'(x, y) by the sum [1+Γn(y)], which multiplies by the inverse gain.
  • the flickering component included in the input image signal In'(x, y) is substantially completely removed, and thus a signal component In(x, y) with substantially no flickering component is obtained as an output image signal (the RGB primary color signal or the luminance signal after the flickering reduction process) from the arithmetic block 40.
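The DFT estimation, coefficient regeneration, and inverse-gain multiplication can be sketched together as below. For brevity this sketch treats gn(y) directly as the flickering coefficient and estimates only the fundamental; that simplification, and the function name, are assumptions rather than the exact derivation described here:

```python
import numpy as np

def reduce_flicker(image, g, lines_per_wavelength):
    """Estimate amplitude/phase of the fundamental of g(y) by DFT over one
    flicker wavelength, regenerate the flickering coefficient for every
    line, and divide it out: In(x, y) = In'(x, y) / [1 + coeff(y)]."""
    dft = np.fft.fft(g[:lines_per_wavelength])
    gamma = 2.0 * np.abs(dft[1]) / lines_per_wavelength  # fundamental amplitude
    phi = np.angle(dft[1])                               # initial phase
    y = np.arange(image.shape[0])
    coeff = gamma * np.cos(2.0 * np.pi * y / lines_per_wavelength + phi)
    return image / (1.0 + coeff)[:, np.newaxis]
```

Because the DFT is taken over exactly one wavelength (L lines), bin 1 isolates the fundamental, so a purely sinusoidal coefficient is recovered and removed almost completely.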
  • the flickering reduction process may be performed at the time of photographing of a moving image (including recording).
  • the flickering component is detected for each color of RGB. In this case, a timing of a peak of a color component with the maximum amplitude is detected. Instead, a peak of the luminance signal may be detected.
  • The initial phase Φmn calculated in the DFT block 50 is supplied to the frequency estimation/peak detection block 60.
  • The frequency estimation/peak detection block 60 estimates at least a frequency of the flickering component (light source), in other words, a period of the flickering component, on the basis of the input initial phase Φmn. Further, a timing of the peak of the flickering component is detected. For example, the frequency estimation/peak detection block 60 estimates the frequency of the flickering component from a time difference based on the frame rate and a phase difference of the initial phase Φmn. Further, the frequency estimation/peak detection block 60 detects the initial phase Φmn in an initial frame and a timing of the peak of the flickering component from, for example, the counter associated with the initial phase Φmn.
  • a timing at which the peak (for example, 90 degrees) of the flickering component approximated to a sinusoidal wave appears can be obtained using a time interval of the counter.
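Given an estimated flicker frequency and a phase sample tied to a free-running counter, the timing at which the sinusoid next reaches its 90-degree peak can be computed as sketched below (a hypothetical helper; the counter-to-seconds conversion is assumed to have been done already):

```python
import math

def next_peak_time(now_s, phase_at_ref, ref_s, flicker_freq=100.0):
    """First time >= now_s at which the sinusoidal flicker reaches its peak.

    phase_at_ref: flicker phase (radians) sampled at time ref_s, e.g. the
    initial phase converted from its associated counter value.
    """
    period = 1.0 / flicker_freq
    # phase must advance to pi/2 (the peak of a sine), modulo one period
    to_peak = (math.pi / 2.0 - phase_at_ref) % (2.0 * math.pi)
    first_peak = ref_s + to_peak / (2.0 * math.pi * flicker_freq)
    if first_peak < now_s:  # step forward by whole periods to reach now_s
        first_peak += math.ceil((now_s - first_peak) / period) * period
    return first_peak
```

For a 100-Hz component with zero phase at t = 0, the first peak falls at 2.5 ms, and later calls simply land on that timing plus a whole number of 10-ms periods.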
  • the system controller 14 is notified of information obtained by the frequency estimation/peak detection block 60.
  • the peak of the flickering component is a spot at which the amplitude of the flickering component is the maximum, as described above.
  • characteristics (the period of the flickering component, the timing of the peak, or the like) of the flickering component can be detected on the basis of an imaging result (a photographed image obtained via the imaging unit) by the imaging unit. Therefore, it is possible to prevent cost from increasing due to an increase in the number of components. In addition, it is possible to miniaturize the imaging device. Note that the process of obtaining the characteristics of the flickering component is not limited to the above-described method, and a known method can be applied.
  • the flickerless photographing process is, for example, a process performed in a case in which a flickerless photographing mode is set in the imaging device 100.
  • a background component is extracted from an average using an image of a plurality of frames (for example, 3 frames). Therefore, in a case in which the frame rate matches a blinking period of a flickering light source such as a fluorescent lamp, it is difficult to separate a background from flicker, and thus it is difficult to detect the flicker.
  • the image of the plurality of frames is used. Therefore, in a case in which a still image is captured in flickerless photographing, it is difficult to apply the above-described flickering reduction process without change. Accordingly, in a case in which the still image is captured in the flickerless photographing, a flickerless photographing process to be described below is performed.
  • The frame rate after the switching is, for example, N times the frequency of the light source (here, a frequency greater than the frequency (100 Hz or 120 Hz) of the flickering component), and is preferably such that one period of the flickering component is contained within a frame.
  • The frequency of the flickering light source may be obtained from a setting made by the user or may be set automatically on the basis of a result of the above-described flickering reduction process in live-view display. That is, in a case in which a flickering component is detected at the frame rate of 60 fps in the flickering reduction process, the frequency of the light source is determined to be 50 Hz; in a case in which a flickering component is detected at the frame rate of 50 fps, the frequency of the light source is determined to be 60 Hz. This result may be used in the flickerless photographing process. In addition, whether there is flickering may be detected in the flickerless photographing process itself.
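Because the detection uses inter-frame differences, flicker whose blinking period matches the frame rate goes undetected (as noted earlier for matching blink periods), which is what makes the two test frame rates informative. A sketch of one consistent decision rule (the exact decision logic is an assumption; the helper name is illustrative):

```python
def light_source_frequency_hz(flicker_at_60fps, flicker_at_50fps):
    """Infer the mains frequency from which frame rate shows inter-frame flicker.

    100-Hz flicker (50-Hz mains) completes exactly 2 periods per 50-fps frame
    and so is invisible at 50 fps, but beats against 60 fps; 120-Hz flicker
    (60-Hz mains) is the opposite.
    """
    if flicker_at_60fps:
        return 50
    if flicker_at_50fps:
        return 60
    return None  # no flickering light source detected
```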
  • a timing at which the frame rate is switched can appropriately be set, and is preferably a timing immediately before photographing.
  • the frame rate is switched when a manipulation of pushing the shutter button halfway, which is a photographing preparation manipulation, is performed. More specifically, a manipulation signal in accordance with the manipulation of pushing the shutter button halfway is supplied to the system controller 14 via the interface 19.
  • the system controller 14 controls the timing generator 16 such that the CMOS image sensor 12 is driven and the frame rate is accelerated.
  • When the frame rate is switched, the repetition period of the flickering component is changed.
  • the repetition period of the flickering component is 20 frames.
  • the repetition period of the flickering component is 12 frames.
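The repetition period follows from the ratio of frame rate to flicker frequency: the flicker phase sampled at the start of a frame repeats after frame_rate / gcd(frame_rate, flicker_freq) frames. A sketch (pairing the 12-frame figure with a 240-fps frame rate and a 100-Hz component is an inference from the text, not stated explicitly):

```python
from math import gcd

def repetition_period_frames(frame_rate_fps, flicker_hz):
    """Frames after which the flicker phase sampled at frame start repeats.

    k frames later the phase has advanced by k * flicker_hz / frame_rate_fps
    full periods; the smallest integer k making that an integer is returned.
    """
    return frame_rate_fps // gcd(frame_rate_fps, flicker_hz)
```

repetition_period_frames(240, 100) gives 12 frames, and at the normal 60 fps the same 100-Hz component repeats every 3 frames, matching the earlier description of the flickering component.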
  • Image data can be obtained on the basis of the accelerated frame rate.
  • the obtained image data is subjected to a process by the analog signal processing unit 13 to be input to the digital signal processing unit 20.
  • The image data obtained at the high frame rate is similarly subjected to the above-described flickering reduction process by the flickering reduction unit 25. Further, in this process, the initial phase Φmn, which is an output from the DFT block 50, is input to the frequency estimation/peak detection block 60 of the flickering reduction unit 25.
  • The frequency estimation/peak detection block 60 estimates at least a frequency (period) of the flickering component (light source) on the basis of the input initial phase Φmn and further detects a timing of a peak of the flickering component.
  • FIG. 5 is a diagram for summarizing the above-described process.
  • An image captured at a normal frame rate (for example, 50 or 60 fps) is subjected to the flickering reduction process, and the image subjected to the flickering reduction process is displayed as a through image on the display unit 18b.
  • When the shutter button is pushed halfway, the frame rate is switched to a high speed (for example, 200 or 240 fps), and frequency estimation and a peak detection process are performed along with the flickering reduction process.
  • the image subjected to the flickering reduction process is displayed on the display unit 18b.
  • the display unit 18b performs display based on image data obtained by thinning out some of the obtained image data. Then, when the shutter button is deeply pushed, photographing is performed.
  • FIG. 6 is an explanatory diagram illustrating a process in photographing performed in response to a deep push manipulation on the shutter button.
  • a frequency of a flickering component is estimated and a process of detecting a timing of a peak is performed. This process is repeatedly performed while the half push manipulation is performed. Note that when the half push manipulation is performed, it is determined whether or not a mode in which the flickerless photographing is performed is set (whether the mode is set to be turned on).
  • a process to be described below is performed.
  • a deep push manipulation on the shutter button is assumed to be performed at a timing TA.
  • The flickering reduction unit 25 notifies the system controller 14 (the shutter control unit 14c) of the timing of a subsequent peak (in this example, TB) of the flickering component.
  • the timing of the peak herein is, for example, a timing obtained immediately before the deep push manipulation is performed.
  • the system controller 14 performs photographing in which an exposure timing is caused to be synchronized with the timing of the peak of the flickering component.
  • The latest timing at which the flickering component peaks is the timing TB.
  • The exposure timing is synchronized with a timing (for example, a timing TC) that is temporally later than the timing TB by a multiple of the period of the flickering component, in consideration of a delay or the like of a process related to still image photographing.
  • the exposure timing may be caused to be synchronized with the timing TB.
  • the photographing in which the exposure timing is caused to be synchronized with the timing of the peak of the flickering component is performed, for example, at a timing at which centers of a shutter speed (exposure time) and a curtain speed match or substantially match the peak of the flickering component.
  • the fact that the centers of the shutter speed and the curtain speed substantially match the peak of the flickering component means that a deviation in the timing is within a range of a predetermined error.
  • a center of gravity (exposure gravity center) of a quadrangle with diagonals indicating an exposure amount matches or substantially matches the peak of the flickering component. Since the exposure timing is normally synchronized with the peak of the flickering component, it is possible to realize the flickerless photographing in which the quality of an image is prevented from deteriorating due to the flickering component.
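The alignment of the exposure gravity center with the flicker peak can be sketched as a start-time computation (a simplified model that places the centroid of the exposure parallelogram at the midpoint of the shutter time plus the curtain travel time; the function name and parameters are illustrative):

```python
def exposure_start_for_peak(peak_time_s, shutter_s, curtain_s):
    """Exposure start time that places the exposure centroid on the peak.

    For a focal-plane shutter, exposure of the whole frame spans the curtain
    travel time plus the per-line shutter (exposure) time, and the centroid
    of that parallelogram sits at the midpoint of the span.
    """
    return peak_time_s - (shutter_s + curtain_s) / 2.0
```

For example, with a flicker peak at 10 ms, a 2-ms shutter time, and a 4-ms curtain travel, the exposure would start at 7 ms so that the 3-ms half-span lands the centroid on the peak.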
  • FIG. 7 is a flowchart illustrating an example of a flow of a process in the flickerless photographing.
  • the system controller 14 determines whether or not a mode in which the flickerless photographing is performed (a flickerless photographing mode) is set.
  • In a case in which the flickerless photographing mode is not set, a process related to normal photographing (herein meaning photographing in which the flickerless photographing process is not performed) is performed. In a case in which the flickerless photographing mode is set, the process proceeds to step ST12.
  • In step ST12, it is determined whether or not the shutter button included in the manipulation unit 18a is pushed halfway.
  • the flickering reduction process is performed on an image captured at the normal frame rate (for example, 50 or 60 fps), and the image subjected to the flickering reduction process is displayed as a through image on the display unit 18b. Note that even when the flickering reduction process is enabled, it is not performed in a case in which no flickering occurs, such as in outdoor photographing, and no flickering component is detected. In a case in which the shutter button is pushed halfway, the process proceeds to step ST13.
  • In step ST13, the flickerless photographing process is performed in response to the half push manipulation of the shutter button.
  • the CMOS image sensor 12 is driven at a high frame rate (for example, 200 or 240 fps), the frequency of the flickering component is estimated using the obtained image data, and a process of detecting a timing at which the peak of the flickering component comes is performed. Note that in a case in which the frequency of the flickering component is estimated in the flickering reduction process, only the process of detecting the timing of the peak of the flickering component may be performed.
  • the system controller 14 is notified of data such as the obtained timing. The foregoing process continues, for example, while the half push manipulation continues. Then, the process proceeds to step ST14.
  • In step ST14, it is determined whether or not a deep push manipulation on the shutter button is performed under an environment in which flickering occurs. In a case in which it is determined that the deep push manipulation on the shutter button is performed under an environment in which no flickering occurs, the process proceeds to step ST15. In step ST15, a still image photographing process in which the flickerless photographing process is not performed is performed. Conversely, in a case in which the deep push manipulation on the shutter button is performed under an environment in which the flickering occurs, the process proceeds to step ST16.
  • In step ST16, the flickerless photographing is performed. That is, the photographing in which the exposure timing is caused to be synchronized with the peak of the flickering component obtained in the process of step ST13 is performed. In this way, photographing in which the quality of a still image is prevented from deteriorating due to the flickering component can be performed.
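  • The branching of the FIG. 7 flow described above can be sketched as follows; the boolean inputs and the returned step labels are simplifications for illustration, not part of the device's interface:

```python
def flickerless_flow(flickerless_mode, half_push, full_push, flicker_detected):
    """Trace which steps of the FIG. 7 flow run for one shutter
    interaction; returns the visited step labels (a simplification)."""
    steps = ["ST11"]
    if not flickerless_mode:
        steps.append("normal photographing")
        return steps
    steps.append("ST12")          # wait for the half push manipulation
    if not half_push:
        return steps
    steps.append("ST13")          # high-frame-rate flicker detection
    steps.append("ST14")          # wait for the deep (full) push
    if not full_push:
        return steps
    # ST16: flickerless photographing; ST15: normal still photographing
    steps.append("ST16" if flicker_detected else "ST15")
    return steps
```

For example, a full interaction in the flickerless photographing mode under a flickering light source visits ST11, ST12, ST13, ST14, and ST16.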
  • the device can be miniaturized, and thus application to products in a wide variety of categories is possible.
  • the flickering reduction process on a through image may not be performed.
  • whether there is flickering may be detected in a method similar to the flickerless photographing process performed in response to a half push manipulation on the shutter button, that is, a flickering reduction process using image data obtained by exposing at an accelerated frame rate.
  • the flickerless photographing process may not be performed or the flickerless photographing process may be performed by delaying the process for a time in which at least one period of the flickering component can be detected.
  • in bracket photographing, consecutive-shot photographing in which still images are consecutively photographed, or the like, photographing caused to be synchronized with a timing of a peak of a flickering component obtained before the consecutive shooting may be performed even for the second and subsequent images. That is, on the basis of a timing of a peak of a flickering component detected before the first exposure, photographing in which the timing of the second and subsequent exposures is caused to be synchronized with a timing of a peak of the flickering component may be performed.
  • a process of strengthening an effect of a process of reducing noise (a noise reduction process) or a process of increasing sensitivity may be performed.
  • the flickerless photographing process may not be performed.
  • the photographing is performed by causing the exposure timing to be synchronized with a timing of a peak of a flickering component.
  • an image obtained through the photographing can be brighter than an image (an image displayed on the display unit 18b) checked by a user in a half push manipulation. Accordingly, a gain control process of decreasing the luminance of the obtained image or the like may be performed.
  • exposure correction is performed on an image obtained through the real photographing (for example, photographing performed in response to full pushing of the shutter button) using amplitude information of a flickering component obtained at the time of detection of the flickering component (for example, while the shutter button is pushed halfway).
  • the exposure correction is controlled by, for example, the system controller 14.
  • an exposure correction amount (a correction amount necessary for the exposure correction) in the photographing can be decided from an amplitude of the flickering component obtained in the detection.
  • the exposure correction may be performed by an amount equivalent to the amplitude of the flickering component during the detection.
  • an exposure correction amount necessary in the photographing can be predicted on the basis of the amplitude of the flickering component obtained in the detection.
  • Intensity of a flickering component shown in an image depends on a shutter speed (an exposure time).
  • An influence of a flickering light source on an image appears as the integral of the blinking of the light source over the exposure time.
  • In a case in which the exposure time is short, the blinking of the light source is shown in the image almost without change.
  • As the exposure time becomes longer, the difference between the brightest portion and the darkest portion of the blinking becomes smaller because of the integration effect, and the blinking completely disappears when the exposure time is an integer multiple of the blinking period of the light source.
  • the horizontal axis represents (1/shutter speed) and the vertical axis represents intensity of normalized flickering (hereinafter appropriately referred to as flickering intensity).
  • the intensity of the normalized flickering is a numerical value expressing the intensity of the flickering for convenience and is also referred to as an amplitude coefficient.
  • the graph of FIG. 8 indicates that the flickering intensity is 0 at an integer multiple of a light source period (for example, 1/100 seconds).
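  • For a flickering component modeled as a pure sinusoid, the amplitude coefficient has the closed sinc form sketched below. The embodiment stores measured or simulated values in a table rather than computing them, so this formula is only an idealized stand-in that reproduces the zeros of the FIG. 8 graph:

```python
import math

def amplitude_coefficient(exposure_time, blink_freq=100.0):
    """Normalized flicker intensity (amplitude coefficient) that remains
    after integrating a sinusoidal 100 Hz flicker over the exposure time.
    It falls to 0 whenever the exposure time is an integer multiple of
    the blinking period, matching the zeros of the FIG. 8 graph."""
    x = math.pi * blink_freq * exposure_time
    if x == 0.0:
        return 1.0
    return abs(math.sin(x) / x)
```

For example, at a shutter speed of 1/1000 seconds the coefficient is about 0.98 (the blinking is almost unattenuated), while at 1/100 seconds it is 0.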
  • the graph illustrated in FIG. 8 is stored as, for example, a table in the system controller 14.
  • the system controller 14 obtains an appropriate exposure correction amount with reference to the table.
  • numerical values described in the table may be numerical values by actual measurement or may be numerical values by simulation.
  • the system controller 14 determines a shutter speed in the detection of the flickering component (hereinafter appropriately referred to as a shutter speed SS1) and a shutter speed in the real photographing (hereinafter appropriately referred to as a shutter speed SS2). These shutter speeds can be determined with reference to setting or the like of the imaging device 100.
  • the system controller 14 obtains the flickering intensity corresponding to the shutter speed SS1 with reference to the table. For example, the flickering intensity corresponding to the shutter speed SS1 is obtained as ⁇ 1. In addition, the system controller 14 obtains the flickering intensity corresponding to the shutter speed SS2 with reference to the table. For example, the flickering intensity corresponding to the shutter speed SS2 is obtained as ⁇ 2.
  • the system controller 14 obtains an amplitude of the flickering component in the real photographing (an amount of blinking of an unnormalized actual flickering component).
  • the amplitude of the flickering component in the real photographing is given in accordance with Expression (1A) below.
  • Amplitude of flickering component in real photographing = amplitude of flickering component obtained in detection × (α2 / α1) ... (1A)
  • the amplitude of the flickering component obtained in the detection can be obtained from an output of the DFT block 50.
  • the system controller 14 sets the amplitude of the flickering component in the real photographing which is a result of Expression (1A) as an exposure correction amount in the real photographing.
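  • The first example can be sketched as follows. The intensity table values below are invented placeholders (the actual table holds measured or simulated values, as described above); only the scaling of Expression (1A) is taken from the text:

```python
# Hypothetical flicker-intensity table: 1/shutter-speed -> normalized
# flicker intensity (the real table in the system controller 14 holds
# measured or simulated values; these numbers are placeholders).
FLICKER_INTENSITY = {1000: 0.98, 500: 0.94, 250: 0.76, 100: 0.0}

def correction_amount(detected_amplitude, inv_ss1, inv_ss2,
                      table=FLICKER_INTENSITY):
    """Expression (1A): amplitude in the real photographing equals the
    amplitude obtained in detection times (alpha2 / alpha1)."""
    alpha1 = table[inv_ss1]   # intensity at the detection shutter speed SS1
    alpha2 = table[inv_ss2]   # intensity at the real shutter speed SS2
    return detected_amplitude * alpha2 / alpha1
```

The resulting amount is then applied with a control value other than the shutter, such as a gain or a diaphragm value.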
  • the exposure correction based on the exposure correction amount is performed with a control value other than the shutter speed.
  • the system controller 14 sets a gain so that the gain is the obtained exposure correction amount.
  • the image obtained in the real photographing is multiplied by the set gain.
  • a gain control unit that performs such gain control may be included in the digital signal processing unit 20 and the digital signal processing unit 20 may operate under the control of the system controller 14.
  • the system controller 14 may control a diaphragm so that a diaphragm value corresponding to the exposure correction amount is set.
  • the gain or the diaphragm value corresponding to the exposure correction amount may be described in, for example, the table or may be obtained by calculation.
  • the second example is an example in which the shutter speed in the real photographing is changed to correspond to the exposure correction amount.
  • when the shutter speed is changed, however, the flickering intensity and thus the correction amount change as well. Accordingly, in the second example, an example of a method of deciding the shutter speed in the real photographing at one time will be described.
  • scales of the horizontal axis and the vertical axis of the graph illustrated in FIG. 8 are converted into, for example, exposure values (EVs) in accordance with a known method, as illustrated in FIG. 10 .
  • a correction amount corresponding to the shutter speed in the detection is ⁇ 2 [EV]
  • a virtual straight line L2 that has a slope of 1 is set from an intersection point P1 between the horizontal axis (where the correction amount is 0) and the shutter speed in the detection.
  • the system controller 14 identifies a shutter speed corresponding to an intersection point P2 between the line L1 and the line L2 and sets the shutter speed as a shutter speed of the real photographing.
  • the shutter speed in the real photographing can be appropriately set and the exposure of an image can be appropriately corrected.
  • the above-described first example and second example related to the exposure correction may be switched in accordance with a mode or the like set in the imaging device 100.
  • the relation (a correlation value) between the shutter speed and the amplitude of the flickering component in the image need not be held as a table and may be obtained by predetermined calculation or the like.
  • the half push manipulation has been exemplified as an example of the preparation manipulation, but the preparation manipulation may be another manipulation such as a manipulation of stopping or substantially stopping the imaging device 100 for a given period or more.
  • the case in which the digital signal processing unit 20 including the flickering reduction unit 25 is configured by hardware has been described, but a part or all of the flickering reduction unit 25 or the digital signal processing unit 20 may be configured by software.
  • a configuration may be adopted in which a plurality of flickering reduction units 25 (for example, two flickering reduction units) are provided, and processing blocks that separately perform the flickering reduction process on a through image and the flickerless photographing process on image data obtained at a high frame rate are set.
  • the present technology is not limited to a fluorescent lamp.
  • the present technology can also be applied to another light source (for example, an LED) as long as the light source blinks with periodicity. In this case, a process of identifying a frequency of an LED may be performed as a preliminary step.
  • the above-described embodiment can also be applied to an imaging device using an image sensor of an XY address scanning type other than the CMOS image sensor, or an image sensor to which a rolling shutter is applied.
  • FIG. 11 is an explanatory diagram illustrating different waveforms for each color of RGB of flickering components in accordance with kinds of flickering light sources.
  • the horizontal axis of the graph represents a time and the vertical axis represents an output level of an image with a Joint Photographic Experts Group (JPEG) format.
  • a solid line indicates a R component
  • a dotted line indicates a G component
  • a one-dot chain line indicates a B component.
  • FIG. 11A illustrates a waveform example of a flickering component of a neutral white fluorescent lamp.
  • FIG. 11B illustrates a waveform example of a flickering component of a three-wavelength neutral white fluorescent lamp.
  • FIG. 11C illustrates a waveform example of a flickering component of a mercury lamp.
  • waveforms differ for each color of RGB of the flickering component in accordance with the kinds of flickering light sources. There is concern of an influence of the characteristics of the flickering light sources on white balance (tone) of an image obtained through a flickerless photographing process.
  • FIG. 12 is an explanatory diagram illustrating a color deviation caused due to flickering of the neutral white fluorescent lamp which is an example of the flickering light source.
  • FIG. 12 illustrates two exposure times in a case in which an exposure timing is caused to be synchronized with a timing of a peak of a flickering component.
  • Ta which is a long exposure time is, for example, 1/100 seconds
  • Tb which is a short exposure time is, for example, 1/1000 seconds.
  • the color of an image obtained for each exposure time is an integrated value obtained by integrating RGB for the exposure time.
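  • The dependence of the integrated color on the exposure time can be checked numerically. The two channel waveforms below are sinusoidal stand-ins with different modulation depths (the per-color curves of FIG. 11 are measured, not sinusoidal), so only the qualitative behavior carries over:

```python
import math

def channel_mean(waveform, center, exposure, samples=2000):
    """Average a periodic channel waveform over an exposure window
    centered on `center` (here, the flicker peak), by the midpoint rule."""
    total = 0.0
    for i in range(samples):
        t = center - exposure / 2 + exposure * (i + 0.5) / samples
        total += waveform(t)
    return total / samples

# Stand-in 100 Hz channel waveforms with different modulation depths
# (placeholders for the per-color curves of FIG. 11, not measured data).
def r_channel(t):
    return 1.0 + 0.8 * math.cos(2 * math.pi * 100 * t)

def b_channel(t):
    return 1.0 + 0.2 * math.cos(2 * math.pi * 100 * t)
```

With these stand-ins, both channel means are about 1.0 over Ta = 1/100 seconds (the flickering integrates away), while over Tb = 1/1000 seconds centered on the peak the R mean (about 1.79) exceeds the B mean (about 1.20); that is, the tone shifts at short exposure times.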
  • since the RGB waveforms differ, the integrated color, that is, the tone of the image, changes with the exposure time. The second embodiment is an embodiment corresponding to this point.
  • FIG. 13 is a diagram illustrating a configuration example of a digital signal processing unit (hereinafter appropriately referred to as a digital signal processing unit 20A) according to the second embodiment. Differences from the digital signal processing unit 20 in the first embodiment will be mainly described.
  • a memory 27A is connected to the white balance adjustment circuit 27.
  • the memory 27A stores a white balance adjustment parameter in accordance with a shutter speed (hereinafter appropriately referred to as a white balance gain).
  • a white balance gain read from the memory 27A under the control of the system controller 14 is set in the white balance adjustment circuit 27. Note that in the embodiment, generation of the white balance gain is controlled by the system controller 14.
  • an imaging device (hereinafter appropriately referred to as an imaging device 100A) according to the second embodiment will be described with reference to the flowcharts of FIGS. 14 and 15.
  • the following three modes can be set as setting related to white balance (WB) in the imaging device 100A:
  • the auto white balance mode is a mode in which a white balance gain is automatically set by the imaging device 100A.
  • the preset white balance mode is a mode in which a plurality of representative light sources (the sun, an electric lamp, a fluorescent lamp, or the like) can be selected and a white balance gain optimum for the selected light source is set.
  • the custom white balance mode is a mode in which a user experimentally photographs a spot with an achromatic color on a wall or the like under a use environment of the imaging device 100A (photographs a test) to acquire a white balance gain in accordance with the result.
  • In step ST21, the user performs a manipulation of changing the setting of the white balance using the manipulation unit 18a. Then, the process proceeds to step ST22.
  • In step ST22, the system controller 14 determines whether or not the auto white balance is set as the setting of the white balance. In a case in which the auto white balance mode is set, the process proceeds to step ST23.
  • In step ST23, for example, the system controller 14 of the imaging device 100A automatically generates a white balance gain and sets the white balance gain in the white balance adjustment circuit 27.
  • the process proceeds to step ST24.
  • In step ST24, the system controller 14 determines whether or not the preset white balance is set as the setting of the white balance. In a case in which the preset white balance mode is set, the process proceeds to step ST25. In step ST25, the white balance gain corresponding to the selected light source is read from the memory 27A and the white balance gain is set in the white balance adjustment circuit 27. In a case in which the set mode of the white balance is not the preset white balance mode in step ST24, the process proceeds to step ST26.
  • In step ST26, the custom white balance mode is set as the mode of the white balance.
  • test photographing is performed to generate (obtain) the white balance gain.
  • display or the like for prompting the test photographing may be performed on the display unit 18b.
  • when the test photographing starts, the user points the imaging device 100A at a spot with an achromatic color and pushes the shutter button. Then, the process proceeds to step ST27.
  • In step ST27, a driving rate of the CMOS image sensor 12 is controlled such that the frame rate is accelerated (for example, 200 or 240 fps). Then, whether there is a flickering component, a frequency, a timing of a peak, and the like are detected. Note that the details of this process have been described in detail in the first embodiment, and thus the repeated description thereof will be omitted. Then, the process proceeds to step ST28.
  • In step ST28, the system controller 14 determines whether or not the flickering component is detected in the process of step ST27. In a case in which no flickering component is detected, the process proceeds to step ST29, and a process of photographing one image of a spot of an achromatic color such as white or gray at an exposure time T1 is performed. Note that the exposure time T1 herein is 1/n seconds (where n is a light source frequency, which is 100 or 120 in many cases) at which no flickering occurs. Then, the process proceeds to step ST30.
  • In step ST30, the system controller 14 generates a white balance gain Wb appropriate for the image data obtained as a result of the test photographing. Then, the process proceeds to step ST31.
  • In step ST31, the white balance gain Wb obtained through the process of step ST30 is stored and preserved in the memory 27A in accordance with the control by the system controller 14.
  • In step ST32, test photographing is performed to photograph one image of a spot of the achromatic color at the exposure time T1.
  • the test photographing is a flickerless photographing process in which the exposure timing is caused to be synchronized with a timing of a peak of the flickering component, as described in the first embodiment. Then, the process proceeds to step ST33.
  • In step ST33, test photographing is performed to photograph one image of a spot of the achromatic color at an exposure time T2, subsequently to the photographing of step ST32.
  • the test photographing is also a flickerless photographing process in which the exposure timing is caused to be synchronized with a timing of a peak of the flickering component.
  • the exposure time T2 is, for example, a highest shutter speed which can be set in the imaging device 100A and is 1/8000 seconds in this example.
  • the test photographing in steps ST32 and ST33 is automatically performed successively, for example, when the user pushes the shutter button once to perform the test photographing, and thus the user does not need to push the shutter button twice. Then, the process proceeds to step ST34.
  • In step ST34, the system controller 14 generates white balance gains WbT1 and WbT2 respectively appropriate for an image A obtained in the test photographing in step ST32 and an image B obtained in the test photographing in step ST33. Then, the process proceeds to step ST35.
  • In step ST35, the white balance gains WbT1 and WbT2 respectively corresponding to the exposure times T1 and T2 are stored and preserved in the memory 27A in association with the exposure times. Note that a white balance gain obtained in the flickerless photographing is stored in association with a flag indicating that fact.
  • In step ST41, after the shutter button is pushed for the real photographing (second photographing) of actually photographing a subject, a process of selecting a white balance gain stored in the memory 27A is performed. Note that this selection process may be performed by the user, or a recent white balance gain may be selected under the control of the system controller 14. Then, the process proceeds to step ST42.
  • In step ST42, it is determined whether or not the white balance gain selected in step ST41 is one obtained in the test photographing in the flickerless photographing. Note that this can be determined by referring to the flag associated with the white balance gain stored in the memory 27A.
  • in a case in which the selected white balance gain is the white balance gain Wb calculated and stored in steps ST30 and ST31 in FIG. 14, a negative determination is made in step ST42 and the process proceeds to step ST43.
  • In step ST43, the selected white balance gain Wb is set in the white balance adjustment circuit 27 as the white balance gain to be used in the real photographing. Then, the process proceeds to step ST44.
  • In step ST44, signal processing such as a white balance adjustment process in accordance with the white balance gain Wb is performed on the image data captured in the photographing process of the real photographing.
  • the image data subjected to various kinds of signal processing are appropriately stored.
  • in a case in which the white balance gain selected in step ST41 is determined in step ST42 to have been obtained in the test photographing in the flickerless photographing, the process proceeds to step ST45.
  • In step ST45, an exposure time (a shutter speed) Tact to be used in the real photographing is acquired. Then, the process proceeds to step ST46.
  • In step ST46, the exposure times T1 and T2 used at the time of the generation of the white balance gains are read from the memory 27A. Then, the process proceeds to step ST47.
  • In step ST48, the white balance gain WbT1 corresponding to the exposure time T1 is read from the memory 27A and is set in the white balance adjustment circuit 27. Then, the process proceeds to step ST44.
  • In step ST44, a photographing process is performed.
  • signal processing such as a white balance adjustment process in accordance with the white balance gain WbT1 is performed on the image data obtained through the real photographing.
  • the image data subjected to various kinds of signal processing are appropriately stored.
  • In step ST50, the white balance gain WbT2 corresponding to the exposure time T2 is read from the memory 27A and is set in the white balance adjustment circuit 27. Then, the process proceeds to step ST44.
  • In step ST44, the photographing process is performed.
  • signal processing such as a white balance adjustment process in accordance with the white balance gain WbT2 is performed on the image data obtained through the real photographing.
  • the image data subjected to various kinds of signal processing are appropriately stored.
  • In step ST51, the system controller 14 generates a white balance gain WbTact corresponding to an exposure time Tact that differs from the exposure times T1 and T2, on the basis of the generated white balance gains WbT1 and WbT2.
  • the white balance gain WbTact can be generated through, for example, a linear interpolation process.
  • T = log2(1 / exposure time)
  • the white balance gain WbTact corresponding to the exposure time Tact is generated through the process of step ST51 and is set in the white balance adjustment circuit 27. Then, the process proceeds to step ST44.
  • In step ST44, the photographing process is performed.
  • signal processing such as a white balance adjustment process to which the white balance gain WbTact set for the image data obtained through the real photographing is applied is performed.
  • the image data subjected to various kinds of signal processing are appropriately stored.
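  • The interpolation of step ST51 can be sketched as follows, assuming a white balance gain can be treated as a scalar per channel; the function name is illustrative:

```python
import math

def interpolate_wb_gain(t_act, t1, wb_t1, t2, wb_t2):
    """Linearly interpolate the white balance gain for an exposure time
    Tact between the gains generated at T1 and T2, on the axis
    T = log2(1 / exposure_time) described for step ST51."""
    x, x1, x2 = (math.log2(1.0 / t) for t in (t_act, t1, t2))
    weight = (x - x1) / (x2 - x1)
    return wb_t1 + (wb_t2 - wb_t1) * weight
```

With T1 = 1/100 seconds and T2 = 1/8000 seconds, an exposure time at the geometric midpoint 1/sqrt(100 × 8000) seconds yields the midpoint gain, since the axis is logarithmic.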
  • the white balance gain process in accordance with the appropriate white balance gain corresponding to the shutter speed can be performed, and thus appropriate color adjustment is possible.
  • the second embodiment can be modified as follows, for example.
  • two images are obtained by performing the test photographing twice and the white balance gain corresponding to each exposure time is generated on the basis of the two images, but the present technology is not limited thereto.
  • three images may be obtained by performing test photographing three times by flickerless photographing in which a center of a curtain speed matches a peak of a flickering component, and white balance gains WbT1, WbT2, and WbT3 corresponding to exposure times may be calculated.
  • test photographing may be performed four or more times. Note that, as illustrated in the example of the drawing, the B component decreases as the exposure time becomes longer. Therefore, for example, the B gain is increased.
  • Exposure times T1, T2, and T3 in FIG. 16 are 1/100 seconds, 1/1000 seconds, and 1/8000 seconds.
  • the exposure times can be appropriately set as long as they are not longer than one period of the flickering component.
  • white balance gains corresponding to exposure times outside the range between the exposure times T1 and T2 may be obtained through extrapolation from the white balance gains WbT1 and WbT2.
  • the test photographing may be performed from test photographing in which an exposure time is short or may be performed from test photographing in which an exposure time is long.
  • a white balance gain corresponding to an exposure time may be generated.
  • metadata (accessory information) associated with an image may include information regarding a shutter speed and a white balance gain corresponding to the shutter speed may be generated.
  • a previously obtained image may be an image stored in the imaging device, may be an image stored in a portable memory, or may be an image downloaded via the Internet or the like.
  • the generated white balance gain may be stored to be used later.
  • in a case in which positional information of the Global Positioning System (GPS) or the like is stored in association with an exposure time (which may be a shutter speed) and a white balance gain, and photographing is performed at the same location and the same exposure time, a process of setting the previous white balance gain at that location or of presenting the white balance gain to the user may be performed.
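  • One way to realize this modification can be sketched as follows; the coordinate quantization (three decimal places, roughly a 100 m grid) and the class shape are assumptions for illustration:

```python
class WbHistory:
    """Remember white balance gains keyed by (quantized GPS position,
    exposure time) and suggest the previous gain for a repeat shot."""

    def __init__(self, places=3):  # ~100 m grid at 3 decimal places
        self._places = places
        self._store = {}

    def _key(self, lat, lon, exposure_time):
        return (round(lat, self._places), round(lon, self._places),
                exposure_time)

    def remember(self, lat, lon, exposure_time, gain):
        self._store[self._key(lat, lon, exposure_time)] = gain

    def suggest(self, lat, lon, exposure_time):
        """Previously stored gain for this location and exposure, or None."""
        return self._store.get(self._key(lat, lon, exposure_time))
```

Quantizing the coordinates is one simple way to treat "the same location"; the appropriate granularity is a design choice.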
  • the timings at which the test photographing and the real photographing are performed are caused to be synchronized with timings of the peaks of the flickering component, but may be synchronized with other timings such as timings of the bottom (which is a portion in which the amplitude is the smallest) of the flickering component.
  • the phase of the flickering component with which the exposure timing is caused to be synchronized may be the same in each photographing.
  • the white balance gains WbT1 and WbT2 corresponding to T1 and T2 may be used as white balance gains corresponding to the shutter speed as long as an error is within a predetermined range.
  • the second embodiment can be applied to any of a mechanical shutter, an electronic shutter, a global shutter, a rolling shutter, and the like and can be applied even in a case in which an image sensor is a charge coupled device (CCD).
  • the light source frequency of the flickering light source is not limited to 100 Hz or 120 Hz; the present technology can also be applied to an LED that blinks at a higher speed.
  • test photographing may be performed in advance in an achromatic chart or the like while an exposure time is changed, and a white balance gain corresponding to the exposure time may be calculated.
  • a subject formed from various colors other than white, such as a Macbeth chart, may be photographed a plurality of times at different exposure times under the flickering light source, and a parameter related to color reproduction at each exposure time may be calculated. That is, in the second embodiment of the present technology, it is possible to control generation of parameters that are related to color adjustment at each exposure time and include at least one of a white balance gain or a parameter related to color reproduction, as well as the white balance gains corresponding to different exposure times.
  • the flickerless photographing may be performed during monitoring of a subject and a white balance gain at each exposure time may be generated for a through image obtained in the photographing.
  • the exposure time in step ST45 may automatically be set by the imaging device.
  • the exposure times T1 and T2 in steps ST32 and ST33 may be set to be different in accordance with a kind of flickering light source or a parameter such as a white balance gain may be generated in accordance with a method suitable for the kind of flickering light source.
  • a white balance gain or the like in accordance with a shutter speed set in the imaging device 100 may be generated.
  • a white balance gain or the like generated in advance may be applied.
  • the system controller 14 generates the white balance gains and the like.
  • another functional block may generate the white balance gains in accordance with control of the system controller 14.
  • present technology may also be configured as below.
  • the imaging device in the above-described embodiments may be embedded in a medical device, a smartphone, a computer device, a game device, a robot, a surveillance camera, or a moving object (a train, an airplane, a helicopter, a small flying body, or the like).
  • the present technology has been described specifically above, but the present technology is not limited to the above-described embodiments and can be modified in various forms based on the technical ideals of the present technology.
  • the configurations, the methods, the processes, the shapes, the materials, the numerical values, and the like exemplified in the above-described embodiments are merely exemplary and other different configurations, methods, processes, shapes, materials, numerical values, and the like may be used as necessary. Configurations for realizing the above-described embodiments and the modification examples may be appropriately added.
  • the present technology is not limited to a device and the present technology can be realized by any form such as a method, a program, and a recording medium on which the program is recorded.

Abstract

An imaging control device including: a control unit configured to perform control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.

Description

    Technical Field
  • The present technology relates to an imaging control device, an imaging control method, and a program.
  • Background Art
  • In fluorescent lamps prevalent as indoor light sources, light emitting diodes (LEDs) that have become more common recently, and the like, so-called flickering in which illumination light periodically blinks due to influences of commercial power frequencies occurs. Technologies related to imaging devices for preventing deterioration in image quality such as color unevenness due to such flickering have been proposed (for example, see Patent Literature 1 below).
  • Citation List Patent Literature
  • Patent Literature 1: JP 2014-220763A
  • Disclosure of Invention Technical Problem
• In the device in Patent Literature 1, a sensor separate from the image sensor (imager) is used to detect flickering. Therefore, it is difficult to miniaturize the device.
  • The present technology is devised in view of such a problem and one object of the present technology is to provide an imaging control device, an imaging control method, and a program capable of preventing deterioration in image quality due to flickering.
  • Solution to Problem
  • In order to solve the above problem, the present technology is, for example, an imaging control device including: a control unit configured to perform control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.
  • In addition, the present technology is, for example, an imaging control method including: performing, by a control unit, control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.
  • In addition, the present technology is, for example, a program causing a computer to perform an imaging control method of performing, by a control unit, control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.
  • Advantageous Effects of Invention
• According to at least one embodiment of the present technology, it is possible to prevent deterioration in image quality due to flickering. Note that the effect described above is not necessarily limitative, and any of the effects described in the present technology may be achieved. In addition, content of the present technology is not construed to be limited by the exemplified effects.
  • Brief Description of Drawings
    • FIG. 1 is a block diagram illustrating a configuration example of an imaging device according to an embodiment of the present technology.
    • FIG. 2 is a block diagram illustrating a configuration example of a digital signal processing unit according to a first embodiment of the present technology.
    • FIG. 3 is a diagram illustrating an example of a flickering component.
    • FIG. 4 is a block diagram illustrating a configuration example of a flickering reduction unit according to an embodiment of the present technology.
    • FIG. 5 is an explanatory diagram illustrating an operation example of an imaging device according to an embodiment of the present technology.
    • FIG. 6 is an explanatory diagram illustrating an example of a flickerless photographing process.
    • FIG. 7 is a flowchart illustrating an example of a flow of a process according to the first embodiment of the present technology.
    • FIG. 8 is an explanatory diagram illustrating an example of a relation between intensity of flickering and a shutter speed.
    • FIG. 9 is an explanatory diagram illustrating an example of control for correcting exposure of an image.
    • FIG. 10 is an explanatory diagram illustrating another example of the control for correcting exposure of an image.
    • FIGS. 11A, 11B, and 11C are diagrams illustrating waveform examples of RGB in accordance with a flickering light source.
    • FIG. 12 is an explanatory diagram illustrating a color deviation caused due to flickering of a neutral white fluorescent lamp which is an example of the flickering light source.
    • FIG. 13 is a block diagram illustrating a configuration example of a digital signal processing unit according to a second embodiment of the present technology.
    • FIG. 14 is a flowchart illustrating an example of a flow of a process according to the second embodiment of the present technology.
    • FIG. 15 is a flowchart illustrating an example of a flow of a process according to the second embodiment of the present technology.
    • FIG. 16 is an explanatory diagram illustrating a modification example.
    Mode(s) for Carrying Out the Invention
  • Hereinafter, embodiments and the like of the present technology will be described with reference to the drawings. Note that the description will be made in the following order.
    <1. First embodiment>
    <2. Second embodiment>
    <3. Other modification examples>
  • The embodiments and the like to be described below are specific preferred examples of the present technology and content of the present technology is not limited to the embodiments and the like.
  • <1. First embodiment> [Configuration example of imaging device] "Overall configuration example"
  • FIG. 1 is a block diagram illustrating a system configuration example of an imaging device (an imaging device 100) according to an embodiment of the present technology. In the imaging device 100, light from a subject is incident on a complementary metal oxide semiconductor (CMOS) image sensor 12 via an imaging optical system 11, the light is photoelectrically converted in the CMOS image sensor 12, and an analog image signal is obtained from the CMOS image sensor 12. For example, the imaging optical system 11 and the CMOS image sensor 12 are included in an imaging unit.
  • In the CMOS image sensor 12, a plurality of pixels including a photodiode (photo-gate), a transmission gate (a shutter transistor), a switching transistor (an address transistor), an amplification transistor, a reset transistor (a reset gate), and the like are arrayed and formed in a 2-dimensional shape, and a vertical scanning circuit, a horizontal scanning circuit, and a video signal output circuit are formed on a CMOS substrate.
  • The CMOS image sensor 12 may be one of a primary color system and a complementary color system, as will be described below. An analog image signal obtained from the CMOS image sensor 12 is a primary color signal of each color of RGB or a color signal of a complementary color system.
  • The analog image signal from the CMOS image sensor 12 is sampled and held for each color signal in the analog signal processing unit 13 configured as an integrated circuit (IC), a gain of the analog image signal is controlled through automatic gain control (AGC), and the analog image signal is converted into a digital signal through analog-to-digital (A/D) conversion.
  • The digital image signal from the analog signal processing unit 13 is processed, as will be described below, in a digital signal processing unit 20 that is configured as an IC and functions as a detection unit. Then, in a flickering reduction unit 25 in the digital signal processing unit 20, a flickering component is reduced for each signal component, as will be described below, and then the signal is finally converted into color difference signals R-Y and B-Y between red and blue and a luminance signal Y to be output from the digital signal processing unit 20.
  • A system controller 14 which is an example of a control unit includes a microcomputer or the like and controls each unit of the imaging device 100.
• Specifically, a lens driving control signal is supplied from the system controller 14 to a lens driving driver 15 configured as an IC, and a lens or an iris of the imaging optical system 11 is driven by the lens driving driver 15.
  • In addition, a timing control signal is supplied from the system controller 14 to a timing generator 16 and various timing signals are supplied from the timing generator 16 to the CMOS image sensor 12 so that the CMOS image sensor 12 is driven.
  • At this time, a shutter speed of the CMOS image sensor 12 is also controlled with a timing control signal from the system controller 14. Specifically, a shutter control unit 14c in the system controller 14 sets a shutter speed.
  • Further, a detection signal of each signal component is captured from the digital signal processing unit 20 to the system controller 14. The analog signal processing unit 13 controls a gain of each color signal, as described above, with an AGC signal from the system controller 14 and the system controller 14 controls signal processing in the digital signal processing unit 20.
  • In addition, a camera-shake sensor 17 is connected to the system controller 14 and camera-shake information obtained from the camera-shake sensor 17 is used to correct camera shake.
  • In addition, a manipulation unit 18a and a display unit 18b included in a user interface 18 are connected to the system controller 14 via an interface 19 including a microcomputer or the like. The system controller 14 detects a setting manipulation, a selection manipulation, or the like in the manipulation unit 18a and the system controller 14 displays a setting state, a control state, or the like of a camera on the display unit 18b. For example, setting regarding whether or not to perform flickerless photographing to be described below can be performed using the manipulation unit 18a.
  • Note that the imaging device 100 may include a storage device. The storage device may be a device such as a hard disk contained in the imaging device 100 or may be a memory such as a Universal Serial Bus (USB) memory which is detachably mounted on the imaging device 100. In addition, the imaging device 100 may include a communication device. Image data, various kinds of setting data, and the like may be transmitted to and received from an external device via the Internet or the like using the communication device. Communication may be performed as wired communication or may be performed as wireless communication. "Configuration example of digital signal processing unit"
    FIG. 2 is a block diagram illustrating a configuration example of the digital signal processing unit 20 in the case of a primary color system. The primary color system is either a three-plate system, in which the imaging optical system 11 in FIG. 1 includes a separation optical system that separates light from a subject into color light of the respective colors of RGB and one CMOS image sensor per color of RGB serves as the CMOS image sensor 12, or a one-plate system, in which a single CMOS image sensor whose light incidence surface has color filters of the respective colors of RGB arranged sequentially and repeatedly pixel by pixel in the screen-horizontal direction serves as the CMOS image sensor 12. In this case, RGB primary color signals are read in parallel from the CMOS image sensor 12.
  • In the digital signal processing unit 20 in FIG. 2, a clamp circuit 21 clamps a black level of the input RGB primary color signals to a predetermined level, a gain adjustment circuit 22 adjusts gains of the RGB primary color signals after the clamping in accordance with an exposure amount, and flickering reduction units 25R, 25G, and 25B reduce flickering components in the RGB primary color signals after the gain adjustment in accordance with a method to be described below. In addition, at the time of photographing, a process is performed to perform flickerless photographing. Note that the flickerless photographing means photographing capable of preventing an influence of flickering occurring from a flickering light source on image quality (reduction in image quality).
  • Further, in the digital signal processing unit 20 in FIG. 2, a white balance adjustment circuit 27 adjusts white balance of the RGB primary color signals after the reduction of the flickering, a gamma correction circuit 28 converts the gray scales of the RGB primary color signals after the adjustment of the white balance, and a synthesis matrix circuit 29 generates the luminance signal Y and the color difference signals R-Y and B-Y of the output from the RGB primary color signals after the gamma correction.
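The synthesis matrix step described above can be illustrated as follows. Note that the patent does not specify the matrix coefficients; the BT.601 luma weights used here are an assumption for illustration.

```python
def synthesis_matrix(r, g, b):
    """Form the luminance signal Y and the color difference signals R-Y and
    B-Y from gamma-corrected RGB primary color signals.

    The 0.299/0.587/0.114 luma weights are the standard BT.601 values,
    assumed here for illustration; the patent leaves the matrix open."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y
```

For a neutral gray input (R = G = B), the color difference signals are zero and Y equals the common level, which is a quick sanity check on any choice of luma weights that sum to 1.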
• In the primary color system, in general, the luminance signal Y is generated after processing of the RGB primary color signals has been completed, as in FIG. 2. Therefore, by reducing the flickering components in the RGB primary color signals during the process on the RGB primary color signals as in FIG. 2, it is possible to sufficiently reduce the flickering components of luminance components and each color component.
  • Here, instead of detecting the flickering component for each primary color signal of each color of RGB by the flickering reduction units 25R, 25G, and 25B and reducing the flickering component as in FIG. 2, for example, a flickering reduction unit 25 may be provided on an output side of the luminance signal Y of the synthesis matrix circuit 29 to detect a flickering component in the luminance signal Y and reduce the flickering component.
  • On the other hand, the complementary color system is a one-plate system that includes one CMOS image sensor in which color filters of a complementary system are formed on a light incidence surface as the CMOS image sensor 12 in FIG. 1.
  • In the complementary color system, video signals at two adjacent horizontal line positions are combined and read from the CMOS image sensor 12. The digital signal processing unit 20 clamps a black level of a complementary signal (a synthesized signal) at a predetermined level, adjusts a gain of the complementary signal after the clamping in accordance with an exposure amount, and further generates a luminance signal and RGB primary color signals from the complementary signal after adjustment of the gain.
  • Then, the flickering component in the luminance signal and the flickering components in the RGB primary color signals are reduced by the flickering reduction unit 25, the gray scale of the luminance signal after the reduction of the flickering is further corrected to obtain the luminance signal Y of the output and the white balance of the RGB primary color signals after the reduction of the flickering is adjusted, the gray scales of the RGB primary color signals after the adjustment of the white balance are converted, and the color difference signals R-Y and B-Y are generated from the RGB primary color signals after the gamma correction.
  • [Operation examples] "Basic operation"
• Next, an operation example of the imaging device 100 will be described. Here, an example in which a still image is photographed will be described. When the imaging device 100 is powered on, an image (a through image) in a moving image aspect is displayed on the display unit 18b (live-view display) at the time of deciding a composition (framing) of a subject before photographing.
• Subsequently, after the subject is decided, a preparation manipulation is performed. The preparation manipulation is a manipulation of preparing to perform photographing and is performed immediately before photographing. The preparation manipulation is, for example, a half push manipulation of pushing a shutter button included in the manipulation unit 18a partially (halfway). When the half push manipulation is performed on the shutter button, for example, a preparation operation of capturing a still image of the subject is performed. Examples of the preparation operation include a detection operation of detecting focus, setting of an exposure control value, light emission of an auxiliary lighting unit, and the like. Note that when the shutter button is released from the half push state, the preparation operation ends.
  • When the shutter button is further pushed from the half push state and the shutter button is fully pushed, the imaging device 100 is instructed to perform photographing and an exposure operation is performed on a subject image (an optical image of the subject) using the CMOS image sensor 12. The analog signal processing unit 13 or the digital signal processing unit 20 performs predetermined signal processing on image data obtained in response to the exposure operation to obtain a still image. The image data corresponding to the obtained still image is appropriately stored in a storage device (not illustrated).
  • Note that the imaging device 100 may photograph a moving image. In a case in which the moving image is captured, for example, when the shutter button is pushed, photographing of the moving image and recording of the moving image are performed. When the shutter button is pushed again, the photographing of the moving image is stopped.
  • "Flickering reduction process"
  • Next, a flickering reduction process or the like in the imaging device 100 will be described. The flickering reduction process is, for example, a process performed on a through image in live-view display. Before the description of the flickering reduction process, an example of a flickering component occurring due to a fluorescent lamp or the like in an NTSC system will be described to facilitate understanding. Note that in this example, a case in which a frame rate is set to 60 frames per second (fps) and a commercial power frequency is set to 50 hertz (Hz) will be described. Characteristics of the flickering component occurring in this case are as follows:
    (1) generated by 5/3 periods in one screen (3 frames (which may be fields) are set as a repetition period);
    (2) a phase is changed for each line; and
    (3) handled as a sinusoidal wave with a frequency (100 Hz) which is twice the commercial power frequency (50 Hz).
• From the foregoing characteristics, a flickering component is generated, as illustrated in FIG. 3, when a flickering phenomenon occurs. Note that scanning is assumed to be performed from the upper side (the upper portion of the screen) to the lower side (the lower portion of the screen) in FIG. 3. In the CMOS image sensor 12, since the exposure timing differs for each horizontal line, the amount of received light changes in accordance with the horizontal line. Accordingly, although the fluorescent lamp or the like illuminates uniformly in space, there are horizontal lines with a video signal value greater than the average value and horizontal lines with a video signal value less than the average value, as in FIG. 3. For example, in the frames of FIG. 3, the flickering component (the amplitude of the flickering component) peaks at the topmost horizontal line in the image, that is, the head line. Further, the flickering component also peaks at the horizontal line shifted from the head line by the number of lines equivalent to 3/5 of the total number of lines in one screen. In this way, the flickering component can be expressed as a sine function (sinusoidal wave) that has an amplitude, a period, and an initial phase, as illustrated in FIG. 3. Note that in this example, the initial phase means the phase in the head line.
  • Further, the phase of each horizontal line changes in accordance with a frame. That is, a horizontal line with a value of a video signal greater than the average value and a horizontal line with a value of a video signal less than the average value are changed for each frame. In a subsequent frame, a sinusoidal wave with a different initial phase is formed. For example, when flickering in the fluorescent lamp is generated at 100 Hz and a frame rate is 60 fps, 5 periods of the flickering in the fluorescent lamp are a time equivalent to 3 frames. Accordingly, the initial phase is the same phase every 3 frames. In this way, the flickering component is changed in accordance with the horizontal line and the frames. Note that in the case of a PAL scheme, that is, a case in which the frame rate is 50 fps and the commercial power frequency is 60 Hz, the flickering component can be expressed as a sinusoidal wave that has a period of 5 frames. An example of a process (an operation) of reducing the flickering component that has the foregoing properties will be described.
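The sinusoidal model described above (a flicker frequency of twice the mains frequency, with a phase that advances line by line and frame by frame and repeats every 3 frames at 60 fps under a 50 Hz supply) can be sketched as follows. The function name and the assumption of evenly spaced line readout times are illustrative, not from the patent.

```python
import math

def flicker_coefficient(line, frame, total_lines,
                        fps=60.0, mains_hz=50.0, amplitude=0.1):
    """Model the flicker component of one horizontal line as a sinusoid.

    The flicker frequency is twice the mains frequency (100 Hz for a 50 Hz
    supply).  Because a CMOS sensor exposes each horizontal line at a
    slightly different time (rolling shutter), the phase advances with the
    line position; it also advances frame to frame."""
    flicker_hz = 2.0 * mains_hz            # e.g. 100 Hz for 50 Hz mains
    frame_time = 1.0 / fps                 # duration of one frame
    line_time = frame_time / total_lines   # assume lines read out evenly
    t = frame * frame_time + line * line_time
    return amplitude * math.sin(2.0 * math.pi * flicker_hz * t)
```

With these defaults the head-line phase repeats every 3 frames (3 frames = 50 ms = 5 periods at 100 Hz), and the line 3/5 of the way down the screen shares the head line's phase, matching characteristic (1) of 5/3 periods per screen.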
  • FIG. 4 is a block diagram illustrating a detailed configuration example of a flickering reduction unit 25. Note that in the following description, input image signals mean RGB primary color signals or a luminance signal before the flickering reduction process input to the flickering reduction unit 25 and output image signals mean the RGB primary color signals or the luminance signal after the flickering reduction process output from the flickering reduction unit 25.
  • The flickering reduction unit 25 includes, for example, a normalized integrated value calculation block 30, an arithmetic block 40, a discrete Fourier transform (DFT) block 50, a flickering generation block 55, and a frequency estimation/peak detection block 60. The normalized integrated value calculation block 30 includes an integration block 31, an integrated value retention block 32, an average value calculation block 33, a difference calculation block 34, and a normalization block 35.
  • The integration block 31 calculates an integrated value Fn(y) by integrating one line of an input image signal In'(x, y) in the horizontal direction of the screen. The calculated integrated value Fn(y) is stored and retained for flickering detection in subsequent frames in the integrated value retention block 32. In a case in which a vertical synchronization frequency is 60 Hz, the integrated value retention block 32 may have a configuration capable of retaining an integrated value equivalent to at least 2 frames.
• The average value calculation block 33 calculates an average AVE[Fn(y)] of three integrated values Fn(y), Fn_1(y), and Fn_2(y). Note that Fn_1(y) is the integrated value of the same line one frame earlier, Fn_2(y) is the integrated value of the same line two frames earlier, and both values are read from the integrated value retention block 32.
  • The difference calculation block 34 calculates a difference between the integrated value Fn(y) supplied from the integration block 31 and the integrated value Fn_1(y) before one frame supplied from the integrated value retention block 32. In the difference Fn(y)-Fn_1(y), an influence of a subject is sufficiently removed. Therefore, the form of a flickering component (a flickering coefficient) is expressed more clearly than in the integrated value Fn(y).
  • Further, the normalization block 35 performs a normalization process of dividing the difference Fn(y)-Fn_1(y) from the difference calculation block 34 by an average value AVE[Fn(y)] from the average value calculation block 33 to calculate a difference gn(y) after the normalization.
• The DFT block 50 performs a discrete Fourier transform on data equivalent to one wavelength (equivalent to L lines) of flickering in the difference gn(y) after the normalization from the normalization block 35. Thus, an amplitude γm and an initial phase Φmn of each order of the flickering component are estimated. Note that the initial phase Φmn is generated in the imaging device 100 and is retained in association with a counter updated at each predetermined time (for example, every 0.5 microseconds (µs)).
• Further, the flickering generation block 55 calculates a flickering coefficient Γn(y) from the estimated values of γm and Φmn from the DFT block 50. Then, the arithmetic block 40 adds 1 to the flickering coefficient Γn(y) from the flickering generation block 55 and applies an inverse gain, that is, divides the input image signal In'(x, y) by the sum [1+Γn(y)]. Thus, the flickering component included in the input image signal In'(x, y) is substantially completely removed, and a signal component In(x, y) with substantially no flickering component is obtained as an output image signal (the RGB primary color signal or the luminance signal after the flickering reduction process) from the arithmetic block 40.
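The flow through the normalized integrated value calculation block and the DFT block can be sketched roughly as below: row sums are averaged over 3 frames (over which the flicker cancels), differenced against the previous frame to remove the subject, normalized, and analyzed with a single-bin Fourier sum over one flicker wavelength of L lines. The function names are illustrative; note also that, in this sketch, the amplitude returned is that of the *differenced* signal, so recovering the true flicker amplitude γm would additionally require compensating the per-frame phase step, which this sketch omits.

```python
import math

def estimate_flicker(curr, prev1, prev2, lines_per_period):
    """Estimate amplitude and phase of the flicker component from per-line
    row sums (integrated values) of three consecutive frames.

    A sketch of the normalize-then-DFT flow; not the patent's exact
    implementation."""
    n = len(curr)
    # Averaging over 3 frames cancels the flicker (its repetition period
    # is 3 frames at 60 fps / 50 Hz), leaving the background component.
    ave = [(curr[y] + prev1[y] + prev2[y]) / 3.0 for y in range(n)]
    # The frame-to-frame difference removes the (static) subject, and
    # normalizing by the average turns it into an illumination ratio.
    g = [(curr[y] - prev1[y]) / ave[y] for y in range(n)]
    # Single-bin discrete Fourier transform at the flicker's line
    # frequency, taken over one flicker wavelength (L lines).
    L = lines_per_period
    re = sum(g[y] * math.cos(2.0 * math.pi * y / L) for y in range(L))
    im = sum(g[y] * math.sin(2.0 * math.pi * y / L) for y in range(L))
    amplitude = 2.0 * math.hypot(re, im) / L
    phase = math.atan2(im, re)
    return amplitude, phase

def remove_flicker(pixel, flicker_coeff):
    """Inverse-gain removal: In'(x, y) = In(x, y) * [1 + Gamma_n(y)],
    so dividing by [1 + Gamma_n(y)] recovers the flicker-free signal."""
    return pixel / (1.0 + flicker_coeff)
```

For a synthetic flicker whose phase steps by 2π·2/3 per frame (the 60 fps / 100 Hz case), the differenced amplitude comes out √3 times the underlying flicker amplitude, which is the compensation factor mentioned above.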
• Through the foregoing flickering reduction process, the presence of flickering can be detected and the quality of a through image can be prevented from deteriorating due to the flickering. Note that the above-described flickering reduction process may also be performed at the time of photographing of a moving image (including recording). Note that in the embodiment, the flickering component is detected for each color of RGB. In this case, the timing of the peak of the color component with the maximum amplitude is detected. Instead, a peak of the luminance signal may be detected.
• Note that the initial phase Φmn calculated in the DFT block 50 is supplied to the frequency estimation/peak detection block 60. The frequency estimation/peak detection block 60 estimates at least a frequency of the flickering component (light source), in other words, a period of the flickering component, on the basis of the input initial phase Φmn. Further, a timing of the peak of the flickering component is detected. For example, the frequency estimation/peak detection block 60 estimates the frequency of the flickering component from a time difference based on the frame rate and a phase difference of the initial phase Φmn. Further, the frequency estimation/peak detection block 60 detects the initial phase Φmn in an initial frame and the timing of the peak of the flickering component from, for example, the counter associated with the initial phase Φmn.
• For example, when the initial phase Φmn is 60 degrees, the timing at which the peak (that is, 90 degrees) of the flickering component approximated to a sinusoidal wave appears can be obtained using the time interval of the counter. The system controller 14 is notified of the information obtained by the frequency estimation/peak detection block 60. Note that the peak of the flickering component is a point at which the amplitude of the flickering component is at its maximum, as described above.
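The phase-to-time conversion in the example above can be written out directly: the flicker is approximated as a sinusoid whose crest sits at 90 degrees, so the residual phase to travel, divided by 360 degrees and scaled by the period, gives the delay until the next peak. The function name and parameters are illustrative.

```python
def next_peak_time(initial_phase_deg, flicker_hz=100.0, now=0.0):
    """Time (in seconds, measured from the moment `now` at which the phase
    was sampled) at which the sinusoidal flicker next reaches its
    90-degree crest.  Illustrative sketch, not the patent's interface."""
    period = 1.0 / flicker_hz
    # Degrees remaining until the 90-degree crest, wrapped into [0, 360).
    remaining = (90.0 - initial_phase_deg) % 360.0
    return now + (remaining / 360.0) * period
```

With the text's example of a 60-degree initial phase and a 100 Hz component, the peak arrives 30/360 of a 10 ms period later, i.e. about 0.83 ms after the measurement.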
  • In this way, even when a separate sensor or the like is not provided, characteristics (the period of the flickering component, the timing of the peak, or the like) of the flickering component can be detected on the basis of an imaging result (a photographed image obtained via the imaging unit) by the imaging unit. Therefore, it is possible to prevent cost from increasing due to an increase in the number of components. In addition, it is possible to miniaturize the imaging device. Note that the process of obtaining the characteristics of the flickering component is not limited to the above-described method, and a known method can be applied.
  • "Flickerless photographing process"
  • Next, a flickerless photographing process will be described. The flickerless photographing process is, for example, a process performed in a case in which a flickerless photographing mode is set in the imaging device 100.
  • In the above-described flickering reduction process, a background component is extracted from an average using an image of a plurality of frames (for example, 3 frames). Therefore, in a case in which the frame rate matches a blinking period of a flickering light source such as a fluorescent lamp, it is difficult to separate a background from flicker, and thus it is difficult to detect the flicker. In addition, the image of the plurality of frames is used. Therefore, in a case in which a still image is captured in flickerless photographing, it is difficult to apply the above-described flickering reduction process without change. Accordingly, in a case in which the still image is captured in the flickerless photographing, a flickerless photographing process to be described below is performed.
• First, a process of switching the frame rate to a speed higher than the frame rate at the normal time is performed. The frame rate after the switching is, for example, N times the frequency of the light source (that is, a frequency greater than the frequency (100 Hz or 120 Hz) of the flickering component) and is preferably such that one period of the flickering component falls within a frame. For example, N=4, that is, 200 fps (in the case of a 50 Hz light source) or 240 fps (in the case of a 60 Hz light source), is set.
• Note that the frequency of the flickering light source may be obtained from a setting made by a user or may be set automatically on the basis of a result of the above-described flickering reduction process in live-view display. That is, in a case in which a flickering component is detected at the frame rate of 60 fps in the flickering reduction process, the frequency of the light source is determined to be 50 Hz, and in a case in which a flickering component is detected at the frame rate of 50 fps, the frequency of the light source is determined to be 60 Hz (flickering synchronized with the frame rate produces no frame-to-frame change and therefore is not detected by this process). This result may be used in the flickerless photographing process. In addition, whether there is flickering may be detected in the flickerless photographing process.
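The reasoning behind this automatic determination is that flickering whose frequency is an integer multiple of the frame rate has the same phase in every frame, so the frame-difference detection cannot see it: a component actually detected at 60 fps therefore points to a 100 Hz (50 Hz mains) source, and one detected at 50 fps to a 120 Hz (60 Hz mains) source. A hypothetical helper (names and interface are assumptions, not the patent's):

```python
from typing import Optional

def mains_frequency_hz(detected_at_60fps: bool,
                       detected_at_50fps: bool) -> Optional[int]:
    """Infer the mains frequency of a flickering light source from which
    frame rate reveals a beat in the flickering reduction process.

    Flicker locked to the frame rate (120 Hz at 60 fps, or 100 Hz at
    50 fps) is invisible to the 3-frame difference detection, so only the
    non-synchronized combination produces a detection."""
    if detected_at_60fps:
        return 50   # 100 Hz flicker beats against 60 fps -> 50 Hz mains
    if detected_at_50fps:
        return 60   # 120 Hz flicker beats against 50 fps -> 60 Hz mains
    return None     # no flickering light source found
```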
  • A timing at which the frame rate is switched can appropriately be set, and is preferably a timing immediately before photographing. For example, the frame rate is switched when a manipulation of pushing the shutter button halfway, which is a photographing preparation manipulation, is performed. More specifically, a manipulation signal in accordance with the manipulation of pushing the shutter button halfway is supplied to the system controller 14 via the interface 19. The system controller 14 controls the timing generator 16 such that the CMOS image sensor 12 is driven and the frame rate is accelerated.
  • When the frame rate is accelerated, a repetition period of the flickering component is changed. For example, in a case in which the frame rate is 200 fps, the repetition period of the flickering component is 20 frames. In a case in which the frame rate is 240 fps, the repetition period of the flickering component is 12 frames.
• Image data can be obtained on the basis of the accelerated frame rate. The obtained image data is processed by the analog signal processing unit 13 and input to the digital signal processing unit 20. The image data obtained at the high frame rate is similarly subjected to the above-described flickering reduction process by the flickering reduction unit 25. Further, in this process, the initial phase Φmn which is an output from the DFT block 50 is input to the frequency estimation/peak detection block 60 of the flickering reduction unit 25.
• The frequency estimation/peak detection block 60 estimates at least a frequency (period) of the flickering component (light source) on the basis of the input initial phase Φmn and further detects a timing of a peak of the flickering component.
• FIG. 5 is a diagram summarizing the above-described process. An image captured at a normal frame rate (for example, 50 or 60 fps) is subjected to the flickering reduction process and the image subjected to the flickering reduction process is displayed as a through image on the display unit 18b. Then, when the shutter button is pushed halfway, the frame rate is switched to a high speed (for example, 200 or 240 fps) and frequency estimation and a peak detection process are performed along with the flickering reduction process. Then, the image subjected to the flickering reduction process is displayed on the display unit 18b. Note that from the viewpoint of reducing the data bandwidth of the display system, power consumption, and the like, the display unit 18b performs display based on image data obtained by thinning out some of the obtained image data. Then, when the shutter button is deeply pushed, photographing is performed.
  • FIG. 6 is an explanatory diagram illustrating a process in photographing performed in response to a deep push manipulation on the shutter button. As described above, when the manipulation of pushing the shutter button halfway is performed, a frequency of a flickering component is estimated and a process of detecting a timing of a peak is performed. This process is repeatedly performed while the half push manipulation is performed. Note that when the half push manipulation is performed, it is determined whether or not a mode in which the flickerless photographing is performed is set (whether the mode is set to be turned on). Here, in a case in which the mode in which the flickerless photographing is set, a process to be described below is performed.
  • In FIG. 6, for example, a deep push manipulation on the shutter button is assumed to be performed at a timing TA. In response to the deep push manipulation, the flickering reduction unit 25 notifies the system controller 14 (the shutter control unit 14c) of the timing of a subsequent peak (in this example, TB) of the flickering component. Note that the timing of the peak herein is, for example, a timing obtained immediately before the deep push manipulation is performed.
  • The system controller 14 performs photographing in which the exposure timing is caused to be synchronized with the timing of the peak of the flickering component. Note that in the example illustrated in FIG. 6, the latest timing at which the flickering component has a peak is the timing TB. In this example, in consideration of a delay or the like of a process related to still image photographing, the exposure timing is caused to be synchronized with a timing (for example, a timing TC) that temporally follows the timing TB by an integer multiple of the period of the flickering component. When the process is in time, the exposure timing may be caused to be synchronized with the timing TB.
  • The photographing in which the exposure timing is caused to be synchronized with the timing of the peak of the flickering component is performed, for example, at a timing at which centers of a shutter speed (exposure time) and a curtain speed match or substantially match the peak of the flickering component. The fact that the centers of the shutter speed and the curtain speed substantially match the peak of the flickering component means that a deviation in the timing is within a range of a predetermined error. Thus, in FIG. 6, a center of gravity (exposure gravity center) of a quadrangle with diagonals indicating an exposure amount matches or substantially matches the peak of the flickering component. Since the exposure timing is normally synchronized with the peak of the flickering component, it is possible to realize the flickerless photographing in which the quality of an image is prevented from deteriorating due to the flickering component.
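  • The alignment of the exposure gravity center with a flicker peak described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; all names, and the simple peak model in which peaks occur at the detected timing plus integer multiples of the period, are assumptions.

```python
def next_synchronized_exposure_start(detected_peak_s: float,
                                     flicker_period_s: float,
                                     now_s: float,
                                     processing_delay_s: float,
                                     shutter_speed_s: float,
                                     curtain_speed_s: float) -> float:
    """Return an exposure start time whose exposure gravity center (center of
    the exposure time plus curtain travel) coincides with a future peak of the
    flickering component, given a processing delay before exposure can begin."""
    # Exposure gravity center relative to the exposure start.
    center_offset = (shutter_speed_s + curtain_speed_s) / 2.0

    # Earliest moment at which exposure could begin, given the processing delay.
    earliest_start = now_s + processing_delay_s

    # Peaks are assumed at detected_peak_s + k * flicker_period_s; pick the
    # first peak whose aligned start time is not earlier than earliest_start.
    k = 0
    while detected_peak_s + k * flicker_period_s - center_offset < earliest_start:
        k += 1
    return detected_peak_s + k * flicker_period_s - center_offset
```

For example, with a 100 Hz flicker (period 0.01 s), a peak detected at t = 0, and a 2 ms processing delay, the exposure is deferred to the next peak rather than the one already underway.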
  • "Flow of process"
  • FIG. 7 is a flowchart illustrating an example of a flow of a process in the flickerless photographing. In step ST11, the system controller 14 determines whether or not a mode in which the flickerless photographing is performed (a flickerless photographing mode) is set. Here, in a case in which it is determined that the flickerless photographing mode is not set, a process related to normal photographing (herein meaning photographing in which the flickerless photographing process is not performed) is performed in a subsequent process. In a case in which it is determined in step ST11 that the flickerless photographing mode is set, the process proceeds to step ST12.
  • In step ST12, it is determined whether or not the shutter button included in the manipulation unit 18a is pushed halfway. In a case in which the shutter button is not pushed halfway, the flickering reduction process is performed on an image captured at the normal frame rate (for example, 50 or 60 fps) and the image subjected to the flickering reduction process is displayed as a through image on the display unit 18b. Note that the flickering reduction process is not performed in a case in which no flickering occurs, such as in outdoor photographing, and no flickering component is detected. In a case in which the shutter button is pushed halfway, the process proceeds to step ST13.
  • In step ST13, the flickerless photographing process is performed in response to the half push manipulation of the shutter button. Specifically, the CMOS image sensor 12 is driven at a high frame rate (for example, 200 or 240 fps), the frequency of the flickering component is estimated using the obtained image data, and a process of detecting a timing at which the peak of the flickering component comes is performed. Note that in a case in which the frequency of the flickering component is estimated in the flickering reduction process, only the process of detecting the timing of the peak of the flickering component may be performed. The system controller 14 is notified of data such as the obtained timing. The foregoing process continues, for example, while the half push manipulation continues. Then, the process proceeds to step ST14.
  • In step ST14, it is determined whether or not the deep push manipulation on the shutter button is performed under an environment in which flickering occurs. In a case in which it is determined that the deep push manipulation on the shutter button is performed under an environment in which no flickering occurs, the process proceeds to step ST15. In step ST15, a still image photographing process in which the flickerless photographing process is not performed is performed. Conversely, in a case in which the deep push manipulation on the shutter button is performed under an environment in which flickering occurs, the process proceeds to step ST16.
  • In step ST16, the flickerless photographing is performed. That is, the photographing in which the exposure timing is caused to be synchronized with the peak of the flickering component obtained in the process of step ST13 is performed. In this manner, photographing in which the quality of a still image is prevented from deteriorating due to the flickering component can be performed.
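  • The decision flow of FIG. 7 can be condensed into the following sketch; the boolean inputs and return strings are hypothetical stand-ins for the mode setting (step ST11) and the flicker detection result (step ST14).

```python
def select_photographing(flickerless_mode_on: bool,
                         flicker_detected: bool) -> str:
    """Return which photographing path the flow of FIG. 7 selects."""
    if not flickerless_mode_on:          # ST11: flickerless photographing mode not set
        return "normal photographing"
    # Steps ST12/ST13 (half push, high-frame-rate peak detection) occur here.
    if not flicker_detected:             # ST14 -> ST15: no flickering environment
        return "still image without flickerless process"
    return "flickerless photographing"   # ST16: exposure synchronized with the peak
```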
  • [Advantageous effects of first embodiment]
  • According to the above-described first embodiment, the following exemplary advantageous effects can be obtained.
  • Since it is not necessary to provide a sensor or the like for detecting a flickering component, the device can be miniaturized, and thus application to products in a wide variety of categories is possible.
  • Since the frequency estimation process or the like for a flickering component is performed in accordance with the sufficient number of samplings based on an image based on a high frame rate, it is possible to improve precision of a processing result.
  • Since a still image is photographed in accordance with a timing of a peak of flicker, it is possible to photograph an image with no variation in color or brightness without being limited by a shutter speed.
  • [Modification examples of first embodiment]
  • The above-described first embodiment can be modified as follows, for example.
  • The flickering reduction process on a through image may not be performed. In this case, whether there is flickering may be detected by a method similar to the flickerless photographing process performed in response to a half push manipulation on the shutter button, that is, a flickering reduction process using image data obtained by exposure at an accelerated frame rate.
  • In addition, in a case in which the deep push operation on the shutter button is performed without performing the half push manipulation, the flickerless photographing process may not be performed or the flickerless photographing process may be performed by delaying the process for a time in which at least one period of the flickering component can be detected.
  • In the case of bracket photographing, consecutive shooting in which still images are consecutively photographed, or the like, the second and subsequent images may also be photographed in synchronization with the timing of a peak of the flickering component obtained before the consecutive shooting. That is, on the basis of a timing of a peak of the flickering component detected before the first exposure, photographing in which the timings of the second and subsequent exposures are caused to be synchronized with timings of peaks of the flickering component may be performed.
  • In a case in which a frame rate is accelerated, a process of strengthening an effect of a process of reducing noise (a noise reduction process) or a process of increasing sensitivity may be performed. In addition, in a case in which luminance is detected and ambient brightness is equal to or less than a threshold or the like, the flickerless photographing process may not be performed.
  • In the above-described embodiment, the photographing is performed by causing the exposure timing to be synchronized with a timing of a peak of a flickering component. For example, an image obtained through the photographing can be brighter than an image (an image displayed on the display unit 18b) checked by a user in a half push manipulation. Accordingly, a gain control process of decreasing the luminance of the obtained image or the like may be performed.
  • Through the flickerless photographing, image quality can be prevented from deteriorating due to a flickering component. However, as described above, an image obtained through the photographing can be overexposed. Accordingly, for example, exposure correction is performed on an image obtained through the real photographing (for example, photographing performed in response to full pushing of the shutter button) using amplitude information of a flickering component obtained at the time of detection of the flickering component (for example, while the shutter button is pushed halfway). The exposure correction is controlled by, for example, the system controller 14. In a case in which the shutter speed in the detection is the same as the shutter speed in the photographing, an exposure correction amount (a correction amount necessary for the exposure correction) in the photographing can be decided from the amplitude of the flickering component obtained in the detection. For example, the exposure correction may be performed by an amount equivalent to the amplitude of the flickering component during the detection. Conversely, in a case in which the shutter speed in the detection is different from the shutter speed in the photographing, the exposure correction amount necessary in the photographing can be predicted on the basis of the amplitude of the flickering component obtained in the detection.
  • Hereinafter, the exposure correction will be described specifically. The intensity of a flickering component shown in an image (which is also the intensity of the blinking of the flickering and corresponds to the amplitude of the flicker) depends on the shutter speed (the exposure time). The influence of a flickering light source on an image appears as an integration of the blinking of the light source over the exposure time. In a case in which the exposure time is short, the blinking of the light source is shown in the image without change. As the exposure time becomes longer, the difference between the brightest portion and the darkest portion of the blinking becomes smaller because of the integration effect, and the blinking completely disappears by the integration when the exposure time is an integer multiple of the blinking period of the light source. When the relation (a correlation value) between the shutter speed and the flickering component in the image is illustrated, the line L1 in the graph of FIG. 8 is formed.
  • In the graph of FIG. 8, the horizontal axis represents (1/shutter speed) and the vertical axis represents intensity of normalized flickering (hereinafter appropriately referred to as flickering intensity). Note that the intensity of the normalized flickering is a numerical value expressing the intensity of the flickering for convenience and is also referred to as an amplitude coefficient. The graph of FIG. 8 indicates that the flickering intensity is 0 at an integer multiple of a light source period (for example, 1/100 seconds). The graph illustrated in FIG. 8 is stored as, for example, a table in the system controller 14. The system controller 14 obtains an appropriate exposure correction amount with reference to the table. Note that numerical values described in the table may be numerical values by actual measurement or may be numerical values by simulation.
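  • For illustration, under the common simplifying assumption that the flickering waveform is sinusoidal, a curve like the line L1 (normalized flickering intensity versus exposure time) follows the magnitude of a sinc function, reaching 0 at integer multiples of the light source period. The sketch below shows that assumed model only; it is not the table actually stored in the system controller 14, whose values may come from measurement or simulation.

```python
import math

def flicker_intensity(exposure_time_s: float, light_period_s: float = 0.01) -> float:
    """Normalized flickering intensity (amplitude coefficient) remaining after
    integrating a sinusoidal flicker over the exposure time. The value is 1
    for an instantaneous exposure and 0 at integer multiples of the period."""
    x = exposure_time_s / light_period_s
    if x == 0.0:
        return 1.0  # limiting value: no attenuation for a point exposure
    return abs(math.sin(math.pi * x) / (math.pi * x))
```

Under this model, for a 100 Hz light source (period 1/100 s), a 1/1000 s exposure retains roughly 98% of the flicker amplitude, while a 1/100 s exposure cancels it entirely.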
  • A first example (a control example) in which the exposure correction amount is calculated will be described with reference to FIG. 9. The system controller 14 determines a shutter speed in the detection of the flickering component (hereinafter appropriately referred to as a shutter speed SS1) and a shutter speed in the real photographing (hereinafter appropriately referred to as a shutter speed SS2). These shutter speeds can be determined with reference to setting or the like of the imaging device 100.
  • The system controller 14 obtains the flickering intensity corresponding to the shutter speed SS1 with reference to the table. For example, the flickering intensity corresponding to the shutter speed SS1 is obtained as α1. In addition, the system controller 14 obtains the flickering intensity corresponding to the shutter speed SS2 with reference to the table. For example, the flickering intensity corresponding to the shutter speed SS2 is obtained as α2.
  • Subsequently, the system controller 14 obtains an amplitude of the flickering component in the real photographing (an amount of blinking of the unnormalized actual flickering component). The amplitude of the flickering component in the real photographing is given in accordance with Expression (1A) below.
    Amplitude of flickering component in real photographing = amplitude of flickering component obtained in detection × α2/α1     ... (1A)
  • Note that the amplitude of the flickering component obtained in the detection can be obtained from an output of the DFT block 50.
  • The system controller 14 sets the amplitude of the flickering component in the real photographing, which is the result of Expression (1A), as the exposure correction amount in the real photographing. In the first example, the exposure correction based on the exposure correction amount is performed with a control value other than the shutter speed. For example, the system controller 14 sets a gain corresponding to the obtained exposure correction amount. The image obtained in the real photographing is multiplied by the set gain. A gain control unit that performs such gain control may be included in the digital signal processing unit 20, and the digital signal processing unit 20 may operate under the control of the system controller 14. In addition, the system controller 14 may control a diaphragm so that a diaphragm value corresponding to the exposure correction amount is set. The gain or the diaphragm value corresponding to the exposure correction amount may be described in, for example, the table or may be obtained by calculation.
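  • A minimal sketch of the first example follows, assuming the table of FIG. 8 is available as a mapping from shutter speed to normalized flickering intensity; the dictionary layout and its values in the usage below are hypothetical.

```python
def exposure_correction_amount(intensity_table: dict,
                               ss_detection: float,
                               ss_real: float,
                               detected_amplitude: float) -> float:
    """Expression (1A): scale the amplitude measured at the detection shutter
    speed SS1 to the real-photographing shutter speed SS2 via the normalized
    flickering intensities alpha1 and alpha2 read from the table."""
    alpha1 = intensity_table[ss_detection]   # flickering intensity for SS1
    alpha2 = intensity_table[ss_real]        # flickering intensity for SS2
    return detected_amplitude * alpha2 / alpha1
```

The result is then applied through a gain or a diaphragm value, rather than by changing the shutter speed.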
  • Next, a second example in which the correction amount of the exposure correction (hereinafter appropriately referred to as an exposure correction amount) is calculated will be described with reference to FIG. 10. The second example is an example in which the shutter speed in the real photographing is changed to correspond to the exposure correction amount. When the shutter speed is made faster by the exposure correction, the required correction amount changes accordingly. Accordingly, in the second example, an example of a method of deciding the shutter speed in the real photographing in a single step will be described.
  • First, the scales of the horizontal axis and the vertical axis of the graph illustrated in FIG. 8 are converted into, for example, exposure values (EVs) in accordance with a known method, as illustrated in FIG. 10. For example, when the correction amount corresponding to the shutter speed in the detection is β2 [EV], the shutter speed is made faster by a value corresponding to the correction amount, that is, by β1 satisfying β1 [EV] = β2 [EV]. For example, a virtual straight line L2 that has a slope of 1 is set from an intersection point P1 between the horizontal axis (where the correction amount is 0) and the shutter speed in the detection.
  • Then, the system controller 14 identifies a shutter speed corresponding to an intersection point P2 between the line L1 and the line L2 and sets the shutter speed as a shutter speed of the real photographing. In accordance with the foregoing method, the shutter speed in the real photographing can be appropriately set and the exposure of an image can be appropriately corrected. Note that the above-described first example and second example related to the exposure correction may be switched in accordance with a mode or the like set in the imaging device 100. In addition, the relation (a correlation value) between the shutter speed and the amplitude of the flickering component in the image may not be the table, but may be obtained by predetermined calculation or the like.
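  • The intersection of the lines L1 and L2 can be found numerically, for example by bisection. The sketch below assumes the EV-converted correlation of FIG. 10 is available as a function correction_ev(t) of t = log2(1/shutter speed); both that function and the search bounds are assumptions for illustration.

```python
def solve_real_shutter(correction_ev, t_detection: float,
                       t_max: float, steps: int = 60) -> float:
    """Find t = log2(1/shutter speed) at the intersection point P2 between the
    correction curve L1 and the slope-1 line L2 from P1 = (t_detection, 0),
    i.e. the shutter speed whose required correction is self-consistent."""
    lo, hi = t_detection, t_max
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        # g(t) = L1(t) - L2(t); while the line L2 is still below the curve L1,
        # the shutter must be made faster (t increased).
        if correction_ev(mid) - (mid - t_detection) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With a constant correction of 0.5 EV, for example, the self-consistent shutter speed is simply 0.5 EV faster than the detection shutter speed.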
  • Other modification examples will be described. When the shutter speed in the imaging device 100 is longer than a predetermined value, the obtained waveform of the flickering component is integrated and approximates a sinusoidal wave. In particular, when the shutter speed is longer than one period (1/100 or 1/120 seconds) of the flickering component, the phase of the flickering component is reversed. Accordingly, in a case in which the setting of the shutter speed is checked and the shutter speed is longer than one period of the flickering component, the flickerless photographing process may not be performed, or a process of correcting the timing of a peak in accordance with the shift of the phase (for example, a shift of 180 degrees) or the like may be performed. In addition, in a case in which a shutter speed longer than one period of the flickering component is set, a process of notifying the user that flickerless photographing may not be performed may be performed.
  • In the above-described embodiment, the half push manipulation has been exemplified as an example of the preparation manipulation, but the preparation manipulation may be another manipulation such as a manipulation of stopping or substantially stopping the imaging device 100 for a given period or more.
  • In addition, in the above-described embodiment, the case in which the digital signal processing unit 20 including the flickering reduction unit 25 is configured by hardware has been described, but a part or all of the flickering reduction unit 25 or the digital signal processing unit 20 may be configured by software. In addition, a configuration in which a plurality of flickering reduction units 25 (for example, two flickering reduction units) are provided, with separate processing blocks performing the flickering reduction process on a through image and the flickerless photographing process on image data obtained at a high frame rate, may be adopted.
  • In the above-described embodiment, the example in which a fluorescent lamp is exemplified as the light source in which flickering occurs has been described, but the present technology is not limited to a fluorescent lamp. The present technology can also be applied to another light source (for example, an LED) as long as the light source blinks with periodicity. In this case, a process of identifying a frequency of an LED may be performed as a preliminary step.
  • Further, the above-described embodiment can also be applied to an imaging device using, other than the CMOS image sensor, an image sensor of an XY address scanning type or an image sensor to which a rolling shutter is applied.
  • <2. Second embodiment>
  • Next, a second embodiment of the present technology will be described. Note that the factors described in the first embodiment (the configuration, the function, and the like of the imaging device 100) can be applied to the second embodiment unless otherwise mentioned.
  • "Color deviation in accordance with flickering light source"
  • In the first embodiment, the flickerless photographing process of preventing image quality from deteriorating due to a flickering component has been described on the assumption of the case in which photographing is performed under a light source causing flickering (a flickering light source). Incidentally, among flickering light sources, there are many light sources in which the waveforms of the flickering components differ for each color of RGB in accordance with the kind of flickering light source. FIG. 11 is an explanatory diagram illustrating different waveforms for each color of RGB of flickering components in accordance with kinds of flickering light sources. In each of the drawings of FIGS. 11A, 11B, and 11C, the horizontal axis of the graph represents time and the vertical axis represents an output level of an image with a Joint Photographic Experts Group (JPEG) format. In addition, in the drawings, a solid line indicates an R component, a dotted line indicates a G component, and a one-dot chain line indicates a B component.
  • FIG. 11A illustrates a waveform example of a flickering component of a neutral white fluorescent lamp. FIG. 11B illustrates a waveform example of a flickering component of a three-wavelength neutral white fluorescent lamp. FIG. 11C illustrates a waveform example of a flickering component of a mercury lamp. As illustrated, it can be understood that waveforms differ for each color of RGB of the flickering component in accordance with the kinds of flickering light sources. There is concern of an influence of the characteristics of the flickering light sources on white balance (tone) of an image obtained through a flickerless photographing process.
  • FIG. 12 is an explanatory diagram illustrating a color deviation caused due to flickering of the neutral white fluorescent lamp, which is an example of the flickering light source. FIG. 12 illustrates two exposure times in a case in which the exposure timing is caused to be synchronized with the timing of a peak of the flickering component. Ta, which is a long exposure time, is, for example, 1/100 seconds, and Tb, which is a short exposure time, is, for example, 1/1000 seconds. The color of the image obtained for each exposure time is determined by the integrated values obtained by integrating RGB over the exposure time. When the same white balance process is performed on the two obtained images, there is concern that the colors of the two images obtained by the white balance will differ, since the integrated values of RGB differ. The second embodiment addresses this point.
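  • The dependence of the integrated RGB values on the exposure time can be illustrated with a toy model in which each channel is a 1 + sin waveform with a per-channel phase offset. The waveform, phases, and channel assignment below are assumptions for illustration, not the measured characteristics of any actual lamp.

```python
import math

def mean_channel_level(phase: float, exposure_s: float,
                       period_s: float = 0.01) -> float:
    """Mean level of a 1 + sin(2*pi*t/period + phase) flicker waveform over an
    exposure window centered at t = 0 (the synchronized exposure timing).
    Closed form of the integral: 1 + sin(phase) * sin(x)/x with x = pi*e/T."""
    if exposure_s == 0.0:
        return 1.0 + math.sin(phase)
    x = math.pi * exposure_s / period_s
    return 1.0 + math.sin(phase) * math.sin(x) / x

# With R peaking at the exposure center (phase pi/2) and B offset (phase pi/4),
# the R:B ratio depends on the exposure time, so a single white balance gain
# cannot fit both Ta = 1/100 s and Tb = 1/1000 s.
ratio_long = mean_channel_level(math.pi / 2, 0.01) / mean_channel_level(math.pi / 4, 0.01)
ratio_short = mean_channel_level(math.pi / 2, 0.001) / mean_channel_level(math.pi / 4, 0.001)
```

In this model the full-period exposure Ta averages every channel to the same level (ratio 1), while the short exposure Tb preserves the per-channel differences near the peak, which is exactly the color deviation FIG. 12 describes.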
  • "Configuration example of digital signal processing unit"
  • FIG. 13 is a diagram illustrating a configuration example of a digital signal processing unit (hereinafter appropriately referred to as a digital signal processing unit 20A) according to the second embodiment. Differences from the digital signal processing unit 20 in the first embodiment will be mainly described. A memory 27A is connected to the white balance adjustment circuit 27. The memory 27A stores a white balance adjustment parameter in accordance with a shutter speed (hereinafter appropriately referred to as a white balance gain). A white balance gain read from the memory 27A under the control of the system controller 14 is set in the white balance adjustment circuit 27. Note that in the embodiment, generation of the white balance gain is controlled by the system controller 14.
  • "Operation example"
  • Next, an operation example of an imaging device (hereinafter appropriately referred to as an imaging device 100A) according to the second embodiment will be described with reference to the flowcharts of FIGS. 14 and 15. Note that the following three modes can be set as setting related to white balance (WB) in the imaging device 100A:
    • an auto white balance mode;
    • a preset white balance mode; and
    • a custom white balance mode.
  • Of the three modes, the auto white balance mode is a mode in which a white balance gain is automatically set by the imaging device 100A. The preset white balance mode is a mode in which one of a plurality of representative light sources (the sun, an electric lamp, a fluorescent lamp, or the like) can be selected and a white balance gain optimum for the selected light source is set. The custom white balance mode is a mode in which the user experimentally photographs a spot with an achromatic color on a wall or the like under the use environment of the imaging device 100A (performs test photographing) to acquire a white balance gain in accordance with the result.
  • In the flow of FIG. 14, the user performs a manipulation of changing setting of the white balance using the manipulation unit 18a in step ST21. Then, the process proceeds to step ST22.
  • In step ST22, the system controller 14 determines whether or not the auto white balance is set as the setting of the white balance. In a case in which the auto white balance mode is set, the process proceeds to step ST23. In step ST23, for example, the system controller 14 of the imaging device 100A automatically generates a white balance gain and sets the white balance gain in the white balance adjustment circuit 27. In a case in which the set mode of the white balance is not the auto white balance mode in step ST22, the process proceeds to step ST24.
  • In step ST24, the system controller 14 determines whether or not the preset white balance is set as the setting of the white balance. In a case in which the preset white balance mode is set, the process proceeds to step ST25. In step ST25, the white balance gain corresponding to the selected light source is read from the memory 27A and the white balance gain is set in the white balance adjustment circuit 27. In a case in which the set mode of the white balance is not the preset white balance mode in step ST24, the process proceeds to step ST26.
  • In step ST26, the custom white balance mode is set as the mode of the white balance. Thus, test photographing is performed to generate (obtain) the white balance gain. Note that at this time, display or the like for prompting the test photographing may be performed on the display unit 18b. The test photographing starts and the user turns the imaging device 100A to a spot with an achromatic color and pushes the shutter button. Then, the process proceeds to step ST27.
  • In step ST27, a driving rate of the CMOS image sensor 12 is controlled such that the frame rate is accelerated (for example, 200 or 240 fps). Then, whether there is a flickering component, a frequency, a timing of a peak, and the like are detected. Note that the details of this process have been described in detail in the first embodiment, and thus the repeated description thereof will be omitted. Then, the process proceeds to step ST28.
  • In step ST28, the system controller 14 determines whether or not a flickering component is detected in the process of step ST27. In a case in which no flickering component is detected, the process proceeds to step ST29, and a process of photographing one image of a spot of an achromatic color such as white or gray at an exposure time T1 is performed. Note that the exposure time T1 herein is 1/n seconds (where n is the light source frequency, which is 100 or 120 in many cases), at which no flickering occurs. Then, the process proceeds to step ST30.
  • In step ST30, the system controller 14 generates a white balance gain Wb appropriate for image data obtained in a result of the test photographing. Then, the process proceeds to step ST31. In step ST31, the white balance gain Wb obtained through the process of step ST30 is stored and preserved in the memory 27A in accordance with the control by the system controller 14.
  • In a case in which the flickering component is detected in step ST28, the process proceeds to step ST32. In step ST32, test photographing is performed to photograph one image of a spot of the achromatic color at the exposure time T1. The test photographing is a flickerless photographing process in which the exposure timing is caused to be synchronized with a timing of a peak of the flickering component, as described in the first embodiment. Then, the process proceeds to step ST33.
  • In step ST33, test photographing is performed to photograph one image of a spot of the achromatic color at an exposure time T2 subsequently to the photographing of step ST32. The test photographing is also a flickerless photographing process in which the exposure timing is caused to be synchronized with a timing of a peak of the flickering component. Note that the exposure time T2 is, for example, a highest shutter speed which can be set in the imaging device 100A and is 1/8000 seconds in this example. Note that the test photographing in steps ST32 and ST33 is automatically performed successively, for example, when the user pushes the shutter button once to perform the test photographing, and thus the user does not need to push the shutter button twice. Then, the process proceeds to step ST34.
  • In step ST34, the system controller 14 generates white balance gains WbT1 and WbT2 respectively appropriate for an image A obtained in the test photographing in step ST32 and an image B obtained in the test photographing in step ST33. Then, the process proceeds to step ST35.
  • In step ST35, the white balance gains WbT1 and WbT2 respectively corresponding to the exposure times T1 and T2 are stored and preserved in the memory 27A in association with the exposure times. Note that the white balance gain obtained in the flickerless photographing is stored in association with a flag indicating the above fact.
  • Next, a process after the custom white balance mode is selected and the test photographing is performed will be described with reference to the flowchart of FIG. 15.
  • In step ST41, after the shutter button is pushed for the real photographing (second photographing) of actually photographing a subject, a process of selecting the white balance gain stored in the memory 27A is performed. Note that this selection process may be performed by the user or a recent white balance gain may be selected under the control of the system controller 14. Then, the process proceeds to step ST42.
  • In step ST42, it is determined whether or not the white balance gain selected in step ST41 is one obtained in the test photographing in the flickerless photographing. Note that whether or not the selected white balance gain is obtained in the test photographing in the flickerless photographing can be determined by referring to the flag associated with the white balance gain stored in the memory 27A. Here, in a case in which the selected white balance gain is the white balance gain Wb calculated and stored in steps ST30 and ST31 in FIG. 14, a negative determination is made in step ST42 and the process proceeds to step ST43.
  • In step ST43, the selected white balance gain Wb is set as the white balance gain to be used in the real photographing in the white balance adjustment circuit 27. Then, the process proceeds to step ST44.
  • In step ST44, signal processing such as a white balance adjustment process in accordance with the white balance gain Wb is performed on the image data captured in the photographing process of the real photographing. The image data subjected to various kinds of signal processing are appropriately stored.
  • In a case in which the white balance gain selected in step ST41 is obtained in the test photographing in the flickerless photographing in step ST42, the process proceeds to step ST45. In step ST45, an exposure time (a shutter speed) Tact to be used in photographing is acquired. Then, the process proceeds to step ST46.
  • In step ST46, the exposure times T1 and T2 used at the time of the generation of the white balance gains are read from the memory 27A. Then, the process proceeds to step ST47.
  • In step ST47, it is determined whether or not Tact=T1 is satisfied. In a case in which Tact=T1 is satisfied, the process proceeds to step ST48.
  • In step ST48, the white balance gain WbT1 corresponding to the exposure time T1 is read from the memory 27A and is set in the white balance adjustment circuit 27. Then, the process proceeds to step ST44.
  • In step ST44, a photographing process is performed. For example, signal processing such as a white balance adjustment process in accordance with the white balance gain WbT1 is performed on the image data obtained through the real photographing. The image data subjected to various kinds of signal processing are appropriately stored.
  • In a case in which Tact=T1 is not satisfied in step ST47, the process proceeds to step ST49. In step ST49, it is determined whether or not Tact=T2 is satisfied. In a case in which Tact=T2 is satisfied, the process proceeds to step ST50.
  • In step ST50, the white balance gain WbT2 corresponding to the exposure time T2 is read from the memory 27A and is set in the white balance adjustment circuit 27. Then, the process proceeds to step ST44.
  • In step ST44, the photographing process is performed. For example, signal processing such as a white balance adjustment process in accordance with the white balance gain WbT2 is performed on the image data obtained through the real photographing. The image data subjected to various kinds of signal processing are appropriately stored.
  • In a case in which Tact=T2 is not satisfied in step ST49, the process proceeds to step ST51. In step ST51, the system controller 14 generates a white balance gain WbTact corresponding to an exposure time Tact different from the exposure times T1 and T2, on the basis of the generated white balance gains WbT1 and WbT2. The white balance gain WbTact can be generated through, for example, a linear interpolation process.
  • An example of linear interpolation will be described. Note that the following example obtains gains for R and B, but a gain for G may also be included.
  • When the white balance gain (R, B) of the exposure time T1 is (Rt1, Bt1) and the white balance gain (R, B) of the exposure time T2 is (Rt2, Bt2), the white balance gain (Rtx, Btx) of the exposure time Tact can be expressed with Expressions (1a) and (1b) below.

    Rtx = Rt1 + α(Rt2 - Rt1) ... (1a)
    Btx = Bt1 + α(Bt2 - Bt1) ... (1b)
  • Here, α is an interpolation coefficient and α=(Tact-T1)/(T2-T1) is satisfied.
  • Note that the relationship between T and an exposure time is given by Expression (1c) below.

    T = log2(1 / exposure time) ... (1c)
  • Note that the above-described interpolation calculation example is merely exemplary and the present technology is not limited thereto.
  • The white balance gain WbTact corresponding to the exposure time Tact is generated through the process of step ST51 and is set in the white balance adjustment circuit 27. Then, the process proceeds to step ST44.
  • In step ST44, the photographing process is performed. For example, signal processing such as a white balance adjustment process to which the set white balance gain WbTact is applied is performed on the image data obtained through the real photographing. The image data subjected to various kinds of signal processing are appropriately stored.
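The branch of steps ST47 to ST51 can be sketched in code. The following Python fragment is a minimal illustration; the function names, the example gain values, and the choice to evaluate α in the log2 domain of Expression (1c) are assumptions of this sketch, not definitions from the embodiment.

```python
import math

def t_value(exposure_time):
    # Expression (1c): T = log2(1 / exposure time)
    return math.log2(1.0 / exposure_time)

def select_wb_gain(t_act, t1, wb_t1, t2, wb_t2):
    """Return the white balance gain (R, B) for exposure time t_act.

    wb_t1 and wb_t2 are (R, B) gain pairs obtained at exposure
    times t1 and t2 in the test photographing.
    """
    if t_act == t1:          # step ST47 -> ST48
        return wb_t1
    if t_act == t2:          # step ST49 -> ST50
        return wb_t2
    # Step ST51: linear interpolation per Expressions (1a) and (1b),
    # with α = (Tact - T1) / (T2 - T1) computed on the T values of
    # Expression (1c).
    alpha = (t_value(t_act) - t_value(t1)) / (t_value(t2) - t_value(t1))
    r = wb_t1[0] + alpha * (wb_t2[0] - wb_t1[0])
    b = wb_t1[1] + alpha * (wb_t2[1] - wb_t1[1])
    return (r, b)
```

For instance, `select_wb_gain(1/200, 1/100, (1.8, 2.0), 1/1000, (1.9, 2.4))` returns a gain pair lying between the two measured pairs.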
  • [Advantageous effect of second embodiment]
  • According to the above-described second embodiment, even in a case in which the shutter speed is variable in the flickerless photographing, the white balance adjustment process can be performed with an appropriate white balance gain corresponding to the shutter speed, and thus appropriate color adjustment is possible.
  • [Modification examples of second embodiment]
  • The second embodiment can be modified as follows, for example.
  • In the above-described second embodiment, two images are obtained by performing the test photographing twice and the white balance gain corresponding to each exposure time is generated on the basis of the two images, but the present technology is not limited thereto. As illustrated in FIG. 16, for example, three images may be obtained by performing test photographing three times by flickerless photographing in which the center of the curtain speed matches a peak of the flickering component, and white balance gains WbT1, WbT2, and WbT3 corresponding to the respective exposure times may be calculated. In addition, the test photographing may be performed four or more times. Note that, as illustrated in the example of the drawing, the B component decreases as the exposure time becomes longer; the B gain is therefore increased, for example.
  • The exposure times T1, T2, and T3 in FIG. 16 are 1/100 seconds, 1/1000 seconds, and 1/8000 seconds. The exposure times can be set appropriately as long as they are not longer than one period of the flickering component. In addition, white balance gains corresponding to exposure times outside the range from T1 to T2 may be obtained through extrapolation from the white balance gains WbT1 and WbT2 corresponding to the exposure times T1 and T2. The test photographing may be performed in order from the shortest exposure time or from the longest exposure time.
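With three or more measured gains as in FIG. 16, the two-point interpolation of Expressions (1a) and (1b) extends naturally to a piecewise-linear lookup. The sketch below is illustrative only: the table values, the clamping at the ends of the range, and the function name are assumptions, not part of the embodiment.

```python
import math

def wb_gain_from_table(t_act, table):
    """Piecewise-linear interpolation of (R, B) gains over a table of
    (exposure_time, (r_gain, b_gain)) pairs, e.g. gains WbT1..WbT3
    measured at 1/100 s, 1/1000 s and 1/8000 s.

    Interpolation is done on the T values of Expression (1c).
    Exposure times outside the table range are clamped to the
    nearest end (one could instead extrapolate).
    """
    # Sort by T = log2(1/exposure time); shorter exposures sort last.
    pts = sorted((math.log2(1.0 / t), g) for t, g in table)
    t = math.log2(1.0 / t_act)
    if t <= pts[0][0]:
        return pts[0][1]
    if t >= pts[-1][0]:
        return pts[-1][1]
    for (ta, ga), (tb, gb) in zip(pts, pts[1:]):
        if ta <= t <= tb:
            a = (t - ta) / (tb - ta)
            return tuple(x + a * (y - x) for x, y in zip(ga, gb))
```

The example table below follows the drawing's tendency of a larger B gain at longer exposure times.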
  • A white balance gain corresponding to an exposure time may be generated on the basis of a plurality of images obtained in previous flickerless photographing. For example, metadata (accessory information) associated with an image may include information regarding a shutter speed, and a white balance gain corresponding to the shutter speed may be generated. In addition, the generated white balance gain may be stored so that it can be used later. A previously obtained image may be an image stored in the imaging device, an image stored in a portable memory, or an image downloaded via the Internet or the like.
  • In the above-described second embodiment, the generated white balance gain may be stored to be used later. In addition, in a case in which positional information of Global Positioning System (GPS) or the like is stored in association with an exposure time (which may be a shutter speed) and a white balance gain and photographing is performed at the same location and the same exposure time, a process of setting a previous white balance gain at that location or presenting the white balance gain to a user may be performed.
  • In the above-described second embodiment, the timings at which the test photographing and the real photographing are performed are synchronized with the timings of the peaks of the flickering component, but they may be synchronized with other timings such as the timings of the bottom (the portion in which the amplitude is smallest) of the flickering component. The phase of the flickering component with which the exposure timing is synchronized may be the same in each photographing.
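On one reading, synchronizing the exposure timing with a peak (or bottom) of the flickering component amounts to scheduling the exposure so that its center falls on the next peak at or after the current time. A minimal sketch under that reading; the scheduling function and its arguments are hypothetical, not taken from the embodiment.

```python
import math

def next_exposure_start(now, t_peak, period, t_exp, at_bottom=False):
    """Return the start time of the next exposure whose center falls on
    a peak (or, if at_bottom, on a bottom) of the flickering component.

    t_peak : timestamp of any detected peak of the flickering component
    period : flickering period (e.g. 0.01 s under a 100 Hz light source)
    t_exp  : exposure time (assumed not to exceed one flickering period)
    """
    # A bottom sits half a period after a peak.
    target = t_peak + (period / 2.0 if at_bottom else 0.0)
    # Earliest center time that is not before 'now' plus half the exposure.
    n = math.ceil((now + t_exp / 2.0 - target) / period)
    center = target + max(n, 0) * period
    return center - t_exp / 2.0
```

For a 100 Hz flicker (period 0.01 s) with a peak detected at t = 0, an exposure of 2 ms requested at t = 3 ms would start at 9 ms so that its center lands on the peak at 10 ms.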
  • In the above-described second embodiment, even when the shutter speed does not precisely match T1 or T2 in the processes of steps ST47 and ST49, the white balance gains WbT1 and WbT2 corresponding to T1 and T2 may be used as the white balance gains corresponding to the shutter speed as long as the error is within a predetermined range.
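Such approximate matching might look like the following; the 5% relative tolerance is an illustrative placeholder for the "predetermined range", and the function name is hypothetical.

```python
def matches_exposure(t_act, t_ref, rel_tol=0.05):
    """Treat exposure time t_act as matching t_ref (as in steps ST47 and
    ST49) when the relative error is within a predetermined range."""
    return abs(t_act - t_ref) <= rel_tol * t_ref
```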
  • The second embodiment can be applied to any of a mechanical shutter, an electronic shutter, a global shutter, a rolling shutter, and the like and can be applied even in a case in which an image sensor is a charge coupled device (CCD).
  • The light source frequency of the flickering light source is not limited to 100 Hz or 120 Hz; the present technology can also be applied to an LED that blinks at a high speed.
  • In the above-described second embodiment, even in a case in which a moving image is photographed under a flickering light source, test photographing may be performed in advance on an achromatic chart or the like while the exposure time is changed, and a white balance gain corresponding to each exposure time may be calculated.
  • In the above-described second embodiment, a subject formed from various colors other than white, such as a Macbeth chart, may be photographed a plurality of times at different exposure times under the flickering light source, and a parameter related to color reproduction at each exposure time may be calculated. That is, in the second embodiment of the present technology, it is possible to control generation of parameters that are related to color adjustment at each exposure time and include at least one of a white balance gain or a parameter related to color reproduction, in addition to the white balance gains corresponding to different exposure times.
  • In the above-described second embodiment, the flickerless photographing may be performed during monitoring of a subject and a white balance gain at each exposure time may be generated for a through image obtained in the photographing.
  • In the above-described second embodiment, the exposure time in step ST45 may automatically be set by the imaging device.
  • In the above-described second embodiment, the exposure times T1 and T2 in steps ST32 and ST33 may be set to be different in accordance with a kind of flickering light source or a parameter such as a white balance gain may be generated in accordance with a method suitable for the kind of flickering light source.
  • Before the real photographing, a white balance gain or the like in accordance with a shutter speed set in the imaging device 100 may be generated. In a case in which the real photographing is performed at the shutter speed, a white balance gain or the like generated in advance may be applied.
  • In the above-described second embodiment, the system controller 14 generates the white balance gains and the like. However, another functional block may generate the white balance gains in accordance with control of the system controller 14.
  • <3. Other modification examples>
  • Additionally, the present technology may also be configured as below.
    (1) An imaging control device including:
      a control unit configured to perform control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.
    (2) The imaging control device according to (1), in which the period of the flickering component and the timing of the peak of the flickering component are detected on the basis of an imaging result by an imaging unit.
    (3) The imaging control device according to (1) or (2), in which the control unit controls an imaging unit for switching to a second frame rate which is greater than a first frame rate, and the period of the flickering component and the timing of the peak of the flickering component are detected on the basis of image data based on the second frame rate.
    (4) The imaging control device according to (3), in which the control unit performs control for switching the frame rate in response to a preparation manipulation of preparing to perform photographing.
    (5) The imaging control device according to (4), in which the preparation manipulation is a manipulation of pushing a shutter button halfway.
    (6) The imaging control device according to any one of (1) to (5), further including:
      • a detection unit configured to detect whether there is the flickering component,
      • in which the detection unit detects the period of the flickering component and the timing of the peak of the flickering component.
    (7) The imaging control device according to any one of (3) to (5), in which the second frame rate is N times (where N is an integer equal to or greater than 3) the first frame rate.
    (8) The imaging control device according to (7),
      in which the first frame rate is 50 or 60 frames per second (fps), and
      the second frame rate is 200 or 240 fps.
    (9) The imaging control device according to (2), including:
      the imaging unit.
    (10) The imaging control device according to any one of (1) to (9), in which the control unit successively performs the control such that the exposure timing is synchronized with the timing of the peak of the flickering component in accordance with setting of photographing.
    (11) The imaging control device according to (10), in which, on the basis of a timing of a peak of the flickering component detected before first exposure, the control is successively performed such that the exposure timing is synchronized.
    (12) The imaging control device according to any one of (1) to (11), in which the control unit does not perform the control in a case in which ambient brightness is equal to or less than a predetermined value.
    (13) The imaging control device according to any one of (3) to (5), in which an additional process is performed in a case in which the frame rate is switched.
    (14) The imaging control device according to (13), in which the additional process is at least one of a process of increasing sensitivity or a process of strengthening a noise reduction effect.
    (15) The imaging control device according to any one of (1) to (14), in which the control unit performs control such that exposure of an image obtained in response to the control is corrected.
    (16) The imaging control device according to (15), in which the control unit controls the exposure of the image on the basis of a correction value obtained from a relation between intensity of flickering and a shutter speed.
    (17) The imaging control device according to (15), in which the control unit controls the exposure of the image by obtaining a shutter speed in the control on the basis of a correction amount of the flickering component and a shutter speed in the detection and setting the shutter speed.
    (18) The imaging control device according to any one of (1) to (17), in which the control unit determines whether or not to perform the control in accordance with a set shutter speed.
    (19) An imaging control method including:
      performing, by a control unit, control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.
    (20) A program causing a computer to perform an imaging control method of performing, by a control unit, control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on the basis of a period of the flickering component and the timing of the peak of the flickering component.
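Configurations (2), (3), (7), and (8) describe detecting the period and the peak timing of the flickering component from image data captured at a higher frame rate (e.g., 200 or 240 fps). One possible sketch of such detection fits quadrature (sin/cos) components of the frame-mean luminance at candidate light-source frequencies; the quadrature-fit method and all names here are assumptions of this sketch, not the method specified by the embodiments.

```python
import math

def detect_flicker(luma, fps, candidates=(100.0, 120.0)):
    """Estimate the flickering period and the timing of its peak from a
    sequence of frame-mean luminance values sampled at a high frame
    rate.  Fits quadrature components at each candidate light-source
    frequency and keeps the strongest one.

    Returns (period, t_peak), where t_peak is the first peak at or
    after t = 0 (the time of the first sample).
    """
    n = len(luma)
    mean = sum(luma) / n
    best = None
    for f in candidates:
        c = sum((y - mean) * math.cos(2 * math.pi * f * i / fps)
                for i, y in enumerate(luma))
        s = sum((y - mean) * math.sin(2 * math.pi * f * i / fps)
                for i, y in enumerate(luma))
        amp = math.hypot(c, s)
        if best is None or amp > best[0]:
            # Model: luma(t) ~ mean + A * cos(2*pi*f*t - phase)
            best = (amp, f, math.atan2(s, c))
    _, f, phase = best
    period = 1.0 / f
    t_peak = (phase / (2 * math.pi)) % 1.0 * period
    return period, t_peak
```

With one second of 240 fps samples under a 100 Hz flicker, both the 100-cycle signal and the 120 Hz candidate complete whole numbers of cycles, so the fit cleanly separates the two candidates.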
  • The imaging device in the above-described embodiments may be embedded in a medical device, a smartphone, a computer device, a game device, a robot, a surveillance camera, or a moving object (a train, an airplane, a helicopter, a small flying body, or the like).
  • The embodiments of the present technology have been described specifically above, but the present technology is not limited to the above-described embodiments and can be modified in various forms based on the technical ideas of the present technology. For example, the configurations, methods, processes, shapes, materials, numerical values, and the like exemplified in the above-described embodiments are merely exemplary, and other configurations, methods, processes, shapes, materials, numerical values, and the like may be used as necessary. Configurations for realizing the above-described embodiments and the modification examples may be combined as appropriate. In addition, the present technology is not limited to a device and can be realized in any form such as a method, a program, or a recording medium on which the program is recorded.
  • Reference Signs List
  • 100
    imaging device
    11
    imaging optical system
    12
    CMOS image sensor
    14
    system controller
    20
    digital signal processing unit
    25, 25R, 25G, 25B
    flickering reduction unit
    27
    white balance adjustment circuit

Claims (20)

  1. An imaging control device comprising:
    a control unit configured to perform control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on a basis of a period of the flickering component and the timing of the peak of the flickering component.
  2. The imaging control device according to claim 1, wherein the period of the flickering component and the timing of the peak of the flickering component are detected on a basis of an imaging result by an imaging unit.
  3. The imaging control device according to claim 1, wherein the control unit controls an imaging unit for switching to a second frame rate which is greater than a first frame rate, and the period of the flickering component and the timing of the peak of the flickering component are detected on a basis of image data based on the second frame rate.
  4. The imaging control device according to claim 3, wherein the control unit performs control for switching the frame rate in response to a preparation manipulation of preparing to perform photographing.
  5. The imaging control device according to claim 4, wherein the preparation manipulation is a manipulation of pushing a shutter button halfway.
  6. The imaging control device according to claim 1, further comprising:
    a detection unit configured to detect whether there is the flickering component,
    wherein the detection unit detects the period of the flickering component and the timing of the peak of the flickering component.
  7. The imaging control device according to claim 3, wherein the second frame rate is N times (where N is an integer equal to or greater than 3) the first frame rate.
  8. The imaging control device according to claim 7,
    wherein the first frame rate is 50 or 60 frames per second (fps), and
    the second frame rate is 200 or 240 fps.
  9. The imaging control device according to claim 2, comprising:
    the imaging unit.
  10. The imaging control device according to claim 1, wherein the control unit successively performs the control such that the exposure timing is synchronized with the timing of the peak of the flickering component in accordance with setting of photographing.
  11. The imaging control device according to claim 10, wherein, on a basis of a timing of a peak of the flickering component detected before first exposure, the control is successively performed such that the exposure timing is synchronized.
  12. The imaging control device according to claim 1, wherein the control unit does not perform the control in a case in which ambient brightness is equal to or less than a predetermined value.
  13. The imaging control device according to claim 3, wherein an additional process is performed in a case in which the frame rate is switched.
  14. The imaging control device according to claim 13, wherein the additional process is at least one of a process of increasing sensitivity or a process of strengthening a noise reduction effect.
  15. The imaging control device according to claim 1, wherein the control unit performs control such that exposure of an image obtained in response to the control is corrected.
  16. The imaging control device according to claim 15, wherein the control unit controls the exposure of the image on a basis of a correction value obtained from a relation between intensity of flickering and a shutter speed.
  17. The imaging control device according to claim 15, wherein the control unit controls the exposure of the image by obtaining a shutter speed in the control on a basis of a correction amount of the flickering component and a shutter speed in the detection and setting the shutter speed.
  18. The imaging control device according to claim 1, wherein the control unit determines whether or not to perform the control in accordance with a set shutter speed.
  19. An imaging control method comprising:
    performing, by a control unit, control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on a basis of a period of the flickering component and the timing of the peak of the flickering component.
  20. A program causing a computer to perform an imaging control method of performing, by a control unit, control such that an exposure timing is synchronized with a timing of a peak of a detected flickering component on a basis of a period of the flickering component and the timing of the peak of the flickering component.
EP17813042.3A 2016-06-15 2017-05-08 Imaging control device, imaging control method, and program Pending EP3474537A4 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2016118535 2016-06-15
JP2016179442 2016-09-14
PCT/JP2017/017403 WO2017217137A1 (en) 2016-06-15 2017-05-08 Imaging control device, imaging control method, and program

Publications (2)

Publication Number Publication Date
EP3474537A1 true EP3474537A1 (en) 2019-04-24
EP3474537A4 EP3474537A4 (en) 2019-05-22

Family

ID=60664004

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17813042.3A Pending EP3474537A4 (en) 2016-06-15 2017-05-08 Imaging control device, imaging control method, and program

Country Status (4)

Country Link
US (1) US10771713B2 (en)
EP (1) EP3474537A4 (en)
JP (1) JPWO2017217137A1 (en)
WO (1) WO2017217137A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6911850B2 (en) * 2016-06-15 2021-07-28 ソニーグループ株式会社 Image processing equipment, image processing methods and programs
JP2019220764A (en) 2018-06-15 2019-12-26 オリンパス株式会社 Acquisition method, program, and imaging apparatus
KR20200016559A (en) * 2018-08-07 2020-02-17 삼성전자주식회사 Apparatus and method for generating a moving image data including multiple sections image of the electronic device
JP6956894B2 (en) * 2018-09-27 2021-11-02 富士フイルム株式会社 Image sensor, image sensor, image data processing method, and program
WO2021095257A1 (en) * 2019-11-15 2021-05-20 オリンパス株式会社 Image capture device, method for reducing color non-uniformity due to flickering, and program for reducing color non-uniformity
US20230026669A1 (en) * 2019-12-10 2023-01-26 Gopro, Inc. Image sensor with variable exposure time and gain factor
CN114143470A (en) * 2020-09-04 2022-03-04 华为技术有限公司 Method, device and program product for adjusting exposure time of camera
WO2022153682A1 (en) * 2021-01-12 2022-07-21 ソニーグループ株式会社 Imaging device, imaging control method, and program
WO2022172639A1 (en) * 2021-02-09 2022-08-18 ソニーグループ株式会社 Imaging device, imaging method, and program

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4173457B2 (en) 2004-03-12 2008-10-29 富士フイルム株式会社 Imaging apparatus and control method thereof
US20120154628A1 (en) * 2010-12-20 2012-06-21 Samsung Electronics Co., Ltd. Imaging device and method
JP2012134663A (en) 2010-12-20 2012-07-12 Samsung Electronics Co Ltd Imaging apparatus and imaging method
KR101919479B1 (en) * 2012-05-02 2018-11-19 삼성전자주식회사 Apparatus and method for detecting flicker in a camera module
JP5814865B2 (en) * 2012-06-20 2015-11-17 株式会社 日立産業制御ソリューションズ Imaging device
JP6060824B2 (en) * 2013-06-20 2017-01-18 株式会社Jvcケンウッド Imaging apparatus and flicker reduction method
JP6220225B2 (en) * 2013-10-30 2017-10-25 キヤノン株式会社 IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, PROGRAM, AND RECORDING MEDIUM
JP2015097326A (en) * 2013-11-15 2015-05-21 キヤノン株式会社 Flicker-less photographing device
US9648249B2 (en) * 2013-11-20 2017-05-09 Canon Kabushiki Kaisha Image capturing apparatus, method of controlling the same, and storage medium
JP6391352B2 (en) 2014-08-07 2018-09-19 キヤノン株式会社 Imaging apparatus, control method, program, and storage medium
JP2016092786A (en) * 2014-11-11 2016-05-23 キヤノン株式会社 Imaging apparatus

Also Published As

Publication number Publication date
US20190215434A1 (en) 2019-07-11
JPWO2017217137A1 (en) 2019-04-11
US10771713B2 (en) 2020-09-08
WO2017217137A1 (en) 2017-12-21
EP3474537A4 (en) 2019-05-22

Similar Documents

Publication Publication Date Title
EP3474537A1 (en) Imaging control device, imaging control method, and program
JP6950698B2 (en) Imaging control device, imaging control method and imaging device
JP5035025B2 (en) Image processing apparatus, flicker reduction method, imaging apparatus, and flicker reduction program
US8081242B2 (en) Imaging apparatus and imaging method
US20090310885A1 (en) Image processing apparatus, imaging apparatus, image processing method and recording medium
JP2010114834A (en) Imaging apparatus
JP2007174537A (en) Imaging apparatus
JP2005223898A (en) Image processing method and imaging apparatus
JP2007028043A (en) Digital camera
EP3474544B1 (en) Image processing device, image processing method, and program
JP5048599B2 (en) Imaging device
JP2008228185A (en) Imaging apparatus
JP2007208833A (en) Digital camera and image processing method
JP2016134753A (en) Image processing system, information processing method, and program
JP3943613B2 (en) Imaging device and lens unit
EP4280590A1 (en) Imaging device, imaging control method, and program
JP2012227744A (en) Imaging apparatus
US10848684B2 (en) Imaging control device, imaging control method, and imaging system
JP5597124B2 (en) Image signal processing device
JP3959707B2 (en) Imaging device and white balance control method of imaging device
JP2004112403A (en) Imaging apparatus
JP2011160049A (en) Video intercom system
JP2002290824A (en) Digital camera
JP2005117535A (en) Image pickup device and image pickup method
JP2007020020A (en) Video signal processor

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20181208

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20190423

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 5/353 20110101ALI20190415BHEP

Ipc: H04N 5/235 20060101AFI20190415BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20191217

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SONY GROUP CORPORATION