WO2018008426A1 - Signal processing device and method, and imaging device - Google Patents


Info

Publication number
WO2018008426A1
Authority
WO
WIPO (PCT)
Prior art keywords
brightness
correction
value
captured image
brightness information
Prior art date
Application number
PCT/JP2017/023146
Other languages
English (en)
Japanese (ja)
Inventor
智裕 山崎
村松 良徳
博誠 片山
修二 上原
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2018008426A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Definitions

  • the present technology relates to a signal processing device and method, and an imaging device, and more particularly to a signal processing device and method, and an imaging device, that can perform flicker correction with sufficiently high accuracy and a sufficiently small amount of computation.
  • the brightness fluctuation appears as flicker in the moving image obtained by the imaging.
  • line flicker in which the brightness changes in the direction in which the pixel lines are arranged in one frame image occurs.
  • surface flicker occurs in which the brightness of the entire image changes in the time direction, that is, between frames.
  • these line flickers and surface flickers are also simply referred to as flickers when it is not necessary to distinguish them.
  • a technique has been proposed in which the frequency of line flicker is estimated by performing a Fourier transform on an image containing line flicker that occurs when a light source that periodically repeats light and dark is imaged, and the image is corrected in accordance with the line flicker period obtained as the estimation result (see, for example, Patent Document 1).
  • a technique has also been proposed in which the light/dark cycle of surface flicker that occurs when a light source that periodically repeats light and dark is imaged at a high frame rate is estimated from the average luminance value of the image of each frame, and the surface flicker is corrected in accordance with the cycle obtained as the estimation result (see, for example, Patent Document 2).
  • in that technique, the flicker cycle is estimated by imaging a light source having a frequency of 60 Hz or 50 Hz at a frame rate of 480 fps, 240 fps, or 120 fps, and the surface flicker is modeled with a periodic waveform.
  • with such waveform models, even flicker having a complicated waveform can be estimated, but the amount of calculation increases. Specifically, for example, not only is the amount of computation of the Fourier transform itself large, but as the flicker waveform becomes more complicated, the number of frequency peaks detected by the Fourier transform increases, so the amount of calculation for the subsequent peak processing increases as well.
  • the present technology has been made in view of such a situation, and makes it possible to perform flicker correction with sufficiently high accuracy and a small amount of calculation.
  • the signal processing device includes a brightness information calculation unit that calculates brightness information of a captured image, a brightness reference calculation unit that calculates a brightness reference value indicating a reference brightness based on the brightness information at a plurality of times, and a correction parameter calculation unit that calculates a correction parameter for correcting the brightness of the captured image based on the brightness information and the brightness reference value.
  • the correction parameter can be used to perform flicker correction by correcting the brightness of the captured image at each time.
  • the brightness reference calculation unit can calculate the brightness reference value by smoothing the brightness information through a filtering process.
  • the brightness information calculation unit can calculate an average value of luminance values of each pixel of the captured image as the brightness information.
  • the correction parameter can be information relating to the gain of the captured image.
  • the signal processing device may further include a correction value calculation unit that calculates a predicted value of the brightness information based on the brightness information at a plurality of times and calculates, based on the predicted value and the brightness reference value, a correction value indicating the degree of correction of the brightness of the captured image, and the correction parameter calculation unit may calculate the correction parameter for performing gain adjustment based on the correction value.
  • the correction value calculation unit can calculate the predicted value based on the amount of change in the brightness information.
  • the correction value calculation unit can calculate a ratio between the brightness reference value and the predicted value as the correction value.
  • the correction value calculation unit can calculate the correction value based on the predicted value of the brightness information at the processing target time and the brightness reference value calculated from the brightness information up to a time before the processing target time, and an analog gain application unit that adjusts the gain of the analog captured image at the processing target time by a gain determined by the correction parameter can be further provided.
  • a correction value calculation unit for calculating a correction value may be further provided, and the correction parameter calculation unit may calculate the correction parameter for performing gain adjustment based on the correction value.
  • the correction value calculation unit can calculate a ratio between the brightness reference value and the brightness information as the correction value.
  • the signal processing device may further include a digital gain application unit that performs gain adjustment on the digital captured image at the time to be processed with a gain determined by the correction parameter.
  • the correction parameter can be information relating to the shutter speed of the captured image.
  • the correction parameter can be information relating to a threshold value for binarizing the captured image.
  • the correction parameter can be information relating to white balance of the captured image.
  • the correction parameter calculation unit can calculate the correction parameter for each area of the captured image.
  • the signal processing method includes calculating brightness information of a captured image, calculating a brightness reference value indicating a reference brightness based on the brightness information at a plurality of times, and calculating a correction parameter for correcting the brightness of the captured image based on the brightness information and the brightness reference value.
  • brightness information of a captured image is calculated, a brightness reference value indicating a reference brightness is calculated based on the brightness information at a plurality of times, and a correction parameter for correcting the brightness of the captured image is calculated based on the brightness information and the brightness reference value.
  • the imaging device includes an imaging unit that captures a captured image, a brightness information calculation unit that calculates brightness information of the captured image, a brightness reference calculation unit that calculates a brightness reference value indicating a reference brightness based on the brightness information at a plurality of times, and a correction parameter calculation unit that calculates a correction parameter for correcting the brightness of the captured image based on the brightness information and the brightness reference value.
  • a captured image is captured, brightness information of the captured image is calculated, a brightness reference value indicating a reference brightness is calculated based on the brightness information at a plurality of times, and a correction parameter for correcting the brightness of the captured image is calculated based on the brightness information and the brightness reference value.
  • according to the present technology, flicker correction can be performed with sufficiently high accuracy and a smaller amount of calculation.
  • that is, an image is captured at a cycle sufficiently short relative to the light/dark cycle of the light source serving as the subject.
  • brightness information indicating the brightness of the entire image is calculated, and from the brightness information in a predetermined period in the vicinity of the frame to be processed, a brightness reference value indicating a reference average brightness, specifically the smoothed brightness, is obtained. Then, a flicker correction value for correcting flicker is calculated from the brightness reference value and the brightness information.
  • one or a plurality of periodic waveform models are used to estimate flicker changing in the time direction, that is, a luminance waveform.
  • the horizontal direction indicates time
  • the vertical direction indicates the luminance of the entire image.
  • a flicker waveform is estimated using a plurality of waveform models such as the waveform model MD1, the waveform model MD2, and the waveform model MD3.
  • each of the waveform models MD1 to MD3 is a sine wave model having a constant amplitude whose luminance periodically changes in the time direction.
  • the waveform model MD1 to the waveform model MD3 are also simply referred to as the waveform model MD when it is not necessary to distinguish them.
  • each circle represents the luminance of the entire image in each frame of the captured moving image.
  • the waveform of the flicker, that is, the waveform of the luminance change of the image, can be correctly estimated by using the waveform model MD.
  • since the waveform indicated by the arrow Q12 matches the waveform model MD1 as indicated by the arrow Q13, flicker correction can be correctly performed using the waveform model MD1.
  • the waveform is a complex waveform obtained by synthesizing a plurality of waveforms.
  • Such a complicated change in brightness often occurs, for example, when a plurality of light sources are within the region to be imaged.
  • if a complicated waveform model, such as a combination of a plurality of waveform models MD, is used to estimate the flicker waveform, the waveform of the change in luminance of the moving image can be estimated, but the amount of calculation required for waveform estimation and flicker correction increases.
  • suppose the brightness of the subject changes as shown by the curve L11 in FIG. 2, where the horizontal direction indicates time and the vertical direction indicates luminance, that is, brightness.
  • each circle (point) constituting the curve L12 indicates the brightness in each frame of the moving image.
  • the vertical line at each time drawn in the part indicated by the arrow Q21 indicates the imaging timing of each frame of the moving image.
  • each circle indicates the brightness of the entire image in each frame, and the arrow drawn between the circles indicates the change in brightness between adjacent frames.
  • the brightness of the entire image increases with time, and thereafter the brightness of the entire image decreases with time.
  • when the brightness of the entire image changes with time in this way, flicker correction can be performed with sufficient accuracy if, for example, the brightness of the moving image is corrected so as to change in the direction opposite to the change in brightness, as indicated by the arrow Q24.
  • a flicker correction value for correcting the brightness is calculated so that the brightness of the entire image is decreased by the increase in the brightness.
  • Brightness correction according to the flicker correction value is performed on the image. For example, as the brightness correction according to the flicker correction value, image gain correction, brightness correction by controlling the shutter speed, threshold value correction when outputting a binary image, and the like are performed.
  • the brightness of the entire image decreases with time.
  • when the brightness of the entire image changes with time, flicker correction can be performed accurately if, for example, the brightness of the moving image is corrected so as to change in the direction opposite to the change in brightness, as indicated by the arrow Q25.
  • this technology calculates the flicker correction value using the brightness information and a brightness reference value obtained from the brightness information for a certain period in the vicinity of the frame to be processed, so flicker correction can be realized with a small amount of calculation.
  • using the brightness information for a certain period in the vicinity of the processing target frame, a brightness reference value indicating the target brightness after correction of the processing target frame is calculated. In this way, an appropriate brightness reference value that takes surface flicker into account can be obtained with a small amount of calculation.
  • since the moving image is captured at a high frame rate in this technology, the change in brightness information between frames is minute. For this reason, for example, even when the flicker correction value is applied to the frame following the processing target frame, the brightness information of that frame can be estimated with high accuracy from the brightness information of the processing target frame.
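  • the prediction step just described can be sketched as follows; this is a hedged illustration assuming simple linear extrapolation from the change amount between the two most recent frames (function and variable names are illustrative, not from the document):

```python
def predict_brightness(i_prev, i_curr):
    """Predict the brightness information of the next frame.

    At a high frame rate the frame-to-frame change in brightness is
    minute, so extrapolating the latest change amount gives a good
    estimate of the next frame's brightness.
    """
    delta = i_curr - i_prev   # change amount between adjacent frames
    return i_curr + delta     # predicted brightness of the next frame
```

for example, if the brightness rose from 100 to 102 between the last two frames, the next frame is predicted at 104.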
  • the change in the brightness reference value between frames is also very small.
  • FIG. 3 is a diagram illustrating a configuration example of an embodiment of a signal processing device to which the present technology is applied.
  • the signal processing device 11 in FIG. 3 includes a pixel unit 21, an analog gain application unit 22, an AD (Analog Digital) conversion unit 23, a digital signal processing unit 24, a flicker correction unit 25, and a system controller 26.
  • the signal processing device 11 is, for example, a solid-state imaging device such as a CMOS (Complementary Metal Oxide Semiconductor) image sensor, an imaging apparatus on which such an imaging device is mounted, or the like.
  • the pixel unit 21 images a subject under the control of the system controller 26 and supplies a captured image, which is a moving image obtained as a result, to the analog gain application unit 22.
  • the pixel unit 21 has a plurality of pixels arranged in a matrix, and each pixel is provided with a photodiode, a transfer transistor, a floating diffusion region, a selection transistor, an amplification transistor, and the like.
  • each pixel receives light incident from a subject and outputs a pixel signal obtained by photoelectric conversion, and these pixel signals become signals indicating the pixel values of the pixels of the captured image. That is, data including pixel signals output from each pixel is used as image data of the captured image.
  • the pixel unit 21 captures the captured image at a cycle sufficiently short with respect to the light/dark change cycle of the light source that is the subject.
  • the captured image is captured at a frame rate of 1000 fps or higher.
  • the captured image may be a still image captured continuously in time.
  • the analog gain application unit 22 multiplies the captured image supplied from the pixel unit 21 under the control of the system controller 26, more specifically, the image data of the captured image that is an analog signal, by an analog gain that is a predetermined gain. As a result, the gain of the captured image is adjusted.
  • the analog gain application unit 22 supplies the gain-adjusted captured image to the AD conversion unit 23.
  • the AD conversion unit 23 performs AD conversion for converting the captured image that is an analog signal supplied from the analog gain application unit 22 into a digital signal, and the captured image that is a digital signal obtained as a result is converted into a digital signal processing unit 24. And supplied to the flicker correction unit 25.
  • gain adjustment by analog gain may be performed at the time of AD conversion.
  • the digital signal processing unit 24 performs various processes such as gain adjustment, white balance adjustment, and gamma correction on the captured image supplied from the AD conversion unit 23 according to the control of the system controller 26, and obtains the result. Output the output image.
  • the digital signal processing unit 24 includes a digital gain application unit 31.
  • the digital gain application unit 31 adjusts the gain of the captured image by multiplying the captured image by a digital gain, which is a predetermined gain, under the control of the system controller 26.
  • the digital signal processing unit 24 generates a binary image as an output image by comparing the luminance value of each pixel of the captured image with a threshold value and binarizing the pixel value of each pixel.
  • the binarization process may be performed by the digital signal processing unit 24, or by a block provided between the AD conversion unit 23 and the digital signal processing unit 24. In this embodiment, in order to simplify the description, it is assumed that the captured image is not binarized.
  • the flicker correction unit 25 calculates a flicker correction value for correcting the brightness of the captured image, that is, for correcting flicker occurring in the captured image, based on the captured image supplied from the AD conversion unit 23, and supplies it to the system controller 26.
  • the flicker correction value is a value indicating the degree of correction of the brightness of the captured image, that is, the degree of correction of flicker for the captured image.
  • the system controller 26 controls the operation of each part of the signal processing device 11. For example, based on the flicker correction value supplied from the flicker correction unit 25, the system controller 26 calculates correction parameters for correcting the brightness of the captured image, that is, for flicker correction, and supplies them to each unit of the signal processing device 11.
  • the system controller 26 functions as a correction parameter calculation unit that calculates a correction parameter based on the flicker correction value.
  • the correction parameter is for performing flicker correction by correcting the brightness of the captured image at each time, that is, in each frame. More specifically, the correction parameter is information for controlling the operation of the block in which flicker correction is actually performed; it is calculated based on the flicker correction value indicating the degree of flicker correction, and is calculated for each block in which flicker correction is performed.
  • the correction parameter is information regarding the gain of the captured image, information regarding the shutter speed of the captured image, information regarding a threshold value for binarizing the captured image, and the like.
  • the system controller 26 calculates a correction value of the shutter speed as a correction parameter, and exposure of each pixel is performed at the shutter speed corrected by the correction value. In this way, the imaging operation by the pixel unit 21 is controlled.
  • for example, when it is desired to brighten the captured image, the shutter speed correction value is set so as to lengthen the exposure time of each pixel, that is, to slow the shutter speed.
  • the system controller 26 calculates a correction value for correcting the analog gain, and supplies the obtained correction value to the analog gain application unit 22 as a correction parameter.
  • the analog gain application unit 22 corrects the analog gain, for example, by multiplying the analog gain by the correction parameter, and adjusts the gain of the captured image using the corrected analog gain. Specifically, for example, when it is desired to realize flicker correction that brightens the entire captured image, a correction value that increases the analog gain is calculated as a correction parameter.
  • the analog gain before correction by the correction parameter is a predetermined value such as 1, for example. Further, the analog gain before correction may be a value larger than 1 or a value smaller than 1.
  • the correction parameter is also supplied to the flicker correction unit 25, and is used by the flicker correction unit 25 to calculate brightness information indicating the brightness of the captured image.
  • the system controller 26 calculates a correction value for correcting the digital gain and supplies the obtained correction value to the digital gain application unit 31 as a correction parameter.
  • the digital gain application unit 31 corrects the digital gain by, for example, multiplying the digital gain by a correction parameter, and adjusts the gain of the captured image using the corrected digital gain. Specifically, for example, when it is desired to realize flicker correction that brightens the entire captured image, a correction value that increases the digital gain is calculated as a correction parameter.
  • the digital gain before correction by the correction parameter is a predetermined value such as 1, for example.
  • the digital gain before correction may be a value larger than 1 or a value smaller than 1.
  • flicker correction may be realized by performing any one of the shutter speed correction, analog gain correction, and digital gain correction described above.
  • Flicker correction may be realized by combining any two or more corrections among the corrections.
  • the signal processing device 11 is composed of an image sensor
  • such an image sensor can be a stacked image sensor, for example, as shown in FIG.
  • a stacked image sensor is formed by stacking a sensor unit 61 and a circuit unit 62.
  • the sensor unit 61 has a pixel region 71 in which the pixel unit 21 shown in FIG. 3 is provided, and the circuit unit 62 has a signal processing circuit region 72 in which the analog gain application unit 22 to the system controller 26 shown in FIG. 3 are provided.
  • a layer having a memory area in which a memory is provided may be formed between the sensor unit 61 and the circuit unit 62.
  • FIG. 5 is a diagram illustrating a more detailed configuration example of the flicker correction unit 25.
  • parts corresponding to those in FIG. 3 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
  • the captured image of each frame is sequentially supplied from the imaging unit 101 to the flicker correction unit 25 shown in FIG. 5.
  • the imaging unit 101 includes, for example, the pixel unit 21 to the digital signal processing unit 24 in FIG. 3, and the flicker correction unit 25 is supplied with a captured image that is a digital signal from the AD conversion unit 23 constituting the imaging unit 101.
  • the captured image may be a monochrome image or a color image having a plurality of color components, such as a color image made up of RGB images.
  • a flicker correction unit 25 is provided for each color component constituting the color image, and flicker correction is performed for each color component.
  • the captured image is a monochrome image.
  • the flicker correction unit 25 includes a brightness information calculation unit 111, a brightness information delay unit 112, a brightness reference calculation unit 113, a difference calculation unit 114, and a flicker correction value calculation unit 115.
  • the brightness information calculation unit 111 appropriately uses the correction parameter indicating the correction value of the analog gain supplied from the system controller 26, and based on the captured image of each frame supplied from the imaging unit 101, the entire captured image. Brightness information indicating the brightness of is calculated.
  • the brightness information calculation unit 111 calculates the luminance value of each pixel of the captured image, calculates the average value of the calculated luminance values, and uses the obtained average value of the luminance values as the brightness information.
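  • as a minimal sketch of this calculation (the function name and the 2-D-list representation of a frame are illustrative assumptions, not from the document):

```python
def brightness_information(frame):
    """Brightness information I(t): the average of the luminance
    values of all pixels in one captured frame, given here as a
    2-D list of luminance values."""
    total = sum(sum(row) for row in frame)    # sum of all luminance values
    count = sum(len(row) for row in frame)    # number of pixels
    return total / count
```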
  • more specifically, the brightness information calculation unit 111 calculates the brightness information using values obtained by multiplying the pixel value of each pixel of the captured image by the inverse of the analog gain correction parameter supplied from the system controller 26. That is, the brightness information calculation unit 111 calculates brightness information for the captured image before flicker correction using the analog gain.
  • since the signal processing device 11 captures the captured image at a high frame rate, flicker appearing in the captured image becomes surface flicker. Therefore, the signal processing device 11 can perform flicker correction using a feature amount that requires a relatively small amount of calculation as the brightness information, such as the average value of the luminance values of the pixels of the captured image.
  • the brightness information indicates the brightness of the entire captured image
  • the brightness information is not limited to the average value of the luminance values of the pixels of the captured image, but the integrated value (added value) of the luminance values of the pixels of the captured image, Any value such as a representative value of the luminance value of the pixel of the captured image may be used.
  • the frame at time t is also referred to as frame t
  • the brightness information of the captured image at frame t is also referred to as brightness information I (t).
  • the brightness information calculation unit 111 supplies the brightness information obtained from the captured image to the difference calculation unit 114, the brightness information delay unit 112, the brightness reference calculation unit 113, and the flicker correction value calculation unit 115.
  • the brightness information delay unit 112 delays the brightness information supplied from the brightness information calculation unit 111 by a time corresponding to one frame of the captured image and supplies the delay information to the difference calculation unit 114.
  • here, the delay time of the brightness information by the brightness information delay unit 112 is the time for one frame, but the delay is not limited to one frame and may be an arbitrary number of frames.
  • the brightness reference calculation unit 113 calculates, based on the brightness information supplied from the brightness information calculation unit 111 over a certain period, a brightness reference value indicating the smoothed brightness of the captured image that serves as the reference for brightness correction, that is, flicker correction, and supplies it to the flicker correction value calculation unit 115.
  • This brightness reference value is a value indicating the brightness that should be the target after correction during flicker correction.
  • the brightness reference calculation unit 113 calculates the brightness reference value by smoothing the brightness information of a plurality of continuous frames over a certain period by an arbitrary method such as filter processing.
  • for example, the brightness reference calculation unit 113 performs filter processing on the brightness information of a predetermined number of consecutive frames using an IIR (Infinite Impulse Response) smoothing filter such as a low-pass filter, smoothing the brightness information in the time direction to obtain the brightness reference value. In other words, the brightness reference calculation unit 113 calculates the brightness reference value by convolving the brightness information at each time with the filter coefficients of the smoothing filter.
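  • one minimal sketch of such smoothing is a first-order IIR low-pass (exponential) filter; the coefficient alpha below is an illustrative assumption, not a value from the document:

```python
def brightness_reference(history, alpha=0.05):
    """Smooth per-frame brightness information I(t) with a first-order
    IIR low-pass filter to obtain the brightness reference values Is(t)."""
    state = history[0]
    reference = []
    for value in history:
        # Exponential smoothing: blend the current brightness with the
        # previous smoothed state.
        state = alpha * value + (1 - alpha) * state
        reference.append(state)
    return reference
```

a smaller alpha follows the global brightness trend more slowly but suppresses the short-cycle flicker component more strongly.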
  • the horizontal direction indicates time
  • the vertical direction indicates luminance
  • the curve L41 indicates the brightness information of the captured image at each time (frame), that is, the brightness of the captured image before flicker correction; viewed globally, the curve L41 shows a gradual change in brightness.
  • the brightness of the captured image changes with time when viewed globally in this way, for example, when imaging a subject such as an approaching vehicle with its lights on.
  • the brightness of the captured image changes periodically, and the periodic brightness change in this short period becomes a flicker component. That is, the brightness of the captured image changes in a short cycle due to the occurrence of flicker.
  • by smoothing the brightness information indicated by the curve L41, the brightness reference value of each frame shown by the curve L42 is obtained.
  • the curve L42 is drawn with a dotted line, and each point represents the brightness reference value at the time (frame) when those points are drawn.
  • the curve L42 indicating such a brightness reference value changes slowly, and the temporal change of the curve L42 is substantially equal to the global change of the curve L41. That is, the curve L42 is substantially equal to the curve L41 when viewed globally. Therefore, it can be seen that if flicker correction is performed using the brightness reference value indicated by the curve L42 as the target brightness of the captured image after flicker correction, flicker components can be accurately removed from the captured image. In other words, flicker correction can be performed with sufficiently high accuracy.
  • the brightness reference calculation unit 113 calculates a brightness reference value for each frame of the captured image, and supplies the brightness reference value to the flicker correction value calculation unit 115.
  • the brightness reference value obtained using the captured images up to the frame t is also referred to as a brightness reference value Is (t).
  • the difference calculation unit 114 calculates the difference value between the brightness information supplied from the brightness information calculation unit 111 and the brightness information supplied from the brightness information delay unit 112, and supplies it to the flicker correction value calculation unit 115. Since this difference value is the difference between brightness information at different times, it can be said to be the amount of change in the brightness information.
  • the difference value obtained for the frame t is also referred to as the difference value ΔI(t).
  • such a difference value ΔI(t) indicates the amount of change in brightness information between temporally adjacent frames.
• the flicker correction value calculation unit 115 calculates the flicker correction value based on the difference value supplied from the difference calculation unit 114, the brightness reference value supplied from the brightness reference calculation unit 113, and the brightness information supplied from the brightness information calculation unit 111, and supplies it to the system controller 26.
  • the flicker correction value in the frame t is also referred to as V (t).
  • the horizontal direction indicates time
  • the vertical direction indicates brightness, that is, a luminance value
  • a straight line L61 indicates the brightness reference value Is (t) at each time.
  • the brightness reference value Is (t) is the same value at each time (frame).
• the difference between the brightness information I(0) and the brightness information I(1) is calculated as the difference value ΔI(1).
• here, for example, the brightness information I(1) is set as the difference value ΔI(1) as it is.
• the correction parameter at frame t is also referred to as a correction parameter P(t).
• the flicker correction value V(1) may be used as it is as the correction parameter P(1), or a value obtained by substituting the flicker correction value V(1) into a predetermined function may be used as the correction parameter P(1).
• the digital gain corrected using the correction parameter P(1) as a correction value is set as the digital gain DG(1).
  • the analog gain before correction by the correction parameter is set to 1.
• the correction parameter P(t) can be calculated using that information before the captured image is actually captured.
  • the predicted value of the brightness information I (t) of the frame t is also referred to as Ie (t).
• the flicker correction value V(2) may be used as it is as the correction parameter P(2), or a value obtained by substituting the flicker correction value V(2) into a predetermined function may be used as the correction parameter P(2).
• the analog gain corrected using the correction parameter P(2) as a correction value is set as the analog gain AG(2).
• the flicker correction value V(t) is calculated from the predicted value Ie(t) and the brightness reference value Is(t-1), used as it is as the correction parameter P(t), and the analog gain AG(t) is obtained from the correction parameter P(t). Gain adjustment of the captured image of the frame t is then performed as flicker correction using the analog gain AG(t).
• for the calculation of the flicker correction value, either the difference value, which is the amount of change in brightness information between frames, or the brightness information of the current frame may be used, or both of them may be used.
• in the signal processing device 11, since flicker appearing in the captured image is mainly surface flicker, information requiring a relatively small amount of computation, such as the average luminance value of the pixels of the captured image, can be used as the brightness information. Further, the difference value ΔI(t), the predicted value Ie(t), and the brightness reference value Is(t) calculated using the brightness information can be obtained with a relatively small amount of computation.
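As an illustrative sketch only (the disclosure specifies no implementation; Python and the function names are our assumptions), the low-cost brightness information I(t) and its frame-to-frame change ΔI(t) described above might be computed as:

```python
import numpy as np

def brightness_info(frame: np.ndarray) -> float:
    """Brightness information I(t): the average luminance value of the
    pixels of the captured image (a deliberately cheap statistic)."""
    return float(frame.mean())

def brightness_diff(i_prev: float, i_curr: float) -> float:
    """Difference value dI(t) = I(t) - I(t-1): the amount of change in
    brightness information between temporally adjacent frames."""
    return i_curr - i_prev
```

Both operations are O(pixels) or O(1), consistent with the small computation amount the text emphasizes.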
  • the pixel unit 21 receives light incident from the subject and performs photoelectric conversion to capture a captured image, and supplies the obtained captured image to the analog gain application unit 22.
  • the analog gain application unit 22 adjusts the gain of the captured image using, for example, a predetermined analog gain, and supplies it to the AD conversion unit 23.
• the AD conversion unit 23 performs AD conversion on the captured image supplied from the analog gain application unit 22, and supplies the resulting digital captured image to the brightness information calculation unit 111 of the flicker correction unit 25 and to the digital signal processing unit 24.
• the brightness information calculation unit 111 supplies the brightness information I(1) thus obtained to the difference calculation unit 114, the brightness information delay unit 112, the brightness reference calculation unit 113, and the flicker correction value calculation unit 115.
• in step S13, the difference calculation unit 114 calculates the difference value ΔI(1) based on the brightness information I(1) supplied from the brightness information calculation unit 111, and supplies it to the flicker correction value calculation unit 115.
• here, the brightness information I(1) is directly used as the difference value ΔI(1).
• in step S14, the flicker correction value calculation unit 115 calculates the flicker correction value V(1) based on the predetermined brightness reference value Is(0) and the difference value ΔI(1) supplied from the difference calculation unit 114, and supplies it to the system controller 26.
• the flicker correction value V(1) is not limited to this example, and may be a value obtained by substituting the brightness reference value Is(0) and the difference value ΔI(1) into a predetermined function.
• in step S15, the system controller 26 calculates the correction parameter P(1) based on the flicker correction value V(1) supplied from the flicker correction value calculation unit 115.
  • the flicker correction value V (1) is directly used as the correction value of the digital gain as the correction parameter P (1).
  • a value obtained by substituting the flicker correction value V (1) into a predetermined function may be used as the correction parameter P (1).
  • the system controller 26 supplies the correction parameter P (1) thus obtained to the digital signal processing unit 24.
• in step S17, the brightness reference calculation unit 113 calculates the brightness reference value Is(1) based on the brightness information I(1) supplied from the brightness information calculation unit 111, and supplies it to the flicker correction value calculation unit 115.
• for example, the brightness reference calculation unit 113 calculates the brightness reference value Is(1) by performing filter processing using a smoothing filter on the brightness information I(0) and the brightness information I(1).
• in step S18, the system controller 26 increments the held parameter t indicating the frame number by one.
• in step S18 performed immediately after the process of step S17, the parameter t is incremented to 2.
• the predicted value Ie(2) is calculated from the brightness information I(1) obtained in step S12 and the difference value ΔI(1) obtained in step S13.
• the predicted value Ie(t) calculated in this way is the predicted (estimated) value of the brightness information of the frame t, calculated from the amount of change in brightness information using the brightness information of a plurality of times before time t, that is, of a plurality of frames before the frame t.
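The text does not fix the exact prediction formula. As a hedged sketch, the simplest choice consistent with the description is a linear extrapolation from the last measured brightness I(t-1) and the last difference value ΔI(t-1):

```python
def predict_brightness(i_prev: float, delta_prev: float) -> float:
    """Predicted value Ie(t) of the brightness of the frame about to be
    captured.  Linear extrapolation is an assumption of this sketch,
    not a formula stated in the disclosure:
        Ie(t) = I(t-1) + dI(t-1)
    """
    return i_prev + delta_prev
```

Any predictor built only from past brightness values would fit the same role, since the point is that Ie(t) is available before the frame t is captured.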
• in step S20, the flicker correction value calculation unit 115 calculates the flicker correction value V(t) = Is(t-1) / Ie(t) based on the brightness reference value Is(t-1) supplied from the brightness reference calculation unit 113 and the predicted value Ie(t) calculated in step S19.
• that is, the ratio of the brightness reference value Is(t-1), calculated from the brightness information of a plurality of frames up to the frame (t-1) preceding the processing target frame t, to the predicted value Ie(t) of the brightness information of the processing target frame t is calculated as the flicker correction value V(t).
  • the flicker correction value V (2) is calculated using the brightness reference value Is (1) obtained in step S17.
• when the parameter t ≥ 3, the brightness reference value Is(t-1) obtained in step S26 performed immediately before step S20 is used to calculate the flicker correction value V(t).
• the flicker correction value V(t) is not limited to this example, and may be a value obtained by substituting the brightness reference value Is(t-1) and the predicted value Ie(t) into a predetermined function.
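The ratio form of step S20 can be sketched directly; the division-by-zero guard is an addition of this sketch, not part of the disclosure:

```python
def flicker_correction_value(is_prev: float, ie_curr: float) -> float:
    """Flicker correction value V(t) = Is(t-1) / Ie(t): the ratio of the
    smoothed brightness reference (from frames up to t-1) to the
    predicted brightness of the frame t about to be captured."""
    return is_prev / max(ie_curr, 1e-9)   # epsilon guard: our addition
```

For example, if the upcoming frame is predicted to be brighter than the reference (Ie(t) > Is(t-1)), V(t) < 1 and the gain applied to frame t is reduced accordingly.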
  • the flicker correction value calculation unit 115 supplies the flicker correction value V (t) thus obtained to the system controller 26.
• in step S21, the system controller 26 calculates the correction parameter P(t) based on the flicker correction value V(t) supplied from the flicker correction value calculation unit 115.
  • the flicker correction value V (t) is used as the correction value of the analog gain as the correction parameter P (t) as it is.
  • a value obtained by substituting the flicker correction value V (t) into a predetermined function may be used as the correction parameter P (t).
  • the system controller 26 supplies the correction parameter P (t) thus obtained to the analog gain application unit 22 and the brightness information calculation unit 111.
• in step S22, the imaging unit 101 captures a captured image of the frame t.
  • the pixel unit 21 receives light incident from the subject and performs photoelectric conversion to capture a captured image, and supplies the obtained captured image to the analog gain application unit 22.
• when flicker correction is performed by correcting the shutter speed, the shutter speed is corrected by the correction value, and a captured image of the frame t is captured at the corrected shutter speed.
• in step S23, the analog gain application unit 22 applies the analog gain determined from the correction parameter P(t) supplied from the system controller 26 in step S21 to the analog captured image of the frame t supplied from the pixel unit 21.
• that is, the analog gain application unit 22 calculates the analog gain AG(t) described with reference to FIG. 7 by correcting a predetermined analog gain with the correction parameter P(t). Then, the analog gain application unit 22 performs gain adjustment by multiplying the pixel value of each pixel of the captured image of the frame t by the analog gain AG(t), and supplies the gain-adjusted captured image to the AD conversion unit 23. Flicker correction at the frame t is realized by such analog gain adjustment.
• the AD conversion unit 23 performs AD conversion on the captured image supplied from the analog gain application unit 22, and supplies the resulting digital captured image to the brightness information calculation unit 111 of the flicker correction unit 25 and to the digital signal processing unit 24.
  • the digital signal processing unit 24 performs various processes such as gamma correction and gain adjustment on the captured image supplied from the AD conversion unit 23 to generate and output an output image.
• in step S24, the brightness information calculation unit 111 calculates brightness information I(t) based on the captured image of the frame t supplied from the AD conversion unit 23.
• for example, the brightness information calculation unit 111 obtains the analog gain AG(t) from the correction parameter P(t) of the frame t supplied from the system controller 26, and obtains the gain ratio by dividing the analog gain before correction by the correction parameter P(t) by the analog gain AG(t).
• the brightness information calculation unit 111 then obtains the captured image before flicker correction, more specifically, the captured image that would be obtained if flicker correction were not performed, by multiplying the pixel value of each pixel of the captured image of the frame t by the gain ratio, and calculates the average luminance value of the pixels of the captured image before flicker correction as the brightness information I(t).
• here, since the analog gain before correction is 1, the gain ratio is the reciprocal of the analog gain AG(t).
  • the brightness information calculation unit 111 supplies the obtained brightness information I (t) to the difference calculation unit 114, the brightness information delay unit 112, the brightness reference calculation unit 113, and the flicker correction value calculation unit 115.
  • the brightness information I (t) calculated in step S24 is used in the process of step S19 performed for the next frame (t + 1).
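Undoing the gain in step S24 can be sketched as follows (the function name is our assumption; since the average is linear, dividing the mean is equivalent to dividing every pixel):

```python
def pre_correction_brightness(corrected_mean: float, ag_t: float,
                              ag_uncorrected: float = 1.0) -> float:
    """Step S24 sketch: recover the brightness information I(t) that the
    frame would have had without flicker correction, by undoing the gain.
    With the pre-correction analog gain fixed at 1, the gain ratio is
    simply 1 / AG(t)."""
    gain_ratio = ag_uncorrected / ag_t
    return corrected_mean * gain_ratio
```

This keeps the prediction loop operating on uncorrected brightness, so the applied gain never feeds back into its own statistics.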
• the difference calculation unit 114 supplies the obtained difference value ΔI(t) to the flicker correction value calculation unit 115.
• This difference value ΔI(t) is used in the process of step S19 performed for the next frame (t + 1).
• in step S26, the brightness reference calculation unit 113 calculates the brightness reference value Is(t) based on the brightness information I(t) supplied from the brightness information calculation unit 111, and supplies it to the flicker correction value calculation unit 115.
• for example, the brightness reference calculation unit 113 calculates the brightness reference value Is(t) by convolving the brightness information I(t) of each frame in a certain period including the frame t, that is, the period from a predetermined frame to the frame t, with the filter coefficients of a smoothing filter. That is, the brightness reference value Is(t) is calculated by smoothing the brightness information I(t) through filter processing using a smoothing filter.
  • the brightness reference value Is (t) obtained in this way is used in the process of step S20 performed for the next frame (t + 1).
• in step S27, the system controller 26 determines whether or not to end the process. For example, when the imaging of the captured image is finished, it is determined that the process is to be ended.
• if it is determined in step S27 that the process is not to be ended, the process returns to step S18, and the above-described process is repeated.
• on the other hand, when it is determined in step S27 that the process is to be ended, the system controller 26 ends the operation of each unit, and the flicker correction process ends.
  • the signal processing device 11 obtains the brightness information and the brightness reference value from the captured image, and calculates the flicker correction value.
• in particular, since the signal processing apparatus 11 performs flicker correction with a small amount of computation using the brightness information and the brightness reference value, without modeling the light source, the amount of computation does not increase with the complexity of a light source model. Therefore, if the correction parameter can be calculated in the interval from the readout of one captured image to the readout of the next captured image, the correction parameter can be applied to the next frame, and flicker correction can be performed with high accuracy.
• in addition, since the signal processing apparatus 11 can perform flicker correction with a small amount of computation and a simple configuration, a configuration for performing such flicker correction can be mounted in the image sensor, and an image sensor with higher functionality can be provided.
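The per-frame loop of steps S18 to S26 can be sketched end to end as below. The linear extrapolation for Ie(t), the uniform smoothing window, and the synthetic sinusoidal flicker source are assumptions made for illustration; on brightness only, applying and undoing the gain cancel, so the history is simply the raw brightness:

```python
import math

def run_flicker_correction(raw_brightness, window=8):
    """Sketch of the per-frame flicker correction loop.

    `raw_brightness` yields the brightness each frame would have without
    correction (standing in for imaging plus brightness calculation).
    Returns the corrected brightness of every frame."""
    history = []        # pre-correction brightness I(1), I(2), ...
    corrected = []
    delta = 0.0         # dI(t-1)
    for t, i_raw in enumerate(raw_brightness, start=1):
        if t == 1:
            gain = 1.0                           # no prediction possible yet
        else:
            ie = history[-1] + delta             # S19: Ie(t) = I(t-1) + dI(t-1)
            recent = history[-window:]
            is_prev = sum(recent) / len(recent)  # Is(t-1): smoothed reference
            gain = is_prev / ie                  # S20: V(t) = Is(t-1) / Ie(t)
        corrected.append(i_raw * gain)           # S23: gain-adjust frame t
        history.append(i_raw)                    # S24: pre-correction brightness
        if t >= 2:
            delta = history[-1] - history[-2]    # S25: dI(t)
    return corrected

# Synthetic 100 Hz surface flicker sampled at 1000 fps (illustrative numbers)
flickery = [100 + 10 * math.sin(2 * math.pi * 100 * t / 1000) for t in range(60)]
smoothed = run_flicker_correction(flickery)
```

Because the flicker is slow relative to the frame rate, the one-step prediction tracks it well and the corrected brightness varies noticeably less than the raw brightness.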
• flicker appearing in a captured image is mainly surface flicker. Therefore, when performing flicker correction with digital gain in all frames, if a frame buffer is provided, flicker correction can be performed using information on the current frame. However, in this case, it is difficult to perform flicker correction when noise increases, or when whiteout or blackout occurs.
  • the brightness information of the captured image of each frame is obtained as shown in FIG.
  • the horizontal direction indicates time
  • the vertical direction indicates brightness, that is, a luminance value.
  • brightness information I (t) to brightness information I (t + 2) is obtained as brightness information in each of the frames t to t + 2.
  • a straight line L71 indicates a brightness reference value at each time. In this example, the brightness reference value is the same value in each frame.
  • I ′ (t) to I ′ (t + 2) indicate the brightness information of the captured image after flicker correction in frames t to t + 2, that is, the brightness information obtained for the output image.
• in this case, after the captured image of the frame t is obtained, the brightness information I(t) is calculated from the captured image of the frame t in the brightness information calculation unit 111.
• the brightness reference calculation unit 113 then calculates the brightness reference value Is(t) from the brightness information I(t) and the brightness information of past frames by filter processing using a smoothing filter. That is, the brightness reference value Is(t) is calculated using the brightness information of a plurality of frames up to the current frame t of interest.
• further, the flicker correction value calculation unit 115 calculates the flicker correction value V(t) based on the brightness information I(t) and the brightness reference value Is(t).
  • the flicker correction value V (t) is directly used as the correction parameter P (t) in the frame t.
  • the digital gain determined by the correction parameter P (t), that is, the digital gain corrected by the correction parameter P (t) is set as the digital gain DG (t) of the frame t.
  • the digital gain application unit 31 adjusts the gain of the digital captured image of the frame t by the digital gain DG (t), thereby correcting flicker.
  • the brightness information obtained for the output image of the frame t becomes brightness information I ′ (t) that is substantially equal to the brightness reference value Is (t) to be targeted.
  • flicker correction of the captured images of the current frame can be performed using the brightness information and the brightness reference value obtained for the current frame.
  • a flicker correction value with less error can be obtained, and more accurate flicker correction can be realized.
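This frame-buffer variant needs no prediction, since I(t) itself is available. A sketch (function name and the division-by-zero guard are our assumptions) with the digital gain DG(t) = Is(t) / I(t):

```python
import numpy as np

def correct_with_digital_gain(frame: np.ndarray, is_curr: float) -> np.ndarray:
    """Modification 1 sketch: with a frame buffer, the current frame's
    own brightness I(t) is available, so the correction value uses it
    directly: DG(t) = Is(t) / I(t), applied as digital gain."""
    i_curr = float(frame.mean())        # I(t) from the buffered frame
    dg = is_curr / max(i_curr, 1e-9)    # DG(t) = Is(t) / I(t); guard is ours
    return frame * dg
```

The corrected frame's mean brightness lands exactly on the reference Is(t), matching the brightness information I′(t) ≈ Is(t) described above.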
  • the correction parameter can also be used for white balance adjustment for a subject whose brightness changes for each color. That is, the system controller 26 may calculate information regarding the white balance of the captured image as the correction parameter. In such a case, the digital signal processing unit 24 performs white balance adjustment on the captured image based on the correction parameter.
• in the signal processing device 11, correction parameters are calculated for each frame of a captured image captured at a high frame rate, so if white balance adjustment is performed using such correction parameters, high-speed white balance adjustment can be realized.
  • a vision chip system that performs imaging at a high frame rate is known.
  • Such a vision chip system is described in, for example, Japanese Patent Laid-Open Nos. 07-086936 and 01-173269.
  • a threshold value of a luminance value is set when a target subject is extracted from a captured image, and binarization processing of the captured image is performed. For example, one or both of an upper limit value and a lower limit value are used as the threshold value of the luminance value.
• when the threshold value is a fixed value, if the brightness of the subject on the captured image changes, it may become impossible to correctly extract the target subject. Therefore, if the present technology is applied to a vision chip system, even when the brightness of the subject changes, changes in the brightness of the captured image before binarization can be suppressed by performing flicker correction on the captured image captured at a high frame rate, and the stability of target extraction can be improved.
  • the threshold used for binarization of the captured image may be adjusted (corrected) in accordance with the flicker based on the correction parameter P (t) calculated by the signal processing device 11.
• in this case, the system controller 26 calculates, as the correction parameter P(t), a correction value for correcting the threshold value used for the binarization processing based on the flicker correction value.
• the binarization processing of the captured image may be performed by the digital signal processing unit 24, or may be performed by a block provided between the AD conversion unit 23 and the digital signal processing unit 24.
• the upper limit threshold and the lower limit threshold, which are the threshold values used for binarization, may be corrected in accordance with the change in brightness.
  • the vertical axis indicates the luminance, that is, the value of brightness information
  • the horizontal axis indicates time.
  • a curve L81 indicates brightness information I (t) in each frame.
• as a result of the correction by the correction parameter P(t), a larger upper limit threshold UTH1 and lower limit threshold LTH1 are obtained.
• in the binarization processing, when the luminance value of a pixel of the captured image is equal to or greater than the lower limit threshold and equal to or less than the upper limit threshold, the pixel value of that pixel is set to 1; when the luminance value of the pixel is less than the lower limit threshold or greater than the upper limit threshold, the pixel value of that pixel is set to 0.
• when the upper limit threshold and the lower limit threshold used for binarization are corrected by the correction parameter P(t), the correction is performed in accordance with the brightness level of the captured image: for example, when the brightness information is large, the correction is performed so that the upper and lower thresholds become larger, and when the brightness information is small, the correction is performed so that the upper and lower thresholds become smaller.
  • the captured image can be binarized using an appropriate threshold value.
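A sketch of this threshold-tracking binarization follows. Scaling LTH and UTH by the inverse of P(t) is an assumed form of the correction (the disclosure leaves the exact function open); since P(t) < 1 for a brighter-than-reference frame, the thresholds grow with the frame's brightness as described:

```python
import numpy as np

def binarize_with_corrected_thresholds(frame, lth, uth, p_t):
    """Modification 2 sketch: binarize with thresholds corrected by the
    correction parameter P(t).  A pixel becomes 1 when its luminance
    lies within [LTH, UTH] and 0 otherwise."""
    lth_c = lth / p_t   # brighter frame -> P(t) < 1 -> larger thresholds
    uth_c = uth / p_t
    return ((frame >= lth_c) & (frame <= uth_c)).astype(np.uint8)
```

With P(t) = 1 the thresholds are unchanged; as the frame brightens or darkens, the pass band shifts with it, so the extracted target stays stable.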
• <Modification 3 of the first embodiment> <About flicker correction> Furthermore, in the above, an example has been described in which one piece of brightness information is calculated for the entire captured image, and the brightness of the entire captured image is corrected using that brightness information, that is, flicker correction is performed. However, flicker correction for the captured image may be performed for each region of the captured image. Performing flicker correction for each region of the captured image is particularly effective when the flicker characteristics differ greatly from region to region.
  • a captured image PC11 having a light source LS11 that causes flicker is obtained in the upper left part of the drawing.
  • a correction value map AM11 having a different correction parameter P (t) for each region of the captured image is obtained.
  • the correction value map AM11 is a map showing the correction parameters of each area of the captured image PC11, and the shade at each position of the correction value map AM11 is the value of the correction parameter at the position of the captured image PC11 corresponding to that position. Is shown.
  • the correction value map AM11 indicates correction parameters in a total of 12 areas of 4 ⁇ 3 of the captured image PC11.
• it can be seen that the value of the correction parameter at each position in the region R31, corresponding to the region of the captured image PC11 where the light source LS11 is present, is a value that suppresses the brightness of the captured image.
  • the brightness information calculation unit 111 calculates brightness information for each captured image area, and the flicker correction value calculation unit 115 performs flicker for each captured image area. A correction value is calculated. Similarly, the system controller 26 calculates correction parameters for each area of the captured image.
• if the signal processing device 11 has an area-parallel AD conversion configuration in which an AD conversion unit 23 is provided for each region including a plurality of pixels, or a column-parallel AD conversion configuration in which an AD conversion unit 23 is provided for each pixel column, flicker correction for each region can be realized.
• in this case as well, flicker correction using analog gain, flicker correction using digital gain, and flicker correction using shutter speed correction can be combined in any way, and flicker correction of each region is performed based on the correction parameter calculated for each region of the captured image.
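The per-region correction can be sketched as below, using the correction parameter of each region directly as a gain (an assumption of this sketch) over a grid like the 4 × 3 = 12 areas of the correction value map AM11:

```python
import numpy as np

def per_region_correction(frame: np.ndarray,
                          correction_map: np.ndarray) -> np.ndarray:
    """Modification 3 sketch: apply a different correction parameter to
    each rectangular region of the captured image.  `correction_map` is
    a (rows, cols) array of per-region gains, like the map AM11."""
    rows, cols = correction_map.shape
    h, w = frame.shape
    out = frame.astype(float).copy()
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            out[ys, xs] *= correction_map[r, c]   # gain for region (r, c)
    return out
```

A region containing a flickering light source (like R31) would get a gain below 1, suppressing its brightness while leaving the rest of the frame untouched.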
• <Imaging device> <Configuration example of imaging device> Furthermore, the present technology is applicable to all electronic devices that use a solid-state imaging device (image sensor) in a photoelectric conversion unit, such as imaging devices including digital still cameras and video cameras, portable terminal devices having an imaging function, and copying machines that use a solid-state imaging device for an image reading unit.
  • FIG. 13 is a diagram illustrating a configuration example of an imaging apparatus as an electronic apparatus to which the present technology is applied.
• the imaging device 501 of FIG. 13 includes an optical unit 511 including a lens group, a solid-state imaging device (imaging device) 512, and a DSP (Digital Signal Processor) circuit 513 that is a camera signal processing circuit.
  • the imaging device 501 also includes a frame memory 514, a display unit 515, a recording unit 516, an operation unit 517, and a power supply unit 518.
  • the DSP circuit 513, the frame memory 514, the display unit 515, the recording unit 516, the operation unit 517, and the power supply unit 518 are connected to each other via a bus line 519.
  • the optical unit 511 takes in incident light (image light) from a subject and forms an image on the imaging surface of the solid-state imaging device 512.
  • the solid-state imaging device 512 converts the amount of incident light imaged on the imaging surface by the optical unit 511 into an electrical signal in units of pixels and outputs the electrical signal.
  • This solid-state imaging device 512 corresponds to the signal processing device 11 shown in FIG.
  • the display unit 515 includes a panel type display device such as a liquid crystal panel or an organic EL (Electro Luminescence) panel, and displays a moving image or a still image captured by the solid-state image sensor 512.
  • the recording unit 516 records a moving image or a still image captured by the solid-state imaging device 512 on a recording medium such as a video tape or a DVD (Digital Versatile Disk).
  • the operation unit 517 issues operation commands for various functions of the imaging device 501 under the operation of the user.
  • the power supply unit 518 appropriately supplies various power sources serving as operation power sources for the DSP circuit 513, the frame memory 514, the display unit 515, the recording unit 516, and the operation unit 517 to these supply targets.
• in the above, the case where the present technology is applied to a CMOS image sensor in which pixels that detect signal charges corresponding to the amount of visible light as a physical quantity are arranged in a matrix has been described as an example.
  • the present technology is not limited to application to a CMOS image sensor, and can be applied to all solid-state imaging devices.
  • FIG. 14 is a diagram illustrating a usage example in which the above-described solid-state imaging device (image sensor) is used.
  • the solid-state imaging device described above can be used in various cases for sensing light such as visible light, infrared light, ultraviolet light, and X-ray as follows.
• Devices that capture images used for viewing, such as digital cameras and portable devices with camera functions
• Devices used for traffic, such as in-vehicle sensors that capture images of the back, surroundings, and interior of a vehicle for safe driving including automatic stop and for recognition of the driver's condition, surveillance cameras that monitor traveling vehicles and roads, and ranging sensors that measure distances between vehicles
• Devices used for home appliances such as TVs, refrigerators, and air conditioners, which capture images of user gestures and operate the equipment according to those gestures
• Devices used for medical and health care, such as endoscopes and devices that perform angiography by receiving infrared light
• Devices used for security, such as surveillance cameras for crime prevention and cameras for personal authentication
• Devices used for beauty care, such as skin measuring instruments that image the skin and microscopes that image the scalp
• Devices used for sports, such as action cameras and wearable cameras for sports applications
• Devices used for agriculture, such as cameras for monitoring the condition of fields and crops
  • the technology (this technology) according to the present disclosure can be applied to various products.
• the technology according to the present disclosure may be realized as a device mounted on any type of moving body, such as an automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility device, airplane, drone, ship, or robot.
  • FIG. 15 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are illustrated as a functional configuration of the integrated control unit 12050.
  • the drive system control unit 12010 controls the operation of devices related to the vehicle drive system according to various programs.
• for example, the drive system control unit 12010 functions as a control device for a driving force generator such as an internal combustion engine or a driving motor for generating the driving force of the vehicle, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, and a braking device that generates the braking force of the vehicle.
  • the body control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a headlamp, a back lamp, a brake lamp, a blinker, or a fog lamp.
• radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body control unit 12020.
  • the body system control unit 12020 receives input of these radio waves or signals, and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
  • the outside information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted.
  • the imaging unit 12031 is connected to the vehicle exterior information detection unit 12030.
  • the vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle and receives the captured image.
  • the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for a person, a car, an obstacle, a sign, a character on a road surface, or the like based on the received image.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light.
  • the imaging unit 12031 can output an electrical signal as an image, or can output it as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
  • the in-vehicle information detection unit 12040 detects in-vehicle information.
  • a driver state detection unit 12041 that detects the driver's state is connected to the in-vehicle information detection unit 12040.
  • the driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of the driver's fatigue or concentration, or may determine whether the driver is dozing off.
  • the microcomputer 12051 can calculate a control target value of the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010.
  • the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions including vehicle collision avoidance or impact mitigation, following traveling based on inter-vehicle distance, vehicle speed maintaining traveling, vehicle collision warning, or vehicle lane departure warning.
  • the microcomputer 12051 can perform cooperative control for the purpose of automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on information outside the vehicle acquired by the vehicle outside information detection unit 12030.
  • the microcomputer 12051 can control the headlamps according to the position of the preceding vehicle or oncoming vehicle detected by the vehicle exterior information detection unit 12030, and can perform cooperative control for the purpose of anti-glare, such as switching from a high beam to a low beam.
  • the audio image output unit 12052 transmits an output signal of at least one of sound and image to an output device capable of visually or audibly notifying information to a vehicle occupant or the outside of the vehicle.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
  • the display unit 12062 may include at least one of an on-board display and a head-up display, for example.
  • FIG. 16 is a diagram illustrating an example of an installation position of the imaging unit 12031.
  • the imaging unit 12031 includes imaging units 12101, 12102, 12103, 12104, and 12105.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as a front nose, a side mirror, a rear bumper, a back door, and an upper portion of a windshield in the vehicle interior of the vehicle 12100.
  • the imaging unit 12101 provided in the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirror mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100.
  • the imaging unit 12105 provided on the upper part of the windshield in the passenger compartment is mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 16 shows an example of the shooting range of the imaging units 12101 to 12104.
  • the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided in the front nose
  • the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided in the side mirrors, respectively
  • the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided in the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, an overhead image of the vehicle 12100 viewed from above is obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
  • based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object in the imaging ranges 12111 to 12114 and the temporal change of this distance (relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the closest three-dimensional object on the traveling path of the vehicle 12100 that travels at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100.
  • the microcomputer 12051 can set in advance an inter-vehicle distance to be secured behind the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like.
  • in this way, cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, can be performed.
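The preceding-vehicle extraction and inter-vehicle distance keeping described above can be sketched roughly as follows. This is an illustrative sketch only: the class names, fields, gains, and thresholds are hypothetical assumptions and are not part of the disclosure.

```python
# Illustrative sketch of preceding-vehicle selection and inter-vehicle
# distance control. All names and numeric constants are hypothetical.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    distance_m: float          # distance ahead along the traveling path
    relative_speed_mps: float  # positive = pulling away from own vehicle
    on_path: bool              # lies on the traveling path of the own vehicle

def select_preceding_vehicle(objects, own_speed_mps, min_speed_mps=0.0):
    """Pick the closest on-path object moving in substantially the same
    direction at or above a predetermined speed (e.g. 0 km/h or more)."""
    candidates = [
        o for o in objects
        if o.on_path and (own_speed_mps + o.relative_speed_mps) >= min_speed_mps
    ]
    return min(candidates, key=lambda o: o.distance_m) if candidates else None

def gap_control_accel(preceding, target_gap_m, k_gap=0.3, k_speed=0.8):
    """Simple proportional controller: command deceleration when closer
    than the secured inter-vehicle distance, acceleration when farther."""
    if preceding is None:
        return 0.0
    gap_error = preceding.distance_m - target_gap_m
    return k_gap * gap_error + k_speed * preceding.relative_speed_mps
```

In the actual system, the resulting command would go through the drive system control unit 12010; here it is simply returned as a number.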
  • based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, extract them, and use them for automatic avoidance of obstacles.
  • the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see.
  • the microcomputer 12051 determines a collision risk indicating the risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can perform driving assistance for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062 and performing forced deceleration or avoidance steering via the drive system control unit 12010.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the captured images of the imaging units 12101 to 12104. Such pedestrian recognition is carried out, for example, by a procedure for extracting feature points in the captured images of the imaging units 12101 to 12104 as infrared cameras, and a procedure for performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
  • when the microcomputer 12051 determines that a pedestrian is present in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. The audio image output unit 12052 may also control the display unit 12062 so as to display an icon or the like indicating the pedestrian at a desired position.
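The contour-line superimposition described above amounts to drawing a one-pixel-wide rectangle over the recognized region. The sketch below assumes a plain grayscale array representation of the displayed image; the function name and arguments are illustrative, not from the disclosure.

```python
# Illustrative sketch: superimpose a rectangular contour line on a
# grayscale image to emphasize a recognized pedestrian region.
import numpy as np

def draw_pedestrian_box(image, top, left, bottom, right, value=255):
    """Return a copy of `image` with a one-pixel rectangular contour
    drawn at the given (inclusive) row/column bounds."""
    out = image.copy()
    out[top, left:right + 1] = value      # top edge
    out[bottom, left:right + 1] = value   # bottom edge
    out[top:bottom + 1, left] = value     # left edge
    out[top:bottom + 1, right] = value    # right edge
    return out
```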
  • the technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above.
  • flicker correction can be performed with sufficiently high accuracy and with a sufficiently small amount of calculation.
  • the present technology can take a cloud computing configuration in which one function is shared by a plurality of devices via a network and is jointly processed.
  • each step described in the above flowchart can be executed by one device or can be shared by a plurality of devices.
  • the plurality of processes included in the one step can be executed by being shared by a plurality of apparatuses in addition to being executed by one apparatus.
  • the present technology can be configured as follows.
  • a brightness information calculation unit for calculating brightness information of the captured image;
  • a brightness reference calculation unit that calculates a brightness reference value indicating a reference brightness based on the brightness information at a plurality of times;
  • a signal processing apparatus comprising: a correction parameter calculation unit that calculates a correction parameter for correcting the brightness of the captured image based on the brightness information and the brightness reference value.
  • the correction parameter is for performing flicker correction by correcting the brightness of the captured image at each time.
  • the brightness reference calculation unit calculates the brightness reference value by smoothing the brightness information by a filter process based on the brightness information.
  • the signal processing apparatus calculates an average value of luminance values of pixels of the captured image as the brightness information.
  • the correction parameter is information related to a gain of the captured image.
  • a correction value calculation unit that calculates a predicted value of the brightness information and calculates, based on the predicted value and the brightness reference value, a correction value indicating a degree of correction of the brightness of the captured image;
  • the correction parameter calculation unit calculates the correction parameter for performing gain adjustment based on the correction value.
  • the signal processing apparatus calculates the predicted value based on a change amount of the brightness information.
  • the signal processing apparatus calculates a ratio between the brightness reference value and the predicted value as the correction value.
  • the correction value calculation unit calculates the correction value based on the predicted value of the brightness information at a processing target time and the brightness reference value calculated from the brightness information up to a time before the processing target time.
  • the signal processing apparatus according to any one of (6) to (8), further including an analog gain application unit that performs gain adjustment on the analog captured image at the processing target time with a gain determined by the correction parameter.
  • a correction value calculation unit that calculates a correction value; the signal processing apparatus according to (5), wherein the correction parameter calculation unit calculates the correction parameter for performing gain adjustment based on the correction value.
  • the correction value calculation unit calculates a ratio between the brightness reference value and the brightness information as the correction value.
  • the signal processing apparatus according to any one of (1) to (4), wherein the correction parameter is information related to a shutter speed of the captured image.
  • the correction parameter is information relating to a threshold value for binarizing the captured image.
  • the correction parameter is information relating to white balance of the captured image.
  • the correction parameter calculation unit calculates the correction parameter for each region of the captured image.
  • a signal processing method including a step of calculating a correction parameter for correcting the brightness of the captured image based on the brightness information and the brightness reference value.
  • An imaging unit that captures a captured image; A brightness information calculation unit for calculating brightness information of the captured image; A brightness reference calculation unit that calculates a brightness reference value indicating a reference brightness based on the brightness information at a plurality of times; An imaging apparatus comprising: a correction parameter calculation unit that calculates a correction parameter for correcting the brightness of the captured image based on the brightness information and the brightness reference value.
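The processing flow enumerated above (brightness information per frame, a smoothed brightness reference, a predicted brightness, and a gain-type correction parameter) can be sketched as follows. This is a minimal sketch, assuming a simple exponential low-pass filter for the reference and a linear extrapolation for the prediction; the claims specify only filtering, change-amount-based prediction, and a reference-to-predicted ratio, so the exact filter choice and coefficients here are illustrative.

```python
# Minimal sketch of the claimed flicker-correction flow:
# brightness information -> smoothed brightness reference ->
# predicted brightness -> gain-type correction parameter.
import numpy as np

class FlickerCorrector:
    def __init__(self, alpha=0.05):
        self.alpha = alpha           # smoothing strength (assumed low-pass filter)
        self.reference = None        # brightness reference value
        self.prev_brightness = None  # brightness information at the previous time

    def brightness_info(self, frame):
        # Brightness information: average luminance of the pixels of the frame.
        return float(np.mean(frame))

    def correction_gain(self, frame):
        b = self.brightness_info(frame)
        if self.reference is None:
            self.reference, self.prev_brightness = b, b
            return 1.0
        # Predicted value from the change amount of the brightness information.
        predicted = b + (b - self.prev_brightness)
        # Correction value: ratio between the reference and the predicted value.
        gain = self.reference / max(predicted, 1e-6)
        # Brightness reference: brightness smoothed over a plurality of times.
        self.reference = (1 - self.alpha) * self.reference + self.alpha * b
        self.prev_brightness = b
        return gain
```

In the described apparatus, a gain computed this way could be applied, for example, by the analog gain application unit at the processing target time; here it is simply returned to the caller.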
  • 11 signal processing device, 21 pixel unit, 22 analog gain application unit, 23 AD conversion unit, 24 digital signal processing unit, 25 flicker correction unit, 26 system controller, 31 digital gain application unit, 111 brightness information calculation unit, 113 brightness reference calculation unit, 114 difference calculation unit, 115 flicker correction value calculation unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present technology relates to a signal processing device and method, and an imaging device, which make it possible to perform flicker correction with sufficiently high accuracy and with fewer calculations. The signal processing device comprises: a brightness information calculation unit that calculates brightness information of a captured image; a brightness reference calculation unit that calculates a brightness reference value expressing a reference brightness based on the brightness information from a plurality of times; and a correction parameter calculation unit that calculates a correction parameter for correcting the brightness of the captured image based on the brightness information and the brightness reference value. The present technology can be applied to a multilayer image sensor.
PCT/JP2017/023146 2016-07-08 2017-06-23 Dispositif et procédé de traitement de signaux, et dispositif d'imagerie WO2018008426A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-136211 2016-07-08
JP2016136211A JP2018007210A (ja) 2016-07-08 2016-07-08 信号処理装置および方法、並びに撮像装置

Publications (1)

Publication Number Publication Date
WO2018008426A1 true WO2018008426A1 (fr) 2018-01-11

Family

ID=60901653

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/023146 WO2018008426A1 (fr) 2016-07-08 2017-06-23 Dispositif et procédé de traitement de signaux, et dispositif d'imagerie

Country Status (2)

Country Link
JP (1) JP2018007210A (fr)
WO (1) WO2018008426A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112516590A (zh) * 2019-09-19 2021-03-19 华为技术有限公司 一种帧率识别方法及电子设备

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102111670B1 (ko) * 2018-08-08 2020-05-15 (주)로닉스 고휘도 광원 작동에 대한 빠른 응답을 위한 전자식 셔터속도 사전설정 기능을 구비한 카메라 장치 및 제어 방법
WO2020044763A1 (fr) 2018-08-31 2020-03-05 富士フイルム株式会社 Élément d'imagerie, dispositif d'imagerie, procédé et programme de traitement de données d'image
CN112771847A (zh) 2018-09-27 2021-05-07 富士胶片株式会社 成像元件、摄像装置、图像数据处理方法及程序
CN110798626B (zh) * 2019-12-02 2020-07-28 重庆紫光华山智安科技有限公司 一种自动曝光调节方法、系统及设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0721367A (ja) * 1993-06-22 1995-01-24 Oyo Keisoku Kenkyusho:Kk 画像認識装置
JPH1175109A (ja) * 1997-06-27 1999-03-16 Matsushita Electric Ind Co Ltd 固体撮像装置
JPH11164192A (ja) * 1997-11-27 1999-06-18 Toshiba Corp 撮像方法及び装置
JP2004048616A (ja) * 2002-07-16 2004-02-12 Renesas Technology Corp センサモジュール
JP2014060517A (ja) * 2012-09-14 2014-04-03 Canon Inc 固体撮像装置及び固体撮像装置の駆動方法

Also Published As

Publication number Publication date
JP2018007210A (ja) 2018-01-11

Similar Documents

Publication Publication Date Title
TWI814804B (zh) 距離測量處理設備,距離測量模組,距離測量處理方法及程式
WO2018008426A1 (fr) Dispositif et procédé de traitement de signaux, et dispositif d'imagerie
WO2018042887A1 (fr) Dispositif de mesure de distance et procédé de commande pour dispositif de mesure de distance
US11082626B2 (en) Image processing device, imaging device, and image processing method
JP2020136958A (ja) イベント信号検出センサ及び制御方法
WO2021085128A1 (fr) Dispositif de mesure de distance, procédé de mesure, et système de mesure de distance
WO2017175492A1 (fr) Dispositif de traitement d'image, procédé de traitement d'image, programme informatique et dispositif électronique
US10771711B2 (en) Imaging apparatus and imaging method for control of exposure amounts of images to calculate a characteristic amount of a subject
WO2018207666A1 (fr) Élément d'imagerie, procédé de commande associé et dispositif électronique
WO2021060118A1 (fr) Dispositif d'imagerie
WO2021079789A1 (fr) Dispositif de télémétrie
JP2018098613A (ja) 撮像装置、および、撮像装置の制御方法
JP7030607B2 (ja) 測距処理装置、測距モジュール、測距処理方法、およびプログラム
WO2021177045A1 (fr) Dispositif de traitement de signal, procédé de traitement de signal et module de télémétrie
WO2021246194A1 (fr) Dispositif de traitement de signal, procédé de traitement de signal, et capteur de détection
WO2020209079A1 (fr) Capteur de mesure de distance, procédé de traitement de signal et module de mesure de distance
WO2021065494A1 (fr) Capteur de mesure de distances, procédé de traitement de signaux et module de mesure de distances
WO2021010174A1 (fr) Dispositif de réception de lumière et procédé de commande de dispositif de réception de lumière
WO2020153272A1 (fr) Dispositif de mesure, dispositif de télémétrie et procédé de mesure
JP2020153909A (ja) 受光装置および測距装置
US20210217146A1 (en) Image processing apparatus and image processing method
JP2021056143A (ja) 測距センサ、信号処理方法、および、測距モジュール
JP2021056141A (ja) 測距センサ、信号処理方法、および、測距モジュール
WO2022004441A1 (fr) Dispositif de télémétrie et procédé de télémétrie
WO2020203331A1 (fr) Dispositif de traitement de signal, procédé de traitement de signal, et dispositif de télémétrie

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17824033

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17824033

Country of ref document: EP

Kind code of ref document: A1