WO2022149431A1 - Image sensor, imaging device, and signal processing method - Google Patents

Image sensor, imaging device, and signal processing method Download PDF

Info

Publication number
WO2022149431A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
signal
likelihood
global
image sensor
Prior art date
Application number
PCT/JP2021/046511
Other languages
French (fr)
Japanese (ja)
Inventor
Daiki Yamazaki
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2022149431A1 publication Critical patent/WO2022149431A1/en

Classifications

    • G: PHYSICS
      • G03: PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
        • G03B: APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
          • G03B 15/00: Special procedures for taking photographs; Apparatus therefor
          • G03B 7/00: Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
            • G03B 7/08: Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
              • G03B 7/091: Digital circuits
    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N 23/60: Control of cameras or camera modules
            • H04N 23/70: Circuitry for compensating brightness variation in the scene
          • H04N 25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
            • H04N 25/50: Control of the SSIS exposure
              • H04N 25/57: Control of the dynamic range

Definitions

  • This technology relates to an image sensor, an imaging device, and a signal processing method for performing HDR (High Dynamic Range) synthesis according to the characteristics of the subject.
  • A technique is known for obtaining an HDR image signal by performing HDR composition using a plurality of images captured with different exposure times.
  • In some such techniques, the blend ratio of the plurality of image signals with different exposure times is determined according to the characteristics of the subject. For example, in Patent Document 1 below, the blend ratio is determined so that pixels in which a moving subject is imaged use the pixel signal with the short exposure time, a signal that is robust against motion.
  • However, subjects detected as moving subjects can include luminous bodies that blink periodically. If the blend ratio is determined so as to use the short-exposure pixel signal for such a subject, there is a problem that the luminous body cannot be imaged normally.
  • This technology was made in view of the above circumstances, and its purpose is to determine an appropriate blend ratio according to the characteristics of the subject.
  • The image sensor according to the present technology includes: a pixel array unit in which first pixels, read out with a first light receiving sensitivity, and second pixels, read out with a second light receiving sensitivity lower than the first, are arranged two-dimensionally; a global feature detection unit that detects, as a global feature, a feature of a pixel region containing a plurality of the first pixels and a plurality of the second pixels, based on a first signal output from the first pixels, which are set to a first exposure time, and a second signal output from the second pixels, which are set to a second exposure time different from the first exposure time; and a blend ratio determination unit that determines the blend ratio of the first signal and the second signal based on the detection result of the global feature detection unit.
  • The difference in light receiving sensitivity between the first pixel and the second pixel may be due to a difference in the light receiving area within the pixel, a difference in the length of the exposure time, or a difference in the characteristics of the light receiving elements.
  • Here, the global feature is a feature of the subject imaged in a pixel region that has been set.
  • For example, the global feature may be a feature regarding the change in luminance with the passage of time.
  • Subjects whose luminance changes with the passage of time include, for example, luminous bodies such as LEDs (Light Emitting Diodes) that blink with a period of several msec to several tens of msec, and objects that reflect the light emitted by such luminous bodies.
  • This makes subjects with a large change in luminance more likely to be detected.
  • The global feature detection unit in the image sensor described above may perform the detection by calculating, as a global blinker likelihood, a likelihood representing the degree to which the imaged subject is a periodic blinker.
  • For example, when a periodic blinker is imaged in most of the target pixel region, the global blinker likelihood is calculated to be high.
  • The global feature detection unit in the image sensor described above may perform the detection by executing: a process of calculating, for each pixel region, the temporal change in the luminance average value of the first pixels as a first pixel change amount; a process of calculating, for each pixel region, the temporal change in the luminance average value of the second pixels as a second pixel change amount; and a process of calculating the difference between the first pixel change amount and the second pixel change amount as a change amount difference value.
  • Because the first pixel change amount and the second pixel change amount are temporal changes in a luminance average taken over each pixel region, they carry less data than values calculated for every pixel, and the change amount difference value derived from them is correspondingly compact.
  • The global blinker likelihood may be calculated to be larger as the change amount difference value is larger and as the second pixel change amount is smaller. For example, when the exposure time of the first pixel with high light receiving sensitivity is shorter than that of the second pixel with low light receiving sensitivity, the entire exposure of the first pixel may fall within the non-emitting phase of a periodic blinker. Even in such a case, part of the exposure of the second pixel may overlap the emitting phase, and the change amount difference value then becomes large.
  • In other words, the second pixel, with its long exposure time, captures the periodic blinker in its emitting state. When the exposure time is longer than the non-emission time of the periodic blinker, the emitting state is reliably captured, and the second pixel change amount becomes small.
  • the image sensor described above may include a local feature detection unit that detects features for each pixel using the global features detected by the global feature detection unit. As a result, the characteristics of the subject are detected for each pixel.
  • The local feature detection unit in the image sensor described above may calculate, as a local blinker likelihood, the blinker likelihood for each pixel, based on the pixel-to-pixel difference value, calculated as the difference between the luminance values of the first pixel and the second pixel, and on the global blinker likelihood.
  • As a result, the pixel-to-pixel difference value and the global blinker likelihood can be used to estimate the degree to which the subject is a periodic blinker.
  • The local blinker likelihood in the image sensor described above may be calculated to be larger for pixels with a larger pixel-to-pixel difference value and a larger global blinker likelihood.
  • Subjects with a large pixel-to-pixel difference value include moving subjects and periodic blinkers. Among these, a subject with a large pixel-to-pixel difference value and a high global blinker likelihood is likely to be a periodic blinker rather than a moving subject.
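The per-pixel decision described above can be sketched as follows. This is a minimal illustrative sketch, not the formula from the publication: the function name, the product form, and the 0-to-255 difference scale are all assumptions.

```python
def local_blinker_likelihood(pixel_diff, global_likelihood, diff_scale=255.0):
    """Hypothetical per-pixel blinker likelihood sketch.

    pixel_diff: luminance difference between the short-exposure first
    pixel and the long-exposure second pixel of one pixel unit.
    global_likelihood: region-level global blinker likelihood in [0, 1].
    Both factors increase the result, as the text describes; the
    product form is an assumption.
    """
    d = min(max(pixel_diff / diff_scale, 0.0), 1.0)  # normalize and clamp
    return d * global_likelihood
```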
  • The blend ratio determination unit in the image sensor may determine the blend ratio so that the higher the global blinker likelihood, the higher the ratio of the second signal. For example, when the second pixel has a longer exposure time than the first pixel, the second pixel is highly likely to have captured the periodic blinker in its emitting state.
  • Likewise, the blend ratio determination unit may determine the blend ratio for each pixel so that the higher the local blinker likelihood, the higher the ratio of the second signal; the same reasoning about the long-exposure second pixel applies.
  • The local feature detection unit in the image sensor described above may be capable of executing a process of calculating a local moving subject likelihood, and the local moving subject likelihood may be calculated to be larger for pixels with a larger pixel-to-pixel difference value and a smaller global blinker likelihood.
  • Subjects with a large pixel-to-pixel difference value include moving subjects and periodic blinkers. Among these, a subject with a large pixel-to-pixel difference value and a low global blinker likelihood is highly likely to be a moving subject.
  • The global feature detection unit in the image sensor described above may execute a process of calculating, for each pixel region, the temporal change in the luminance of the first pixels as a first pixel change amount, a process of calculating, for each pixel region, the temporal change in the luminance of the second pixels as a second pixel change amount, and a process of calculating the difference between the two as a change amount difference value.
  • A global moving subject likelihood, which indicates the degree to which the imaged subject is a moving subject, may then be calculated to be larger as the change amount difference value is smaller and as the second pixel change amount is larger. For example, even though the light receiving sensitivities differ, when the subject is a moving subject the change amounts of the first signal and the second signal are both large, so their difference can be expected to be small.
  • The image sensor described above may include a local feature detection unit that executes a process of calculating the local moving subject likelihood, which is the moving subject likelihood for each pixel. The local moving subject likelihood may be calculated to be larger for pixels with a larger pixel-to-pixel difference value, calculated as the difference in luminance between the first pixel and the second pixel, and a larger global moving subject likelihood.
  • Subjects with a large pixel-to-pixel difference value include moving subjects and periodic blinkers. Among these, a subject with a large pixel-to-pixel difference value and a high global moving subject likelihood is likely to be a moving subject rather than a periodic blinker.
  • The blend ratio determination unit in the image sensor may determine the blend ratio so that the ratio of the first signal increases as the global moving subject likelihood increases.
  • The first signal is obtained with a shorter exposure time than the second signal. Therefore, image data generated from the first signal exhibits less motion blur of the moving subject than image data generated from the second signal.
  • Likewise, the blend ratio determination unit may determine the blend ratio for each pixel so that the higher the local moving subject likelihood, the higher the ratio of the first signal; the same reasoning about the shorter exposure of the first signal applies.
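As a rough illustration of how a blend ratio determination unit might trade these two tendencies off, the sketch below raises the weight of the long-exposure second signal S2 with the blinker likelihood and the weight of the short-exposure first signal S1 with the moving subject likelihood. The linear push-pull form and the function name are assumptions, not the patented method.

```python
def blend_weights(local_blinker, local_mover, base_short_weight=0.5):
    """Hypothetical per-pixel blend ratio sketch.

    local_blinker / local_mover: likelihoods in [0, 1]. A higher
    blinker likelihood shifts weight toward the long-exposure second
    signal S2; a higher moving-subject likelihood shifts weight toward
    the short-exposure first signal S1, as described in the text.
    Returns (weight for S1, weight for S2).
    """
    w_s1 = base_short_weight + 0.5 * local_mover - 0.5 * local_blinker
    w_s1 = min(max(w_s1, 0.0), 1.0)  # clamp to a valid blend ratio
    return w_s1, 1.0 - w_s1
```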
  • The first pixel and the second pixel may be configured to have different light receiving sensitivities due to different light receiving areas. As a result, the global feature of each pixel region can be detected using first and second pixels with different light receiving sensitivities.
  • the exposure time of the first pixel may be shorter than the exposure time of the second pixel. This makes it possible to detect, for example, the characteristic of a periodic blinking body that blinks periodically as a global characteristic.
  • The pixel array unit in the image sensor described above may be capable of outputting a third signal read out with a third light receiving sensitivity different from both the first light receiving sensitivity and the second light receiving sensitivity, and the global feature detection unit may detect a global feature of a pixel region including a plurality of the pixels from which the third signal is output, based on the third signal and either the first signal or the second signal.
  • In this case, the blend ratio determination unit determines the blend ratio for blending the three pixel signals.
  • The imaging device according to the present technology includes an image sensor that performs photoelectric conversion and an optical member that focuses the reflected light from the subject onto the image sensor. The image sensor includes a global feature detection unit that detects, as a global feature, a feature of a pixel region containing a plurality of the first pixels, which are read out with the first light receiving sensitivity, and a plurality of the second pixels, and a blend ratio determination unit that determines the blend ratio of the first signal and the second signal based on the detection result of the global feature detection unit.
  • The signal processing method according to the present technology includes: a process of detecting, as a global feature, a feature of a pixel region containing a plurality of the first pixels and a plurality of the second pixels, based on a first signal output from a first pixel read out with a first light receiving sensitivity and set to a first exposure time, and a second signal output from a second pixel read out with a second light receiving sensitivity lower than the first and set to a second exposure time different from the first; and a process of determining the blend ratio of the first signal and the second signal based on the detection result for the global feature.
  • The various effects described above can also be obtained with such an imaging device and signal processing method.
  • FIG. 1 shows the configuration of the image sensor 1 according to the embodiment of the present technology.
  • the image sensor 1 includes a pixel array unit 2, a first read circuit 3, a second read circuit 4, and a signal processing unit 5.
  • the pixel array unit 2 is composed of a plurality of pixel units 6 arranged in the row direction and the column direction.
  • the pixel unit 6 is configured to include a plurality of pixels G.
  • the pixel G has a photoelectric conversion element, and obtains an electric signal according to the amount of received light.
  • In the present embodiment, a case where one pixel unit 6 includes two pixels G will be described.
  • However, the pixel unit 6 may be configured to include three or more pixels G.
  • FIG. 2 shows a configuration example of the pixel unit 6.
  • the pixel unit 6 includes a first pixel G1 and a second pixel G2 having different light receiving sensitivities.
  • the light receiving area of the first pixel G1 is larger than that of the second pixel G2. Further, the exposure time of the first pixel G1 is shorter than the exposure time of the second pixel G2. As an example, the exposure time of the first pixel G1 is 8 msec, and the exposure time of the second pixel G2 is 11 msec.
  • the light receiving sensitivity of the first pixel G1 is higher than the light receiving sensitivity of the second pixel G2. Therefore, the first pixel G1 is more likely to be saturated than the second pixel G2.
  • The first pixel G1 and the second pixel G2 receive reflected light from the subject (the imaged subject), perform photoelectric conversion, and output the result to the subsequent stage as a pixel signal.
  • the pixel signal output from the first pixel G1 is referred to as the first signal S1
  • the pixel signal output from the second pixel G2 is referred to as the second signal S2.
  • Here, the difference in light receiving sensitivity between the first pixel G1 and the second pixel G2 is described with an example based on the light receiving area and the exposure time, but other cases are also conceivable.
  • That is, the difference in light receiving sensitivity between the first pixel G1 and the second pixel G2 may be based only on the light receiving area, only on the length of the exposure time, or only on a difference in the characteristics of the light receiving elements (PD: photodiode). Of course, it may also be based on a combination of these factors.
  • The first read circuit 3 has a configuration for reading the first signal S1 from the first pixel G1 provided in each pixel unit 6 of the pixel array unit 2, and also includes signal processing circuits for AD (Analog to Digital) conversion, noise reduction, and the like.
  • the first signal S1 is supplied to the signal processing unit 5 via the first read circuit 3.
  • the second read circuit 4 has a configuration for reading the second signal S2 from the second pixel G2 provided in each pixel unit 6, and also includes a signal processing circuit for AD conversion and noise reduction.
  • the second signal S2 is supplied to the signal processing unit 5 via the second read circuit 4.
  • The signal processing unit 5 performs a process of detecting global features and a process of detecting local features using the input first signal S1 and second signal S2. Further, the signal processing unit 5 determines the blend ratio of the first signal S1 and the second signal S2 using the detected global or local features, and performs HDR (High Dynamic Range) synthesis by blending the first signal S1 and the second signal S2 based on that blend ratio. As a result, an HDR image signal is output from the signal processing unit 5.
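The final blending step can be illustrated per pixel as follows. Exposure-time normalization before blending is an assumption (the publication does not specify it); the 8 msec / 11 msec defaults are the example exposure times given for the embodiment.

```python
def hdr_blend_pixel(s1, s2, alpha, exp1_ms=8.0, exp2_ms=11.0):
    """Sketch of a per-pixel HDR blend.

    s1 / s2: first (short-exposure) and second (long-exposure) pixel
    signal values. alpha: blend ratio in [0, 1] applied to the first
    signal S1. Normalizing each signal by its exposure time puts the
    two exposures on a common brightness scale (an assumption).
    """
    s1n = s1 / exp1_ms
    s2n = s2 / exp2_ms
    return alpha * s1n + (1.0 - alpha) * s2n
```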
  • the global feature is a feature of a subject detected for each pixel area GA set as a rectangular area including, for example, a plurality of pixel units 6. Specifically, when the subject imaged in a certain pixel region GA is a light emitting body that blinks periodically, the feature as the subject that blinks periodically is detected.
  • For example, the signal processing unit 5 calculates, as a likelihood, the degree to which the subject is likely to be a luminous body that blinks periodically, and treats this as a global feature.
  • In the following, a subject that blinks periodically is referred to as a “periodic blinker”.
  • The periodic blinker includes not only subjects that emit light themselves, such as LEDs (Light Emitting Diodes), but also subjects that reflect the light of, for example, an LED that repeatedly turns on and off with a period of several msec.
  • Specifically, the signal processing unit 5 calculates how strongly the subject in the pixel region GA resembles a periodic blinker as the “global blinker likelihood LFL1”. That is, the global blinker likelihood LFL1 is an index value indicating the degree to which the subject in the pixel region GA is a periodic blinker. If the subject imaged in the pixel region GA is an LED, the global blinker likelihood LFL1 becomes high. Further, the more pixel units 6 in the pixel region GA that image LEDs, the higher the global blinker likelihood LFL1.
  • Similarly, the signal processing unit 5 calculates how strongly the subject in each pixel region GA has the characteristics of a moving subject as the “global moving subject likelihood LFM1”. That is, the global moving subject likelihood LFM1 is an index value indicating the degree to which the subject is a moving subject. If the subject imaged in the pixel region GA is a moving subject, the global moving subject likelihood LFM1 becomes high. Further, the more pixel units 6 in the pixel region GA that image the moving subject, the higher the global moving subject likelihood LFM1.
  • the signal processing unit 5 treats the plurality of pixel units 6 as the pixel region GA in order to detect the global feature.
  • the pixel units 6 arranged two-dimensionally as shown in FIG. 3 are divided into several rectangular areas as shown in FIG. 4, and each rectangular area is treated as a pixel area GA.
  • As shown in FIG. 5, when the array is divided into three in each of the vertical and horizontal directions, nine pixel regions GA are set.
  • the signal processing unit 5 detects global features for each of the nine pixel region GAs.
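The division of the pixel array into pixel regions GA can be sketched as follows, assuming for illustration equal-size rectangular tiles and a luminance frame represented as a list of rows (the 3 x 3 = 9 regions follow FIG. 5).

```python
def region_means(frame, n_rows=3, n_cols=3):
    """Divide a luminance frame into n_rows x n_cols pixel regions GA
    and return the mean luminance of each region. Equal-size tiling is
    an illustrative assumption."""
    h, w = len(frame), len(frame[0])
    means = []
    for i in range(n_rows):
        row = []
        for j in range(n_cols):
            # Gather the luminance values belonging to region (i, j).
            vals = [frame[y][x]
                    for y in range(i * h // n_rows, (i + 1) * h // n_rows)
                    for x in range(j * w // n_cols, (j + 1) * w // n_cols)]
            row.append(sum(vals) / len(vals))
        means.append(row)
    return means
```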
  • the local feature detected by the signal processing unit 5 is a feature of the subject detected for each pixel unit 6. That is, the feature of the subject detected for the first pixel G1 of the pixel unit 6 is treated as the feature of the subject in the pixel unit 6 to which the first pixel G1 belongs. The same applies to the second pixel G2.
  • the signal processing unit 5 treats the feature as a periodic blinker and the feature as a moving subject detected for the pixel unit 6 as local features. Further, the signal processing unit 5 calculates the local blinking body likelihood LFL2 and the local moving subject likelihood LFM2 as index values indicating the degree of the local feature. Local features are detected using global features and the like.
  • FIG. 6 shows a configuration example of the signal processing unit 5.
  • the signal processing unit 5 includes an inter-pixel difference detection unit 7, a global feature detection unit 8, a local feature detection unit 9, a blend rate determination unit 10, and a blend unit 11.
  • the pixel-to-pixel difference detection unit 7 detects the difference in luminance between the first pixel G1 and the second pixel G2 for each pixel unit 6, and calculates the difference in luminance as the “pixel-to-pixel difference value GD”.
  • the pixel-to-pixel difference value GD can be effectively used in the subsequent processing because the exposure times of the first pixel G1 and the second pixel G2 are different.
  • The pixel-to-pixel difference value GD is used in the local feature detection unit 9 to determine whether the subject has the characteristics of a periodic blinker or of a moving subject. Details will be described later.
  • The amount of data can be compressed by expressing the pixel-to-pixel difference value GD with a small number of bits, for example as 256 steps from 0 to 255 (8 bits).
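A minimal sketch of that compression, assuming a 10-bit input range (the original bit depth is not stated in this text):

```python
def quantize_diff(l1, l2, max_diff=1023):
    """Compress the pixel-to-pixel difference value GD into 256 steps
    (0 to 255), as described above. The 10-bit input range
    (max_diff = 1023) is an assumption."""
    gd = abs(l1 - l2)                  # raw luminance difference
    return min(gd * 255 // max_diff, 255)
```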
  • The global feature detection unit 8 detects the feature of the subject imaged in each pixel region GA as the global feature. Specifically, the global feature detection unit 8 calculates the global blinker likelihood LFL1 and the global moving subject likelihood LFM1 for each pixel region GA as global features. Depending on the pixel region GA, only the global blinker likelihood LFL1 or only the global moving subject likelihood LFM1 may be calculated. In the present embodiment, the global blinker likelihood LFL1 and the global moving subject likelihood LFM1 are calculated as values in the range of 0.0 to 1.0; the higher the value, the more strongly the subject imaged in the target pixel region GA exhibits the corresponding characteristic.
  • The global feature detection unit 8 includes, for processing related to the first signal S1, a first luminance calculation unit 21A, a region-by-region luminance average value first calculation unit 22A, a first memory 23A, and a region-by-region luminance change first calculation unit 24A.
  • Similarly, for processing related to the second signal S2, the global feature detection unit 8 includes a second luminance calculation unit 21B, a region-by-region luminance average value second calculation unit 22B, a second memory 23B, and a region-by-region luminance change second calculation unit 24B.
  • The global feature detection unit 8 further includes a region-by-region feature amount calculation unit 25 that calculates a feature amount for each pixel region GA using the signals output from the region-by-region luminance change first calculation unit 24A and the region-by-region luminance change second calculation unit 24B.
  • the first luminance calculation unit 21A calculates the luminance value for each first pixel G1 based on the first signal S1.
  • each pixel unit 6 in the pixel array unit 2 has a Bayer array color filter.
  • Specifically, the pixel array unit 2 has pixel units 6R as R pixels having an R (red) color filter, pixel units 6G as G pixels having a G (green) color filter, and pixel units 6B as B pixels having a B (blue) color filter.
  • the pixel unit 6R has a first pixel G1R and a second pixel G2R.
  • the pixel unit 6G has a first pixel G1G and a second pixel G2G
  • the pixel unit 6B has a first pixel G1B and a second pixel G2B.
  • For the pixel array unit 2 arranged as shown in FIG. 8, the first luminance calculation unit 21A treats four pixel units 6, two vertically by two horizontally, as one pixel set GS, and calculates the luminance value L1S for the first pixels G1 for each pixel set GS.
  • the luminance value L1S for the pixel set GS is calculated using the following equation (1).
  • The calculated luminance value L1S is a luminance value for the first pixels G1 in one pixel set GS, and may be regarded as serving as the luminance value L1R of the first pixel G1R, the luminance value L1G of the first pixel G1G, and the luminance value L1B of the first pixel G1B.
  • the first luminance calculation unit 21A calculates the luminance values L1R, L1G, L1B for each pixel unit 6 or the luminance values L1S for each pixel set GS.
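Equation (1) itself is not reproduced in this text. As a hypothetical stand-in only, the sketch below combines the four first-pixel values of one 2 x 2 Bayer pixel set GS with a plain average; the actual equation (1) may weight the colors differently.

```python
def pixel_set_luminance(r, g1, g2, b):
    """Hypothetical stand-in for equation (1): combine the R, two G,
    and B first-pixel values of one 2x2 Bayer pixel set GS into a
    single luminance value L1S. A plain average is used purely for
    illustration; the publication's actual weighting is not shown."""
    return (r + g1 + g2 + b) / 4.0
```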
  • The region-by-region luminance average value first calculation unit 22A calculates the luminance average value L1A of the first pixels G1 for each pixel region GA. For example, when the pixel region GA includes n pixel sets GS, the luminance values L1S calculated for each pixel set GS are summed and divided by n to obtain the luminance average value L1A.
  • the calculated luminance average value L1A is output to the area-by-region luminance change first calculation unit 24A and stored in the first memory 23A.
  • the first memory 23A is provided to store the brightness average value L1A one frame before.
  • The region-by-region luminance change first calculation unit 24A acquires the luminance average value L1A of one frame before from the first memory 23A, and acquires the luminance average value L1A for the current frame from the region-by-region luminance average value first calculation unit 22A.
  • The region-by-region luminance change first calculation unit 24A then calculates the difference between the luminance average value L1A of one frame before and the luminance average value L1A of the current frame.
  • In this way, the luminance change of the first pixels G1 is calculated as the first pixel change amount G1V for each pixel region GA.
  • the calculated first pixel change amount G1V is output to the feature amount calculation unit 25 for each area.
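The pairing of the first memory 23A with the region-by-region luminance change first calculation unit 24A can be sketched as a small stateful unit. The class structure and the use of an absolute difference are illustrative assumptions.

```python
class RegionChangeCalculator:
    """Sketch of the region-by-region luminance change calculation with
    a one-frame memory, as described for units 24A and 23A."""

    def __init__(self):
        self._prev = None  # plays the role of the first memory 23A

    def update(self, lum_avg):
        """lum_avg: current-frame luminance average L1A for one pixel
        region GA. Returns the change amount G1V, i.e. the difference
        from the previous frame (0.0 for the very first frame)."""
        change = 0.0 if self._prev is None else abs(lum_avg - self._prev)
        self._prev = lum_avg  # store for the next frame
        return change
```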
  • The second luminance calculation unit 21B, the region-by-region luminance average value second calculation unit 22B, the second memory 23B, and the region-by-region luminance change second calculation unit 24B are processing units for the second signal S2, and perform the same processing as the first luminance calculation unit 21A, the region-by-region luminance average value first calculation unit 22A, the first memory 23A, and the region-by-region luminance change first calculation unit 24A, respectively.
  • the second luminance calculation unit 21B calculates the luminance value L2S for the second pixel G2 for each pixel set GS.
  • the luminance value L2S for the pixel set GS is calculated using the following equation (2).
  • the second calculation unit 22B of the brightness average value for each region calculates the brightness average value L2A of the second pixel G2 for each pixel region GA.
  • the calculated luminance average value L2A is output to the region-by-region luminance change second calculation unit 24B and stored in the second memory 23B.
  • The region-by-region luminance change second calculation unit 24B acquires the luminance average value L2A of one frame before from the second memory 23B, and acquires the luminance average value L2A for the current frame from the region-by-region luminance average value second calculation unit 22B.
  • The region-by-region luminance change second calculation unit 24B calculates the difference between the luminance average value L2A of one frame before and the luminance average value L2A of the current frame.
  • In this way, the second pixel change amount G2V of the second pixels G2 is calculated for each pixel region GA.
  • the calculated second pixel change amount G2V is output to the feature amount calculation unit 25 for each area.
  • the area-by-region feature amount calculation unit 25 uses the first pixel change amount G1V for each pixel area GA for the first pixel G1 and the second pixel change amount G2V for each pixel area GA for the second pixel G2. Calculate the feature amount for each.
  • the region-by-region feature amount calculation unit 25 calculates the absolute value of the difference between the first pixel change amount G1V and the second pixel change amount G2V for the pixel region GA as the change amount difference value VD (≥ 0). Then, the region-by-region feature amount calculation unit 25 calculates the feature amount for each pixel region GA based on the change amount difference value VD and the second pixel change amount G2V.
  • the feature amount is calculated so that the larger the change amount difference value VD and the smaller the second pixel change amount G2V, the higher the global blinker likelihood LFL1.
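The text specifies only the monotonic behavior of the likelihoods (LFL1 grows with a larger VD and a smaller G2V; LFM1 grows with a smaller VD and a larger G2V; both lie in 0.0 to 1.0), not the exact mapping. A hypothetical linear mapping consistent with that behavior, with `scale` as an assumed full-scale change amount, might be:

```python
import numpy as np

def global_likelihoods(g1v, g2v, scale=1.0):
    # Change amount difference value VD = |G1V - G2V|.
    vd = np.abs(g1v - g2v)
    # Normalize both inputs to [0, 1] by an assumed full-scale value.
    vd_n = np.clip(vd / scale, 0.0, 1.0)
    g2v_n = np.clip(np.abs(g2v) / scale, 0.0, 1.0)
    # Blinker likelihood: high when VD is large and G2V is small.
    lfl1 = np.clip(vd_n * (1.0 - g2v_n), 0.0, 1.0)
    # Moving-subject likelihood: high when VD is small and G2V is large.
    lfm1 = np.clip((1.0 - vd_n) * g2v_n, 0.0, 1.0)
    return lfl1, lfm1
```

With this sketch, a region showing a large short-exposure change but no long-exposure change scores high on LFL1, while a region changing in both exposures scores high on LFM1, matching the qualitative description of regions AR1 and AR2.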
  • the first pixel change amount G1V can be rephrased as a change in the first signal S1 output from the first pixel G1 having a short exposure time. Since the exposure time of the first pixel G1 is short, when the subject is a periodic blinker, there are cases in which the LED in the non-light-emitting state is imaged at the timing of one frame before and the LED in the light-emitting state is imaged at the timing of the current frame. In such a case, the first pixel change amount G1V for the first pixel G1 becomes large.
  • the second pixel change amount G2V can be rephrased as a change in the second signal S2 output from the second pixel G2 having a long exposure time. Since the exposure time of the second pixel G2 is long, when the subject is a periodic blinker, there is a high possibility that the LED in the light-emitting state is imaged both at the timing of one frame before and at the timing of the current frame. Therefore, the second pixel change amount G2V becomes small. In particular, when the exposure time of the second pixel G2 is longer than the non-light-emitting time of the periodic blinker, there is no frame in which only the non-light-emitting time of the periodic blinker is imaged, so the second pixel change amount G2V remains small.
  • as a result, the change amount difference value VD becomes large. Therefore, it can be estimated that a periodic blinker is highly likely to be imaged in a pixel region GA in which the change amount difference value VD is large and the second pixel change amount G2V is small.
  • the global blinker likelihood LFL1 calculated by the region-by-region feature amount calculation unit 25 thus expresses a degree of certainty rather than a binary determination.
  • the feature amount calculation unit 25 for each area calculates the global moving subject likelihood LFM1 as the feature amount for each pixel area GA.
  • the feature amount is calculated so that the smaller the change amount difference value VD and the larger the second pixel change amount G2V, the higher the global moving subject likelihood LFM1.
  • although the first pixel change amount G1V and the second pixel change amount G2V differ in exposure time, both show a large change in luminance value when the subject is a moving subject. That is, since both the first pixel change amount G1V and the second pixel change amount G2V are large, the change amount difference value VD is small.
  • the global moving subject likelihood LFM1 calculated by the region-by-region feature amount calculation unit 25 likewise expresses a degree of certainty rather than a binary determination.
  • that is, the region-by-region feature amount calculation unit 25 calculates the global moving subject likelihood LFM1 so that it becomes higher as the change amount difference value VD becomes smaller and the second pixel change amount G2V becomes larger.
  • the range of values that the global blinker likelihood LFL1 and the global moving subject likelihood LFM1 calculated by the region-by-region feature amount calculation unit 25 can take is 0.0 to 1.0. This will be described specifically with reference to FIG. 9.
  • the graph shown in FIG. 9 shows the global characteristics of each pixel region GA with the change amount difference value VD as the horizontal axis and the second pixel change amount G2V as the vertical axis.
  • the pixel region GA in which the periodic blinker is imaged is distributed in the region AR1 near the horizontal axis of the graph. Further, the pixel region GA in which the moving subject is photographed is distributed in the region AR2 near the vertical axis of the graph.
  • the global blinker likelihood LFL1 is calculated to be higher for a pixel region GA having a larger change amount difference value VD and a smaller second pixel change amount G2V. That is, as shown in FIG. 9, the global blinker likelihood LFL1 of the point P1 located in the region AR1 is 1.0, and the global blinker likelihood LFL1 of the point P2 located in the region AR1 is 0.1.
  • the pixel region GA having a smaller change amount difference value VD and the pixel region GA having a larger second pixel change amount G2V have a higher global moving subject likelihood LFM1. That is, the global moving subject likelihood LFM1 of the point P3 located in the region AR2 is 1.0, and the global moving subject likelihood LFM1 of the point P4 located in the region AR2 is 0.1.
  • note that, instead of calculating the global blinker likelihood LFL1 to be higher for a pixel region GA having a larger change amount difference value VD and a smaller second pixel change amount G2V, the global blinker likelihood LFL1 may simply be set to 1.0 for a pixel region GA located near the center of the region AR1.
  • similarly, instead of calculating the global moving subject likelihood LFM1 to be higher for a pixel region GA having a smaller change amount difference value VD and a larger second pixel change amount G2V, the global moving subject likelihood LFM1 may simply be set to 1.0 for a pixel region GA located near the center of the region AR2.
  • the region-specific feature amount calculation unit 25 outputs the global blinker likelihood LFL1 and the global moving subject likelihood LFM1 for each pixel region GA.
  • the local feature detection unit 9 performs a process of detecting the feature of the subject for each pixel unit 6 as a local feature.
  • the global feature detected by the global feature detection unit 8 is a feature of the subject detected for the pixel region GA as a whole, and may not hold for a specific pixel unit 6 within the pixel region GA. In such a case, if the blend rate is determined based only on the global feature, inappropriate blending may be performed for that specific pixel unit 6. In view of such circumstances, the local feature detection unit 9 detects the local feature for each pixel G.
  • the local feature detection unit 9 calculates the local blinker likelihood LFL2 using the global blinker likelihood LFL1 output from the region-by-region feature amount calculation unit 25 and the pixel-to-pixel difference value GD output from the pixel-to-pixel difference detection unit 7. Similarly, it calculates the local moving subject likelihood LFM2 using the global moving subject likelihood LFM1 output from the region-by-region feature amount calculation unit 25 and the pixel-to-pixel difference value GD output from the pixel-to-pixel difference detection unit 7.
  • the graph shown in FIG. 10 shows data for each pixel unit 6 with the global blinker likelihood LFL1 as the horizontal axis and the pixel-to-pixel difference value GD as the vertical axis.
  • the difference value GD between pixels on the vertical axis is normalized so as to take a value of 0.0 to 1.0.
  • in order to prevent erroneous determination due to noise or the like, the local feature detection unit 9 sets a region Ar3 in which the global blinker likelihood LFL1 is equal to or greater than a predetermined threshold Th1 and the pixel-to-pixel difference value GD is equal to or greater than a predetermined threshold Th2.
  • the local blinker likelihood LFL2 takes any value from 0.0 to 1.0.
  • for a pixel unit 6 plotted at the point P5, at which the global blinker likelihood LFL1 equals the threshold Th1 and the pixel-to-pixel difference value GD equals the threshold Th2, the local blinker likelihood LFL2 is calculated as 0.0.
  • for a pixel unit 6 plotted at the point P6, at which the global blinker likelihood LFL1 and the pixel-to-pixel difference value GD are both 1.0, it is highly likely that the captured subject is a periodic blinker, so the local blinker likelihood LFL2 is calculated as 1.0.
  • the local blinker likelihood LFL2 is set so as to approach 1.0 as the plotted point moves from the point P5 toward the point P6.
  • the local blinker likelihood LFL2 for the pixel unit 6 in which data is plotted in a region other than the region Ar3 may have no value, may be a negative value, or may be 0.0.
  • the graph shown in FIG. 11 shows data for each pixel with the global moving subject likelihood LFM1 as the horizontal axis and the pixel-to-pixel difference value GD as the vertical axis.
  • the difference value GD between pixels on the vertical axis is normalized so as to take a value of 0.0 to 1.0.
  • similarly, in order to prevent erroneous determination due to noise or the like, the local feature detection unit 9 sets a region Ar4 in which the global moving subject likelihood LFM1 is equal to or greater than a predetermined threshold Th3 and the pixel-to-pixel difference value GD is equal to or greater than a predetermined threshold Th4.
  • the local moving subject likelihood LFM2 takes any value from 0.0 to 1.0.
  • for a pixel unit 6 plotted at the point P7, at which the global moving subject likelihood LFM1 equals the threshold Th3 and the pixel-to-pixel difference value GD equals the threshold Th4, the local moving subject likelihood LFM2 is calculated as 0.0.
  • for a pixel unit 6 plotted at the point P8, at which the global moving subject likelihood LFM1 and the pixel-to-pixel difference value GD are both 1.0, it is highly likely that the captured subject is a moving subject.
  • the local moving subject likelihood LFM2 is calculated as 1.0.
  • the local moving subject likelihood LFM2 is set so as to approach 1.0 as the plotted point moves from the point P7 toward the point P8.
  • the local moving subject likelihood LFM2 for the pixel unit 6 in which data is plotted in a region other than the region Ar4 may have no value, may be a negative value, or may be 0.0.
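The text pins the local likelihoods down only at the threshold corner (0.0) and at the point where both inputs are 1.0 (1.0), with a monotonic increase in between and with 0.0 (or no value) outside the gated region. One hypothetical interpolation satisfying those constraints, usable both for LFL2 (with Th1, Th2) and for LFM2 (with Th3, Th4), is:

```python
def local_likelihood(global_lh, pixel_diff, th_g, th_d):
    # Outside the gated region (Ar3 or Ar4): return 0.0, one of the
    # options the text permits for out-of-region pixel units.
    if global_lh < th_g or pixel_diff < th_d:
        return 0.0
    # Rescale each axis to [0, 1] above its threshold, then combine;
    # this gives 0.0 at the threshold corner (P5/P7) and 1.0 when both
    # inputs reach 1.0 (P6/P8).
    a = (global_lh - th_g) / (1.0 - th_g) if th_g < 1.0 else 1.0
    b = (pixel_diff - th_d) / (1.0 - th_d) if th_d < 1.0 else 1.0
    return min(a, b)
```

For example, `local_likelihood(lfl1, gd, th1, th2)` would play the role of LFL2 and `local_likelihood(lfm1, gd, th3, th4)` the role of LFM2; the `min` combination is an assumption, since any monotonic ramp between the two anchor points fits the description.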
  • the local feature detection unit 9 performs a process of dropping the global feature of the subject detected in each pixel region GA into the local feature of the pixel unit 6 (or each pixel G).
  • the local moving subject likelihood LFM2 may be calculated by using the global blinking body likelihood LFL1 without using the global moving subject likelihood LFM1.
  • as a subject having a large pixel-to-pixel difference value GD, a periodic blinker or a moving subject can be considered.
  • when the global blinker likelihood LFL1 is large, it is highly likely that the subject imaged in the pixel region GA is a periodic blinker.
  • conversely, in a pixel region GA in which the pixel-to-pixel difference value GD is large and the global blinker likelihood LFL1 is low, there is a high possibility that the captured subject is a moving subject. Therefore, it is possible to estimate that the subject captured by the pixel unit 6 is a moving subject without using the global moving subject likelihood LFM1.
  • the blend rate determination unit 10 determines the blend rate of the first signal S1 and the second signal S2. In particular, in the present embodiment, the blend rate determination unit 10 determines the blend rate according to the characteristics of the subject for each pixel unit 6.
  • the blend rate determination unit 10 includes a temporary blend rate calculation unit 12 that calculates a temporary blend rate ABF, and a blend rate correction unit 13 that calculates the blend rate BF by correcting the temporary blend rate ABF.
  • the temporary blend rate calculation unit 12 calculates, as the temporary blend rate ABF, a blend rate that does not yet take the characteristics of the subject into consideration.
  • the temporary blend rate ABF and the blend rate BF represent the blend ratio of the second signal S2, and take a value from 0.0 to 1.0. That is, a temporary blend rate ABF or blend rate BF of 0.0 means that the second signal S2 is not blended at all, and a value of 1.0 means that the blend ratio of the second signal S2 is 100%.
  • the first signal S1 output from the first pixel G1 having high light receiving sensitivity is input to the temporary blend ratio calculation unit 12.
  • the temporary blend rate calculation unit 12 determines the temporary blend rate ABF according to the degree of saturation of the first signal S1. For example, as shown in FIG. 12, the temporary blend ratio ABF is set to 0.0 until the amount of incident light on the pixel G exceeds a predetermined threshold value Th5. Further, when the amount of incident light on the pixel G is a predetermined threshold value Th5 or more and less than the threshold value Th6, the provisional blend ratio ABF is 0.0 or more and less than 1.0. Then, when the amount of incident light on the pixel G exceeds a predetermined threshold value Th6, the temporary blend ratio ABF is fixed at 1.0. That is, when the amount of incident light on the pixel G exceeds the threshold value Th6, it is determined that the first pixel G1 is saturated, and the blend ratio of the first signal S1 becomes 0%.
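The saturation-based behavior described above (0.0 below Th5, 1.0 at and above Th6) can be sketched as a piecewise function. The linear ramp between the two thresholds is an assumption, since the text only bounds the value in that interval:

```python
def provisional_blend_rate(light, th5, th6):
    # Temporary blend rate ABF of the second signal S2 as a function of
    # the amount of incident light on the pixel G:
    #   below Th5          -> 0.0 (use only the first signal S1)
    #   Th5 up to Th6      -> ramp from 0.0 toward 1.0 (assumed linear)
    #   Th6 and above      -> 1.0 (first pixel G1 treated as saturated)
    if light < th5:
        return 0.0
    if light >= th6:
        return 1.0
    return (light - th5) / (th6 - th5)
```

This mirrors FIG. 12: once the incident light reaches Th6, the first pixel G1 is judged saturated and the blend ratio of the first signal S1 drops to 0%.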
  • the blend rate correction unit 13 corrects the temporary blend rate ABF calculated by the temporary blend rate calculation unit 12, using the local blinker likelihood LFL2 and the local moving subject likelihood LFM2 calculated by the local feature detection unit 9.
  • specifically, when the local blinker likelihood LFL2 is high, the temporary blend rate ABF is corrected so that the blend rate BF becomes high, that is, so that the blend ratio of the second signal S2 becomes high.
  • the second signal S2 is a signal obtained by relatively lengthening the exposure time, and there is a high possibility that the light emitting state of a periodic blinker such as an LED is imaged.
  • conversely, when the local moving subject likelihood LFM2 is high, the temporary blend rate ABF is corrected so that the blend rate BF becomes low, that is, so that the ratio of the second signal S2 becomes low. This is because the first signal S1 is a signal obtained with a relatively short exposure time, so blurring of a moving subject is suppressed.
  • note that the first signal S1 may be excluded entirely by setting the blend rate BF to 1.0. Conversely, the second signal S2 may be excluded entirely by setting the blend rate BF to 0.0.
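The correction rule itself is not given in closed form. The following is one hypothetical rule consistent with the described behavior, in which a high local blinker likelihood LFL2 pushes the blend rate toward 1.0 and a high local moving subject likelihood LFM2 pushes it toward 0.0:

```python
def correct_blend_rate(abf, lfl2, lfm2):
    # A hypothetical blend rate correction (not the patented formula):
    # push the temporary blend rate ABF toward 1.0 in proportion to the
    # local blinker likelihood LFL2 (favor the long-exposure S2),
    # then toward 0.0 in proportion to the local moving subject
    # likelihood LFM2 (favor the short-exposure S1).
    bf = abf + (1.0 - abf) * lfl2
    bf = bf * (1.0 - lfm2)
    return min(max(bf, 0.0), 1.0)
```

At the extremes this reproduces the exclusion cases noted above: LFL2 = 1.0 yields BF = 1.0 (no S1), and LFM2 = 1.0 yields BF = 0.0 (no S2).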
  • the blend unit 11 performs a blend process based on the blend rate BF determined by the blend rate correction unit 13. For example, the blending unit 11 generates an HDR image signal by performing an alpha blending process based on the blending ratio BF.
  • the alpha blend can be expressed by the following equation (3).
  • HDR image signal = (1 − α) × S1 + α × S2 × Gain … Equation (3)
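Equation (3) translates directly into code; here α is the blend rate BF, and Gain is presumed to be a factor compensating the sensitivity difference between the two pixels (an assumption, as the text does not define it):

```python
def alpha_blend(s1, s2, bf, gain):
    # Equation (3): HDR = (1 - alpha) * S1 + alpha * S2 * Gain,
    # with alpha taken as the blend rate BF of the second signal S2.
    return (1.0 - bf) * s1 + bf * s2 * gain
```

With BF = 0.0 the output is the first signal S1 alone; with BF = 1.0 it is the gain-scaled second signal S2 alone, matching the 0%/100% endpoints described for the blend rate.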
  • note that the correction may be made so that the blend rate BF becomes 0.0, that is, so that the second signal S2 is not blended, regardless of the value of the local moving subject likelihood LFM2.
  • FIG. 13 shows a configuration example of the image pickup apparatus 100. In addition, in FIG. 13, only the main part included in the image pickup apparatus 100 is shown.
  • the image pickup apparatus 100 includes an optical lens system 101, an image sensor 1 including the above-mentioned signal processing unit 5, an image pickup signal processing unit 102, a control unit 103, a driver unit 104, a storage unit 105, a communication unit 106, and a display unit 107.
  • the optical lens system 101 includes various lenses such as a zoom lens and a focus lens, an aperture mechanism, and the like. Light that has passed through the optical lens system 101 is incident on the pixel array unit 2 included in the image sensor 1. A part of the optical lens system 101 may be provided in the image sensor 1 like a microlens provided for each pixel G.
  • the imaging signal processing unit 102 receives the HDR-synthesized HDR image signal from the image sensor 1, and performs the signal processing required to generate image data for recording, the signal processing required to display an image on the display unit 107, and the like.
  • the control unit 103 controls the image sensor 1 and the image pickup signal processing unit 102. Further, by sending a control signal to the driver unit 104, the optical members such as various lenses and the aperture mechanism of the optical lens system 101 are controlled.
  • the driver unit 104 provides a drive signal to the optical member of the optical lens system 101 based on the control signal input from the control unit 103. This drives the optical member.
  • the storage unit 105 is composed of, for example, a non-volatile memory or the like.
  • the storage unit 105 stores programs and the like required for each process executed by the control unit 103. Further, the storage unit 105 stores image data such as still image data and moving image data generated by the image pickup signal processing unit 102.
  • the storage unit 105 may be configured as a flash memory built into the image pickup device 100, or may be configured as an access unit that writes to and reads from a memory card (for example, a portable flash memory) attachable to and detachable from the image pickup device 100. Further, it may be realized as an HDD (Hard Disk Drive) or the like built into the image pickup apparatus 100.
  • the communication unit 106 performs data communication and network communication with an external device by wire or wirelessly. For example, image data (still image data or moving image data) is transmitted to an external display device, recording device, playback device, or the like. Further, the communication unit 106 may perform communication via various networks such as the Internet, a home network, and a LAN (Local Area Network), and may transmit and receive various data to and from a server, a terminal, or the like on the network. Further, the communication unit 106 may function as an output unit that outputs image data to an external device.
  • the display unit 107 can display a reproduced image of the image data read from the recording medium as the storage unit 105. Further, the display unit 107 may be provided on the image pickup apparatus 100 as a rear monitor or a finder monitor provided on the back surface of the image pickup apparatus 100, and may be capable of displaying a so-called through image.
  • the display unit 107 may be in a form that can be attached to and detached from the image pickup apparatus 100.
  • the first signal S1 output from the first pixel G1 and the second signal S2 output from the second pixel G2 are appropriately blended according to the characteristics of the subject.
  • accordingly, the HDR image signal can be received from the image sensor 1. Therefore, by performing various kinds of display processing, image processing, and the like based on the HDR image signal, the imaging signal processing unit 102 can display HDR images and generate image files in which periodic blinkers such as LEDs are appropriately rendered and blurring of moving subjects is suppressed.
  • <Second application example> Next, an example in which the HDR image signal is provided from the above-mentioned image sensor 1 to a display device and to a recognition device that performs image recognition processing will be described. As an example of such an application, an in-vehicle system 200 is described here.
  • the in-vehicle system 200 includes an optical lens system 201, a driver unit 202, an image sensor 1 including the above-mentioned signal processing unit 5, a display device 300, and a recognition device 400.
  • since the optical lens system 201 has the same configuration as the optical lens system 101 described in the first application example, and the driver unit 202 has the same configuration as the driver unit 104 described in the first application example, descriptions thereof are omitted.
  • the display device 300 includes monitors and instruments provided at predetermined positions in the vehicle interior such as an instrument panel, and a circuit for controlling the display thereof.
  • the display device 300 includes a display signal processing unit 301, a display control unit 302, a storage unit 303, a communication unit 304, a display unit 305, and the like.
  • the display signal processing unit 301 performs signal processing for displaying an image on the display unit 305, and has the same configuration as the imaging signal processing unit 102 described in the first application example.
  • the display control unit 302 has the same configuration as the control unit 103 described in the first application example. Further, the storage unit 303 and the communication unit 304 have the same configuration as the storage unit 105 and the communication unit 106 described in the first application example.
  • the communication unit 304 is capable of communication using CAN (Controller Area Network).
  • the display unit 305 is a monitor device or the like provided at a predetermined position in the vehicle interior, and can display image data captured by the image sensor 1. This enables the driver and passengers to grasp the situation outside the vehicle via the monitoring device.
  • the recognition device 400 performs subject recognition processing by performing various processing on the image data of the outside of the vehicle captured by the image sensor 1.
  • the recognition result of the subject by the recognition device 400 is provided to the advanced driver-assistance system (ADAS).
  • the ADAS realizes a collision damage mitigation braking function, an ACC (Adaptive Cruise Control) function, and the like based on the information acquired from the recognition device 400.
  • the recognition device 400 includes a recognition signal processing unit 401, a recognition control unit 402, a storage unit 403, and a communication unit 404.
  • since the storage unit 403 and the communication unit 404 have the same configurations as the storage unit 105 and the communication unit 106 described in the first application example, detailed descriptions thereof are omitted.
  • Data about the captured image is input from the image sensor 1 to the recognition signal processing unit 401.
  • the HDR-synthesized HDR image signal may be input from the image sensor 1 to the recognition signal processing unit 401, or the first signal S1 and the second signal S2 before HDR synthesis may be input together with information on the local blinker likelihood LFL2 and the local moving subject likelihood LFM2 detected for each pixel G. Of course, all of this information may be input from the image sensor 1 to the recognition signal processing unit 401.
  • the recognition signal processing unit 401 performs image recognition processing for recognizing a preceding vehicle, a pedestrian, a traffic light, a sign, or the like based on the information on the local blinker likelihood LFL2 and the local moving subject likelihood LFM2 output from the image sensor 1. Specifically, based on the information on the local blinker likelihood LFL2 output from the image sensor 1, it is possible to recognize a traffic light, a tail lamp, or the like as a periodic blinker and thereby, for example, recognize a preceding vehicle. Further, based on the information on the local moving subject likelihood LFM2 output from the image sensor 1, it is possible to recognize a preceding vehicle or a pedestrian as a moving subject. These image recognition processes may also be realized by the recognition signal processing unit 401 and the recognition control unit 402 operating in cooperation.
  • the recognition signal processing unit 401 may perform a process of recognizing a moving subject as an image in consideration of the traveling speed of the own vehicle and the like. As a result, the recognition signal processing unit 401 can appropriately perform image recognition processing on the subject.
  • the recognition control unit 402 may recognize a subject such as a preceding vehicle or a pedestrian by executing an image recognition process using, for example, a DNN (Deep Neural Network).
  • the recognition result thus obtained by the recognition signal processing unit 401 and the recognition control unit 402 is output to the ADAS via the communication unit 404.
  • the image sensor 1 may include three types of pixels having different areas, or may be capable of outputting three types of pixel signals using two types of pixels having different areas as shown in FIG.
  • the first pixel G1 included in the image sensor 1 outputs the first signal S1, which is the pixel signal having the highest light receiving sensitivity, and the second pixel G2 outputs the second signal S2, which is a pixel signal having a light receiving sensitivity lower than that of the first pixel G1. Further, the third signal S3, which has the lowest light receiving sensitivity, is output by making the exposure time shorter than in the cases where the first signal S1 and the second signal S2 are output.
  • the first signal S1 is output by setting the exposure time of the first pixel G1 to 8 msec.
  • the second signal S2 is output by setting the exposure time of the second pixel G2 to 11 msec.
  • the third signal S3 is output by setting the exposure time of the first pixel G1 to 4 msec.
  • FIG. 15 is a configuration example of a signal processing unit 5A that detects features of a subject using three types of pixel signals output from the image sensor 1.
  • the same components as those in FIG. 6 are designated by the same reference numerals, and the description thereof will be omitted as appropriate.
  • in order to perform global feature detection and local feature detection using the first signal S1 and the second signal S2, the signal processing unit 5A includes the pixel-to-pixel difference detection unit 7, the global feature detection unit 8, the local feature detection unit 9, and the blend rate determination unit 10. Further, a first blend unit 11A is provided instead of the blend unit 11 shown in FIG. 6.
  • since each of these parts is a processing unit that performs the same processing as the corresponding part shown in FIG. 6, descriptions thereof are omitted.
  • the blend rate determination unit 10 includes a temporary blend rate calculation unit 12 and a blend rate correction unit 13.
  • the blend ratio determining unit 10 determines the ratio of the blend of the second signal S2 to the first signal S1 as the blend ratio BF1.
  • the first blending unit 11A outputs an HDR image signal by performing an alpha blending process of the first signal S1 and the second signal S2 according to the blending ratio BF1 determined by the blending ratio determining unit 10.
  • further, in order to perform global feature detection and local feature detection using the second signal S2 and the third signal S3, the signal processing unit 5A includes a pixel-to-pixel difference detection unit 7A, a global feature detection unit 8A, a local feature detection unit 9A, and a blend rate determination unit 10A.
  • the blend rate determination unit 10A includes a temporary blend rate calculation unit 12A and a blend rate correction unit 13A.
  • each of these parts is a processing unit that performs the same processing as the corresponding part shown in FIG. 6, except that the second signal S2 is used instead of the first signal S1 and the third signal S3 is used instead of the second signal S2.
  • the blend ratio determining unit 10A determines the blend ratio of the third signal S3 with respect to the HDR image signal output from the first blend unit 11A as the blend ratio BF2.
  • the second blending unit 11B performs an alpha blending process of the HDR image signal output from the first blending unit 11A and the third signal S3 according to the blending ratio BF2 determined by the blending ratio determining unit 10A.
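The two-stage topology of FIG. 15 (the first blend unit 11A followed by the second blend unit 11B) can be sketched as a pair of cascaded alpha blends. The gain factors here are assumed sensitivity-compensation parameters, not values given in the text:

```python
def two_stage_hdr(s1, s2, s3, bf1, bf2, gain12, gain23):
    # First blend unit 11A: blend the first signal S1 and the
    # second signal S2 according to blend rate BF1.
    stage1 = (1.0 - bf1) * s1 + bf1 * s2 * gain12
    # Second blend unit 11B: blend the stage-1 HDR image signal with
    # the third signal S3 according to blend rate BF2.
    return (1.0 - bf2) * stage1 + bf2 * s3 * gain23
```

Each blend rate (BF1, BF2) would come from its own blend rate determination unit (10 and 10A), driven by the corresponding pair of signals, so the same feature-detection pipeline is reused at both stages.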
  • the first memory 23A and the second memory 23B store the brightness average values L1A and L2A of two frames or more.
  • the brightness average values L1A and L2A one frame before and the brightness average values L1A and L2A two frames before are stored.
  • the region-by-region luminance change first calculation unit 24A and the region-by-region luminance change second calculation unit 24B perform processing for observing the transition of the luminance change.
  • the region-by-region luminance change first calculation unit 24A and the region-by-region luminance change second calculation unit 24B can estimate the blinking cycle of the periodic blinker.
  • the exposure time at which the light emitting state of the periodic blinking object can be reliably imaged can be set for the second pixel G2. This makes it possible to generate an HDR image signal in which an LED or the like is reliably captured.
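As a sketch of how the stored multi-frame averages could support the blinking-cycle estimation mentioned above (the actual estimation method is not detailed in the text), one could threshold the per-frame region averages into an on/off state and measure the spacing of its rising edges:

```python
def estimate_blink_period(avgs, threshold):
    # Rough blink-period estimate (in frames) from a history of
    # per-frame region luminance averages, such as the first and
    # second memories could hold over two or more frames.
    # Counts the spacing between successive rising edges of the
    # thresholded on/off state; returns None with fewer than two edges.
    on = [a >= threshold for a in avgs]
    rising = [i for i in range(1, len(on)) if on[i] and not on[i - 1]]
    if len(rising) < 2:
        return None
    gaps = [b - a for a, b in zip(rising, rising[1:])]
    return sum(gaps) / len(gaps)
```

An estimated period could then inform the choice of a second-pixel exposure time long enough to always straddle the light-emitting phase of the blinker.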
  • when the process of recognizing a preceding vehicle or a traffic light is executed by a processing unit in a subsequent stage, this makes it possible to improve the running safety of the vehicle, which is preferable.
  • in the above description, an example has been described in which the signal processing unit 5 includes both the global feature detection unit 8 and the local feature detection unit 9, and the blend rate is determined using the local blinker likelihood LFL2 and the local moving subject likelihood LFM2 output from the local feature detection unit 9.
  • next, an example in which the signal processing unit 5 includes only the global feature detection unit 8 and does not include the local feature detection unit 9 will be described.
  • the global feature detection unit 8 detects the global feature of the subject for each pixel region GA virtually set in the pixel array unit 2.
  • even in this configuration, the blend rate BF can be determined appropriately by using the characteristics of the subject for each pixel region GA detected by the global feature detection unit 8, so that a desired HDR image signal can be obtained.
  • the signal processing unit 5 may be configured without the inter-pixel difference detection unit 7 and the local feature detection unit 9 shown in FIG. That is, the blend rate correction unit 13 uses the global blinker likelihood LFL1 and the global moving subject likelihood LFM1 for each pixel region GA output from the global feature detection unit 8, and the blend ratio for each pixel region GA. Calculate BF.
  • the blending unit 11 outputs an HDR image signal by performing HDR composition using the blending ratio BF for each pixel region GA.
  • the configuration of the signal processing unit 5 can be simplified, and the processing load can be reduced.
  • depending on the imaging environment, erroneous determination may still occur frequently; a dark imaging environment is a specific example.
  • in such a case, the threshold values Th1, Th2, Th3, and Th4 may be increased. By increasing each threshold value, the region Ar3 shown in FIG. 10 and the region Ar4 shown in FIG. 11 become narrower, and the local blinker likelihood LFL2 and the local moving subject likelihood LFM2 for the subject are calculated to be lower.
  • the process of determining the blend ratio so that the ratio of the second signal S2 becomes high when the global blinking body likelihood LFL1 or the local blinking body likelihood LFL2 for the subject is high has been described.
  • the blending ratio may be basically determined so that the ratio of the first signal S1 is high.
  • the blending ratio may be determined so that the ratio of the second signal S2 is 0%.
  • the image sensor 1 in the embodiment includes: the pixel array unit 2 in which the first pixels G1, from which reading is performed with a first light receiving sensitivity, and the second pixels G2, from which reading is performed with a second light receiving sensitivity lower than the first light receiving sensitivity, are arranged two-dimensionally; the global feature detection unit 8 (8A) that detects, as a global feature, the feature of a pixel region GA including a plurality of the first pixels G1 and the second pixels G2, based on the first signal S1 output from the first pixel G1 set to a first exposure time and the second signal S2 output from the second pixel G2 set to a second exposure time different from the first exposure time; and the blend rate determination unit 10 (10A) that determines the blend ratio BF (BF1, BF2) of the first signal S1 and the second signal S2 based on the detection result of the global feature detection unit 8 (8A).
  • the difference in the light receiving sensitivity between the first pixel G1 and the second pixel G2 may be due to the difference in the light receiving area in the pixel, the difference in the length of the exposure time, or the difference in the light receiving element. It may be due to the difference in characteristics.
  • the global feature is a feature of the subject imaged in the pixel region GA as a whole.
  • thereby, the blend ratio BF of the first signal S1 and the second signal S2 can be changed for each pixel region GA, and appropriate HDR synthesis can be performed with a small processing load. Further, by using the signals of the first pixel G1 and the second pixel G2 (the first signal S1 and the second signal S2) having different exposure times for each pixel region GA, it is possible to detect a global feature indicating whether the subject is a periodic blinker or a moving subject. Further, many subjects are imaged in the pixel array unit 2 over a certain range; that is, few subjects are imaged with only one pixel G.
  • when a failure occurs in a certain pixel G, the pixel signal output from that pixel G may exhibit a feature of, for example, a periodic blinker. In such a case, it is conceivable that the characteristics of the subject with respect to that pixel G may be erroneously detected.
  • by detecting features for each pixel region GA, the feature of the subject detected for the failed pixel G is largely ignored, and an inappropriate blending process based on the erroneously detected feature is not executed. This makes it possible to execute an appropriate blending process even when a failure occurs in the pixel G.
  • the global feature may be a feature regarding the change in luminance with the passage of time.
  • examples of a subject having a characteristic of a change in brightness with the passage of time include a light emitter such as an LED (Light Emitting Diode) that blinks in a cycle of several msec to several tens of msec, and an object whose brightness is changing because it reflects light emitted from such a light emitter.
  • a subject with a large change in brightness is more likely to be detected.
  • even when the subject is a moving subject with large movement, a large change in luminance is likely to be detected.
  • the global feature detection unit 8 (8A) in the image sensor 1 may perform detection by calculating the likelihood representing the degree to which the image pickup subject is a periodic blinker as the global blinker likelihood LFL1.
  • thereby, when the subject is a periodic blinker such as an LED, the global blinker likelihood LFL1 is calculated to be high.
  • the global blinker likelihood LFL1 is increased when a periodic blinker is imaged in most of the target pixel region GA.
  • a moving subject can also be considered as a subject whose brightness changes, but by calculating the global blinker likelihood LFL1 as in this configuration, it is possible to distinguish whether the subject is a periodic blinker or a moving subject.
  • the blend ratio BF (BF1, BF2) can be changed according to whether the subject is a periodic blinker or a moving subject, and appropriate HDR composition according to the subject becomes possible.
  • the global feature detection unit 8 (8A) of the image sensor 1 may perform the detection by executing a process of calculating the temporal change amount of the luminance average value of the first pixels G1 for each pixel region GA as the first pixel change amount G1V, a process of calculating the temporal change amount of the luminance average value of the second pixels G2 for each pixel region GA as the second pixel change amount G2V, and a process of calculating the difference between the first pixel change amount G1V and the second pixel change amount G2V as the change amount difference value VD.
  • the amount of data for the information of the first pixel change amount G1V and the second pixel change amount G2V, which are the temporal changes in the luminance average values L1A and L2A for each pixel region GA, is smaller than if they were calculated for each pixel G. Similarly, the amount of data is also reduced for the change amount difference value VD calculated from the first pixel change amount G1V and the second pixel change amount G2V. By using such information, it is possible to detect global features with a small processing load.
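  • as a sketch, the three per-region quantities could be computed as follows (absolute differences are an assumption; the text only specifies "change amount" and "difference"):

```python
def region_mean(pixels):
    # luminance average (L1A or L2A) of one pixel type within a region GA
    return sum(pixels) / len(pixels)

def change_amounts(g1_prev, g1_cur, g2_prev, g2_cur):
    """Compute, for one pixel region GA and two consecutive frames, the
    first pixel change amount G1V, the second pixel change amount G2V,
    and the change amount difference value VD."""
    g1v = abs(region_mean(g1_cur) - region_mean(g1_prev))
    g2v = abs(region_mean(g2_cur) - region_mean(g2_prev))
    vd = abs(g1v - g2v)
    return g1v, g2v, vd
```

  • because only one mean per pixel type per region is carried between frames, the working data is far smaller than per-pixel statistics, matching the low-processing-load argument above.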
  • the global blinker likelihood LFL1 may be calculated so as to be larger as the change amount difference value VD is larger and larger as the second pixel change amount G2V is smaller. For example, when the exposure time of the first pixel G1 having a high light receiving sensitivity is shorter than the exposure time of the second pixel G2 having a low light receiving sensitivity, the entire exposure time of the first pixel G1 may fall within a non-light-emitting period of the periodic blinker. Even in such a case, a part of the exposure time of the second pixel G2 may overlap a light-emitting period of the periodic blinker. In such a case, the change amount difference value VD becomes large.
  • as for the second pixel change amount G2V, there is a high possibility that the second pixel G2, which has a long exposure time, captures the periodic blinker in a light emitting state.
  • in particular, when the exposure time is longer than the non-light-emission time of the periodic blinker, the light emission state of the periodic blinker is surely captured, and the second pixel change amount G2V becomes small. Therefore, when the change amount difference value VD is large and the second pixel change amount G2V is small, the global blinker likelihood LFL1 appropriately indicates that the subject is a periodic blinker, so that an appropriate blend ratio BF (BF1, BF2) can be set.
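  • one possible monotone mapping satisfying the two conditions above (larger with VD, smaller with G2V) is sketched below; the text constrains only the monotonicity, not the formula, so the ratio-plus-squash form is an assumption:

```python
def global_blinker_likelihood(vd, g2v, eps=1e-6):
    """Global blinker likelihood LFL1 for one pixel region GA: larger as
    the change amount difference value VD grows, larger as the second
    pixel change amount G2V shrinks. eps avoids division by zero."""
    score = vd / (g2v + eps)       # grows with VD, shrinks with G2V
    return score / (1.0 + score)   # squash into [0, 1)
```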
  • the image sensor 1 may include the local feature detection unit 9 (9A) that detects the feature of each pixel G by using the global feature detected by the global feature detection unit 8 (8A).
  • the characteristics of the subject are detected for each pixel G. Therefore, an appropriate blend ratio BF (BF1, BF2) is determined for each pixel G, and optimum HDR synthesis can be performed.
  • the local feature detection unit 9 (9A) of the image sensor 1 may calculate the blinker likelihood for each pixel G as the local blinker likelihood LFL2, based on the pixel-to-pixel difference value GD calculated as the difference between the luminance values of the first pixel G1 and the second pixel G2 and on the global blinker likelihood LFL1.
  • the pixel-to-pixel difference value GD and the global blinking object likelihood LFL1 can be used to estimate the degree to which the subject is a periodic blinking object. As a result, it is possible to detect that the subject is a periodic blinker, so that the blend ratio BF (BF1, BF2) can be appropriately set for each pixel G.
  • the local blinker likelihood LFL2 may be calculated so as to be larger for a pixel G having a larger pixel-to-pixel difference value GD and larger for a pixel G having a larger global blinker likelihood LFL1.
  • for example, examples of a subject having a large pixel-to-pixel difference value GD include a moving subject and a periodic blinker.
  • a subject having a large inter-pixel difference value GD and having a high global blinking body likelihood LFL1 is likely to be a periodic blinking object rather than a moving subject.
  • thereby, the blend ratio BF (BF1, BF2) based on the local blinker likelihood LFL2 can be appropriately set for each pixel G.
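  • the per-pixel combination described above can be sketched as follows. Multiplying a normalized GD by LFL1 is one simple monotone choice, and gd_scale is a hypothetical tuning constant; neither is specified in the text.

```python
def local_blinker_likelihood(gd, lfl1, gd_scale=64.0):
    """Local blinker likelihood LFL2 for one pixel G: larger for a larger
    pixel-to-pixel difference value GD and for a larger global blinker
    likelihood LFL1 of the region the pixel belongs to."""
    gd_norm = min(abs(gd) / gd_scale, 1.0)  # normalize GD into [0, 1]
    return gd_norm * lfl1
```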
  • the blend ratio determination unit 10 (10A) of the image sensor 1 may determine the blend ratio BF (BF1, BF2) so that the higher the global blinker likelihood LFL1, the higher the ratio of the second signal S2.
  • the second pixel G2 has a longer exposure time than the first pixel G1.
  • in this case, the second pixel G2 is highly likely to capture the periodic blinker in a light emitting state. Therefore, by increasing the ratio of the second signal S2 output from the second pixel G2, it is possible to increase the possibility of obtaining image data in which the periodic blinker is emitting light.
  • the blend ratio determination unit 10 (10A) of the image sensor 1 may determine the blend ratio BF (BF1, BF2) for each pixel G so that the higher the local blinker likelihood LFL2, the higher the ratio of the second signal S2.
  • the second pixel G2 has a longer exposure time than the first pixel G1.
  • the second pixel G2 is likely to be generated in a state of capturing a periodic blinker in a light emitting state.
  • thereby, it is possible to avoid determining the blend ratio as if a periodic blinker were imaged for a pixel G in which the periodic blinker is not actually imaged. Therefore, it is possible to perform appropriate HDR composition according to the characteristics of the subject for each pixel G.
  • the local feature detection unit 9 (9A) of the image sensor 1 can execute a process of calculating the local moving subject likelihood LFM2, and the local moving subject likelihood LFM2 may be calculated so as to be larger for a pixel G having a larger pixel-to-pixel difference value GD and larger for a pixel G having a smaller global blinker likelihood LFL1.
  • for example, examples of a subject having a large pixel-to-pixel difference value GD include a moving subject and a periodic blinker. Among these subjects, a subject having a large pixel-to-pixel difference value GD and a low global blinker likelihood LFL1 is highly likely to be a moving subject.
  • the blend ratio BF (BF1, BF2) can be appropriately set for each pixel G. Further, since it is not necessary to calculate an index value or the like indicating the degree to which the subject is a moving subject, it is possible to determine an appropriate blend ratio BF (BF1, BF2) while suppressing an increase in the processing load.
  • the global feature detection unit 8 (8A) of the image sensor 1 may execute a process of calculating the temporal change amount of the luminance of the first pixels G1 for each pixel region GA as the first pixel change amount G1V, a process of calculating the temporal change amount of the luminance of the second pixels G2 for each pixel region GA as the second pixel change amount G2V, and a process of calculating the difference between them as the change amount difference value VD, and the global moving subject likelihood LFM1, which indicates the degree to which the imaged subject is a moving subject, may be calculated so as to be larger as the change amount difference value VD is smaller and larger as the second pixel change amount G2V is larger.
  • for example, when the subject is a moving subject, the difference between the change amounts of the first signal S1 and the second signal S2 output from the first pixel G1 and the second pixel G2 can be small even if the light receiving sensitivities are different. Therefore, when the change amount difference value VD is small and the second pixel change amount G2V is large, the global moving subject likelihood LFM1 appropriately indicates that the subject is a moving subject, so that an appropriate blend ratio BF (BF1, BF2) can be set.
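  • the moving-subject case is the mirror image of the blinker case, which can be sketched with the same ratio-plus-squash form; as before, only the monotone behavior (larger as VD is smaller, larger as G2V is larger) comes from the text, the formula itself is an assumption:

```python
def global_moving_likelihood(vd, g2v, eps=1e-6):
    """Global moving subject likelihood LFM1 for one pixel region GA:
    larger as the change amount difference value VD is smaller and as
    the second pixel change amount G2V is larger."""
    score = g2v / (vd + eps)       # grows with G2V, shrinks with VD
    return score / (1.0 + score)   # squash into [0, 1)
```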
  • the image sensor 1 may include the local feature detection unit 9 (9A) that executes a process of calculating the local moving subject likelihood LFM2, which is the moving subject likelihood for each pixel G, and the local moving subject likelihood LFM2 may be calculated so as to be larger for a pixel G having a larger pixel-to-pixel difference value GD, calculated as the difference in luminance between the first pixel G1 and the second pixel G2, and larger for a pixel G having a larger global moving subject likelihood LFM1. For example, examples of a subject having a large pixel-to-pixel difference value GD include a moving subject and a periodic blinker.
  • a subject having a large inter-pixel difference value GD and having a high local moving subject likelihood LFM2 is likely to be a moving subject rather than a periodic blinker.
  • thereby, the blend ratio BF (BF1, BF2) can be appropriately set for each pixel G.
  • the blend ratio determination unit 10 (10A) of the image sensor 1 may determine the blend ratio BF (BF1, BF2) so that the higher the global moving subject likelihood LFM1, the higher the ratio of the first signal S1.
  • the first signal S1 is a signal obtained after a shorter exposure time than the second signal S2. Therefore, the image data generated based on the first signal S1 has less blurring of the moving subject than the image data generated based on the second signal S2. Therefore, by increasing the ratio of the first signal S1 and performing blending, it is possible to obtain image data in which blurring of a moving subject is suppressed.
  • the blend rate determination unit 10 (10A) of the image sensor 1 may determine the blend ratio BF (BF1, BF2) for each pixel G so that the higher the local moving subject likelihood LFM2, the higher the ratio of the first signal S1.
  • the first signal S1 is a signal obtained after a shorter exposure time than the second signal S2. Therefore, the image data generated based on the first signal S1 has less blurring of the moving subject than the image data generated based on the second signal S2.
  • thereby, while suppressing the blurring of the moving subject, it is possible to obtain image data in which the light emitting state of the periodic blinker is captured by increasing the ratio of the second signal S2 for the other pixels G in which the periodic blinker is imaged.
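  • the two opposing per-pixel rules (blinker likelihood pushes toward S2, moving subject likelihood pushes toward S1) can be sketched with a clamped linear mix; the linear form and the neutral base value are assumptions, only the directions of the adjustments come from the text:

```python
def blend_ratio(lfl2, lfm2, base_s2=0.5):
    """Per-pixel blend ratio BF, expressed here as the weight of the
    long-exposure second signal S2 (so 1 - BF is the weight of S1).
    A higher local blinker likelihood LFL2 raises the S2 weight (to
    capture the emitting phase); a higher local moving subject
    likelihood LFM2 lowers it (to limit motion blur)."""
    bf = base_s2 + 0.5 * lfl2 - 0.5 * lfm2
    return max(0.0, min(1.0, bf))  # clamp into [0, 1]
```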
  • the first pixel G1 and the second pixel G2 of the image sensor 1 may be configured to have different light receiving sensitivities due to different light receiving areas. Thereby, the global feature of each pixel region GA can be detected by using the first pixel G1 and the second pixel G2 having different light receiving sensitivities. Therefore, an appropriate blend ratio BF (BF1, BF2) based on the global characteristics can be determined for each pixel region GA.
  • the exposure time of the first pixel G1 of the image sensor 1 may be shorter than the exposure time of the second pixel G2.
  • the exposure time of the first pixel G1 is 8 msec
  • the exposure time of the second pixel G2 is 11 msec.
  • This makes it possible to detect, for example, the characteristic of a periodic blinking body that blinks periodically as a global characteristic. Therefore, it is possible to appropriately determine the blend ratio BF (BF1, BF2) based on the global characteristics and perform HDR synthesis.
  • the exposure time of the second pixel G2 is set based on the blinking cycle of the periodic blinking object.
  • since the target subject as a periodic blinker, such as a brake lamp of a preceding vehicle or a traffic light, is clarified, an appropriate exposure time can be set.
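  • the condition relating the second exposure time to the blinking cycle can be checked numerically as below. Only the 8 msec and 11 msec exposure times come from the text; the 10 msec blink period and 10 % duty cycle in the test are hypothetical illustration values.

```python
def always_catches_emission(exposure_ms, period_ms, duty):
    """Condition stated in the text: if the exposure time exceeds the
    non-emission time of a periodic blinker, every exposure window
    overlaps an emitting phase regardless of its start timing."""
    off_time_ms = period_ms * (1.0 - duty)  # longest dark stretch
    return exposure_ms > off_time_ms
```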
  • the pixel array unit 2 of the image sensor 1 may output a third signal S3 read with a third light receiving sensitivity that is different from both the first light receiving sensitivity and the second light receiving sensitivity.
  • the global feature detection unit 8 (8A) may detect global features for a pixel region GA including a plurality of pixels G from which the third signal S3 is output, based on the third signal S3 and either one of the first signal S1 and the second signal S2.
  • the blend rate determination unit 10 (10A) determines the blend rate for blending the three pixel signals. Therefore, the dynamic range in HDR synthesis can be increased.
  • a pixel array unit in which a first pixel that is read with a first light receiving sensitivity and a second pixel that is read with a second light receiving sensitivity lower than the first light receiving sensitivity are arranged two-dimensionally,
  • a global feature detection unit that detects, as a global feature, the feature of a pixel region including a plurality of the first pixels and the second pixels, based on a first signal output from the first pixel set to a first exposure time and a second signal output from the second pixel set to a second exposure time different from the first exposure time, and
  • an image sensor including a blend rate determining unit that determines the blend rate of the first signal and the second signal based on the detection result of the global feature detection unit.
  • the global feature detection unit executes a process of calculating the temporal change amount of the luminance average value of the first pixels for each pixel region as a first pixel change amount, a process of calculating the temporal change amount of the luminance average value of the second pixels for each pixel region as a second pixel change amount, and a process of calculating the difference between the first pixel change amount and the second pixel change amount as a change amount difference value.
  • the image sensor according to any one of (3) to (5) above, comprising a local feature detection unit that detects a feature for each pixel using the global feature detected by the global feature detection unit.
  • the image sensor according to (6) above, in which the local feature detection unit calculates the blinker likelihood for each pixel as the local blinker likelihood, based on the pixel-to-pixel difference value calculated as the difference between the luminance values of the first pixel and the second pixel and on the global blinker likelihood.
  • the global feature detection unit executes a process of calculating the temporal change amount of the luminance of the first pixels for each pixel region as the first pixel change amount, a process of calculating the temporal change amount of the luminance of the second pixels for each pixel region as the second pixel change amount, and a process of calculating the difference between the first pixel change amount and the second pixel change amount as the change amount difference value.
  • the global moving subject likelihood, which indicates the degree to which the imaged subject is a moving subject, is calculated so as to be larger as the change amount difference value is smaller and larger as the second pixel change amount is larger.
  • the image sensor according to any one of (11).
  • the image sensor according to (12) above, in which the local moving subject likelihood, which is the moving subject likelihood for each pixel, is calculated so as to be larger for a pixel having a larger pixel-to-pixel difference value calculated as the difference in luminance between the first pixel and the second pixel and larger for a pixel having a larger global moving subject likelihood.
  • the first pixel and the second pixel have different light receiving sensitivities due to different light receiving areas.
  • the exposure time of the first pixel is shorter than the exposure time of the second pixel.
  • the pixel array unit is capable of outputting a third signal read by a third light receiving sensitivity that is different from both the first light receiving sensitivity and the second light receiving sensitivity.
  • the global feature detection unit detects the global feature for a pixel region including a plurality of pixels from which the third signal is output, based on the third signal and the signal of either one of the first signal and the second signal.
  • the image sensor according to any one of the above (1) to (17). (19) An imaging device including an image sensor that performs photoelectric conversion and an optical member that condenses the reflected light from the subject onto the image sensor.
  • the image sensor includes a pixel array unit in which a first pixel that is read with a first light receiving sensitivity and a second pixel that is read with a second light receiving sensitivity lower than the first light receiving sensitivity are arranged two-dimensionally.
  • a global feature of a pixel region including a plurality of the first pixels and the second pixels is detected based on the first signal output from the first pixel set to a first exposure time and the second signal output from the second pixel set to a second exposure time different from the first exposure time.
  • a signal processing method comprising a process of determining a blend ratio of the first signal and the second signal based on a detection result of the global feature.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Exposure Control For Cameras (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

An image sensor comprising: a pixel array unit in which a first pixel for performing readout with a first light-reception sensitivity and a second pixel for performing readout with a second light-reception sensitivity lower than the first light-reception sensitivity are two-dimensionally arrayed; a comprehensive feature detecting unit which, on the basis of a first signal output from the first pixel set for a first exposure time and a second signal output from the second pixel set for a second exposure time different from the first exposure time, detects, as a comprehensive feature, a feature for a pixel region including a plurality of the first pixels and a plurality of the second pixels; and a blend ratio determining unit for determining a blend ratio of the first signal and the second signal on the basis of a detection result from the comprehensive feature detecting unit.

Description

Image sensor, imaging device, and signal processing method
The present technology relates to an image sensor, an imaging device, and a signal processing method for performing HDR (High Dynamic Range) composition according to the characteristics of a subject.
A technique is known in which an HDR image signal is obtained by performing HDR composition using a plurality of images having different exposure times. In such a technique, the blend ratio of a plurality of image signals having different exposure times is determined according to the characteristics of the subject.
For example, in Patent Document 1 below, the blend ratio is determined so that a short-exposure pixel signal, which is robust to motion, is used for pixels in which a moving subject is imaged.
Japanese Unexamined Patent Publication No. 2012-257193
However, subjects detected as moving subjects also include light emitters that repeatedly blink periodically. If the blend ratio for such a subject is determined so as to use a short-exposure pixel signal, there is a problem that the light emitter cannot be imaged normally.
The present technology has been made in view of the above circumstances, and an object thereof is to determine an appropriate blend ratio according to the characteristics of the subject.
An image sensor according to the present technology includes: a pixel array unit in which a first pixel that is read with a first light receiving sensitivity and a second pixel that is read with a second light receiving sensitivity lower than the first light receiving sensitivity are arranged two-dimensionally; a global feature detection unit that detects, as a global feature, the feature of a pixel region including a plurality of the first pixels and a plurality of the second pixels, based on a first signal output from the first pixel set to a first exposure time and a second signal output from the second pixel set to a second exposure time different from the first exposure time; and a blend rate determination unit that determines the blend rate of the first signal and the second signal based on the detection result of the global feature detection unit.
The difference in light receiving sensitivity between the first pixel and the second pixel may be due to a difference in light receiving area within the pixel, a difference in the length of the exposure time, or a difference in the characteristics of the light receiving element. The global feature is a feature of the subject imaged in a set of pixel regions.
In the image sensor described above, the global feature may be a feature regarding the change in luminance with the passage of time.
Examples of a subject having a characteristic of a change in luminance with the passage of time include a light emitter such as an LED (Light Emitting Diode) that blinks in a cycle of several msec to several tens of msec, and an object whose luminance is changing because it reflects light emitted from such a light emitter. In particular, a subject with a large change in luminance is more likely to be detected. Further, even when the subject is a moving subject with large movement, a large change in luminance is likely to be detected.
The global feature detection unit in the image sensor described above may perform the detection by calculating, as the global blinker likelihood, a likelihood representing the degree to which the imaged subject is a periodic blinker.
As a result, the global blinker likelihood is calculated to be high when the subject is a periodic blinker such as an LED. In particular, the global blinker likelihood is increased when a periodic blinker is imaged over most of the target pixel region.
The global feature detection unit in the image sensor described above may perform the detection by executing a process of calculating the temporal change amount of the luminance average value of the first pixels for each pixel region as a first pixel change amount, a process of calculating the temporal change amount of the luminance average value of the second pixels for each pixel region as a second pixel change amount, and a process of calculating the difference between the first pixel change amount and the second pixel change amount as a change amount difference value.
The information of the first pixel change amount and the second pixel change amount, which are the temporal changes in the luminance average value for each pixel region, requires less data than if it were calculated for each pixel. Similarly, the amount of data is also reduced for the change amount difference value calculated from the first pixel change amount and the second pixel change amount.
In the image sensor described above, the global blinker likelihood may be calculated so as to be larger as the change amount difference value is larger and larger as the second pixel change amount is smaller.
For example, when the exposure time of the first pixel having a high light receiving sensitivity is shorter than the exposure time of the second pixel having a low light receiving sensitivity, the entire exposure time of the first pixel may fall within a non-light-emitting period of the periodic blinker. Even in such a case, a part of the exposure time of the second pixel may overlap a light-emitting period of the periodic blinker. In such a case, the change amount difference value becomes large.
Further, as for the second pixel change amount, there is a high possibility that the second pixel, which has a long exposure time, captures the periodic blinker in a light emitting state. In particular, when the exposure time is longer than the non-light-emission time of the periodic blinker, the light emission state of the periodic blinker is surely captured, and the second pixel change amount becomes small.
The image sensor described above may include a local feature detection unit that detects a feature for each pixel using the global feature detected by the global feature detection unit.
As a result, the characteristics of the subject are detected for each pixel.
The local feature detection unit in the image sensor described above may calculate the blinker likelihood for each pixel as the local blinker likelihood, based on the pixel-to-pixel difference value calculated as the difference between the luminance values of the first pixel and the second pixel and on the global blinker likelihood.
The pixel-to-pixel difference value and the global blinker likelihood can be used to estimate the degree to which the subject is a periodic blinker.
The local blinker likelihood in the image sensor described above may be calculated so that it becomes larger for pixels with a larger inter-pixel difference value and larger for pixels with a larger global blinker likelihood.
For example, subjects with a large inter-pixel difference value include moving subjects and periodic blinkers. Among these, a subject that has a large inter-pixel difference value and a high global blinker likelihood is likely to be a periodic blinker rather than a moving subject.
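As an illustrative sketch (not part of the patent, which only states the monotonic relationship), the per-pixel likelihood could be formed as a simple product, assuming the inter-pixel difference value is an 8-bit quantity; the function name and the product form are hypothetical:

```python
def local_blinker_likelihood(pixel_diff, global_blinker_likelihood, diff_max=255.0):
    """Per-pixel (local) blinker likelihood, 0.0-1.0.

    Grows with the inter-pixel difference value and with the global
    blinker likelihood. The product is one possible choice that
    satisfies the monotonicity described in the text.
    """
    # Normalize the inter-pixel difference value (assumed 8-bit) to 0.0-1.0.
    d = min(max(pixel_diff / diff_max, 0.0), 1.0)
    return d * global_blinker_likelihood
```

With this form, the likelihood is high only when both factors are high, which matches the selection rule above: a large difference alone could also indicate a moving subject.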
The blend rate determination unit in the image sensor described above may determine the blend rate so that the ratio of the second signal becomes higher as the global blinker likelihood becomes higher.
Depending on the relationship between the exposure times of the first pixel and the second pixel, the second pixel has a longer exposure time than the first pixel. In that case, the second pixel is highly likely to have captured the periodic blinker in its light-emitting state.
The blend rate determination unit in the image sensor described above may determine the blend rate for each pixel so that the ratio of the second signal becomes higher as the local blinker likelihood becomes higher.
Depending on the relationship between the exposure times of the first pixel and the second pixel, the second pixel has a longer exposure time than the first pixel. In that case, the second signal is highly likely to have been generated while capturing the periodic blinker in its light-emitting state.
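A minimal sketch of this per-pixel rule (illustrative only; the interpolation from a provisional rate toward the second signal is an assumed form, not taken from the patent text):

```python
def blend_rate_s2(provisional_rate, local_blinker_likelihood):
    """Per-pixel blend rate of the second signal S2, 0.0-1.0.

    Starts from a provisional rate (e.g. one derived from the incident
    light level) and is raised toward 1.0 as the local blinker
    likelihood grows, so a likely periodic blinker is rendered mostly
    from the long-exposure second pixel.
    """
    return provisional_rate + (1.0 - provisional_rate) * local_blinker_likelihood
```

A likelihood of 0.0 leaves the provisional rate unchanged; a likelihood of 1.0 selects the second signal entirely.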
The local feature detection unit in the image sensor described above may be able to execute a process of calculating a local moving subject likelihood, and the local moving subject likelihood may be calculated so that it becomes larger for pixels with a larger inter-pixel difference value and larger for pixels with a smaller global blinker likelihood.
For example, subjects with a large inter-pixel difference value include moving subjects and periodic blinkers. Among these, a subject that has a large inter-pixel difference value and a low global blinker likelihood is likely to be a moving subject.
The global feature detection unit in the image sensor described above may execute a process of calculating, for each pixel region, the temporal change in the luminance of the first pixel as a first pixel change amount, a process of calculating, for each pixel region, the temporal change in the luminance of the second pixel as a second pixel change amount, and a process of calculating the difference between the first pixel change amount and the second pixel change amount as a change amount difference value; a global moving subject likelihood, which indicates the degree to which the imaged subject is a moving subject, may then be calculated so that it becomes larger as the change amount difference value becomes smaller and as the second pixel change amount becomes larger.
For example, even though the light receiving sensitivities differ, the change amounts of the first signal and the second signal output from the first pixel and the second pixel both become large when the subject is a moving subject, so their difference can be expected to be small.
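The combination of a small change amount difference value and a large second pixel change amount could be scored, for instance, as follows (an illustrative sketch; the scale parameters and the product form are assumptions, since the patent states only the monotonic behavior):

```python
def global_moving_subject_likelihood(change_diff, second_pixel_change,
                                     diff_scale=64.0, change_scale=64.0):
    """Global moving subject likelihood LFM1 in the range 0.0-1.0.

    Larger when the change amount difference value is small (both
    signals changed by a similar amount) and when the second pixel
    change amount itself is large (something in the region actually
    changed between frames).
    """
    small_diff_score = max(0.0, 1.0 - change_diff / diff_scale)
    large_change_score = min(1.0, second_pixel_change / change_scale)
    return small_diff_score * large_change_score
```

The second factor prevents a static scene (both change amounts near zero, difference also near zero) from scoring as a moving subject.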
The image sensor described above may include a local feature detection unit that executes a process of calculating a local moving subject likelihood, which is a moving subject likelihood for each pixel, and the local moving subject likelihood may be calculated so that it becomes larger for pixels with a larger inter-pixel difference value, calculated as the difference between the luminance of the first pixel and that of the second pixel, and larger for pixels with a larger global moving subject likelihood.
For example, subjects with a large inter-pixel difference value include moving subjects and periodic blinkers. Among these, a subject that has a large inter-pixel difference value and a high local moving subject likelihood is likely to be a moving subject rather than a periodic blinker.
The blend rate determination unit in the image sensor described above may determine the blend rate so that the ratio of the first signal becomes higher as the global moving subject likelihood becomes higher.
The first signal is obtained with a shorter exposure time than the second signal. Therefore, image data generated from the first signal exhibits less motion blur of a moving subject than image data generated from the second signal.
The blend rate determination unit in the image sensor described above may determine the blend rate for each pixel so that the ratio of the first signal becomes higher as the local moving subject likelihood becomes higher.
The first signal is obtained with a shorter exposure time than the second signal. Therefore, image data generated from the first signal exhibits less motion blur of a moving subject than image data generated from the second signal.
In the image sensor described above, the first pixel and the second pixel may be configured to have different light receiving sensitivities by having different light receiving areas.
This makes it possible to detect the global feature of each pixel region using first and second pixels with different light receiving sensitivities.
In the image sensor described above, the exposure time of the first pixel may be shorter than the exposure time of the second pixel.
This makes it possible to detect, as a global feature, the characteristics of a periodically blinking body, for example.
The pixel array unit in the image sensor described above may be able to output a third signal read with a third light receiving sensitivity different from both the first and the second light receiving sensitivity, and the global feature detection unit may detect a global feature of a pixel region containing a plurality of pixels from which the third signal is output, based on the third signal and on either the first signal or the second signal.
As a result, the blend rate determination unit determines a blend rate for blending the three pixel signals.
 本技術に係る撮像装置は、光電変換を行うイメージセンサと、被写体からの反射光を前記イメージセンサに集光する光学部材と、を備え、前記イメージセンサは、第1の受光感度による読み出しが行われる第1画素と前記第1の受光感度よりも低い感度である第2の受光感度による読み出しが行われる第2画素とが二次元に配列された画素アレイ部と、第1の露光時間とされた前記第1画素から出力される第1信号と前記第1の露光時間とは異なる第2の露光時間とされた前記第2画素から出力される第2信号とに基づいて、前記第1画素と前記第2画素がそれぞれ複数含まれた画素領域についての特徴を大局的特徴として検出する大局的特徴検出部と、前記大局的特徴検出部の検出結果に基づいて前記第1信号と前記第2信号のブレンド率を決定するブレンド率決定部と、を備えたものである。 The image pickup apparatus according to the present technology includes an image sensor that performs photoelectric conversion and an optical member that collects the reflected light from the subject on the image sensor, and the image sensor reads out by the first light receiving sensitivity. A pixel array unit in which the first pixel and the second pixel to be read by the second light receiving sensitivity, which is lower than the first light receiving sensitivity, are arranged in two dimensions, and the first exposure time. The first pixel is based on the first signal output from the first pixel and the second signal output from the second pixel having a second exposure time different from that of the first exposure time. The first signal and the second signal are based on the detection results of the global feature detection unit that detects features of a pixel region containing a plurality of the second pixels as global features, and the global feature detection unit. It is provided with a blend rate determination unit for determining the blend rate of the signal.
The signal processing method according to the present technology includes: a process of detecting, as a global feature, a feature of a pixel region containing a plurality of first pixels and a plurality of second pixels, based on a first signal output from the first pixels, which are read with a first light receiving sensitivity over a first exposure time, and a second signal output from the second pixels, which are read with a second light receiving sensitivity lower than the first light receiving sensitivity over a second exposure time different from the first exposure time; and a process of determining a blend rate of the first signal and the second signal based on the detection result of the global feature.
Such an imaging device and signal processing method can also provide the various effects described above.
A block diagram showing a configuration example of an image sensor according to the present technology.
A schematic diagram showing a configuration example of a pixel unit.
A schematic diagram showing a configuration example of a pixel array unit.
A schematic diagram showing an example in which a plurality of pixel regions are set in the pixel array unit.
A schematic diagram showing an example in which nine pixel regions are set in the pixel array unit.
A block diagram showing a configuration example of a signal processing unit.
A block diagram showing a configuration example of a global feature detection unit.
A schematic diagram showing a configuration example of a pixel array unit having Bayer-array color filters.
A diagram for explaining an example of detecting a global feature from the change amount difference value and the second pixel change amount.
A graph for explaining the local blinker likelihood calculated based on the global blinker likelihood and the inter-pixel difference value.
A graph for explaining the local moving subject likelihood calculated based on the global moving subject likelihood and the inter-pixel difference value.
A graph showing the relationship between the amount of light incident on a pixel and the provisional blend rate.
A block diagram showing a configuration example of an imaging device in a first application example.
A block diagram showing a configuration example of an in-vehicle system in a second application example.
A block diagram showing a configuration example of a signal processing unit in an example using three or more signals.
Hereinafter, embodiments according to the present technology will be described in the following order with reference to the accompanying drawings.
<1. Image sensor configuration>
<2. Signal processing unit configuration and processing content>
<3. First application example>
<4. Second application example>
<5. Example using three or more signals>
<6. Modification examples>
<7. Summary>
<8. Present technology>
<1. Image sensor configuration>
FIG. 1 shows the configuration of an image sensor 1 according to an embodiment of the present technology.
The image sensor 1 includes a pixel array unit 2, a first readout circuit 3, a second readout circuit 4, and a signal processing unit 5.
The pixel array unit 2 consists of a plurality of pixel units 6 arranged in the row and column directions. Each pixel unit 6 includes a plurality of pixels G. A pixel G has a photoelectric conversion element and produces an electric signal corresponding to the amount of received light. In the present embodiment, an example in which one pixel unit 6 contains two pixels G is described, but the pixel unit 6 may contain three or more pixels G.
FIG. 2 shows a configuration example of the pixel unit 6.
The pixel unit 6 includes a first pixel G1 and a second pixel G2 having different light receiving sensitivities.
The first pixel G1 has a larger light receiving area than the second pixel G2, and the exposure time of the first pixel G1 is shorter than that of the second pixel G2. As an example, the exposure time of the first pixel G1 is 8 msec, and that of the second pixel G2 is 11 msec.
The light receiving sensitivity of the first pixel G1 is higher than that of the second pixel G2. Accordingly, the first pixel G1 saturates more easily than the second pixel G2.
The first pixel G1 and the second pixel G2 receive light reflected by a subject (imaged subject), perform photoelectric conversion, and output the result to the subsequent stage as pixel signals. The pixel signal output from the first pixel G1 is referred to as a first signal S1, and the pixel signal output from the second pixel G2 as a second signal S2.
In the following description, the difference in light receiving sensitivity between the first pixel G1 and the second pixel G2 is based on both the light receiving area and the exposure time, but other cases are conceivable. For example, the difference in light receiving sensitivity may be based only on the light receiving area, only on the length of the exposure time, or only on a difference in the characteristics of the light receiving element (PD: photodiode). It may, of course, be based on a combination of these factors.
Returning to the description of FIG. 1.
The first readout circuit 3 reads the first signal S1 from the first pixel G1 provided in each pixel unit 6 of the pixel array unit 2, and also includes signal processing circuitry for AD (Analog to Digital) conversion, noise reduction, and the like. The first signal S1 is supplied to the signal processing unit 5 via the first readout circuit 3.
The second readout circuit 4 reads the second signal S2 from the second pixel G2 provided in each pixel unit 6, and likewise includes signal processing circuitry for AD conversion and noise reduction. The second signal S2 is supplied to the signal processing unit 5 via the second readout circuit 4.
The signal processing unit 5 uses the input first signal S1 and second signal S2 to perform a process of detecting global features and a process of detecting local features.
The signal processing unit 5 also uses the detected global and local features to determine a blend rate of the first signal S1 and the second signal S2, and performs HDR (High Dynamic Range) synthesis by blending the first signal S1 and the second signal S2 according to that blend rate. As a result, an HDR image signal is output from the signal processing unit 5.
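The composition step can be pictured as a per-pixel weighted sum (a sketch under the assumption that S1 and S2 have already been normalized to a common exposure scale, which the text does not detail; the function name is hypothetical):

```python
def hdr_blend(s1, s2, rate_s2):
    """HDR composite: per-pixel weighted sum of the two signals.

    s1, s2: sequences of luminance values from the first/second pixels
    (assumed already exposure-normalized). rate_s2: per-pixel blend
    rate of the second signal, clipped here to the range 0.0-1.0.
    """
    out = []
    for a, b, r in zip(s1, s2, rate_s2):
        r = min(max(r, 0.0), 1.0)
        out.append((1.0 - r) * a + r * b)
    return out
```

A rate of 0.0 keeps only the short-exposure first signal, and 1.0 keeps only the long-exposure second signal; intermediate values mix the two.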
A global feature is a feature of the subject detected for each pixel region GA, which is set, for example, as a rectangular region containing a plurality of pixel units 6. Specifically, when the subject imaged in a certain pixel region GA is a light emitter that blinks periodically, its character as a periodically blinking subject is detected. As the global feature, the signal processing unit 5 calculates, as a likelihood, the degree (probability) to which the subject is a periodically blinking light emitter.
In the following examples, a subject that blinks periodically is referred to as a "periodic blinker". Periodic blinkers include not only self-luminous subjects such as LEDs (Light Emitting Diodes), but also, for example, subjects that reflect the light of an LED that turns on and off every few msec.
The signal processing unit 5 calculates, as a "global blinker likelihood LFL1", how strongly the subject in a pixel region GA resembles a periodic blinker. That is, the global blinker likelihood LFL1 is an index value indicating the degree to which the subject in the pixel region GA is a periodic blinker. If the subject imaged in the pixel region GA is an LED, the global blinker likelihood LFL1 becomes high. The global blinker likelihood LFL1 also becomes higher the more pixel units 6 in the pixel region GA capture the LED.
The signal processing unit 5 also calculates, as a "global moving subject likelihood LFM1", the degree to which the subject in each pixel region GA has the characteristics of a moving subject. That is, the global moving subject likelihood LFM1 is an index value indicating the degree to which the subject is a moving subject. If the subject imaged in the pixel region GA is a moving subject, the global moving subject likelihood LFM1 becomes high. The global moving subject likelihood LFM1 also becomes higher the more pixel units 6 in the pixel region GA capture the moving subject.
Note that detecting a periodic blinker or a moving subject amounts to detecting a subject whose luminance value in the pixel region GA changes over time. In other words, these global features can be restated as features of luminance change over time.
The signal processing unit 5 treats a plurality of pixel units 6 as a pixel region GA in order to detect global features. For example, the two-dimensionally arranged pixel units 6 shown in FIG. 3 are divided into several rectangular regions as shown in FIG. 4, and each rectangular region is treated as a pixel region GA.
For example, as shown in FIG. 5, when the array is divided into three pixel regions GA in each of the vertical and horizontal directions, nine pixel regions GA are set. The signal processing unit 5 detects global features for each of the nine pixel regions GA.
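The division into pixel regions can be sketched as an even grid split over the pixel-unit array (illustrative; the function name and the even-split rule are assumptions, and the region count is configurable):

```python
def split_into_regions(height, width, rows=3, cols=3):
    """Return (row_slice, col_slice) pairs for a rows x cols grid of
    pixel regions GA covering a height x width pixel-unit array."""
    regions = []
    for r in range(rows):
        for c in range(cols):
            regions.append((slice(r * height // rows, (r + 1) * height // rows),
                            slice(c * width // cols, (c + 1) * width // cols)))
    return regions
```

With the default 3 x 3 grid this yields the nine pixel regions GA of FIG. 5; each slice pair can then index the array of per-unit luminance values for region-wise averaging.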
A local feature detected by the signal processing unit 5 is a feature of the subject detected for each pixel unit 6. That is, a subject feature detected for the first pixel G1 of a pixel unit 6 is treated as a feature of the subject in the pixel unit 6 to which that first pixel G1 belongs. The same applies to the second pixel G2.
The signal processing unit 5 treats the periodic blinker characteristics and moving subject characteristics detected for a pixel unit 6 as local features. It also calculates a local blinker likelihood LFL2 and a local moving subject likelihood LFM2 as index values indicating the degree of the local features. Local features are detected using the global features and other information.
<2. Signal processing unit configuration and processing content>
FIG. 6 shows a configuration example of the signal processing unit 5. The signal processing unit 5 includes an inter-pixel difference detection unit 7, a global feature detection unit 8, a local feature detection unit 9, a blend rate determination unit 10, and a blend unit 11.
The inter-pixel difference detection unit 7 detects the difference in luminance between the first pixel G1 and the second pixel G2 for each pixel unit 6, and calculates the amount of that difference as an "inter-pixel difference value GD". Because the exposure times of the first pixel G1 and the second pixel G2 differ, the inter-pixel difference value GD can be used effectively in the subsequent processing.
The inter-pixel difference value GD is used by the local feature detection unit 9 to discriminate whether the subject has the characteristics of a periodic blinker or of a moving subject. Details are described later.
Note that the data amount of the inter-pixel difference value GD can be compressed by expressing it with a small number of bits, for example in 256 levels from 0 to 255.
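One way to picture this compression to 256 levels (a sketch; the bit depth of the raw luminance values and the exact mapping are not specified in the text, so the 10-bit assumption below is illustrative):

```python
def quantize_inter_pixel_diff(lum1, lum2, raw_max=1023):
    """Inter-pixel difference value GD compressed to 8 bits (0-255).

    lum1, lum2: luminance values of the first and second pixel of one
    pixel unit, assumed here to be 10-bit (0-1023) raw values.
    """
    diff = abs(lum1 - lum2)
    # Scale the raw difference range down to 256 levels.
    return min(255, diff * 255 // raw_max)
```

Storing GD at 8 bits per pixel unit reduces the buffering cost of the difference map relative to keeping full-precision differences.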
As described above, the global feature detection unit 8 detects the features of the subject imaged in each pixel region GA as global features. Specifically, the global feature detection unit 8 calculates the global blinker likelihood LFL1 and the global moving subject likelihood LFM1 for each pixel region GA as global features. Depending on the pixel region GA, only the global blinker likelihood LFL1 or only the global moving subject likelihood LFM1 may be calculated.
In the present embodiment, the global blinker likelihood LFL1 and the global moving subject likelihood LFM1 are calculated as values in the range 0.0 to 1.0. Higher values indicate that the subject imaged in the target pixel region GA exhibits the corresponding characteristic more strongly.
FIG. 7 shows a more specific configuration example of the global feature detection unit 8.
For processing related to the first signal S1, the global feature detection unit 8 includes a first luminance calculation unit 21A, a per-region luminance average first calculation unit 22A, a first memory 23A, and a per-region luminance change first calculation unit 24A.
For processing related to the second signal S2, the global feature detection unit 8 includes a second luminance calculation unit 21B, a per-region luminance average second calculation unit 22B, a second memory 23B, and a per-region luminance change second calculation unit 24B.
The global feature detection unit 8 further includes a per-region feature amount calculation unit 25 that calculates a feature amount for each pixel region GA using the signals output from the per-region luminance change first calculation unit 24A and the per-region luminance change second calculation unit 24B.
The first luminance calculation unit 21A calculates a luminance value for each first pixel G1 based on the first signal S1. As an example, each pixel unit 6 in the pixel array unit 2 has a Bayer-array color filter. Specifically, as illustrated in FIG. 8, the pixel array unit 2 has pixel units 6R as R pixels having an R (red) color filter, pixel units 6G as G pixels having a G (green) color filter, and pixel units 6B as B pixels having a B (blue) color filter.
The pixel unit 6R has a first pixel G1R and a second pixel G2R. Similarly, the pixel unit 6G has a first pixel G1G and a second pixel G2G, and the pixel unit 6B has a first pixel G1B and a second pixel G2B.
For a pixel array unit 2 arranged as shown in FIG. 8, the first luminance calculation unit 21A treats a group of four pixel units 6, two vertically by two horizontally, as one pixel set GS, and calculates a luminance value L1S for the first pixels G1 of each pixel set GS. For example, the luminance value L1S for a pixel set GS is calculated using the following equation (1).
Luminance value of pixel set GS: L1S = α × L1R + β × L1G + γ × L1B ... Equation (1)
where
luminance value of first pixel G1R = L1R,
luminance value of first pixel G1G = L1G,
luminance value of first pixel G1B = L1B.
Various values are conceivable for the coefficients α, β, and γ. For example, α = 2, β = 1, and γ = 2, or alternatively α = 0.4, β = 0.2, and γ = 0.4.
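Equation (1) is a straightforward weighted sum; as a sketch using the second coefficient set quoted above as defaults (the function name is hypothetical):

```python
def pixel_set_luminance(l1r, l1g, l1b, alpha=0.4, beta=0.2, gamma=0.4):
    """Equation (1): L1S = alpha * L1R + beta * L1G + gamma * L1B.

    l1r, l1g, l1b: luminance values of the first pixels G1R, G1G, G1B
    in one pixel set GS.
    """
    return alpha * l1r + beta * l1g + gamma * l1b
```

With coefficients summing to 1.0 (0.4 + 0.2 + 0.4), equal channel inputs map to the same output level; the 2/1/2 set instead scales the result by a factor of 5.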
The calculated luminance value L1S is the luminance value for the first pixels G1 in one pixel set GS, but it may also be regarded as the luminance value L1R of the first pixel G1R, the luminance value L1G of the first pixel G1G, and the luminance value L1B of the first pixel G1B.
In summary, the first luminance calculation unit 21A calculates the luminance values L1R, L1G, and L1B for each pixel unit 6, or the luminance value L1S for each pixel set GS.
The per-region luminance average first calculation unit 22A calculates a luminance average value L1A of the first pixels G1 for each pixel region GA. For example, when a pixel region GA contains n pixel sets GS, the luminance average value L1A is calculated by summing the luminance values L1S calculated for the first pixels G1 of each pixel set GS and dividing by n.
The calculated luminance average value L1A is output to the per-region luminance change first calculation unit 24A and also stored in the first memory 23A.
The first memory 23A is provided to store the average luminance value L1A of the previous frame.
The first region luminance-change calculation unit 24A acquires the average luminance value L1A of the previous frame from the first memory 23A and the average luminance value L1A of the current frame from the first region-average luminance calculation unit 22A, and calculates the difference between the two. In this way, the luminance change of the first pixels G1 is calculated for each pixel region GA as a first pixel change amount G1V. The calculated first pixel change amount G1V is output to the region feature calculation unit 25.
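The region-average and frame-difference steps above can be sketched as follows; whether the difference is signed or absolute is not specified in the text, so the absolute value is assumed here:

```python
def region_average(l_s_values):
    """Average luminance (e.g., L1A) over the n pixel sets GS in one pixel region GA."""
    return sum(l_s_values) / len(l_s_values)

def pixel_change_amount(prev_avg, curr_avg):
    """Pixel change amount (e.g., G1V): difference between the previous-frame
    and current-frame region averages; absolute value assumed."""
    return abs(curr_avg - prev_avg)
```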
The second luminance calculation unit 21B, the second region-average luminance calculation unit 22B, the second memory 23B, and the second region luminance-change calculation unit 24B are processing units that process the second signal S2, and perform the same processing as the first luminance calculation unit 21A, the first region-average luminance calculation unit 22A, the first memory 23A, and the first region luminance-change calculation unit 24A, respectively.
Specifically, the second luminance calculation unit 21B calculates the luminance value L2S of the second pixels G2 for each pixel set GS, for example using the following equation (2).
Luminance value of pixel set GS: L2S = α' × L2R + β' × L2G + γ' × L2B ... Equation (2)
where:
Luminance value of the second pixel G2R = L2R
Luminance value of the second pixel G2G = L2G
Luminance value of the second pixel G2B = L2B.
The coefficients α', β', and γ' may be set, for example, to α' = 2, β' = 1, and γ' = 2, or to α' = 0.4, β' = 0.2, and γ' = 0.4.
The second region-average luminance calculation unit 22B calculates the average luminance value L2A of the second pixels G2 for each pixel region GA.
The calculated average luminance value L2A is output to the second region luminance-change calculation unit 24B and is also stored in the second memory 23B.
The second region luminance-change calculation unit 24B acquires the average luminance value L2A of the previous frame from the second memory 23B and the average luminance value L2A of the current frame from the second region-average luminance calculation unit 22B, and calculates the difference between the two. In this way, a second pixel change amount G2V of the second pixels G2 is calculated for each pixel region GA. The calculated second pixel change amount G2V is output to the region feature calculation unit 25.
The region feature calculation unit 25 calculates a feature amount for each pixel region GA using the first pixel change amount G1V of the first pixels G1 and the second pixel change amount G2V of the second pixels G2 for that pixel region GA.
Specifically, the region feature calculation unit 25 calculates the absolute value of the difference between the first pixel change amount G1V and the second pixel change amount G2V of a pixel region GA as a change-amount difference value VD (≥ 0).
The region feature calculation unit 25 then calculates the feature amount of each pixel region GA based on the change-amount difference value VD and the second pixel change amount G2V.
For example, the feature amount is calculated such that the larger the change-amount difference value VD and the smaller the second pixel change amount G2V, the higher a global blinker likelihood LFL1 becomes.
A pixel region GA with a large change-amount difference value VD indicates that the change in the luminance of the subject was captured by only one of the first pixels G1 and the second pixels G2. Specifically, the first pixel change amount G1V can be restated as the change in the first signal S1 output from the first pixels G1, which have a short exposure time. Because the exposure time of the first pixels G1 is short, when the subject is a periodic blinker, an LED in the non-emitting state may be captured at the timing of the previous frame while the LED in the emitting state is captured at the timing of the current frame. In that case, the first pixel change amount G1V of the first pixels G1 becomes large.
On the other hand, the second pixel change amount G2V can be restated as the change in the second signal S2 output from the second pixels G2, which have a long exposure time. Because the exposure time of the second pixels G2 is long, when the subject is a periodic blinker, the LED in the emitting state is highly likely to be captured at both the timing of the previous frame and the timing of the current frame, so the second pixel change amount G2V becomes small. In particular, when the exposure time of the second pixels G2 is longer than the non-emission period of the periodic blinker, no frame captures only the non-emission period, and the second pixel change amount G2V remains small.
Accordingly, when the subject is a periodic blinker, the change-amount difference value VD becomes large. It can therefore be estimated that a pixel region GA having a large change-amount difference value VD and a small second pixel change amount G2V is likely to contain an image of a periodic blinker.
Note that the global blinker likelihood LFL1 calculated by the region feature calculation unit 25 expresses a degree of certainty.
For example, the region feature calculation unit 25 calculates the global blinker likelihood LFL1 such that the larger the change-amount difference value VD and the smaller the second pixel change amount G2V, the higher LFL1 becomes.
The region feature calculation unit 25 also calculates a global moving-subject likelihood LFM1 as a feature amount for each pixel region GA.
For example, the feature amount is calculated such that the smaller the change-amount difference value VD and the larger the second pixel change amount G2V, the higher the global moving-subject likelihood LFM1 becomes.
When the subject is a moving subject, the change in luminance value is large regardless of the difference in exposure time. That is, both the first pixel change amount G1V and the second pixel change amount G2V become large, so the change-amount difference value VD becomes small.
It can therefore be estimated that a pixel region GA having a small change-amount difference value VD and a large second pixel change amount G2V is likely to contain an image of a moving subject.
Note that the global moving-subject likelihood LFM1 calculated by the region feature calculation unit 25 likewise expresses a degree of certainty.
For example, the region feature calculation unit 25 calculates the global moving-subject likelihood LFM1 such that the smaller the change-amount difference value VD and the larger the second pixel change amount G2V, the higher LFM1 becomes.
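The text specifies only the monotonic behavior of the two global likelihoods (LFL1 grows with VD and shrinks with G2V; LFM1 the reverse) and their 0.0 to 1.0 range. One minimal form satisfying these constraints, assuming VD and G2V are pre-normalized to [0, 1], is:

```python
def global_blinker_likelihood(vd, g2v):
    """LFL1: grows with VD, shrinks with G2V (both assumed normalized to [0, 1])."""
    return vd * (1.0 - g2v)

def global_moving_subject_likelihood(vd, g2v):
    """LFM1: shrinks with VD, grows with G2V (both assumed normalized to [0, 1])."""
    return (1.0 - vd) * g2v
```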
The global blinker likelihood LFL1 and the global moving-subject likelihood LFM1 calculated by the region feature calculation unit 25 each take a value in the range of 0.0 to 1.0. This will be described specifically with reference to FIG. 9.
The graph shown in FIG. 9 illustrates the global features of each pixel region GA, with the change-amount difference value VD on the horizontal axis and the second pixel change amount G2V on the vertical axis.
Pixel regions GA in which a periodic blinker is captured are distributed in a region AR1 near the horizontal axis of the graph, while pixel regions GA in which a moving subject is captured are distributed in a region AR2 near the vertical axis.
The global blinker likelihood LFL1 is set higher for a pixel region GA with a larger change-amount difference value VD and a smaller second pixel change amount G2V. That is, as shown in FIG. 9, the global blinker likelihood LFL1 of a point P1 located in the region AR1 is set to 1.0, and the global blinker likelihood LFL1 of a point P2 located in the region AR1 is set to 0.1.
Similarly, the global moving-subject likelihood LFM1 is set higher for a pixel region GA with a smaller change-amount difference value VD and a larger second pixel change amount G2V. That is, the global moving-subject likelihood LFM1 of a point P3 located in the region AR2 is set to 1.0, and the global moving-subject likelihood LFM1 of a point P4 located in the region AR2 is set to 0.1.
Instead of making the global blinker likelihood LFL1 higher for pixel regions GA with larger change-amount difference values VD and smaller second pixel change amounts G2V, the global blinker likelihood LFL1 may be set to 1.0 for pixel regions GA located near the center of the region AR1.
Similarly, instead of making the global moving-subject likelihood LFM1 higher for pixel regions GA with smaller change-amount difference values VD and larger second pixel change amounts G2V, the global moving-subject likelihood LFM1 may be set to 1.0 for pixel regions GA located near the center of the region AR2.
The region feature calculation unit 25 outputs the global blinker likelihood LFL1 and the global moving-subject likelihood LFM1 for each pixel region GA.
Returning to the description of FIG. 6.
The local feature detection unit 9 performs processing for detecting a feature of the subject for each pixel unit 6 as a local feature. The global feature detected by the global feature detection unit 8 is a feature of the subject detected for a pixel region GA as a whole, and a specific pixel unit 6 within that pixel region GA may not match the detected global feature. In such a case, if the blend rate were determined based only on the global feature, inappropriate blending might be performed for that pixel unit 6. In view of this, the local feature detection unit 9 detects a local feature for each pixel G.
Specifically, the local feature detection unit 9 calculates a local blinker likelihood LFL2 using the global blinker likelihood LFL1 output from the region feature calculation unit 25 and the inter-pixel difference value GD output from the inter-pixel difference detection unit 7. It also calculates a local moving-subject likelihood LFM2 using the global moving-subject likelihood LFM1 output from the region feature calculation unit 25 and the inter-pixel difference value GD output from the inter-pixel difference detection unit 7.
First, the calculation of the local blinker likelihood LFL2 will be described with reference to FIG. 10.
The graph shown in FIG. 10 represents the data of each pixel unit 6, with the global blinker likelihood LFL1 on the horizontal axis and the inter-pixel difference value GD on the vertical axis. Here, the inter-pixel difference value GD on the vertical axis is normalized to take a value from 0.0 to 1.0.
To prevent erroneous determinations due to noise or the like, the local feature detection unit 9 determines that the subject has the characteristics of a periodic blinker, and calculates the local blinker likelihood LFL2, only for pixel units 6 whose data are plotted in a region Ar3 where the global blinker likelihood LFL1 is equal to or greater than a predetermined threshold Th1 and the inter-pixel difference value GD is equal to or greater than a predetermined threshold Th2.
The local blinker likelihood LFL2 takes a value from 0.0 to 1.0. For example, as shown in FIG. 10, for a pixel unit 6 plotted at a point P5 where the global blinker likelihood LFL1 equals the threshold Th1 and the inter-pixel difference value GD equals the threshold Th2, the local blinker likelihood LFL2 is calculated as 0.0. For a pixel unit 6 plotted at a point P6 where the global blinker likelihood LFL1 and the inter-pixel difference value GD are both 1.0, the captured subject is very likely a periodic blinker, so the local blinker likelihood LFL2 is calculated as 1.0.
In FIG. 10, the local blinker likelihood LFL2 approaches 1.0 as a plotted point moves from the point P5 toward the point P6.
For pixel units 6 whose data are plotted outside the region Ar3, the local blinker likelihood LFL2 may be left without a value, set to a negative value, or set to 0.0.
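A sketch of the threshold-and-ramp behavior described for FIG. 10 (and, analogously, for FIG. 11): 0.0 at the threshold corner, 1.0 at (1.0, 1.0). The bilinear ramp in between is one assumption consistent with the described behavior, and returning 0.0 outside the threshold region is one of the options the text allows:

```python
def local_likelihood(global_lf, gd, th_a, th_b):
    """Local likelihood from a global likelihood and the normalized inter-pixel
    difference GD. Returns 0.0 at the threshold corner (P5/P7) and 1.0 at
    (1.0, 1.0) (P6/P8); outside the threshold region, 0.0 is returned here.
    Assumes th_a, th_b < 1.0."""
    if global_lf < th_a or gd < th_b:
        return 0.0
    u = (global_lf - th_a) / (1.0 - th_a)
    v = (gd - th_b) / (1.0 - th_b)
    return u * v

# LFL2 uses (LFL1, GD) with thresholds Th1, Th2; LFM2 uses (LFM1, GD) with Th3, Th4.
lfl2 = local_likelihood(0.9, 0.8, 0.5, 0.5)
```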
Next, the calculation of the local moving-subject likelihood LFM2 will be described with reference to FIG. 11.
The graph shown in FIG. 11 represents per-pixel data, with the global moving-subject likelihood LFM1 on the horizontal axis and the inter-pixel difference value GD on the vertical axis. Here, the inter-pixel difference value GD on the vertical axis is normalized to take a value from 0.0 to 1.0.
To prevent erroneous determinations due to noise or the like, the local feature detection unit 9 determines that the subject has the characteristics of a moving subject, and calculates the local moving-subject likelihood LFM2, only for pixel units 6 whose data are plotted in a region Ar4 where the global moving-subject likelihood LFM1 is equal to or greater than a predetermined threshold Th3 and the inter-pixel difference value GD is equal to or greater than a predetermined threshold Th4.
The local moving-subject likelihood LFM2 takes a value from 0.0 to 1.0. For example, as shown in FIG. 11, for a pixel unit 6 plotted at a point P7 where the global moving-subject likelihood LFM1 equals the threshold Th3 and the inter-pixel difference value GD equals the threshold Th4, the local moving-subject likelihood LFM2 is calculated as 0.0. For a pixel unit 6 plotted at a point P8 where the global moving-subject likelihood LFM1 and the inter-pixel difference value GD are both 1.0, the captured subject is very likely a moving subject, so the local moving-subject likelihood LFM2 is calculated as 1.0.
In FIG. 11, the local moving-subject likelihood LFM2 approaches 1.0 as a plotted point moves from the point P7 toward the point P8.
For pixel units 6 whose data are plotted outside the region Ar4, the local moving-subject likelihood LFM2 may be left without a value, set to a negative value, or set to 0.0.
In this way, the local feature detection unit 9 performs processing that maps the global feature of the subject detected for each pixel region GA down to a local feature of each pixel unit 6 (or each pixel G).
Note that the local moving-subject likelihood LFM2 may be calculated using the global blinker likelihood LFL1 instead of the global moving-subject likelihood LFM1.
For example, subjects producing a large inter-pixel difference value GD include periodic blinkers and moving subjects. In a pixel region GA with a large global blinker likelihood LFL1, the subject captured in that region is likely a periodic blinker. Conversely, in a pixel region GA with a large inter-pixel difference value GD but a low global blinker likelihood LFL1, the captured subject is likely a moving subject. It is therefore possible to estimate that the subject captured by a pixel unit 6 is a moving subject without using the global moving-subject likelihood LFM1.
Returning to the description of FIG. 6.
The blend rate determination unit 10 determines the blend rate of the first signal S1 and the second signal S2. In particular, in the present embodiment, the blend rate determination unit 10 determines the blend rate according to the feature of the subject for each pixel unit 6.
To this end, the blend rate determination unit 10 includes a temporary blend rate calculation unit 12 that calculates a temporary blend rate ABF, and a blend rate correction unit 13 that corrects the temporary blend rate ABF to calculate a blend rate BF.
The temporary blend rate calculation unit 12 calculates, as the temporary blend rate ABF, a provisional blend rate that does not take the feature of the subject into account. The temporary blend rate ABF and the blend rate BF each represent the blend proportion of the second signal S2 and take values from 0.0 to 1.0. That is, a value of 0.0 means that the second signal S2 is not blended at all, and a value of 1.0 means that the blend proportion of the second signal S2 is 100%.
The first signal S1 output from the first pixels G1, which have high light-receiving sensitivity, is input to the temporary blend rate calculation unit 12. The temporary blend rate calculation unit 12 determines the temporary blend rate ABF according to the degree of saturation of the first signal S1. For example, as shown in FIG. 12, the temporary blend rate ABF is set to 0.0 until the amount of light incident on the pixel G exceeds a predetermined threshold Th5. When the amount of incident light is equal to or greater than the threshold Th5 and less than a threshold Th6, the temporary blend rate ABF is set to a value of 0.0 or more and less than 1.0. When the amount of incident light exceeds the threshold Th6, the temporary blend rate ABF is fixed at 1.0; that is, the first pixel G1 is determined to be saturated and the blend proportion of the first signal S1 becomes 0%.
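The FIG. 12 characteristic can be sketched as a piecewise function of the incident light amount; the linear ramp between Th5 and Th6 is an assumption, since the text specifies only the endpoint behavior:

```python
def temporary_blend_rate(incident_light, th5, th6):
    """ABF as a function of the incident light amount on pixel G (cf. FIG. 12):
    0.0 below Th5, ramping up between Th5 and Th6 (linear ramp assumed),
    fixed at 1.0 at Th6 and above (first pixel G1 considered saturated)."""
    if incident_light < th5:
        return 0.0
    if incident_light >= th6:
        return 1.0
    return (incident_light - th5) / (th6 - th5)
```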
The blend rate correction unit 13 corrects the temporary blend rate ABF calculated by the temporary blend rate calculation unit 12, using the local blinker likelihood LFL2 and the local moving-subject likelihood LFM2 calculated by the local feature detection unit 9.
Specifically, when the local blinker likelihood LFL2 is large, the temporary blend rate ABF is corrected so that the blend rate BF becomes high, i.e., so that the blend proportion of the second signal S2 becomes high. This is because the second signal S2 is obtained with a relatively long exposure time and is therefore likely to have captured the light-emitting state of a periodic blinker such as an LED.
When the local blinker likelihood LFL2 is small and the local moving-subject likelihood LFM2 is large, the temporary blend rate ABF is corrected so that the blend rate BF becomes low, i.e., so that the proportion of the second signal S2 is reduced. This is because the first signal S1 is obtained with a relatively short exposure time and thus suppresses blur of a moving subject.
As an example, for a pixel G whose local blinker likelihood LFL2 is larger than a predetermined value, the blend rate BF may be set to 1.0 so that the first signal S1 is not included. Conversely, when the local blinker likelihood LFL2 is smaller than a predetermined value and the local moving-subject likelihood LFM2 is larger than a predetermined value, the blend rate BF may be set to 0.0 so that the second signal S2 is not included.
The blend unit 11 performs blend processing based on the blend rate BF determined by the blend rate correction unit 13. For example, the blend unit 11 generates an HDR image signal by performing alpha blending based on the blend rate BF. The alpha blend can be expressed by the following equation (3).
HDR image signal = (1 - α) × S1 + α × S2 × Gain ... Equation (3), where α corresponds to the blend rate BF.
Note that when the local blinker likelihood LFL2 is smaller than a predetermined value, a correction may be performed that sets the blend rate BF to 0.0 regardless of the value of the local moving-subject likelihood LFM2, i.e., a correction that lowers the blend rate BF.
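Putting the correction and equation (3) together, a minimal sketch (the hard 0.0/1.0 overrides follow the example given in the text; the threshold values and the per-pixel scalar inputs are assumptions for illustration):

```python
def blend_hdr(s1, s2, lfl2, lfm2, abf, gain, th_blink=0.5, th_move=0.5):
    """Correct the temporary blend rate ABF using the local likelihoods, then
    alpha-blend per equation (3): HDR = (1 - alpha) * S1 + alpha * S2 * Gain,
    with alpha = BF."""
    bf = abf
    if lfl2 > th_blink:
        bf = 1.0   # likely periodic blinker: use the long-exposure signal only
    elif lfm2 > th_move:
        bf = 0.0   # likely moving subject: use the short-exposure signal only
    return (1.0 - bf) * s1 + bf * s2 * gain
```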
<3. First application example>
In the first application example, a case where the above-described image sensor 1 is applied to an imaging apparatus 100 will be described.
FIG. 13 shows a configuration example of the imaging apparatus 100. Note that FIG. 13 shows only the main components of the imaging apparatus 100.
The imaging apparatus 100 includes an optical lens system 101, the image sensor 1 including the above-described signal processing unit 5, an imaging signal processing unit 102, a control unit 103, a driver unit 104, a storage unit 105, a communication unit 106, and a display unit 107.
The optical lens system 101 includes various lenses such as a zoom lens and a focus lens, an aperture mechanism, and the like. Light that has passed through the optical lens system 101 is incident on the pixel array unit 2 of the image sensor 1.
Note that part of the optical lens system 101, such as a microlens provided for each pixel G, may be provided in the image sensor 1.
The imaging signal processing unit 102 receives the HDR-combined HDR image signal from the image sensor 1 and performs signal processing necessary to generate image data for recording, signal processing necessary to display images on the display unit 107, and the like.
The control unit 103 controls the image sensor 1 and the imaging signal processing unit 102. It also controls optical members such as the various lenses and the aperture mechanism of the optical lens system 101 by sending control signals to the driver unit 104.
The driver unit 104 supplies drive signals to the optical members of the optical lens system 101 based on the control signals input from the control unit 103, thereby driving the optical members.
The storage unit 105 is composed of, for example, a non-volatile memory or the like. The storage unit 105 stores programs and the like necessary for each process executed by the control unit 103, and also stores image data such as still image data and moving image data generated by the imaging signal processing unit 102.
The storage unit 105 may be configured as a flash memory built into the imaging apparatus 100, or may be composed of a memory card (e.g., a portable flash memory) attachable to and detachable from the imaging apparatus 100 together with an access unit that accesses the memory card for storage and reading. It may also be realized as an HDD (Hard Disk Drive) or the like built into the imaging apparatus 100.
The communication unit 106 performs data communication and network communication with external devices in a wired or wireless manner. For example, it transmits image data (still image data or moving image data) to an external display device, recording device, playback device, or the like.
The communication unit 106 may also communicate via various networks such as the Internet, a home network, or a LAN (Local Area Network), and may transmit and receive various data to and from servers, terminals, and the like on the network.
The communication unit 106 may further function as an output unit that outputs image data to an external device.
The display unit 107 can display a reproduced image of image data read from the recording medium serving as the storage unit 105. The display unit 107 may be provided on the imaging device 100 as a rear monitor or a finder monitor on the back of the imaging device 100, and may be capable of displaying a so-called through image.
The display unit 107 may also be detachable from the imaging device 100.
The imaging signal processing unit 102 in the present embodiment can receive from the image sensor 1 an HDR image signal in which the first signal S1 output from the first pixel G1 and the second signal S2 output from the second pixel G2 are appropriately blended according to the characteristics of the subject. Accordingly, by performing various display processing, editing processing, and the like based on the HDR image signal, the imaging signal processing unit 102 can display HDR images and generate image files in which periodic blinkers such as LEDs are rendered appropriately and blur of moving subjects is suppressed.
<4. Second application example>
In the second application example, an HDR image signal is provided from the above-described image sensor 1 to a display device and to a recognition device that performs image recognition processing.
As an example of such an application, an in-vehicle system 200 is described here.
The in-vehicle system 200 includes an optical lens system 201, a driver unit 202, the image sensor 1 including the above-described signal processing unit 5, a display device 300, and a recognition device 400.
The optical lens system 201 has the same configuration as the optical lens system 101 described in the first application example, and the driver unit 202 has the same configuration as the driver unit 104 described in the first application example, so their descriptions are omitted.
The display device 300 includes monitors and instruments provided at predetermined positions in the vehicle interior, such as the instrument panel, and circuits that control their display.
The display device 300 includes a display signal processing unit 301, a display control unit 302, a storage unit 303, a communication unit 304, a display unit 305, and the like.
The display signal processing unit 301 performs signal processing for displaying an image on the display unit 305, and has the same configuration as the imaging signal processing unit 102 described in the first application example.
The display control unit 302 has the same configuration as the control unit 103 described in the first application example. The storage unit 303 and the communication unit 304 also have the same configurations as the storage unit 105 and the communication unit 106 described in the first application example. The communication unit 304 is capable of communication using a CAN (Controller Area Network).
The display unit 305 is, for example, a monitor device provided at a predetermined position in the vehicle interior, and can display image data captured by the image sensor 1. This enables the driver and passengers to grasp the situation outside the vehicle via the monitor device.
The recognition device 400 performs subject recognition processing by applying various kinds of processing to the image data of the outside of the vehicle captured by the image sensor 1. The subject recognition result from the recognition device 400 is provided to an advanced driver-assistance system (ADAS).
The ADAS realizes a collision damage mitigation braking function, an ACC (Adaptive Cruise Control) function, and the like based on the information acquired from the recognition device 400.
The recognition device 400 includes a recognition signal processing unit 401, a recognition control unit 402, a storage unit 403, and a communication unit 404.
The storage unit 403 and the communication unit 404 have the same configurations as the storage unit 105 and the communication unit 106 described in the first application example, so detailed descriptions are omitted.
Data on the captured image is input from the image sensor 1 to the recognition signal processing unit 401. For example, the image sensor 1 may input the HDR-composited HDR image signal to the recognition signal processing unit 401, or it may input the first signal S1 and the second signal S2 before HDR composition together with the per-pixel recognition results, namely the local blinker likelihood LFL2 and the local moving subject likelihood LFM2. Of course, all of this information may be input from the image sensor 1 to the recognition signal processing unit 401.
The recognition signal processing unit 401 performs image recognition processing for recognizing preceding vehicles, pedestrians, traffic lights, road signs, and the like based on the local blinker likelihood LFL2 and the local moving subject likelihood LFM2 output from the image sensor 1.
Specifically, based on the local blinker likelihood LFL2 output from the image sensor 1, it can recognize traffic lights, tail lamps, and the like as periodic blinkers, and can thereby recognize, for example, a preceding vehicle. Based on the local moving subject likelihood LFM2 output from the image sensor 1, it can recognize preceding vehicles, pedestrians, and the like as moving subjects.
These image recognition processes may also be realized by the recognition signal processing unit 401 and the recognition control unit 402 working in cooperation.
When the host vehicle is traveling, it may not be possible to determine that a subject is a moving subject based on the local moving subject likelihood LFM2 alone. This is because, while the host vehicle is traveling, features resembling those of a moving subject are detected even from stationary obstacles. The recognition signal processing unit 401 may therefore perform image recognition of moving subjects while taking the traveling speed of the host vehicle and the like into account.
This enables the recognition signal processing unit 401 to perform image recognition processing on subjects appropriately.
The recognition control unit 402 may recognize subjects such as preceding vehicles and pedestrians by executing image recognition processing using, for example, a DNN (Deep Neural Network).
The recognition results obtained in this way by the recognition signal processing unit 401 and the recognition control unit 402 are output to the ADAS via the communication unit 404.
By performing appropriate image recognition processing on subjects such as pedestrians and preceding vehicles in this way, the driving safety of the vehicle can be improved.
<5. Example of using three or more signals>
This section describes an example in which subject features are detected using three or more types of pixel signals obtained by three or more readout processes with different light receiving sensitivities.
The image sensor 1 may include three types of pixels with different areas, or, as shown in FIG. 2, it may be capable of outputting three types of pixel signals using two types of pixels with different areas.
For example, the first pixel G1 of the image sensor 1 outputs the first signal S1, which is the pixel signal with the highest light receiving sensitivity, and the second pixel G2 outputs the second signal S2, a pixel signal with a lower light receiving sensitivity than that of the first pixel G1. In addition, the first pixel G1 outputs the third signal S3, which has the lowest light receiving sensitivity, by making its exposure time shorter than when outputting the first signal S1 and the second signal S2.
As an example, the first signal S1 is output by setting the exposure time of the first pixel G1 to 8 msec, the second signal S2 is output by setting the exposure time of the second pixel G2 to 11 msec, and the third signal S3 is output by setting the exposure time of the first pixel G1 to 4 msec.
FIG. 15 shows a configuration example of a signal processing unit 5A that detects subject features using the three types of pixel signals output from the image sensor 1. Components that are the same as those in FIG. 6 are given the same reference numerals, and their descriptions are omitted as appropriate.
In order to perform global feature detection and local feature detection using the first signal S1 and the second signal S2, the signal processing unit 5A includes an inter-pixel difference detection unit 7, a global feature detection unit 8, a local feature detection unit 9, and a blend rate determination unit 10. It also includes a first blending unit 11A in place of the blending unit 11 shown in FIG. 6.
These units perform the same processing as the corresponding units shown in FIG. 6, so their descriptions are omitted.
The blend rate determination unit 10 includes a temporary blend rate calculation unit 12 and a blend rate correction unit 13. The blend rate determination unit 10 determines the ratio at which the second signal S2 is blended with the first signal S1 as the blend rate BF1.
The first blending unit 11A outputs an HDR image signal by performing alpha blending of the first signal S1 and the second signal S2 according to the blend rate BF1 determined by the blend rate determination unit 10.
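As a rough illustration (function and variable names are hypothetical, and the signals are assumed to be already gain-matched; the patent does not give concrete formulas), the per-pixel alpha blend performed by a blending unit can be sketched as a weighted sum:

```python
def alpha_blend(s1, s2, bf):
    """Blend two pixel signals with a per-pixel blend rate bf in [0, 1].

    bf is the ratio of the second signal S2 relative to the first
    signal S1, so bf = 0 keeps S1 only and bf = 1 keeps S2 only.
    """
    return [(1.0 - b) * a + b * c for a, b, c in zip(s1, bf, s2)]

# Example: three pixels, with the blend rate favouring S2 where the
# subject is more likely a periodic blinker.
s1 = [100.0, 200.0, 50.0]   # high-sensitivity signal
s2 = [90.0, 180.0, 60.0]    # low-sensitivity signal (gain-matched)
bf = [0.0, 0.5, 1.0]
print(alpha_blend(s1, s2, bf))  # [100.0, 190.0, 60.0]
```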
In order to perform global feature detection and local feature detection using the second signal S2 and the third signal S3, the signal processing unit 5A includes an inter-pixel difference detection unit 7A, a global feature detection unit 8A, a local feature detection unit 9A, and a blend rate determination unit 10A.
The blend rate determination unit 10A includes a temporary blend rate calculation unit 12A and a blend rate correction unit 13A.
These units perform the same processing as the corresponding units shown in FIG. 6, except that the second signal S2 is used in place of the first signal S1 and the third signal S3 is used in place of the second signal S2.
However, the blend rate determination unit 10A determines the ratio at which the third signal S3 is blended with the HDR image signal output from the first blending unit 11A as the blend rate BF2.
The second blending unit 11B performs alpha blending of the HDR image signal output from the first blending unit 11A and the third signal S3 according to the blend rate BF2 determined by the blend rate determination unit 10A.
By using three or more types of pixel signals in this way, the dynamic range in HDR composition can be widened.
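Under the same illustrative assumptions (hypothetical names, gain-matched signals), the two-stage blend of three signals described above can be sketched as a cascade of per-pixel alpha blends:

```python
def alpha_blend(a, b, ratio_b):
    """Per-pixel weighted sum; ratio_b is the weight of signal b."""
    return [(1 - r) * x + r * y for x, r, y in zip(a, ratio_b, b)]

# Stage 1: the first blending unit 11A combines S1 and S2 with BF1.
# Stage 2: the second blending unit 11B combines that result with S3
# using BF2, widening the dynamic range further.
s1 = [240.0, 250.0]
s2 = [200.0, 240.0]
s3 = [120.0, 180.0]
bf1 = [0.5, 0.5]
bf2 = [0.0, 1.0]

hdr1 = alpha_blend(s1, s2, bf1)    # stage-1 HDR image signal
hdr2 = alpha_blend(hdr1, s3, bf2)  # final HDR image signal
print(hdr1, hdr2)  # [220.0, 245.0] [220.0, 180.0]
```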
<6. Modification example>
Here, several modifications of the above-described embodiments are described.
For the global feature detection unit 8 shown in FIG. 7, an example was described in which a first memory 23A and a second memory 23B store the luminance average value L1A and the luminance average value L2A obtained for each pixel region. It was also described that, by storing the luminance average values L1A and L2A of the preceding frame in these memories, the subsequent per-region luminance change first calculation unit 24A and per-region luminance change second calculation unit 24B calculate the first pixel change amount G1V and the second pixel change amount G2V.
In this modification, the first memory 23A and the second memory 23B store the luminance average values L1A and L2A for two or more frames, for example the luminance average values L1A and L2A of the preceding frame and those of the frame before it.
The per-region luminance change first calculation unit 24A and the per-region luminance change second calculation unit 24B then perform processing for observing the progression of the luminance change over time. This enables them to estimate the blinking period of a periodic blinker.
If the blinking period of a periodic blinker can be estimated, an exposure time with which the light-emitting state of the periodic blinker is reliably captured can be set for the second pixel G2. This makes it possible to generate an HDR image signal in which LEDs and the like are reliably captured. It is particularly beneficial when processing for recognizing preceding vehicles and traffic lights is executed by a downstream processing unit, since it can improve the driving safety of the vehicle.
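A minimal sketch of this idea, assuming a stored per-region history of luminance averages sampled at the frame rate (function and variable names are hypothetical, and peak spacing is only one of several possible ways to estimate a period):

```python
def estimate_blink_period(lum_history, frame_interval_ms):
    """Estimate a blink period from a per-region luminance history.

    Finds local maxima in the stored per-frame luminance averages and
    returns the mean spacing between them in milliseconds, or None if
    fewer than two peaks are found.
    """
    peaks = [i for i in range(1, len(lum_history) - 1)
             if lum_history[i - 1] < lum_history[i] >= lum_history[i + 1]]
    if len(peaks) < 2:
        return None
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    return frame_interval_ms * sum(gaps) / len(gaps)

# A region whose average brightness peaks every 4 frames at a
# 2.5 msec frame interval corresponds to a 10 msec blink period.
history = [0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0]
print(estimate_blink_period(history, 2.5))  # 10.0
```

Once such a period estimate is available, the exposure time of the second pixel G2 could be set longer than the estimated non-emission interval so that an emission phase is always captured.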
As another modification, an example in which local features are not detected is described.
In the examples above, the signal processing unit 5 includes both the global feature detection unit 8 and the local feature detection unit 9, and the blend rate is determined using the local blinker likelihood LFL2 and the local moving subject likelihood LFM2 output from the local feature detection unit 9.
In this modification, the signal processing unit 5 includes only the global feature detection unit 8 and does not include the local feature detection unit 9.
The global feature detection unit 8 detects the global features of the subject for each pixel region GA virtually set in the pixel array unit 2. If each pixel region GA is set sufficiently small, a blend rate BF that yields the desired HDR image signal can in some cases be determined appropriately using the per-region subject features detected by the global feature detection unit 8.
In this case, the signal processing unit 5 may be configured without the inter-pixel difference detection unit 7 and the local feature detection unit 9 shown in FIG. 6. That is, the blend rate correction unit 13 calculates the blend rate BF for each pixel region GA using the global blinker likelihood LFL1 and the global moving subject likelihood LFM1 for each pixel region GA output from the global feature detection unit 8.
The blending unit 11 outputs an HDR image signal by performing HDR composition using the blend rate BF for each pixel region GA.
This simplifies the configuration of the signal processing unit 5 and reduces the processing load.
As yet another modification, the threshold values Th1 and Th2 shown in FIG. 10 and the threshold values Th3 and Th4 shown in FIG. 11 may be made variable.
These thresholds are provided to suppress misjudgments in which a subject is erroneously determined to be a periodic blinker or a moving subject.
However, depending on the imaging environment, for example in a dark scene, misjudgments may nevertheless occur frequently.
When the brightness of the surrounding environment is estimated, based on the output of a brightness sensor, to be lower than a certain level, any or all of the thresholds Th1, Th2, Th3, and Th4 may be raised. Raising the thresholds narrows the region Ar3 shown in FIG. 10 and the region Ar4 shown in FIG. 11, so that the local blinker likelihood LFL2 and the local moving subject likelihood LFM2 for a subject are calculated to be lower.
This suppresses the misjudgments described above and makes it possible to generate an appropriate HDR image signal.
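The threshold adjustment could be sketched as follows; all names, the lux limit, and the scaling factor are illustrative assumptions, not values from the patent:

```python
def adjust_thresholds(base_thresholds, ambient_lux, dark_limit_lux=50.0,
                      dark_scale=1.5):
    """Raise the judgment thresholds in dark scenes.

    base_thresholds maps names such as 'Th1'..'Th4' to their default
    values; when the brightness sensor reports a level below
    dark_limit_lux, every threshold is scaled up so that the blinker /
    moving-subject likelihoods come out lower and misjudgments are
    suppressed.
    """
    if ambient_lux < dark_limit_lux:
        return {k: v * dark_scale for k, v in base_thresholds.items()}
    return dict(base_thresholds)

base = {"Th1": 10.0, "Th2": 20.0, "Th3": 8.0, "Th4": 16.0}
print(adjust_thresholds(base, ambient_lux=30.0))
# {'Th1': 15.0, 'Th2': 30.0, 'Th3': 12.0, 'Th4': 24.0}
```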
Another modification is described next.
In the examples above, the blend rate is determined so that the ratio of the second signal S2 becomes high when the global blinker likelihood LFL1 or the local blinker likelihood LFL2 of the subject is high.
Conversely, when the global blinker likelihood LFL1 or the local blinker likelihood LFL2 is low, the blend rate may basically be determined so that the ratio of the first signal S1 is high. Furthermore, unless the global blinker likelihood LFL1 or the local blinker likelihood LFL2 is high, the blend rate may be determined so that the ratio of the second signal S2 is 0%.
In this way, even when the global moving subject likelihood LFM1 or the local moving subject likelihood LFM2 is calculated to be low and a moving subject is erroneously judged unlikely to be a moving subject, the occurrence of blur for that subject can be suppressed.
<7. Summary>
As described above, the image sensor 1 in the embodiments includes a pixel array unit 2 in which first pixels G1, read out with a first light receiving sensitivity, and second pixels G2, read out with a second light receiving sensitivity lower than the first, are arranged two-dimensionally; and a global feature detection unit 8 (8A) that detects, as global features, the features of a pixel region GA containing a plurality of first pixels G1 and a plurality of second pixels G2, based on a first signal S1 output from the first pixels G1 with a first exposure time and a second signal S2 output from the second pixels G2 with a second exposure time different from the first exposure time. The image sensor 1 also includes a blend rate determination unit 10 (10A) that determines the blend rate BF (BF1, BF2) of the first signal S1 and the second signal S2 based on the detection result of the global feature detection unit 8 (8A).
The difference in light receiving sensitivity between the first pixel G1 and the second pixel G2 may be due to a difference in light receiving area, a difference in exposure time, or a difference in the characteristics of the light receiving elements. A global feature is a feature of the subject imaged within one pixel region GA as a whole.
By detecting global features for each pixel region GA, the blend rate BF of the first signal S1 and the second signal S2 can be changed for each pixel region GA, and appropriate HDR composition can be performed with a small processing load.
Furthermore, by using the signals of the first pixels G1 and the second pixels G2 (the first signal S1 and the second signal S2), which have different exposure times, for each pixel region GA, it is possible to detect the global features of a subject that is a periodic blinker and the global features of a subject that is a moving subject.
In addition, most subjects are imaged over a certain extent of the pixel array unit 2; few subjects are captured by only a single pixel G. Detecting global features for each pixel region GA therefore captures subject features efficiently. Moreover, if a single pixel G fails and can no longer output a normal pixel signal, the pixel signal output from that pixel G may exhibit, for example, the characteristics of a periodic blinker, and the subject features for that pixel G could be detected erroneously. With the present configuration, however, global features are detected together with the surrounding pixels G, so the subject features detected from the failed pixel G are largely ignored, and inappropriate blending based on erroneously detected features is avoided. This makes it possible to execute appropriate blending processing even when a pixel G has failed.
As shown in the description of the signal processing unit 5, a global feature may be a feature relating to luminance change over time.
Subjects exhibiting luminance change over time include, for example, light emitters such as LEDs (Light Emitting Diodes) that blink with a period of several milliseconds to several tens of milliseconds, and objects reflecting the light emitted by such emitters. Subjects with larger luminance changes are detected more easily. A large luminance change is also readily detected when the subject is a fast-moving subject.
By detecting such subjects, the first signal S1 and the second signal S2 can be blended appropriately, and artifacts caused by periodic blinkers and moving subjects can be suppressed.
The global feature detection unit 8 (8A) of the image sensor 1 may perform detection by calculating, as the global blinker likelihood LFL1, a likelihood representing the degree to which the imaged subject is a periodic blinker.
As a result, when the subject is a periodic blinker such as an LED, the global blinker likelihood LFL1 is calculated to be high. The global blinker likelihood LFL1 becomes particularly high when a periodic blinker occupies most of the target pixel region GA.
Subjects whose luminance changes include moving subjects as well as periodic blinkers, but calculating the global blinker likelihood LFL1 as in this configuration makes it possible to distinguish whether the subject is a periodic blinker or a moving subject.
The blend rate BF (BF1, BF2) can then be varied according to whether the subject is a periodic blinker or a moving subject, enabling HDR composition appropriate to the subject.
As described with reference to FIG. 7 and elsewhere, the global feature detection unit 8 (8A) of the image sensor 1 may perform the detection by executing a process of calculating, for each pixel region GA, the temporal change in the luminance average value of the first pixels G1 as a first pixel change amount G1V; a process of calculating, for each pixel region GA, the temporal change in the luminance average value of the second pixels G2 as a second pixel change amount G2V; and a process of calculating the difference between the first pixel change amount G1V and the second pixel change amount G2V as a change amount difference value VD.
The first pixel change amount G1V and the second pixel change amount G2V, which are the temporal changes in the per-region luminance average values L1A and L2A, involve less data than values calculated for each pixel G. Likewise, the amount of data for the change amount difference value VD calculated from the first pixel change amount G1V and the second pixel change amount G2V is small.
By using such information, global features can be detected with a small processing load.
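The per-region computation described above can be sketched as follows; the function names are hypothetical, and the use of absolute differences is an assumption, since the patent states only that frame-to-frame changes in the per-region luminance averages are computed:

```python
def region_mean(pixels):
    """Luminance average value of one pixel region GA."""
    return sum(pixels) / len(pixels)

def region_changes(g1_now, g1_prev, g2_now, g2_prev):
    """Per-region change amounts for one pixel region GA.

    g1_* / g2_* are lists of first-pixel / second-pixel luminance
    values for the current and previous frame.  Returns the first
    pixel change amount G1V, the second pixel change amount G2V, and
    the change amount difference value VD.
    """
    g1v = abs(region_mean(g1_now) - region_mean(g1_prev))
    g2v = abs(region_mean(g2_now) - region_mean(g2_prev))
    vd = abs(g1v - g2v)
    return g1v, g2v, vd

# A region where the short-exposure pixels dropped sharply between
# frames while the long-exposure pixels stayed nearly stable.
g1v, g2v, vd = region_changes([10, 12], [100, 102], [90, 92], [91, 93])
print(g1v, g2v, vd)  # 90.0 1.0 89.0
```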
As described with reference to FIG. 9 and the like, in the image sensor 1, the global blinking-object likelihood LFL1 may be calculated so as to become larger as the change amount difference value VD becomes larger and as the second pixel change amount G2V becomes smaller.
For example, when the exposure time of the first pixel G1 having the higher light receiving sensitivity is made shorter than that of the second pixel G2 having the lower light receiving sensitivity, the entire exposure time of the first pixel G1 may fall within the non-emission phase of a periodic blinking object. Even in such a case, part of the exposure time of the second pixel G2 may overlap the emission phase of the periodic blinking object. In such a case, the change amount difference value VD becomes large.
Further, regarding the second pixel change amount G2V, the second pixel G2 with its long exposure time is likely to capture the periodic blinking object in its emission state. In particular, when the exposure time is longer than the non-emission time of the periodic blinking object, the emission state is reliably captured, and the second pixel change amount G2V becomes small.
Accordingly, when the change amount difference value VD is large and the second pixel change amount G2V is small, the global blinking-object likelihood LFL1 appropriately indicates that the subject is a periodic blinking object, so that an appropriate blend ratio BF (BF1, BF2) can be set.
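A concrete form of LFL1 consistent with these monotonic directions might look like the following. The linear ramps and the scale constants are illustrative assumptions; the source fixes only that LFL1 grows with VD and shrinks with G2V:

```python
def global_blinking_likelihood(vd, g2v, vd_scale=64.0, g2v_scale=64.0):
    """Global blinking-object likelihood LFL1 in [0, 1].

    Monotonically increasing in the change amount difference value VD
    and decreasing in the second pixel change amount G2V.
    """
    vd_term = min(max(vd / vd_scale, 0.0), 1.0)            # large VD -> more blinking-like
    g2v_term = 1.0 - min(max(g2v / g2v_scale, 0.0), 1.0)   # small G2V -> more blinking-like
    return vd_term * g2v_term
```

With this form, a region where VD saturates and G2V is near zero yields LFL1 close to 1, matching the periodic-blinking scenario described above.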
As described with reference to FIG. 6 and the like, the image sensor 1 may include a local feature detection unit 9 (9A) that detects a feature for each pixel G by using the global feature detected by the global feature detection unit 8 (8A).
As a result, a feature of the subject is detected for each pixel G.
Accordingly, an appropriate blend ratio BF (BF1, BF2) is determined for each pixel G, and optimal HDR composition can be performed.
As described with reference to FIGS. 6 and 10, the local feature detection unit 9 (9A) of the image sensor 1 may calculate a blinking-object likelihood for each pixel G as a local blinking-object likelihood LFL2, based on an inter-pixel difference value GD, calculated as the difference between the luminance values of the first pixel G1 and the second pixel G2, and on the global blinking-object likelihood LFL1.
The inter-pixel difference value GD and the global blinking-object likelihood LFL1 can be used to estimate the degree to which the subject is a periodic blinking object.
This makes it possible to detect that the subject is a periodic blinking object, so that the blend ratio BF (BF1, BF2) can be set appropriately for each pixel G.
As described with reference to FIG. 10 and the like, in the image sensor 1, the local blinking-object likelihood LFL2 may be calculated so as to become larger for pixels G having a larger inter-pixel difference value GD and for pixels G having a larger global blinking-object likelihood LFL1.
For example, subjects producing a large inter-pixel difference value GD include moving subjects and periodic blinking objects. Among these, a subject with a large inter-pixel difference value GD and a high global blinking-object likelihood LFL1 is likely to be a periodic blinking object rather than a moving subject.
Since the local blinking-object likelihood LFL2 is thus an index of the certainty that the subject captured at a pixel G is a periodic blinking object, the blend ratio BF (BF1, BF2) can be set appropriately for each pixel G based on it.
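Combining the per-pixel difference with the per-region likelihood can be sketched as below. The multiplicative combination and the `gd_scale` constant are assumptions; the source states only that LFL2 increases with both GD and LFL1:

```python
def local_blinking_likelihood(gd, lfl1, gd_scale=64.0):
    """Local blinking-object likelihood LFL2 for one pixel G.

    Larger for a larger inter-pixel difference value GD and for a
    larger global blinking-object likelihood LFL1 of the enclosing
    pixel region GA.
    """
    gd_term = min(max(gd / gd_scale, 0.0), 1.0)
    return gd_term * lfl1
```

A pixel whose GD is large but whose region has LFL1 near zero thus gets a low LFL2, which is exactly how the global feature disambiguates motion from blinking at the pixel level.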
As described in the modified example, the blend ratio determination unit 10 (10A) of the image sensor 1 may determine the blend ratio BF (BF1, BF2) so that the higher the global blinking-object likelihood LFL1 is, the higher the ratio of the second signal S2 becomes.
Depending on the relationship between the exposure times of the first pixel G1 and the second pixel G2, the second pixel G2 is given a longer exposure time than the first pixel G1. In this case, the second pixel G2 is likely to have captured the periodic blinking object in its emission state.
Accordingly, by increasing the ratio of the second signal S2 output from the second pixel G2, the possibility of obtaining image data in which the periodic blinking object appears in its emission state can be increased.
As described above, the blend ratio determination unit 10 (10A) of the image sensor 1 may determine the blend ratio BF (BF1, BF2) for each pixel G so that the higher the local blinking-object likelihood LFL2 is, the higher the ratio of the second signal S2 becomes.
In the example described above, the second pixel G2 is given a longer exposure time than the first pixel G1, and its signal is therefore likely to be generated while capturing the periodic blinking object in its emission state.
By increasing the ratio of the second signal S2 according to the local blinking-object likelihood LFL2 of each pixel G, the possibility of obtaining image data in which the periodic blinking object appears in its emission state can be increased. In particular, determining the blend ratio in this way for each pixel G prevents the blend ratio from being determined as if a periodic blinking object were captured at a pixel G where none is, enabling appropriate HDR composition according to the subject characteristics of each pixel G.
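A per-pixel blend along these lines might be sketched as follows. The identity mapping from LFL2 to BF2 and the `exposure_gain` used to bring the short-exposure signal onto the long-exposure scale are assumptions, not details from the source:

```python
def blend_pixel(s1, s2, lfl2, exposure_gain=4.0):
    """HDR blend of one pixel G using the local blinking-object likelihood.

    BF2 rises with LFL2, so pixels likely to show a periodic blinking
    object lean on the long-exposure second signal S2, which is more
    likely to have captured the emission state; S1 is pre-scaled by an
    assumed sensitivity/exposure gain.
    """
    bf2 = min(max(lfl2, 0.0), 1.0)
    bf1 = 1.0 - bf2
    return bf1 * (s1 * exposure_gain) + bf2 * s2
```

At LFL2 = 1 the output is pure S2; at LFL2 = 0 it is the gain-compensated S1.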
As described with reference to FIG. 10 and the like, the local feature detection unit 9 (9A) of the image sensor 1 may be capable of executing a process of calculating a local moving-subject likelihood LFM2, and the local moving-subject likelihood LFM2 may be calculated so as to become larger for pixels G having a larger inter-pixel difference value GD and for pixels G having a smaller global blinking-object likelihood LFL1.
For example, subjects producing a large inter-pixel difference value GD include moving subjects and periodic blinking objects. Among these, a subject with a large inter-pixel difference value GD and a low global blinking-object likelihood LFL1 is likely to be a moving subject.
By appropriately detecting that the subject is a moving subject in this way, the blend ratio BF (BF1, BF2) can be set appropriately for each pixel G. Moreover, since no separate index value representing the degree to which the subject is a moving subject needs to be calculated, an appropriate blend ratio BF (BF1, BF2) can be determined while suppressing an increase in the processing load.
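This variant of LFM2, reusing LFL1 instead of a separate motion index, can be sketched as below. Using the complement (1 - LFL1) and the `gd_scale` constant are assumed concrete forms:

```python
def local_moving_likelihood_from_blinking(gd, lfl1, gd_scale=64.0):
    """Local moving-subject likelihood LFM2 derived from LFL1.

    Larger for pixels G with a large inter-pixel difference value GD
    and a small global blinking-object likelihood LFL1: a large
    difference not explained by blinking is attributed to motion.
    """
    gd_term = min(max(gd / gd_scale, 0.0), 1.0)
    return gd_term * (1.0 - lfl1)
```

Since LFL1 is already computed per region, this costs only one extra multiply per pixel, which is the processing-load advantage mentioned above.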
As described with reference to FIGS. 7 and 9, the global feature detection unit 8 (8A) of the image sensor 1 may execute a process of calculating the temporal change amount of the luminance of the first pixels G1 for each pixel region GA as the first pixel change amount G1V, a process of calculating the temporal change amount of the luminance of the second pixels G2 for each pixel region GA as the second pixel change amount G2V, and a process of calculating the difference between the first pixel change amount G1V and the second pixel change amount G2V as the change amount difference value VD, and a global moving-subject likelihood LFM1, representing the degree to which the captured subject is a moving subject, may be calculated so as to become larger as the change amount difference value VD becomes smaller and as the second pixel change amount G2V becomes larger.
For example, even though the light receiving sensitivities differ, the change amounts of the first signal S1 output from the first pixel G1 and of the second signal S2 output from the second pixel G2 both become large when the subject is a moving subject, so their difference can be expected to be small.
Accordingly, when the change amount difference value VD is small and the second pixel change amount G2V is large, the global moving-subject likelihood LFM1 appropriately indicates that the subject is a moving subject, so that an appropriate blend ratio BF (BF1, BF2) can be set.
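LFM1 mirrors LFL1 with the monotonic directions reversed; a sketch under the same assumed linear ramps and scale constants:

```python
def global_moving_likelihood(vd, g2v, vd_scale=64.0, g2v_scale=64.0):
    """Global moving-subject likelihood LFM1 in [0, 1].

    A moving subject changes both per-region luminance averages
    together, so VD stays small while G2V grows; LFM1 is therefore
    decreasing in VD and increasing in G2V.
    """
    vd_term = 1.0 - min(max(vd / vd_scale, 0.0), 1.0)   # small VD -> more motion-like
    g2v_term = min(max(g2v / g2v_scale, 0.0), 1.0)      # large G2V -> more motion-like
    return vd_term * g2v_term
```

Note that with these forms a region cannot score high on both LFL1 and LFM1 at once, since they respond oppositely to VD and G2V.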
As described with reference to FIGS. 6 and 11, the image sensor 1 may include a local feature detection unit 9 (9A) that executes a process of calculating a local moving-subject likelihood LFM2, which is the moving-subject likelihood for each pixel G, and the local moving-subject likelihood LFM2 may be calculated so as to become larger for pixels having a larger inter-pixel difference value GD, calculated as the difference between the luminance of the first pixel G1 and that of the second pixel G2, and for pixels having a larger global moving-subject likelihood LFM1.
For example, subjects producing a large inter-pixel difference value GD include moving subjects and periodic blinking objects. Among these, a subject with a large inter-pixel difference value GD and a high local moving-subject likelihood LFM2 is likely to be a moving subject rather than a periodic blinking object.
Since it can thus be appropriately detected that the subject is a moving subject, the blend ratio BF (BF1, BF2) can be set appropriately for each pixel G.
As described in the modified example, the blend ratio determination unit 10 (10A) of the image sensor 1 may determine the blend ratio BF (BF1, BF2) so that the higher the global moving-subject likelihood LFM1 is, the higher the ratio of the first signal S1 becomes.
The first signal S1 is obtained through a shorter exposure time than the second signal S2. Image data generated based on the first signal S1 therefore exhibits less motion blur than image data generated based on the second signal S2.
Accordingly, blending with an increased ratio of the first signal S1 yields image data in which motion blur of the moving subject is suppressed.
As described in the examples above, the blend ratio determination unit 10 (10A) of the image sensor 1 may determine the blend ratio BF (BF1, BF2) for each pixel G so that the higher the local moving-subject likelihood LFM2 is, the higher the ratio of the first signal S1 becomes.
The first signal S1 is obtained through a shorter exposure time than the second signal S2, so image data generated based on the first signal S1 exhibits less motion blur than image data generated based on the second signal S2.
By increasing the ratio of the first signal S1 according to the local moving-subject likelihood LFM2 of each pixel G, image data with suppressed motion blur can be obtained. In particular, by determining the blend ratio BF (BF1, BF2) in this way for each pixel G, motion blur is suppressed while, for other pixels G at which a periodic blinking object is captured, increasing the ratio of the second signal S2 yields image data in which the emission state of the periodic blinking object is captured.
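One way the two per-pixel likelihoods could jointly set the blend ratios is sketched below. The symmetric mapping centered at 0.5 is purely an illustrative assumption; the source specifies only the two opposing monotonic tendencies:

```python
def blend_ratios(lfl2, lfm2):
    """Per-pixel blend ratios (BF1, BF2) for the first and second signals.

    The blinking-object likelihood LFL2 pushes toward the long-exposure
    second signal S2; the moving-subject likelihood LFM2 pushes toward
    the short-exposure first signal S1 with its lower motion blur.
    """
    lfl2 = min(max(lfl2, 0.0), 1.0)
    lfm2 = min(max(lfm2, 0.0), 1.0)
    bf2 = min(max(0.5 + 0.5 * (lfl2 - lfm2), 0.0), 1.0)
    return 1.0 - bf2, bf2
```

A blinking pixel (LFL2 high, LFM2 low) is rendered from S2; a moving pixel from S1; an ambiguous pixel falls back to an even blend.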
As described with reference to FIG. 2 and the like, the first pixel G1 and the second pixel G2 of the image sensor 1 may be configured to have different light receiving sensitivities by having different light receiving areas.
This makes it possible to detect the global feature of each pixel region GA using the first pixels G1 and second pixels G2 with their different light receiving sensitivities.
Accordingly, an appropriate blend ratio BF (BF1, BF2) based on the global feature can be determined for each pixel region GA.
As described with reference to FIG. 2 and the like, the exposure time of the first pixel G1 of the image sensor 1 may be made shorter than that of the second pixel G2. For example, the exposure time of the first pixel G1 is set to 8 msec and that of the second pixel G2 to 11 msec.
This makes it possible to detect, as a global feature, the characteristic of a periodic blinking object that blinks periodically, for example.
Accordingly, the blend ratio BF (BF1, BF2) can be determined appropriately based on the global feature and HDR composition can be performed.
Note that the exposure time of the second pixel G2 is desirably set based on the blinking period of the periodic blinking object. In particular, when the image sensor 1 is used as a vehicle-mounted sensor, the targets that act as periodic blinking objects, such as the brake lamps of a preceding vehicle and traffic lights, are clearly defined, so an appropriate exposure time can be set.
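The relationship between exposure time and blinking period can be checked with a short helper. The 100 Hz period and duty-cycle figures in the usage note are illustrative assumptions, not values from the source:

```python
def always_catches_emission(exposure_ms, blink_period_ms, emission_duty):
    """True when an exposure of this length is guaranteed to overlap the
    emission phase of a periodic blinking object, i.e. when it exceeds
    the non-emission time blink_period * (1 - duty)."""
    non_emission_ms = blink_period_ms * (1.0 - emission_duty)
    return exposure_ms > non_emission_ms
```

For a hypothetical 100 Hz source (10 msec period) at 50% duty, the 11 msec exposure of the second pixel G2 always overlaps the emission phase, while the 8 msec exposure of the first pixel G1 can fall entirely within the dark phase of a source with a short duty cycle.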
As described with reference to FIG. 15 in the example using three or more signals, the pixel array unit 2 of the image sensor 1 may be capable of outputting a third signal S3 read out with a third light receiving sensitivity different from both the first and the second light receiving sensitivity. Further, the global feature detection unit 8 (8A) may detect the global feature of a pixel region GA containing a plurality of pixels G from which the third signal S3 is output, based on the third signal S3 and on either the first signal S1 or the second signal S2.
The blend ratio determination unit 10 (10A) then determines blend ratios for blending the three pixel signals.
Accordingly, the dynamic range of the HDR composition can be increased.
Note that the effects described in the present specification are merely examples and are not limiting, and other effects may be obtained.
<8. This technology>
(1)
An image sensor including:
a pixel array unit in which first pixels read out with a first light receiving sensitivity and second pixels read out with a second light receiving sensitivity lower than the first light receiving sensitivity are arranged two-dimensionally;
a global feature detection unit that detects, as a global feature, a feature of a pixel region containing a plurality of the first pixels and a plurality of the second pixels, based on a first signal output from the first pixels given a first exposure time and a second signal output from the second pixels given a second exposure time different from the first exposure time; and
a blend ratio determination unit that determines a blend ratio of the first signal and the second signal based on a detection result of the global feature detection unit.
(2)
The image sensor according to (1) above, in which the global feature is a feature of a luminance change over time.
(3)
The image sensor according to (2) above, in which the global feature detection unit performs the detection by calculating, as a global blinking-object likelihood, a likelihood representing the degree to which the captured subject is a periodic blinking object.
(4)
The image sensor according to (3) above, in which the global feature detection unit performs the detection by executing a process of calculating a temporal change amount of a luminance average value of the first pixels for each pixel region as a first pixel change amount, a process of calculating a temporal change amount of a luminance average value of the second pixels for each pixel region as a second pixel change amount, and a process of calculating a difference between the first pixel change amount and the second pixel change amount as a change amount difference value.
(5)
The image sensor according to (4) above, in which the global blinking-object likelihood is calculated so as to become larger as the change amount difference value becomes larger and as the second pixel change amount becomes smaller.
(6)
The image sensor according to any one of (3) to (5) above, including a local feature detection unit that detects a feature for each pixel using the global feature detected by the global feature detection unit.
(7)
The image sensor according to (6) above, in which the local feature detection unit calculates a blinking-object likelihood for each pixel as a local blinking-object likelihood, based on an inter-pixel difference value calculated as a difference between luminance values of the first pixel and the second pixel and on the global blinking-object likelihood.
(8)
The image sensor according to (7) above, in which the local blinking-object likelihood is calculated so as to become larger for pixels having a larger inter-pixel difference value and for pixels having a larger global blinking-object likelihood.
(9)
The image sensor according to any one of (3) to (8) above, in which the blend ratio determination unit determines the blend ratio so that the higher the global blinking-object likelihood is, the higher the ratio of the second signal becomes.
(10)
The image sensor according to (8) above, in which the blend ratio determination unit determines the blend ratio for each pixel so that the higher the local blinking-object likelihood is, the higher the ratio of the second signal becomes.
(11)
The image sensor according to (8) or (10) above, in which the local feature detection unit is capable of executing a process of calculating a local moving-subject likelihood, and the local moving-subject likelihood is calculated so as to become larger for pixels having a larger inter-pixel difference value and for pixels having a smaller global blinking-object likelihood.
(12)
The image sensor according to any one of (3) to (11) above, in which the global feature detection unit executes a process of calculating a temporal change amount of the luminance of the first pixels for each pixel region as a first pixel change amount, a process of calculating a temporal change amount of the luminance of the second pixels for each pixel region as a second pixel change amount, and a process of calculating a difference between the first pixel change amount and the second pixel change amount as a change amount difference value, and a global moving-subject likelihood representing the degree to which the captured subject is a moving subject is calculated so as to become larger as the change amount difference value becomes smaller and as the second pixel change amount becomes larger.
(13)
The image sensor according to (12) above, including a local feature detection unit that executes a process of calculating a local moving-subject likelihood, which is a moving-subject likelihood for each pixel, in which the local moving-subject likelihood is calculated so as to become larger for pixels having a larger inter-pixel difference value, calculated as a difference between the luminance of the first pixel and that of the second pixel, and for pixels having a larger global moving-subject likelihood.
(14)
The image sensor according to (12) or (13) above, in which the blend ratio determination unit determines the blend ratio so that the higher the global moving-subject likelihood is, the higher the ratio of the first signal becomes.
(15)
The image sensor according to (13) above, in which the blend ratio determination unit determines the blend ratio for each pixel so that the higher the local moving-subject likelihood is, the higher the ratio of the first signal becomes.
(16)
The image sensor according to any one of (1) to (15) above, in which the first pixel and the second pixel have different light receiving sensitivities by having different light receiving areas.
(17)
The image sensor according to any one of (1) to (16) above, in which the exposure time of the first pixel is shorter than the exposure time of the second pixel.
(18)
The image sensor according to any one of (1) to (17) above, in which the pixel array unit is capable of outputting a third signal read out with a third light receiving sensitivity different from both the first light receiving sensitivity and the second light receiving sensitivity, and the global feature detection unit detects a global feature of a pixel region containing a plurality of pixels from which the third signal is output, based on the third signal and on either the first signal or the second signal.
(19)
An imaging device including:
an image sensor that performs photoelectric conversion; and
an optical member that condenses reflected light from a subject onto the image sensor,
in which the image sensor includes:
a pixel array unit in which first pixels read out with a first light receiving sensitivity and second pixels read out with a second light receiving sensitivity lower than the first light receiving sensitivity are arranged two-dimensionally;
a global feature detection unit that detects, as a global feature, a feature of a pixel region containing a plurality of the first pixels and a plurality of the second pixels, based on a first signal output from the first pixels given a first exposure time and a second signal output from the second pixels given a second exposure time different from the first exposure time; and
a blend ratio determination unit that determines a blend ratio of the first signal and the second signal based on a detection result of the global feature detection unit.
(20)
A signal processing method including:
a process of detecting, as a global feature, a feature of a pixel region containing a plurality of first pixels and a plurality of second pixels, based on a first signal output from the first pixels, which are read out with a first light receiving sensitivity and given a first exposure time, and a second signal output from the second pixels, which are read out with a second light receiving sensitivity lower than the first light receiving sensitivity and given a second exposure time different from the first exposure time; and
a process of determining a blend ratio of the first signal and the second signal based on the detection result of the global feature.
1 Image sensor
2 Pixel array unit
5, 5A Signal processing unit
8, 8A Global feature detection unit
9, 9A Local feature detection unit
10, 10A Blend ratio determination unit
G Pixel
G1 First pixel
G2 Second pixel
GA Pixel region
S1 First signal
S2 Second signal
S3 Third signal
G1V First pixel change amount
G2V Second pixel change amount
VD Change amount difference value
LFL1 Global blinking-object likelihood
LFL2 Local blinking-object likelihood
LFM1 Global moving-subject likelihood
LFM2 Local moving-subject likelihood
GD Inter-pixel difference value
BF, BF1, BF2 Blend ratio

Claims (20)

  1.  第1の受光感度による読み出しが行われる第1画素と前記第1の受光感度よりも低い感度である第2の受光感度による読み出しが行われる第2画素とが二次元に配列された画素アレイ部と、
     第1の露光時間とされた前記第1画素から出力される第1信号と前記第1の露光時間とは異なる第2の露光時間とされた前記第2画素から出力される第2信号とに基づいて、前記第1画素と前記第2画素がそれぞれ複数含まれた画素領域についての特徴を大局的特徴として検出する大局的特徴検出部と、
     前記大局的特徴検出部の検出結果に基づいて前記第1信号と前記第2信号のブレンド率を決定するブレンド率決定部と、を備えた
     イメージセンサ。
    A pixel array unit in which a first pixel that is read by a first light receiving sensitivity and a second pixel that is read by a second light receiving sensitivity, which is lower than the first light receiving sensitivity, are arranged two-dimensionally. When,
    The first signal output from the first pixel, which is the first exposure time, and the second signal, which is output from the second pixel, which is the second exposure time different from the first exposure time. Based on this, a global feature detection unit that detects features of a pixel region containing a plurality of the first pixel and the second pixel as global features,
    An image sensor including a blend rate determining unit that determines the blending rate of the first signal and the second signal based on the detection result of the global feature detecting unit.
  2.  前記大局的特徴は、時間経過に伴う輝度変化についての特徴とされた
     請求項1に記載のイメージセンサ。
    The image sensor according to claim 1, wherein the global feature is a feature regarding a change in luminance with the passage of time.
  3.  前記大局的特徴検出部は、撮像被写体が周期明滅体である度合いを表す尤度を大局的明滅体尤度として算出することにより前記検出を行う
     請求項2に記載のイメージセンサ。
    The image sensor according to claim 2, wherein the global feature detection unit performs the detection by calculating the likelihood representing the degree to which the image-receiving subject is a periodic blinking object as the global blinking object likelihood.
  4.  前記大局的特徴検出部は、前記画素領域ごとに前記第1画素の輝度平均値の時間的な変化量を第1画素変化量として算出する処理と、前記画素領域ごとに前記第2画素の輝度平均値の時間的な変化量を第2画素変化量として算出する処理と、前記第1画素変化量と前記第2画素変化量の差分を変化量差分値として算出する処理と、を実行することにより前記検出を行う
     請求項3に記載のイメージセンサ。
    The global feature detection unit calculates a temporal change amount of the brightness average value of the first pixel for each pixel area as a first pixel change amount, and a processing for calculating the brightness of the second pixel for each pixel area. The process of calculating the temporal change amount of the average value as the second pixel change amount and the process of calculating the difference between the first pixel change amount and the second pixel change amount as the change amount difference value are executed. The image sensor according to claim 3, wherein the detection is performed by the above.
  5.  The image sensor according to claim 4, wherein the global blinking-object likelihood is calculated so as to be larger as the change amount difference value is larger and larger as the second pixel change amount is smaller.
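As a concrete illustration of claims 4 and 5, the sketch below (not taken from the application; the function name and the softening constant 20.0 are assumptions of this note) computes a global blinking-object (flicker) likelihood for one pixel region from the region's average luminance in two consecutive frames. A flickering light source moves the short-exposure average strongly while the long-exposure average, which integrates over the flicker period, barely changes.

```python
def global_flicker_likelihood(prev_avg1, cur_avg1, prev_avg2, cur_avg2):
    """Illustrative global blinking-object likelihood for one pixel region.

    prev_avg1/cur_avg1: average luminance of the first (short-exposure)
    pixels in the previous and current frame; prev_avg2/cur_avg2: the
    same for the second (long-exposure) pixels.
    """
    delta1 = abs(cur_avg1 - prev_avg1)   # first pixel change amount (claim 4)
    delta2 = abs(cur_avg2 - prev_avg2)   # second pixel change amount (claim 4)
    diff = max(delta1 - delta2, 0.0)     # change amount difference value
    # Claim 5: larger as diff grows, larger as delta2 shrinks.
    # The constant 20.0 merely softens the ratio; it is an assumption.
    return diff / (diff + delta2 + 20.0)
```

A large short-exposure swing against a quiet long-exposure signal yields a likelihood near 1; a static scene yields 0.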
  6.  The image sensor according to claim 3, further comprising a local feature detection unit that detects a feature for each pixel using the global feature detected by the global feature detection unit.
  7.  The image sensor according to claim 6, wherein the local feature detection unit calculates, as a local blinking-object likelihood, a blinking-object likelihood for each pixel based on the global blinking-object likelihood and on an inter-pixel difference value calculated as the difference between the luminance values of the first pixel and the second pixel.
  8.  The image sensor according to claim 7, wherein the local blinking-object likelihood is calculated so as to be larger for a pixel with a larger inter-pixel difference value and larger for a pixel with a larger global blinking-object likelihood.
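Claims 7 and 8 refine the region-level result to individual pixels. A minimal sketch, with an assumed saturating mapping (the function name and the constant 20.0 are not from the application):

```python
def local_flicker_likelihood(lum1, lum2, global_likelihood):
    """Illustrative per-pixel blinking-object likelihood.

    lum1/lum2: exposure-normalized luminance of the first and second
    pixel at this location; global_likelihood: the region's global
    blinking-object likelihood from claim 3.
    """
    inter_pixel_diff = abs(lum1 - lum2)              # inter-pixel difference value
    diff_term = inter_pixel_diff / (inter_pixel_diff + 20.0)
    # Claim 8: larger inter-pixel difference and larger global
    # likelihood both raise the local likelihood.
    return diff_term * global_likelihood
```

Gating the per-pixel difference by the global likelihood keeps isolated noisy pixels from being classified as flicker when the region as a whole shows none.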
  9.  The image sensor according to claim 3, wherein the blend rate determination unit determines the blend rate such that the ratio of the second signal increases as the global blinking-object likelihood increases.
  10.  The image sensor according to claim 8, wherein the blend rate determination unit determines the blend rate for each pixel such that the ratio of the second signal increases as the local blinking-object likelihood increases.
  11.  The image sensor according to claim 8, wherein the local feature detection unit is capable of executing a process of calculating a local moving-subject likelihood, and the local moving-subject likelihood is calculated so as to be larger for a pixel with a larger inter-pixel difference value and larger for a pixel with a smaller global blinking-object likelihood.
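Claim 11 is the mirror image of claim 8: a large per-pixel difference in a region with little flicker evidence is attributed to subject motion instead. An illustrative sketch under the same assumptions as before (names and the constant are hypothetical):

```python
def local_moving_subject_likelihood(lum1, lum2, global_flicker):
    """Illustrative per-pixel moving-subject likelihood: large
    inter-pixel differences NOT explained by region-wide flicker are
    treated as motion between the two exposures."""
    inter_pixel_diff = abs(lum1 - lum2)              # inter-pixel difference value
    diff_term = inter_pixel_diff / (inter_pixel_diff + 20.0)
    # Claim 11: larger difference and smaller global blinking-object
    # likelihood both raise the moving-subject likelihood.
    return diff_term * (1.0 - global_flicker)
```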
  12.  The image sensor according to claim 3, wherein the global feature detection unit executes a process of calculating, for each pixel region, the temporal change amount of the luminance of the first pixels as a first pixel change amount, a process of calculating, for each pixel region, the temporal change amount of the luminance of the second pixels as a second pixel change amount, and a process of calculating the difference between the first pixel change amount and the second pixel change amount as a change amount difference value, and wherein a global moving-subject likelihood, which represents the degree to which the imaged subject is a moving subject, is calculated so as to be larger as the change amount difference value is smaller and larger as the second pixel change amount is larger.
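The region-level motion test of claim 12 inverts the flicker test: real motion changes both exposures' luminance, so a large second-pixel change combined with a small change-amount difference suggests motion rather than flicker. A hedged sketch (function name and constant are assumptions):

```python
def global_moving_subject_likelihood(prev_avg1, cur_avg1, prev_avg2, cur_avg2):
    """Illustrative global moving-subject likelihood for one pixel
    region, from the region's average luminance in two frames."""
    delta1 = abs(cur_avg1 - prev_avg1)   # first pixel change amount
    delta2 = abs(cur_avg2 - prev_avg2)   # second pixel change amount
    diff = max(delta1 - delta2, 0.0)     # change amount difference value
    # Claim 12: larger as delta2 grows, larger as diff shrinks.
    return delta2 / (delta2 + diff + 20.0)
```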
  13.  The image sensor according to claim 12, further comprising a local feature detection unit that executes a process of calculating a local moving-subject likelihood, which is the moving-subject likelihood for each pixel, wherein the local moving-subject likelihood is calculated so as to be larger for a pixel with a larger inter-pixel difference value, calculated as the difference in luminance between the first pixel and the second pixel, and larger for a pixel with a larger global moving-subject likelihood.
  14.  The image sensor according to claim 12, wherein the blend rate determination unit determines the blend rate such that the ratio of the first signal increases as the global moving-subject likelihood increases.
  15.  The image sensor according to claim 13, wherein the blend rate determination unit determines the blend rate for each pixel such that the ratio of the first signal increases as the local moving-subject likelihood increases.
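Claims 10 and 15 together describe a per-pixel blend that pulls toward the long-exposure (second) signal under flicker and toward the short-exposure (first) signal under motion. One possible realization, with an arbitrary 0.5 base weight and clamping that the application does not specify:

```python
def blend_output(sig1, sig2, flicker_likelihood, motion_likelihood):
    """Illustrative per-pixel blend of the first (short-exposure) and
    second (long-exposure) signals from the two local likelihoods."""
    # Flicker raises the weight of the second signal (claim 10),
    # motion raises the weight of the first signal (claim 15).
    w2 = 0.5 + 0.5 * flicker_likelihood - 0.5 * motion_likelihood
    w2 = min(max(w2, 0.0), 1.0)          # keep the blend rate in [0, 1]
    return (1.0 - w2) * sig1 + w2 * sig2
```

At flicker likelihood 1 the output is purely the second signal; at motion likelihood 1 it is purely the first.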
  16.  The image sensor according to claim 1, wherein the first pixel and the second pixel have different light-receiving sensitivities because their light-receiving areas differ.
  17.  The image sensor according to claim 1, wherein the exposure time of the first pixel is shorter than the exposure time of the second pixel.
  18.  The image sensor according to claim 1, wherein the pixel array unit is capable of outputting a third signal read out with a third light-receiving sensitivity different from both the first light-receiving sensitivity and the second light-receiving sensitivity, and the global feature detection unit detects a global feature of a pixel region containing a plurality of pixels from which the third signal is output, based on either one of the first signal and the second signal together with the third signal.
  19.  An imaging device comprising:
     an image sensor that performs photoelectric conversion; and
     an optical member that condenses reflected light from a subject onto the image sensor,
     wherein the image sensor includes:
     a pixel array unit in which first pixels read out with a first light-receiving sensitivity and second pixels read out with a second light-receiving sensitivity lower than the first light-receiving sensitivity are arranged two-dimensionally;
     a global feature detection unit that detects, as a global feature, a feature of a pixel region containing a plurality of the first pixels and a plurality of the second pixels, based on a first signal output from the first pixels, for which a first exposure time is set, and a second signal output from the second pixels, for which a second exposure time different from the first exposure time is set; and
     a blend rate determination unit that determines a blend rate of the first signal and the second signal based on a detection result of the global feature detection unit.
  20.  A signal processing method comprising:
     a process of detecting, as a global feature, a feature of a pixel region containing a plurality of first pixels and a plurality of second pixels, based on a first signal output from the first pixels, which are read out with a first light-receiving sensitivity over a first exposure time, and a second signal output from the second pixels, which are read out, over a second exposure time different from the first exposure time, with a second light-receiving sensitivity lower than the first light-receiving sensitivity; and
     a process of determining a blend rate of the first signal and the second signal based on a detection result of the global feature.
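The two processes of the claimed signal processing method can be sketched end to end for a single region over a frame sequence. Everything below is illustrative (the likelihood mapping and the constant 20.0 are assumptions, not values from the application):

```python
def signal_processing_method(frames1, frames2):
    """Illustrative sketch of the claimed method over a sequence of
    per-region average luminances. frames1/frames2: exposure-normalized
    averages of the first (short-exposure) and second (long-exposure)
    pixels per frame. Returns a blended value per frame after the first."""
    out = []
    for i in range(1, len(frames1)):
        # Process 1: detect the global feature (flicker likelihood).
        d1 = abs(frames1[i] - frames1[i - 1])   # first pixel change amount
        d2 = abs(frames2[i] - frames2[i - 1])   # second pixel change amount
        diff = max(d1 - d2, 0.0)                # change amount difference value
        flicker = diff / (diff + d2 + 20.0)     # global blinking-object likelihood
        # Process 2: determine the blend rate; more flicker means more
        # weight on the long-exposure signal, which averages it out.
        w2 = flicker
        out.append((1.0 - w2) * frames1[i] + w2 * frames2[i])
    return out
```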
PCT/JP2021/046511 2021-01-08 2021-12-16 Image sensor, imaging device, and signal processing method WO2022149431A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-002060 2021-01-08
JP2021002060A JP2022107233A (en) 2021-01-08 2021-01-08 Image sensor, imaging device, and signal processing method

Publications (1)

Publication Number Publication Date
WO2022149431A1 true WO2022149431A1 (en) 2022-07-14

Family

ID=82357690

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/046511 WO2022149431A1 (en) 2021-01-08 2021-12-16 Image sensor, imaging device, and signal processing method

Country Status (2)

Country Link
JP (1) JP2022107233A (en)
WO (1) WO2022149431A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014030073A (en) * 2012-07-31 2014-02-13 Sony Corp Image processing apparatus, image processing method, and program
JP2016146592A * 2015-02-09 2016-08-12 Sony Corporation Image processing device, image processing method, and electronic apparatus
JP2020088791A * 2018-11-30 2020-06-04 Panasonic IP Management Co., Ltd. Image processing device, image processing method, and program

Also Published As

Publication number Publication date
JP2022107233A (en) 2022-07-21

Similar Documents

Publication Publication Date Title
US8081224B2 (en) Method and apparatus for image stabilization using multiple image captures
CN106561046B (en) High dynamic range imaging system with improved readout and method of system operation
US10674099B2 (en) Beam split extended dynamic range image capture system
US7884868B2 (en) Image capturing element, image capturing apparatus, image capturing method, image capturing system, and image processing apparatus
US10193627B1 (en) Detection of visible light communication sources over a high dynamic range
US10791288B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
WO2011108039A1 (en) Obstacle detection device, obstacle detection system provided therewith, and obstacle detection method
KR20200022041A (en) Multiplexed high dynamic range image
US9842284B2 (en) Image processing apparatus and method, and program
US10957028B2 (en) Image generation device, image generation method and recording medium
US10158811B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
JP6739066B2 (en) Imaging control apparatus, imaging control method, program, and recording medium recording the same
US9813687B1 (en) Image-capturing device, image-processing device, image-processing method, and image-processing program
JP2018201196A (en) Imaging device, imaging system, vehicle travel control system, and image processing apparatus
US20150221095A1 (en) Image processing apparatus, image processing method, and program
KR20130139788A (en) Imaging apparatus which suppresses fixed pattern noise generated by an image sensor of the apparatus
WO2022149431A1 (en) Image sensor, imaging device, and signal processing method
US10182186B2 (en) Image capturing apparatus and control method thereof
JP2021114762A (en) Low-light imaging system
JP2008147777A (en) Monitoring camera system
JP2021196643A (en) Inference device, imaging device, learning device, inference method, learning method and program
US10205870B2 (en) Image capturing apparatus and control method thereof
US11451724B2 (en) Imaging device and imaging system
US10791289B2 (en) Image processing apparatus, image processing method, and non-transitory computer readable recording medium
JP4881050B2 (en) In-vehicle imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21917652

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21917652

Country of ref document: EP

Kind code of ref document: A1