CN109429021B - High dynamic range imaging apparatus and method for generating high dynamic range image - Google Patents


Info

Publication number
CN109429021B
CN109429021B (application CN201811006574.4A)
Authority
CN
China
Prior art keywords
pixel
signal
image
dynamic range
high dynamic
Prior art date
Legal status
Active
Application number
CN201811006574.4A
Other languages
Chinese (zh)
Other versions
CN109429021A (en)
Inventor
M·米利纳尔
S·斯罗特
Current Assignee
Semiconductor Components Industries LLC
Original Assignee
Semiconductor Components Industries LLC
Priority date
Filing date
Publication date
Priority claimed from US16/042,167 external-priority patent/US10708524B2/en
Application filed by Semiconductor Components Industries LLC filed Critical Semiconductor Components Industries LLC
Publication of CN109429021A publication Critical patent/CN109429021A/en
Application granted granted Critical
Publication of CN109429021B publication Critical patent/CN109429021B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/53 Control of the integration time
    • H04N 25/57 Control of the dynamic range


Abstract

The invention provides a high dynamic range imaging apparatus and method for generating a high dynamic range image. The technical problem solved by the present invention is that conventional imaging apparatus using the multiple accumulation time method can produce inaccurate image data. The technical effect achieved by the present invention is to provide a high dynamic range image using a multiple accumulation time technique that operates by selecting the most reliable image data.

Description

High dynamic range imaging apparatus and method for generating high dynamic range image
Cross Reference to Related Applications
This application claims the benefit of U.S. provisional patent application serial No. 62/553,461, filed on September 1, 2017, the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to a high dynamic range imaging apparatus and method for generating a high dynamic range image.
Background
In many applications, such as automotive and other applications, image sensors with High Dynamic Range (HDR) are required. High dynamic range may be achieved with low dynamic range sensors, which may use a multiple integration time (accumulation time) method to achieve the required dynamic range. For low dynamic range sensors, the multiple accumulation time signals are linearized (normalized to the longest accumulation time signal) and combined into a single HDR signal on the sensor, in order to render such sensors as true HDR sensors. However, the combined signals may produce motion and flicker artifacts that can change the actual color of a moving or flickering object if one color channel output signal is composed of one combination of accumulation time signals and another output color channel signal is composed of a different combination. Once such an HDR output is constructed, it is not possible to decode which accumulation time signal combination was used to construct a given pixel HDR value. In the absence of such information, applications such as machine vision and/or Advanced Driver Assistance Systems (ADAS) cannot rely on the color of a given area of an image frame, which may result in unreliable operation of the application.
Disclosure of Invention
The technical problem solved by the present invention is that conventional imaging apparatus using the multiple accumulation time method can produce inaccurate image data.
According to various embodiments, methods and apparatus for high dynamic range imaging are configured to select the "best" signal for each pixel location to construct an HDR output. In one embodiment, the "best" signal is selected from among the various pixel signals based on the value of each pixel signal and on whether it is saturated. If only one pixel value at a particular pixel location is unsaturated, the "best" signal is that unsaturated signal. If more than one pixel value is unsaturated, the "best" signal is the pixel signal with the largest value. If two or more pixel values are unsaturated and share the same largest value, the "best" signal is the pixel signal with the shortest accumulation time.
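The selection rule described above may be sketched as follows. This is an illustrative Python rendering, not part of the patent text; the function name and the tuple layout (value, accumulation time, saturation flag) are assumptions made for the sketch:

```python
def select_best(signals):
    """Select the "best" pixel signal for one pixel location.

    `signals` is a list of (value, accumulation_time, is_saturated)
    tuples, one per captured image frame.
    """
    unsaturated = [s for s in signals if not s[2]]
    if not unsaturated:
        # All signals saturated: fall back to the shortest accumulation time.
        return min(signals, key=lambda s: s[1])
    if len(unsaturated) == 1:
        # Only one unsaturated signal: it is the "best" signal.
        return unsaturated[0]
    # Several unsaturated candidates: take the largest value; ties are
    # broken in favor of the shortest accumulation time.
    return max(unsaturated, key=lambda s: (s[0], -s[1]))
```

A single `max` with a compound key expresses both rules: a larger value always wins, and among equal values the smaller accumulation time wins.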
According to one aspect, a high dynamic range imaging device comprises: a pixel array comprising a plurality of pixels, wherein each pixel from the plurality of pixels is defined by a pixel location; and wherein: the pixel array is configured to generate a plurality of successive image frames, wherein each frame has a different accumulation time; each image frame includes a plurality of pixel signals; and each pixel signal corresponds to one pixel from the pixel array; an image signal processor connected to the pixel array and configured to: determining a value of each pixel signal from each image frame; determining, for each pixel signal, whether the pixel signal is one of unsaturated and saturated according to the determined value; selecting a pixel signal from a plurality of image frames for each pixel position according to at least one of: a value of the pixel signal; and an accumulation time; and constructing a High Dynamic Range (HDR) output using the selected pixel signals.
In one embodiment of the above high dynamic range imaging device, the image signal processor selects the pixel signal having the maximum value if all pixel signals of one pixel position are unsaturated.
In one embodiment of the above-described high dynamic range imaging apparatus, if all pixel signals of one pixel position are unsaturated and all pixel signals have the same value, the image signal processor selects the pixel signal having the shortest accumulation time.
In one embodiment of the above-described high dynamic range imaging apparatus, if all pixel signals of one position are saturated, the image signal processor selects the pixel signal having the shortest accumulation time.
In one embodiment of the above high dynamic range imaging device, the image signal processor is further configured to apply a linearization gain to each pixel signal from the plurality of image frames.
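Linearization gain is conventionally the ratio of the longest accumulation time to the signal's own accumulation time, as the Background notes when it describes normalizing to the longest accumulation time signal. A minimal sketch under that assumption, with an illustrative function name (the patent's own gain computation is given in its pseudo code figures and is not reproduced here):

```python
def linearize(value, accumulation_time, longest_time):
    # Scale a signal captured at a shorter accumulation time up to the
    # scale of the longest accumulation time (gain = T1 / TX), the
    # conventional normalization for multi-exposure HDR.
    return value * (longest_time / accumulation_time)
```

After this gain is applied, pixel values from different frames are directly comparable on a single intensity scale.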
In one embodiment of the above high dynamic range imaging device, the image signal processor is further configured to: prior to constructing the HDR output, compress the selected signal from an N-bit value to an M-bit value, where N is greater than M; and assign a code of at least one bit, corresponding to the accumulation time, to each pixel signal from the plurality of image frames.
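As an illustration of the embodiment above, the compressed M-bit value and its accumulation-time code might be packed into a single word. The widths below (a 12-bit compressed value and a 2-bit frame-index code) and the function names are assumptions for the sketch, not taken from the patent:

```python
def pack_pixel(compressed_value, frame_index, m_bits=12):
    # Pack an M-bit compressed pixel value together with a 2-bit
    # accumulation-time code (the index of the frame it came from).
    assert 0 <= compressed_value < (1 << m_bits)
    assert 0 <= frame_index < 4
    return (frame_index << m_bits) | compressed_value

def unpack_pixel(packed, m_bits=12):
    # Recover (compressed_value, frame_index) from a packed word.
    return packed & ((1 << m_bits) - 1), packed >> m_bits
```

A 2-bit code distinguishes up to four accumulation times; a single bit would suffice for two, which matches the "at least 1-bit code" language of the embodiment.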
In another aspect, a method for generating a high dynamic range image includes: generating a plurality of successive image frames with an array of pixels, wherein: the pixel array comprises a plurality of pixels; and each pixel is defined by a pixel location; and wherein: each image frame includes a plurality of pixel signals; each image frame has a different accumulation time; and each pixel signal corresponds to one pixel from the pixel array; determining a value of each pixel signal from each image frame; for each pixel signal, determining whether the pixel signal is one of unsaturated and saturated; selecting an optimal signal in a plurality of image frames for each pixel location; wherein the best signal comprises at least one of: pixel signals identified as being unsaturated; a pixel signal having a maximum value; and a pixel signal having the shortest accumulation time; and constructing a High Dynamic Range (HDR) output using the optimal pixel signal.
In one operation of the above method, if all pixel signals of one pixel location are unsaturated, the pixel signal having the maximum value is selected.
In one operation of the above method, if all pixel signals of one pixel position are unsaturated and all pixel signals have the same value, the pixel signal having the shortest accumulation time is selected.
In one operation of the above method, if all pixel signals of one pixel position are saturated, the pixel signal having the shortest accumulation time is selected.
The technical effect achieved by the present invention is to provide a high dynamic range image using a multiple accumulation time technique that operates by selecting the most reliable image data.
Drawings
The present technology may be more fully understood with reference to the detailed description when considered in connection with the following exemplary figures. In the following drawings, like elements and steps in the various figures are referred to by like reference numerals throughout.
FIG. 1 representatively illustrates a camera system in accordance with a first embodiment of the present technique;
FIG. 2 is a block diagram of an imaging device in accordance with various embodiments of the present technique;
FIG. 3 is a block diagram of a vehicle system in accordance with a second embodiment of the present technique;
FIG. 4 representatively illustrates a first image frame in accordance with an exemplary embodiment of the present technique;
FIG. 5 representatively illustrates a second image frame in accordance with an exemplary embodiment of the present technique;
FIG. 6 representatively illustrates a third image frame in accordance with an exemplary embodiment of the present technique;
FIG. 7 representatively illustrates a High Dynamic Range (HDR) output in accordance with an exemplary embodiment of the present technique;
FIG. 8 is a flow diagram for constructing an HDR output, in accordance with exemplary embodiments of the present technique;
FIG. 9 is pseudo code for selecting an optimal pixel signal in accordance with an exemplary embodiment of the present technique;
FIG. 10 is pseudo code for applying linearization gain in accordance with an exemplary embodiment of the present technique;
FIG. 11 is pseudo code for applying linearization gain according to an example embodiment; and
FIG. 12 is pseudo code for selecting an optimal pixel signal in accordance with an alternative embodiment of the present technology.
Detailed Description
The present techniques may be described in terms of functional block components and various processing steps. Such functional blocks may be implemented by any number of components configured to perform the specified functions and achieve the various results. For example, the present technology may employ various types of image sensors, image signal processors, logic units, readout circuits, signal converters, and the like, which may perform various functions. Further, the present techniques may be implemented in connection with any number of imaging applications. In addition, the present techniques may employ any number of conventional techniques to capture image data, transmit signals, sample signals, and the like.
Methods and apparatus for high dynamic range imaging in accordance with various aspects of the present technique may be used in conjunction with any suitable system, such as a camera system, video system, machine vision, vehicle navigation, surveillance system, motion detection system, Advanced Driver Assistance System (ADAS), and the like. Various representative embodiments of the present technology may be applied to, for example, any image sensor, imaging device, pixel array, and the like.
Methods and apparatus for high dynamic range imaging in accordance with aspects of the present technique may operate in conjunction with any suitable system. Depending on the application, the system may include an imaging device 145 to capture and process image data.
Referring to fig. 1, in a first application, the method and apparatus for high dynamic range imaging may be incorporated in an electronic device system, such as a digital camera 105. According to the present application, the digital camera 105 may include a Central Processing Unit (CPU) 110 that communicates with various devices over a bus 115. Some of the devices connected to the bus 115 may provide communication into and out of the system, such as input/output (I/O) devices 120 and imaging devices 145. Other devices connected to the bus 115 provide memory, such as a Random Access Memory (RAM) 125, a hard disk drive, and one or more peripheral memory devices 130, such as a removable memory device. Although bus 115 is shown as a single bus, any number of buses may be used to provide a communication path to interconnect the devices.
In various embodiments, the digital camera 105 may also include a lens 135 configured to focus an image on a sensing surface of the imaging device 145. For example, the lens 135 may comprise a fixed and/or adjustable lens adjacent to the sensing surface of the imaging device 145.
Referring to fig. 3, in a second application, the method and apparatus for high dynamic range imaging may be incorporated into a vehicle system (such as ADAS 300). In the present application, the system 300 may include a plurality of imaging devices 145(1)-145(N), where each imaging device 145 is connected to the host processor 310. The host processor 310 may receive image data from the imaging devices 145 and make decisions based on the image data. In this embodiment, the host processor 310 may control various peripheral systems 305, such as a braking system, a steering system, and the like. For example, the host processor 310 may transmit control signals to the various peripheral systems 305 based on the image data.
Referring to fig. 2, the imaging device 145 captures image data by generating and collecting electric charges. For example, light may enter and impinge on a photosensitive surface of the imaging device 145 and generate charge. The imaging device 145 may further process the collected charge by converting the charge into an electrical signal. In various embodiments, the imaging device 145 may be configured as an integrated circuit (i.e., die) that includes various devices and/or systems to perform image capture and various readout functions. The imaging device 145 may be implemented in conjunction with any suitable technology, such as active pixel sensors in Complementary Metal Oxide Semiconductors (CMOS) and Charge Coupled Devices (CCD). In an exemplary embodiment, the imaging device 145 may include a pixel array 205. The imaging device 145 may also include various circuits to perform sampling, amplification, and signal conversion, as well as processing circuits such as an image signal processor 230.
Pixel array 205 detects light and conveys the information that constitutes an image by converting the variable attenuation of light waves (as they pass through or reflect off objects) into electrical signals. Pixel array 205 may include a plurality of pixels 210 arranged to form rows and columns, and pixel array 205 may include any number of rows and columns, such as hundreds or thousands of rows and columns. The pixel array 205 may be coupled to the image signal processor 230 and configured to transmit pixel signals thereto.
Each pixel 210 may include a photosensitive region (not shown) for collecting charge, such as a photogate or photodiode, to detect light and convert the detected light into charge, and each pixel 210 may include various circuits and/or devices to convert charge into a pixel signal and facilitate readout of the pixel signal. Each pixel signal may contain various information and/or image data such as color information, light intensity, pixel location, and the like. The location (i.e., coordinates) of each pixel 210 may be defined by a particular location within pixel array 205. Thus, each pixel 210 may be identified by a number of rows i and a number of columns j (i.e., 210 (i, j)) within the pixel array 205. For example, the pixels 210 located in the first row and the first column are referred to as pixels 210 (1, 1).
In various implementations, the pixel array 205 may also include various circuitry to facilitate readout of pixel signals. For example, pixel array 205 may include row circuitry 215, column circuitry 220, and timing and control circuitry 225.
Row circuitry 215 may receive a row address corresponding to a particular location on pixel array 205 from timing and control circuitry 225 and provide corresponding row control signals (such as a reset control signal, a row select control signal, a charge transfer control signal, and a readout control signal) to pixels 210 via row control paths. Row circuitry 215 may include various wires, electrical connections, and/or devices integrated within pixel array 205 and coupled to each pixel 210.
The column circuitry 220 may include column control circuitry, readout circuitry, signal conversion circuitry, and/or column decoder circuitry, and may receive pixel signals, such as analog pixel signals generated by the pixels 210. The column path may be configured to couple each column of the pixel array 205 to the column circuitry 220. The column path may be used to read out pixel signals from the pixels 210 and/or to provide a bias signal (e.g., a bias current or a bias voltage). The column circuitry 220 may also include various wires, electrical connections, and/or devices integrated within the pixel array 205 and coupled to each pixel 210.
In various embodiments, the timing and control circuit 225 may be communicatively coupled to the image signal processor 230 and/or the host processor 310 (fig. 3) to receive read operation instructions and/or to facilitate the reading of pixel signals. For example, the timing and control circuitry 225 may be configured to adjust the timing of pixel signal readout and other desired operations according to instructions provided by the image signal processor 230 and/or the host processor 310. The timing and control circuitry 225 may receive read-out operation instructions from the image signal processor 230 and/or the host processor 310 (fig. 3) depending on the desired application and/or operation.
The timing and control circuitry 225 may be configured to selectively activate and/or read out signals from the various pixels 210 according to readout instructions provided by the image signal processor 230 and/or the host processor 310. For example, timing and control circuitry 225 may be electrically coupled to pixel array 205 and may transmit control signals, such as pixel reset signals, pixel readout signals, charge transfer signals, and the like. The particular control signals generated and transmitted by the timing and control circuit 225 may be based on the pixel architecture, the desired image capture mode (e.g., global reset release mode, global shutter mode, and electronic rolling shutter mode), and the desired accumulation time. For example, the timing and control circuitry 225 may be configured to capture a plurality of image frames, each having a different accumulation time. In various embodiments, the signals from each pixel 210 are read out sequentially from the first row to the last row, with each row being read out from left to right.
In various embodiments, the imaging device 145 may also include a color filter system (not shown), such as a Color Filter Array (CFA), to filter the illumination light according to wavelength. The CFA may include a color filter pattern over the pixel array 205 to capture color information. In various embodiments, each pixel 210 in the pixel array 205 is covered with one color filter of the CFA. For example, a Bayer color filter array may be provided that includes red, blue, and green color filter patterns, where each pixel 210 is covered with one of the red, blue, or green color filters. In other embodiments, the CFA may be formed using other color filters, such as RCCG filters (one red, two clear, and one green), RCCC filters (one red and three clear), CRGB filters (one cyan, one red, one green, and one blue), or any other suitable color pattern. In various embodiments, the CFA may include "clear" or transparent color filter elements. The CFA may form a 2 x 2 color pattern, a 4 x 4 color pattern, a 2 x 4 color pattern, or any other suitable pattern size. In various implementations, the CFA may be repeated to cover the entire pixel array 205.
The Image Signal Processor (ISP) 230 may perform various digital signal processing functions such as color interpolation, color correction, facilitating auto-focus, exposure adjustment, noise reduction, white balance adjustment, compression and/or expansion, etc. to generate output data. ISP 230 may include any number of devices and/or systems for performing calculations, transmitting and receiving image pixel data, and the like. The ISP 230 may also include a storage unit (not shown), such as random access memory, non-volatile memory, or any other memory device suitable for a particular application, for storing pixel data. In various embodiments, ISP 230 may be implemented with a programmable logic device, such as a Field Programmable Gate Array (FPGA), or any other device having reconfigurable digital circuitry. In other embodiments, the ISP 230 may be implemented in hardware using non-programmable devices. In other embodiments, the ISP 230 may be implemented as a combination of software and hardware. The ISP 230 may be partially or fully formed within a silicon-containing integrated circuit using any suitable Complementary Metal Oxide Semiconductor (CMOS) technology or fabrication process, partially or fully formed in an ASIC (application specific integrated circuit) using a processor and memory system, or partially or fully formed using another suitable implementation.
In an exemplary embodiment, the ISP 230 may receive a pixel signal and convert the signal to a pixel value. The ISP 230 may also store pixel values from the respective image frame to which each pixel signal corresponds. The ISP 230 may transmit the pixel values and information related to their accumulation times to the host processor 310, where the host processor 310 may use the data from the pixel values to make operational decisions.
The ISP 230 may also be configured to transmit the output image data to a display system, such as a display screen or a memory component, for storing the image data and/or allowing a person to view the output image. The display system may receive digital image data, such as video data, image data, frame data, and/or gain information, from the ISP 230. In various embodiments, the display system may include an external device, such as a computer display, a memory card, or some other external unit.
In various embodiments, a system may include a compander. The compander may be configured to receive pixel data represented by a binary number having N bits, compress the pixel data into M bits, where N > M, transmit the data, and expand the data. Various devices may be responsible for the compression and expansion functions. For example, the image signal processor 230 may be equipped with a compressor to compress pixel data before transmitting the data to the host processor 310, and the host processor 310 may be equipped with an expander to receive and expand the compressed data. Compander operations may be implemented using conventional companding algorithms, such as the μ-law algorithm.
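A μ-law compander of the kind referred to above might be sketched as follows. The bit widths (N = 14, M = 12) and μ = 255 are illustrative assumptions; the patent requires only that N > M:

```python
import math

def mu_law_compress(x, n_bits=14, m_bits=12, mu=255):
    # Compress an N-bit pixel value to M bits along the mu-law curve.
    x_max = (1 << n_bits) - 1
    y_max = (1 << m_bits) - 1
    y = math.log1p(mu * x / x_max) / math.log1p(mu)  # normalized to [0, 1]
    return round(y * y_max)

def mu_law_expand(y, n_bits=14, m_bits=12, mu=255):
    # Approximate inverse of mu_law_compress.
    x_max = (1 << n_bits) - 1
    y_max = (1 << m_bits) - 1
    return round(((1 + mu) ** (y / y_max) - 1) / mu * x_max)
```

The logarithmic curve spends the M output codes preferentially on small values, so the round trip loses little precision in the dark regions where it matters most.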
In one embodiment, the ISP 230 may also include a register to store a preferred image frame index (e.g., 1, 2, 3, ..., N), wherein the preferred image frame index corresponds to one image frame from a plurality of image frames, such as the first image frame 400, the second image frame 500, and the third image frame 600. The ISP 230 may be further configured to compare pixel signals from image frames having a preferred image frame index to a first predetermined threshold and a second predetermined threshold.
In accordance with various aspects of the present technique, the system may be configured to select and output a plurality of "best" pixel signals (i.e., pixel values) from a plurality of image frames. The best pixel signals may be those that have the most information (i.e., have the largest value) and are not saturated. The selected (best) pixel signals may then be compressed and sent with additional data indicating the image frame (and accumulation time) from which each pixel signal came. Information about the accumulation time may be used by systems such as the camera system 105 and the ADAS 300 to enable the system to use the most representative pixel signals in making operational decisions. After the "best" pixel signal has been selected, the system may be configured to determine a confidence level based on the image frame from which the selected pixel signal is derived. The system may also be configured to compare the preferred signal to predetermined parameters to increase the reliability of the information and output the preferred signal if predetermined requirements are met.
Referring to fig. 1-8, the system may capture a plurality of image frames, where each image frame is captured at a different accumulation time TX (where X = 1, 2, 3, ..., Y). Thus, each pixel in a given image frame will have the same accumulation time. For example, the first image frame 400 may have a first accumulation time T1 (e.g., T1 = 10 ms), while the second image frame 500 may be captured at a second accumulation time T2 (e.g., T2 = 5 ms), and the third image frame 600 may be captured at a third accumulation time T3 (e.g., T3 = 1 ms). According to various embodiments, the system may be configured to capture any number of image frames at any accumulation time. The length of each accumulation time may be predetermined and selected according to the particular application, desired output, etc.
During image capture, each pixel 210 in pixel array 205 generates a pixel signal PN(i, j) having a value (e.g., a magnitude), where N is the frame number (also referred to as the image frame index) and (i, j) represents the coordinate location on the pixel array 205. For example, the pixel 210 in the first row and the first column is identified as the pixel 210(1, 1), and the pixel signal generated by that particular pixel for the first image frame 400 is identified as the pixel signal P1(1, 1). The same pixel 210(1, 1) will also generate the pixel signal P2(1, 1) for the second frame 500. The same is true for all pixel signals and all subsequent image frames.
In an exemplary operation, the imaging device 145 may generate a first image frame 400 having a first accumulation time T1 (800). The pixel signal P1(i, j) for each pixel 210 may be read out and transmitted to the image signal processor 230. The image signal processor 230 may process each pixel signal P1(i, j) to determine whether the pixel signal is saturated or unsaturated (805). The pixel signal may be saturated if the pixel value is greater than or equal to the first predetermined threshold, and unsaturated if the pixel value is less than the first predetermined threshold. The image signal processor 230 may store the pixel values in a memory (not shown).
The imaging device 145 may then generate a second image frame 500 at the second accumulation time T2 (810). The pixel signal P2(i, j) for each pixel 210 may be read out and transmitted to the image signal processor 230. The image signal processor 230 may process each pixel signal P2(i, j) to determine whether the pixel signal is saturated or unsaturated (815). The pixel signal may be saturated if the pixel value is greater than or equal to a second predetermined threshold, and unsaturated if the pixel value is less than the second predetermined threshold. The image signal processor 230 may store the pixel values in a memory (not shown).
The imaging device 145 may then generate a third image frame 600 at a third accumulation time T3 (820). The pixel signal P3(i, j) for each pixel 210 may be read out and transmitted to the image signal processor 230. The image signal processor 230 may process each pixel signal P3(i, j) to determine whether the pixel signal is saturated or unsaturated (825). The pixel signal may be saturated if the pixel value is greater than or equal to a third predetermined threshold, and unsaturated if the pixel value is less than the third predetermined threshold. The image signal processor 230 may store the pixel values in a memory (not shown).
After imaging device 145 generates a desired number of image frames with corresponding pixel signals, image signal processor 230 may select an optimal signal from each pixel location (840) and use the optimal signal to construct an HDR output (e.g., an HDR image) (845).
Selecting the best signal for each pixel location may include selecting a pixel signal that is not saturated. In addition, the best signal may be the pixel signal having the maximum value, or the pixel signal having the shortest of all the accumulation times.
According to an exemplary embodiment, the ISP 230 may select the unsaturated pixel signal if only one pixel value at a particular pixel location is unsaturated. If more than one pixel value is unsaturated, the ISP 230 may select the pixel signal having the largest value. If two or more pixel values are unsaturated and share the same largest value, the ISP 230 may select the pixel signal with the shortest accumulation time.
For example, suppose the imaging device 145 generates the first image frame 400, the second image frame 500, and the third image frame 600. Assume that for pixel location 210(4, 1), the pixel signal P1(4, 1) is unsaturated and has a value of 1000, the pixel signal P2(4, 1) is unsaturated and has a value of 700, and the pixel signal P3(4, 1) is unsaturated and has a value of 500; then the pixel signal selected as the best signal for the HDR output 700 at location (4, 1) is the pixel signal P1. Assume that for pixel location 210(4, 2), the pixel signal P1(4, 2) is saturated, the pixel signal P2(4, 2) is unsaturated and has a value of 500, and the pixel signal P3(4, 2) is unsaturated and has a value of 500; then the pixel signal selected as the best signal for the HDR output 700 at location (4, 2) is the pixel signal P3. Assume that for pixel location 210(4, 6), the pixel signals P1(4, 6), P2(4, 6), and P3(4, 6) are all saturated; then the pixel signal selected as the best signal for the HDR output 700 at location (4, 6) is the pixel signal P3. Assume that for pixel location 210(4, 3), the pixel signal P1(4, 3) is saturated, the pixel signal P2(4, 3) is unsaturated and has a value of 600, and the pixel signal P3(4, 3) is unsaturated and has a value of 500; then the pixel signal selected as the best signal for the HDR output 700 at location (4, 3) is the pixel signal P2. It should be noted that the pixel signal values described above are arbitrary and are used only as examples for purposes of explanation.
Similarly, and referring to fig. 9, where the imaging device 145 generates four image frames, the ISP 230 may be configured to execute exemplary pseudo code for each location on the pixel array 205. In this case, the variables "P3_saturated_value", "P2_saturated_value", and "P1_saturated_value" are predetermined thresholds; P1 is a pixel value from a first image frame having an accumulation time T1, P2 is a pixel value from a second image frame having an accumulation time T2, P3 is a pixel value from a third image frame having an accumulation time T3, and P4 is a pixel value from a fourth image frame having an accumulation time T4, where T1 is the longest accumulation time and T4 is the shortest (i.e., T1 > T2 > T3 > T4). Accordingly, the ISP 230 compares the pixel values P1, P2, and P3 with the predetermined thresholds P1_saturated_value, P2_saturated_value, and P3_saturated_value, respectively, and sets a pixel value to zero if it is greater than or equal to the corresponding threshold. The ISP 230 then compares the pixel value P4 with the pixel values P1, P2, and P3. If P4 is greater than or equal to each of P1, P2, and P3, the ISP 230 outputs P4 as the best signal and sets the variable "out_sel" to 3. Otherwise, the ISP 230 determines whether P3 is greater than or equal to both P2 and P1; if so, the ISP 230 outputs P3 as the best signal and sets "out_sel" to 2. Otherwise, the ISP 230 determines whether P2 is greater than or equal to P1; if so, the ISP 230 outputs P2 as the best signal and sets "out_sel" to 1.
If P2 is not greater than or equal to P1, the ISP 230 outputs P1 as the best signal and sets "out_sel" to 0. The above process is performed for each pixel location.
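The four-frame procedure of fig. 9 can be sketched as follows. The concrete threshold values (3500) and the function name are assumptions; the real thresholds are the predetermined values P1_saturated_value, P2_saturated_value, and P3_saturated_value:

```python
def select_four_frame(p1, p2, p3, p4,
                      p1_sat=3500, p2_sat=3500, p3_sat=3500):
    """Sketch of the four-frame best-signal selection of fig. 9.

    Returns (out, out_sel): the selected pixel value and the index of
    its frame (0 = longest accumulation time T1, 3 = shortest T4).
    """
    # Zero out any of P1..P3 that meets or exceeds its saturation threshold.
    if p1 >= p1_sat:
        p1 = 0
    if p2 >= p2_sat:
        p2 = 0
    if p3 >= p3_sat:
        p3 = 0
    # Prefer the shortest accumulation time among equal-or-larger values.
    if p4 >= p1 and p4 >= p2 and p4 >= p3:
        return p4, 3
    if p3 >= p1 and p3 >= p2:
        return p3, 2
    if p2 >= p1:
        return p2, 1
    return p1, 0

# All unsaturated with P1 largest: P1 selected, out_sel = 0
assert select_four_frame(1000, 700, 500, 100) == (1000, 0)
# P1 saturated (zeroed), P2 next largest: P2 selected, out_sel = 1
assert select_four_frame(4000, 700, 500, 100) == (700, 1)
```

Zeroing saturated values means they can never win any subsequent `>=` comparison, which is how the pseudo code excludes them without a separate branch.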
According to various embodiments, the system may include a compander to reduce the bandwidth requirements of the system. For example, the imaging device 145 (such as the ISP 230) may include a compressor to compress the optimal pixel values and append an accumulation time index corresponding to the accumulation time. The accumulation time index may comprise at least a 1-bit binary value, and in an exemplary embodiment, the accumulation time index comprises a 2-bit binary value. For example, a pixel value may be represented as a 12-bit binary value, which is compressed into a 10-bit value; a 2-bit value corresponding to the accumulation time is then added to the 10-bit value, for a total of 12 bits. In an exemplary embodiment, the accumulation time index occupies the two highest-order bit positions of the total number of bits. The imaging device 145 may transmit the 12-bit value to the host processor 310, where the host processor 310 may include an expander to expand the 12-bit value and use the accumulation time index to make various decisions. Thus, the transmission to the host processor 310 uses the same number of bits per pixel as conventional approaches. The system may perform the compression and expansion using any conventional companding method and technique.
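The 12-bit packing can be sketched as below. The square-root compression curve is purely an illustrative assumption (the text permits any conventional companding method), as are the function names:

```python
def compand_pixel(value_12bit, accumulation_index):
    """Pack a compressed 10-bit pixel value with a 2-bit accumulation
    time index into a single 12-bit word, index in the two high bits."""
    # Compress 12-bit (0..4095) to 10-bit (0..1023); a sqrt-style curve
    # preserves detail in dark regions (illustrative choice only).
    compressed = int((value_12bit / 4095) ** 0.5 * 1023)
    return (accumulation_index & 0x3) << 10 | compressed

def expand_pixel(packed_12bit):
    """Recover the accumulation index and an approximate pixel value."""
    accumulation_index = packed_12bit >> 10
    compressed = packed_12bit & 0x3FF
    value_12bit = int((compressed / 1023) ** 2 * 4095)
    return value_12bit, accumulation_index

# Full-scale value round-trips exactly; the index always survives.
assert expand_pixel(compand_pixel(4095, 3)) == (4095, 3)
```

Because the index rides in the two most-significant bits, the host can branch on `packed >> 10` before spending any cycles expanding the value.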
According to various embodiments, the image signal processor 230 and/or the host processor 310 may further apply a linearization gain to each selected optimal signal and construct the HDR output using the linearized values. Generally, the linearization gain is the ratio of the longest accumulation time to the accumulation time of the selected signal. The particular linearization gain applied to any given signal may therefore be based on the accumulation time and the selected output signal. For example, and referring to fig. 10, the host processor 310 may use the variables "out_sel" (e.g., 0, 1, 2, 3) and "out" (e.g., P1, P2, P3, P4) as determined by the process shown in fig. 9. If the variable "out_sel" is set to 0, the host processor 310 outputs the "out" value; if "out_sel" is set to 1, the host processor 310 outputs the "out" value multiplied by T1/T2; if "out_sel" is set to 2, the host processor 310 outputs the "out" value multiplied by T1/T3; and if "out_sel" is set to 3, the host processor 310 outputs the "out" value multiplied by T1/T4. Thus, if the variable "out_sel" is set to 0, the output is P1 and no linearization is required, so the HDR output value for that particular pixel location ("out_HDR") is set to the value of P1.
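The fig. 10 gain table can be sketched directly; the accumulation times used in the example below (8, 4, 2, 1) are assumed values, not from the source:

```python
def linearize(out, out_sel, t1, t2, t3, t4):
    """Apply the linearization gain of fig. 10 (a sketch).

    `out_sel` 0..3 identifies the selected frame P1..P4; the gain is
    the ratio of the longest accumulation time T1 to the accumulation
    time of the selected frame."""
    gains = {0: 1.0, 1: t1 / t2, 2: t1 / t3, 3: t1 / t4}
    return out * gains[out_sel]

# With assumed accumulation times T1=8, T2=4, T3=2, T4=1:
assert linearize(500, 0, 8, 4, 2, 1) == 500      # P1 selected, no gain
assert linearize(500, 2, 8, 4, 2, 1) == 2000.0   # P3 selected, gain T1/T3 = 4
```

Multiplying a short-exposure value by T1/Tn rescales every frame onto the radiometric scale of the longest exposure, which is what makes the per-pixel selections mutually comparable in the HDR output.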
According to an exemplary embodiment, after the host processor 310 has applied the appropriate linearization gain to all pixel locations, the system can use the selected best signal information to perform a confidence evaluation on the selected pixel values and/or corresponding colors. For example, the confidence evaluation may be high if all selected pixel signals are from the same image frame (i.e., all color channels have the same accumulation time), and the confidence evaluation may be low if all selected pixel signals are from different image frames. The system may also use the output signal information to determine optical flow.
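One plausible sketch of this heuristic follows. The source only specifies the two extremes (all same frame vs. all different frames); the intermediate "medium" level and the function name are assumptions:

```python
def confidence(selected_frame_indices):
    """Rate a pixel's color confidence from the frame indices of the
    selected signals across its color channels (a sketch)."""
    distinct = len(set(selected_frame_indices))
    if distinct == 1:
        # All channels share one accumulation time: high confidence.
        return "high"
    if distinct == len(selected_frame_indices):
        # Every channel came from a different frame: low confidence.
        return "low"
    # Partial agreement: assumed intermediate level (not in the source).
    return "medium"

assert confidence([2, 2, 2]) == "high"
assert confidence([0, 1, 2]) == "low"
```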
According to various applications, the system may be equipped with a display screen to display the HDR output data for human viewing, such as in the form of HDR images. In such a case, the host processor 310 and/or the image signal processor 230 may be configured to adjust the color of the HDR output data for more pleasant human viewing. For example, the host processor 310 and/or the image signal processor 230 may perform various calculations to enhance the gray tones of the HDR output data. In such a case, the host processor 310 and/or the image signal processor 230 may operate under the assumption that pixel signals from the image frame having the longest accumulation time are saturated. The host processor 310 and/or the image signal processor 230 may determine the greater of the predetermined saturation value (e.g., p3_saturated_value, p2_saturated_value, or p1_saturated_value) and the selected signal (e.g., "out" from fig. 9) before applying the linearization gain. For example, the system may operate as described by the pseudo code in fig. 11, where T1, T2, T3, and T4 are accumulation times, with T1 > T2 > T3 > T4. The particular accumulation ratio (e.g., T2/T1, T3/T2, T4/T3) may be based on the variable "out_sel" and/or the selected pixel value (e.g., P1, P2, P3, or P4).
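Since fig. 11 itself is not reproduced here, the following is only one plausible reading of the display path: the selected signal is clamped up to the saturation threshold of the next-longer frame before the gain is applied, so that a pixel chosen because its longer exposures saturated maps to full scale rather than to an arbitrary darker value. All names, list layouts, and numeric values are assumptions:

```python
def linearize_for_display(out, out_sel, sat_values, times):
    """Sketch of a fig. 11-style display linearization.

    `times` is [T1, T2, T3, T4] with T1 longest; `sat_values` holds the
    predetermined p1..p3 saturated values (index 0 = p1)."""
    if out_sel > 0:
        # The frame one step longer than the selected one is assumed
        # saturated, so the output is at least its saturation value.
        out = max(out, sat_values[out_sel - 1])
    gain = times[0] / times[out_sel]
    return out * gain
```

Under this reading, a mid-gray value selected from a short exposure can never come out darker than a saturated longer exposure would imply, which keeps the displayed gray tones monotonic across frame boundaries.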
According to an alternative embodiment, the host processor 310 may select the best signal for each pixel location according to a preferred image frame index. For example, the host processor 310 may be configured to output the signal from the image frame with the preferred image frame index (i.e., the preferred signal) if the preferred signal satisfies a particular set of requirements. For example, the host processor 310 may compare the preferred signal to various thresholds and output a signal based on the comparison. The output signal (i.e., the best signal) may correspond to an image frame index that is lower or higher than the preferred image frame index. For example, for each pixel location, if the pixel signal from the image frame with the preferred image frame index is less than a first predetermined threshold, the host processor 310 may output the pixel signal from a lower-indexed image frame, and if the pixel signal from the image frame with the preferred image frame index is greater than or equal to a second predetermined threshold, the host processor 310 may output the pixel signal from a higher-indexed image frame. For example, where four different image frames are used to construct the HDR output, and assuming the preferred signal is the pixel signal from the second image frame (i.e., preferred image frame index = 2), the image signal processor 230 and/or the host processor 310 compares the pixel signal from the second image frame to the first predetermined threshold. As long as the pixel signal from the second image frame is less than the first predetermined threshold and the preferred image frame index is greater than 1, the image signal processor 230 and/or the host processor 310 will output the pixel signal from the first image frame and set the variable "out_sel" to 0.
If the above requirements are not met, the image signal processor 230 and/or the host processor 310 will determine whether the pixel signal is greater than or equal to the second threshold and whether the preferred image frame index is less than the total number of image frames. As long as the pixel signal is greater than or equal to the second threshold and the preferred image frame index is less than the total number of image frames, the image signal processor 230 and/or the host processor 310 will output the pixel signal from the next higher-indexed image frame (e.g., the pixel signal from the third image frame) and set the variable "out_sel" to 2. The preferred image frame index may be set according to the specific application. For example, the image signal processor 230 and/or the host processor 310 may be configured to perform the operations described by the pseudo code in fig. 12. In this embodiment, the best signal is the signal output from the selection method described above, and the best signal from each pixel location is used to construct the HDR output. Further, linearization gains, such as those described above, may be applied to the optimal signal according to the methods described above.
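The preferred-frame selection can be sketched as follows. The threshold names and the single-step moves to the neighboring frame are assumptions inferred from the description of fig. 12:

```python
def select_by_preference(pixels, preferred, low_thresh, high_thresh):
    """Sketch of preferred-frame selection (fig. 12).

    `pixels` lists pixel signals ordered from longest to shortest
    accumulation time (index 0 = first image frame); `preferred` is a
    1-based preferred image frame index. Returns (out, out_sel)."""
    signal = pixels[preferred - 1]
    if signal < low_thresh and preferred > 1:
        # Too dark: step to the next lower-indexed (longer-exposure) frame.
        return pixels[preferred - 2], preferred - 2
    if signal >= high_thresh and preferred < len(pixels):
        # Too bright: step to the next higher-indexed (shorter-exposure) frame.
        return pixels[preferred], preferred
    return signal, preferred - 1

# Preferred frame 2 is too dark -> first frame output, out_sel = 0
assert select_by_preference([1000, 100, 50, 25], 2, 200, 900) == (1000, 0)
# Preferred frame 2 is near saturation -> third frame output, out_sel = 2
assert select_by_preference([4000, 950, 500, 250], 2, 200, 900) == (500, 2)
```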
In the ADAS 300, and referring to fig. 3, the host processor 310 may use information from the imaging device 145 (such as the HDR output, confidence estimates, and accumulation time index) to make decisions and control the peripheral system 305 based on those decisions. For example, the host processor 310 may be configured to transmit various control signals to the relevant systems within the peripheral system 305.
In the foregoing specification, the technology has been described with reference to specific exemplary embodiments. The particular embodiments shown and described are illustrative of the technology and its best mode and are not intended to otherwise limit the scope of the technology in any way. Indeed, for the sake of brevity, conventional manufacturing, connecting, fabrication, and other functional aspects of the methods and systems may not be described in detail. Furthermore, the connecting lines shown in the various figures are intended to represent example functional relationships and/or steps between the various elements. There may be many alternative or additional functional relationships or physical connections in a practical system.
The techniques have been described with reference to specific exemplary embodiments. However, various modifications and changes may be made without departing from the scope of the present technology. The specification and figures are to be regarded in an illustrative rather than a restrictive manner, and all such modifications are intended to be included within the scope of the present technology. Accordingly, the scope of the described technology should be determined by the generic embodiments described and their legal equivalents rather than by merely the specific examples described above. For example, the steps recited in any method or process embodiment may be performed in any order, unless explicitly stated otherwise, and are not limited to the explicit order provided in the specific examples. Additionally, the components and/or elements recited in any apparatus embodiment may be assembled or otherwise operationally configured in a variety of permutations to produce substantially the same result as the present technique and are accordingly not limited to the specific configuration set forth in the specific examples.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, any benefit, advantage, or solution to a problem, or any element that may cause any particular benefit, advantage, or solution to occur or become more pronounced, is not to be construed as a critical, required, or essential feature or element.
The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, composition, or apparatus that comprises a list of elements does not include only those elements recited, but may include other elements not expressly listed or inherent to such process, method, article, composition, or apparatus. Other combinations and/or modifications of the above-described structures, arrangements, applications, proportions, elements, materials or components used in the practice of the present technology, in addition to those not specifically recited, may be varied or otherwise particularly adapted to specific environments, manufacturing specifications, design parameters or other operating requirements without departing from the general principles thereof.
The present technology has been described above in connection with exemplary embodiments. However, variations and modifications may be made to the exemplary embodiments without departing from the scope of the present techniques. These and other variations and modifications are intended to be included within the scope of the present technology, as set forth in the appended claims.
According to one aspect, a high dynamic range imaging device comprises: a pixel array comprising a plurality of pixels, wherein each pixel from the plurality of pixels is defined by a pixel location; and wherein: the pixel array is configured to generate a plurality of successive image frames, wherein each frame has a different accumulation time; each image frame includes a plurality of pixel signals; and each pixel signal corresponds to one pixel from the pixel array; an image signal processor connected to the pixel array and configured to: determining a value of each pixel signal from each image frame; determining, for each pixel signal, whether the pixel signal is one of unsaturated and saturated according to the determined value; selecting a pixel signal from a plurality of image frames for each pixel position according to at least one of: a value of the pixel signal; and an accumulation time; and constructing a High Dynamic Range (HDR) output using the selected pixel signals.
In one embodiment of the high dynamic range imaging device, the image signal processor selects the pixel signal having the maximum value if all pixel signals of one pixel location are unsaturated.
In one embodiment of the high dynamic range imaging device, the image signal processor selects the pixel signal having the shortest accumulation time if all pixel signals of one pixel position are unsaturated and all pixel signals have the same value.
In one embodiment of the high dynamic range imaging device, the image signal processor selects the pixel signal having the shortest accumulation time if all pixel signals of one location are saturated.
In one embodiment of the high dynamic range imaging device, the image signal processor is further configured to apply a linearization gain to each pixel signal from the plurality of image frames.
In one embodiment of the high dynamic range imaging device, the image signal processor is further configured to compress the selected signal from an N-bit value to an M-bit value, where N is greater than M, prior to constructing the HDR output.
In one embodiment of the high dynamic range imaging device, the image signal processor is further configured to assign at least a 1-bit code corresponding to the accumulation time to each pixel signal from the plurality of image frames.
According to another aspect, a method for generating a high dynamic range image includes: generating a plurality of successive image frames with an array of pixels, wherein: the pixel array comprises a plurality of pixels; and each pixel is defined by a pixel location; and wherein: each image frame includes a plurality of pixel signals; each image frame has a different accumulation time; and each pixel signal corresponds to one pixel from the pixel array; determining a value of each pixel signal from each image frame; determining, for each pixel signal, whether the pixel signal is one of unsaturated and saturated; selecting an optimal signal in a plurality of image frames for each pixel location; wherein the best signal comprises at least one of: pixel signals identified as being unsaturated; a pixel signal having a maximum value; and a pixel signal having the shortest accumulation time; and constructing a High Dynamic Range (HDR) output using the optimal pixel signal.
In one operation of the method, if all pixel signals of a pixel location are unsaturated, the pixel signal having the maximum value is selected.
In one operation of the method, if all pixel signals of one pixel position are unsaturated and all pixel signals have the same value, the pixel signal having the shortest accumulation time is selected.
In one operation of the method, if all pixel signals of one pixel position are saturated, the pixel signal having the shortest accumulation time is selected.
In one operation, the method further includes applying a linearization gain to each pixel signal from each image frame, where the linearization gain is a ratio of the longest accumulation time to the shorter accumulation time.
According to yet another aspect, a machine vision system includes: an image forming apparatus, comprising: a pixel array comprising a plurality of pixels, wherein each pixel from the plurality of pixels is defined by a pixel location; and wherein: the pixel array is configured to generate a plurality of successive image frames, wherein each frame has a different accumulation time; each image frame includes a plurality of pixel signals; and each pixel signal corresponds to one pixel from the pixel array; an image signal processor connected to the pixel array and configured to: determining a value of each pixel signal from each image frame; determining, for each pixel signal, whether the pixel signal is one of unsaturated and saturated according to the determined value; selecting a pixel signal from a plurality of image frames for each pixel position according to at least one of: a value of the pixel signal; and an accumulation time; and constructing a High Dynamic Range (HDR) output using the selected pixel signals; and a host processor connected to the imaging device and configured to receive the HDR output and generate a decision as a function of the HDR output.
In one embodiment of the machine vision system, the image signal processor selects the pixel signal having the maximum value if all pixel signals of one pixel position are unsaturated.
In one embodiment of the machine vision system, the image signal processor selects the pixel signal having the shortest accumulation time if all pixel signals of one pixel position are unsaturated and all pixel signals have the same value.
In one embodiment of the machine vision system, if all pixel signals of a pixel location are saturated, the image signal processor selects the pixel signal having the shortest accumulation time.
In one embodiment of the machine vision system, the image signal processor is further configured to apply a linearization gain to each pixel signal from the plurality of image frames.
In one embodiment of the machine vision system: the image signal processor includes a register to store a preferred image frame index, wherein the preferred image frame index corresponds to one image frame from the plurality of image frames; and the image signal processor is further configured to compare pixel signals from the image frame with the preferred image frame index to a first predetermined threshold and a second predetermined threshold.
In one embodiment of the machine vision system, selecting a pixel signal from a plurality of image frames comprises selecting one of: selecting a pixel signal from a lower indexed image frame if the pixel signal from the image frame with the preferred image frame index is less than a first predetermined threshold; and selecting a pixel signal from a higher indexed image frame if the pixel signal from the image frame having the preferred image frame index is greater than or equal to a second predetermined threshold.
In one embodiment of the machine vision system, the image signal processor further performs a confidence evaluation of the HDR output as a function of the accumulation time; and wherein: the confidence level is evaluated high if all selected pixel signals are from the same image frame; and the confidence level is evaluated low if all selected pixel signals are from different image frames.

Claims (10)

1. A high dynamic range imaging apparatus, comprising:
a pixel array comprising a plurality of pixels, wherein each pixel of the plurality of pixels is defined by a pixel location; and is
Wherein:
the pixel array is configured to generate a plurality of successive image frames, wherein each frame has a different accumulation time;
each image frame includes a plurality of pixel signals;
each pixel signal corresponds to one pixel from the pixel array; and is
Each pixel in an image frame of the plurality of successive image frames has a same accumulation time;
an image signal processor connected to the pixel array and configured to:
determining a value for each pixel signal from each image frame;
for each pixel signal, determining whether the pixel signal is one of unsaturated and saturated according to the determined value;
selecting, for each pixel location, an optimal pixel signal from the plurality of successive image frames,
wherein the best pixel signal is unsaturated if at least one pixel signal is unsaturated; and
if all of the pixel signals are saturated, the optimal pixel signal is saturated; and
the selected pixel signals are used to construct a High Dynamic Range (HDR) output.
2. The high dynamic range imaging apparatus of claim 1, wherein: the image signal processor selects the pixel signal having the maximum value if all pixel signals of one pixel position are unsaturated.
3. The high dynamic range imaging apparatus of claim 1, wherein: the image signal processor selects the pixel signal having the shortest accumulation time if all pixel signals of one pixel position are unsaturated and all pixel signals have the same value.
4. The high dynamic range imaging apparatus of claim 1, wherein: the image signal processor selects the pixel signal having the shortest accumulation time if all pixel signals of one position are saturated.
5. The high dynamic range imaging apparatus of claim 1, wherein: the image signal processor is further configured to apply a linearization gain to each pixel signal from the plurality of successive image frames.
6. The high dynamic range imaging apparatus of claim 1, wherein: the image signal processor is further configured to:
prior to constructing the High Dynamic Range (HDR) output, compressing the selected signal from an N-bit value to an M-bit value, where N is greater than M; and
assigning at least a 1-bit code corresponding to the accumulation time to each pixel signal from the plurality of consecutive image frames.
7. A method for generating a high dynamic range image, comprising:
a plurality of successive image frames are generated using an array of pixels,
wherein:
the pixel array comprises a plurality of pixels; and is
Each pixel is defined by a pixel location; and is provided with
Wherein:
each image frame includes a plurality of pixel signals;
each image frame has a different accumulation time;
each pixel signal corresponds to one pixel from the pixel array; and is
Each pixel in an image frame of the plurality of successive image frames has a same accumulation time;
determining a value of each pixel signal from each image frame;
determining, for each pixel signal, whether the pixel signal is one of unsaturated and saturated;
selecting, for each pixel location, an optimal pixel signal in the plurality of successive image frames; wherein the best pixel signal is unsaturated if at least one pixel signal is unsaturated; and
if all of the pixel signals are saturated, the optimal pixel signal is saturated; and
constructing a High Dynamic Range (HDR) output using the optimal pixel signal.
8. The method of claim 7, wherein: if all pixel signals of a pixel location are unsaturated, the pixel signal having the maximum value is selected.
9. The method of claim 7, wherein: if all pixel signals of one pixel position are unsaturated and all pixel signals have the same value, the pixel signal having the shortest accumulation time is selected.
10. The method of claim 7, wherein: if all pixel signals of one pixel position are saturated, the pixel signal having the shortest accumulation time is selected.
CN201811006574.4A 2017-09-01 2018-08-31 High dynamic range imaging apparatus and method for generating high dynamic range image Active CN109429021B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762553461P 2017-09-01 2017-09-01
US62/553,461 2017-09-01
US16/042,167 2018-07-23
US16/042,167 US10708524B2 (en) 2017-09-01 2018-07-23 Methods and apparatus for high dynamic range imaging

Publications (2)

Publication Number Publication Date
CN109429021A CN109429021A (en) 2019-03-05
CN109429021B true CN109429021B (en) 2022-10-28

Family

ID=65514786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811006574.4A Active CN109429021B (en) 2017-09-01 2018-08-31 High dynamic range imaging apparatus and method for generating high dynamic range image

Country Status (1)

Country Link
CN (1) CN109429021B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5845010A (en) * 1991-05-30 1998-12-01 Canon Kabushiki Kaisha Compression enhancement in graphics system
CN102497490A (en) * 2011-12-16 2012-06-13 上海富瀚微电子有限公司 System and method for realizing image high dynamic range compression
JP2014039170A (en) * 2012-08-16 2014-02-27 Sony Corp Image processing device, image processing method, and program
CN103916669A (en) * 2014-04-11 2014-07-09 浙江宇视科技有限公司 High dynamic range image compression method and device
CN104159042A (en) * 2014-07-29 2014-11-19 中国科学院长春光学精密机械与物理研究所 DMD quick dimming method for high-dynamic-range imaging

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6987536B2 (en) * 2001-03-30 2006-01-17 Pixim, Inc. Method and apparatus for storing image information for multiple sampling operations in a digital pixel sensor
JP2012235332A (en) * 2011-05-02 2012-11-29 Sony Corp Imaging apparatus, imaging apparatus control method and program
EP3165874B1 (en) * 2015-11-04 2020-08-19 Hexagon Technology Center GmbH Method and device for triangulation-based distance measurement

Also Published As

Publication number Publication date
CN109429021A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
US8462220B2 (en) Method and apparatus for improving low-light performance for small pixel image sensors
US8248481B2 (en) Method and apparatus for motion artifact removal in multiple-exposure high-dynamic range imaging
US10136107B2 (en) Imaging systems with visible light sensitive pixels and infrared light sensitive pixels
CN110753192B (en) Integrated circuit image sensor
US8442345B2 (en) Method and apparatus for image noise reduction using noise models
US9118883B2 (en) High dynamic range imaging with multi-storage pixels
US10708524B2 (en) Methods and apparatus for high dynamic range imaging
US20180241953A1 (en) Methods and apparatus for pixel binning and readout
US9438827B2 (en) Imaging systems and methods for generating binned high-dynamic-range images
US9007488B2 (en) Systems and methods for generating interpolated high-dynamic-range images
US10063762B2 (en) Image sensor and driving method thereof, and image capturing apparatus with output signal control according to color
US11082625B2 (en) Imaging systems for generating HDR images and operating methods thereof
US10750106B2 (en) Imaging unit, imaging apparatus, and computer-readable medium having stored thereon a control program
CN111741242A (en) Image sensor and method of operating the same
US9854186B2 (en) Methods and apparatus for an images sensor with row-level gain control
JP2018182543A (en) Imaging apparatus, imaging system, and control method for imaging element
US10438332B2 (en) Methods and apparatus for selective pixel readout for image transformation
CN112750087A (en) Image processing method and device
CN109429021B (en) High dynamic range imaging apparatus and method for generating high dynamic range image
CN114697571A (en) Image sensing device and operation method thereof
KR100791397B1 (en) Method and apparatus for composing image signal having different exposure time
US20240107134A1 (en) Image acquisition apparatus and electronic apparatus including same, and method of controlling image acquisition apparatus
JP2023061390A (en) Imaging apparatus
Yamashita et al. Wide-dynamic-range camera using a novel optical beam splitting system
JP2023016776A (en) Imaging apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant