CN112449127A - Compact high-frame-rate image sensor system and imaging method thereof - Google Patents


Info

Publication number
CN112449127A
CN112449127A
Authority
CN
China
Prior art keywords
bit
image
image sensor
pixel data
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910806524.2A
Other languages
Chinese (zh)
Inventor
汪小勇
徐辰
何金
Current Assignee
Siteway Shanghai Electronic Technology Co ltd
Original Assignee
Siteway Shanghai Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Siteway Shanghai Electronic Technology Co ltd
Priority to CN201910806524.2A
Publication of CN112449127A

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 — Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 — Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/70 — SSIS architectures; Circuits associated therewith
    • H04N25/71 — Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
    • H04N25/75 — Circuitry for providing, modifying or processing image signals from the pixel array
    • H04N25/76 — Addressed sensors, e.g. MOS or CMOS sensors

Abstract

The invention provides a compact high-frame-rate image sensor system comprising stacked CMOS semiconductor circuit chips. The bottom chip comprises a photosensitive pixel array and an image capture structure; the top chip includes image processing and logic circuitry to quickly extract the image, or the main features of the image as determined by a boundary detection filter. The image sensor system also includes a compilation tool that supplies programming parameters to the programmable filter, optimizing the image sensor for a particular motion-detection application. The invention also provides an imaging method for the compact high-frame-rate image sensor system.

Description

Compact high-frame-rate image sensor system and imaging method thereof
Technical Field
The present invention relates to an image sensor, and more particularly, to a CMOS image sensor using semiconductor circuit chip stacking and an imaging method thereof.
Background
An image capture apparatus includes an image sensor and an imaging lens. The imaging lens focuses light onto the image sensor to form an image, and the image sensor converts the optical signal into an electrical signal. The image capture device outputs electrical signals to other components of a host system; together they form an image sensor system, or imaging system. Image sensors have become widespread and are used in a variety of electronic systems, such as mobile devices, digital cameras, medical devices, and computers.
A typical image sensor includes a two-dimensional array of a plurality of light-sensitive elements ("pixels"). Such image sensors may be configured to produce color images by forming a Color Filter Array (CFA) over the pixels. The technology for fabricating image sensors, and particularly complementary metal oxide semiconductor ("CMOS") image sensors, continues to advance rapidly. For example, the demands for high resolution and low power consumption have facilitated further miniaturization and integration of such image sensors. However, miniaturization comes at the expense of pixel photosensitivity and dynamic range, and new approaches are needed to address this problem.
As pixel size decreases, the total light-absorption depth within the substrate becomes insufficient for some light, especially long-wavelength light. This is a particular problem for image sensors employing backside-illuminated (BSI) technology, in which image light is incident on the back side of the sensor substrate. In a BSI sensor the silicon substrate may be only 2 microns thick, which is sufficient to absorb blue light but far from sufficient for red light, which may require a depth of about 10 microns to be fully absorbed.
It is well known to form so-called stacked image sensors. In a typical arrangement of this type, the photodiodes or other photosensitive elements of the pixel array are formed in a first semiconductor die or substrate, while the associated readout circuitry for processing the photosensitive-element signals is formed in a second semiconductor die or substrate that directly overlies the first. The first and second semiconductor dies or substrates are generally referred to herein as the sensor chip and the circuit chip, respectively. More specifically, the first and second dies are formed alongside other similar dies on stacked first and second semiconductor wafers; after the associated inter-wafer electrical connections are aligned, the stack is diced into stacked assemblies commonly referred to as semiconductor chips. In the image sensor claimed in the present disclosure, the same substrate is used for the photosensitive element and the readout circuit. When two chips are stacked, it should be understood that in one general embodiment two wafers are stacked, diced into chips, and held in the stack to form an electrical system, e.g., a stacked image sensor. However, it is also possible to stack individual chips taken from the first semiconductor wafer onto other chips still in wafer form, or even to stack two individual chips directly. Further, the inter-wafer electrical interconnects coupling the sensor and circuit wafers may be referred to as inter-chip interconnects, while intra-wafer and intra-chip interconnects refer to interconnects formed between devices located on the same wafer or the same chip, respectively. An advantage of this arrangement is that the resulting image sensor system occupies a smaller area than an unstacked arrangement. Another advantage is that each chip can be manufactured using different processes and materials, allowing independent optimization.
The two most common methods for reading the image signals generated on the sensor chip are the rolling shutter mode and the global shutter mode. The rolling shutter mode exposes different lines of the sensor array at different times and reads out the lines in a selected order. The global shutter mode exposes every pixel simultaneously for the same time, similar to the way a mechanical shutter operates on a conventional "snapshot" camera. Prior-art digital imaging systems use either a rolling shutter read mode or a global shutter read mode. However, an imaging system providing both reading modes, selectable by the operator, would be advantageous.
The rolling shutter (RS) mode exposes and reads out adjacent rows of the array at different times; that is, each row begins and ends its exposure slightly offset in time from the adjacent row. Each row is read out after its exposure is complete and its charge has been transferred to the pixel's read node. Although each row experiences the same exposure time, the top row ends its exposure some time before the bottom row of the sensor does; this time depends on the number of rows and the time offset between adjacent rows. A potential drawback of the rolling shutter read mode is the spatial distortion this skew causes. The distortion becomes more pronounced when large objects move faster than the readout rate can follow. Another disadvantage is that different areas of the exposed image are not temporally coincident, which appears as image distortion. To improve the signal-to-noise ratio of the final image signal, and particularly to reduce temporal dark noise, a reference reading mode called correlated double sampling (CDS) is performed before each pixel charge is converted into an output signal by an amplifier transistor, typically a transistor in a source-follower (SF) configuration.
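The row-to-row exposure skew described above can be sketched numerically. This is an illustrative model, not part of the patent; the row count, exposure time, and line time are assumed example figures.

```python
# Illustrative sketch (not part of the patent): per-row exposure windows in a
# rolling-shutter readout. Row count, exposure, and line time are assumed
# example figures.
def rolling_shutter_windows(num_rows, exposure_s, line_time_s):
    """Return (start, end) exposure times for each row, in seconds."""
    return [(r * line_time_s, r * line_time_s + exposure_s)
            for r in range(num_rows)]

windows = rolling_shutter_windows(num_rows=1080, exposure_s=0.010,
                                  line_time_s=10e-6)
# Every row gets the same 10 ms exposure, but the bottom row finishes
# (1080 - 1) * 10 us = ~10.8 ms after the top row: the skew that produces
# rolling-shutter distortion of fast-moving objects.
skew = windows[-1][1] - windows[0][1]
```

The skew grows with the number of rows and the per-line readout time, which is why large arrays with slow readout show the strongest distortion.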
The global shutter (GS) mode exposes all pixels of the array simultaneously, which helps capture fast-moving events and freeze them in time. Before exposure begins, all pixels are reset (RST) to the same dark level by depleting all accumulated charge. At the start of the exposure, every pixel begins collecting charge at the same time and continues collecting for the duration of the exposure period. At the end of the exposure, every pixel simultaneously transfers its charge to its readout node. The global shutter mode may be operated continuously, whereby the next exposure proceeds while the previous exposure is read out from each pixel's readout storage node. In this mode the sensor has a 100% duty cycle, which optimizes temporal resolution and photon collection efficiency, and the transient readout artifacts of the rolling shutter mode are absent. A global shutter is considered necessary when accurate time correlation is required between different regions of the sensor area; it is also easily synchronized with a light source or other devices.
A global-shutter pixel includes at least one more transistor or storage component than a rolling-shutter pixel. These additional components store the image charge for readout during a period after the simultaneous exposure. Again, to improve the signal-to-noise ratio of the image signal, a reference reading is needed not only before each pixel charge is converted into an output signal by the amplifier transistor, but also before the pixel charge is transferred to the pixel's additional charge-storage component.
In summary, a rolling shutter achieves the lowest read noise and is very useful for fast data streaming when synchronization with a light source or peripheral device is not required. However, it carries a risk of spatial distortion, particularly when imaging relatively large, fast-moving objects. A global shutter carries no risk of spatial distortion, is comparatively simple to synchronize with fast-switching peripherals, and can achieve a faster effective frame rate. It would therefore be highly advantageous to conveniently provide both rolling- and global-shutter operating modes.
Real-time image processing is difficult to achieve because of several factors, such as the large data set an image represents and the complex operations that may need to be performed on it. At a real-time video rate of 30 frames per second, even a single operation performed on every pixel of a color image amounts to tens of millions of operations per second, and many image processing applications require multiple operations per pixel, raising the required rate further. Typically, an image signal processor (ISP) is used in the imaging system to achieve this. It provides color interpolation to determine which color each pixel represents and interpolates regions at and near the pixels; it can also control the imaging system's auto-focus, exposure, and white balance. In recent years, corrections for lens defects such as halation and chromatic aberration have been added, along with techniques such as HDR reconstruction, noise reduction, filtering, and face or object detection. An ISP may also provide focus-assembly control if desired. ISPs typically perform their functions through an embedded CPU, yet they are rarely reconfigurable and must generally be redesigned and re-manufactured for each change of application. The ISP may be included on the circuit wafer or provided as an additional discrete chip.
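The operation count above can be checked with simple arithmetic. The frame size here is an assumed example, not taken from the patent.

```python
# Back-of-the-envelope check of the operation count above (the frame size is
# an assumed example, not taken from the patent).
width, height, channels, fps = 640, 480, 3, 30
ops_per_second = width * height * channels * fps
# One operation per colour sample at 30 fps already requires ~27.6 million
# operations per second; multi-operation pipelines scale this proportionally.
```

Larger arrays, higher frame rates, and multi-pass filters multiply this figure, which motivates the hardware acceleration discussed next.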
An alternative is to use field-programmable gate arrays (FPGAs) as the platform for implementing the desired image processing and imaging control functions, especially real-time video processing. An FPGA consists of a matrix of logic blocks connected through a switching network. Both the logic blocks and the switching network are reprogrammable, allowing application-specific hardware to be built while retaining the ability to easily change system functionality. FPGAs thus offer a compromise between the flexibility of general-purpose processors and the speed of application-specific integrated circuits (ASICs). Compared with the serial processing provided by many image signal processor (ISP) circuits, FPGAs can also process in parallel, improving performance. In addition to signal-processing functions, FPGAs can provide configurable control circuitry and input/output (I/O) circuitry.
When certain novel circuit elements are combined with edge-recognition filters that can be optimized per application through on-chip programmability, there is an opportunity to improve real-time image processing by extending the high-frame-rate capability of the stacked image sensor. The present invention fulfills these needs and provides further advantages as described in the summary below.
Disclosure of Invention
The following description sets forth the contributions of the present invention.
The image sensor has, among its components, a pixel unit having a photodiode, a transfer transistor, a source follower amplifier transistor, and a reading circuit. A photodiode, a transfer transistor, a source follower amplifier transistor, and a reset transistor are disposed within the first substrate of the first semiconductor chip for accumulating image charge in response to light incident on the photodiode. The reading circuit block may also be disposed within the first substrate. The first substrate may also include a memory interface that allows the image sensor to be directly coupled to a memory interface port of a standard memory, such as a DRAM memory or an SRAM memory. The second substrate may be stacked on the image sensor substrate and include standard or custom memory circuitry, such as DRAM or SRAM memory chips connected to the image sensor on the first substrate. The third substrate may be stacked on the opposite side of the memory circuit substrate and disposed within the third substrate may be some other circuitry useful in image processing, such as FPGA, I/O, PLL and ISP circuit blocks.
It is a primary object of the present invention to provide an image sensor having a pixel cell with advantages not suggested by the prior art.
Another object is to provide an image sensor having a pixel unit occupying less area, so that the pixel array size and manufacturing cost can be reduced.
It is another object of the present invention to provide an image sensor having stacked pixels that achieve a read mode of very high frame rate operation by using computer programmable digital registers applied to image processing filters and single bit edge or boundary extraction techniques.
It is another object of the present invention to provide an image sensor having a frame memory stacked thereon to more quickly capture and retain image frames in the memory, allowing an image feature detection filter to be applied to subsequently collected image frames.
It is another object of the present invention to provide an image sensor system that includes specific trigger regions within the imaging array for quickly identifying objects moving into the scene and quickly transmitting edge or boundary-defined images.
It is another object of the present invention to provide an image sensor system imaging method that includes specific trigger areas within the imaging array to quickly identify objects moving into the scene and quickly transmit edge or boundary defined images.
Other features and advantages of the present invention will become more apparent from the following more detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the advantages of the invention.
Drawings
FIG. 1 is a schematic block diagram of an imaging system including a pixel array having stacked image sensor pixel cells included in an integrated circuit system, according to one embodiment of the invention;
FIG. 2 is a circuit diagram of a stacked image sensor pixel cell with rolling exposure read in the prior art;
FIG. 3 is a schematic diagram of an image sensor system including a pixel array having stacked image sensor chips, digital data memory chips, and image signal processor chips, in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of single-bit image data output by an edge or boundary detection filter in an image sensor system in accordance with one embodiment of the present invention;
FIG. 5 is a schematic diagram of an image sensor system including a pixel array having stacked image sensor chips, digital data memory chips, and an image signal processor chip, according to another embodiment of the present invention;
FIG. 6 is a diagram illustrating an embodiment of an image sensor system for detecting moving objects according to the present invention;
FIG. 7 shows a flow chart of a high frame rate imaging method of an image sensor system according to a first embodiment of the invention; and
FIG. 8 shows a flow chart of a high frame rate imaging method of an image sensor system according to a second embodiment of the invention.
Detailed Description
The above figures illustrate the present invention, a compact high frame rate image sensor with field programmable image feature edge or boundary detection filter blocks. Various embodiments of stacked image sensor systems are disclosed herein. In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring particular content. One substrate may have a front side and a back side. Any process operation from the front side may be considered a front side operation and from the back side may be considered a back side operation. Structures or devices such as photodiodes and associated transistors may be formed on the front side of the substrate. An alternating layer dielectric stack including a metal wiring layer and a conductive layer is formed on the front surface of the substrate.
"connected" and "coupled" as used herein are defined as follows. "connect" is used to describe a direct connection between two circuit elements, such as metal lines formed according to common integrated circuit processing techniques. In contrast, "coupled" is used to describe a direct or indirect connection between two circuit elements. For example, two coupling elements may be directly connected by a metal line or indirectly connected by an intervening circuit element (e.g., a capacitor, a resistor, or a source or drain of a transistor). In the stacked chip arrangement of the present invention, the front sides of the two chips can be directly connected since the electrical interconnections on each chip are formed on the front sides of the chips. However, it is also common practice to connect the circuitry of the front sides of two stacked substrates, one with its back side on the front side of the other, by through substrate vias. When a particular circuit element is located or formed on a substrate, it is generally considered that the circuit is located on the front side of the substrate.
FIG. 1 shows a schematic block diagram of an imaging system 100 according to one embodiment of the present invention, the imaging system 100 including a pixel array 102 having a plurality of image sensor pixel cells included in an integrated circuit system according to the teachings of the present invention. As shown in FIG. 1, in imaging system 100 the pixel array 102 is coupled to control circuitry 108 and read circuitry 104, and read circuitry 104 is coupled to functional logic unit 106. Control circuitry 108 and read circuitry 104 are also coupled to status register 110. In one embodiment, pixel array 102 is a two-dimensional (2D) array of image sensor pixels (e.g., pixels P1, P2, …, Pn). As shown in FIG. 1, the pixels may be arranged in rows (e.g., rows R1 through Ry) and columns (e.g., columns C1 through Cx) to acquire image data of a person, place, object, etc., which can then be used to render a 2D image of that person, place, or object. In one embodiment, after each pixel has acquired its image data or image charge, the image data is read out by read circuit 104 using the read mode specified by status register 110 and then transferred to functional logic 106. In various examples, the read circuit 104 may include an amplifier circuit, an analog-to-digital conversion circuit, and the like. The status register 112 may include a digital programming selection system that determines whether the read mode uses the rolling exposure mode or the global exposure mode. The functional logic 106 may simply store the image data, or may process it with post-capture image effects (e.g., crop, rotate, remove red-eye, adjust brightness, adjust contrast, or others). In one embodiment, read circuit 104 may read out the image data row by row along the readout column lines (as shown in FIG. 1), or may use other techniques (not shown), such as serial readout or fully parallel readout of all pixels simultaneously.
In one embodiment, control circuitry 108 is coupled to pixel array 102 to control its operation. The operation of the control circuit 108 may be determined by the current setting of the status register 112. For example, the control circuitry 108 may generate an exposure signal for controlling image acquisition. In one embodiment, the exposure signal is a global exposure signal, such that all pixels in the pixel array 102 acquire their image data simultaneously through a single acquisition window. In another embodiment, the exposure signal is a rolling exposure signal, with each row, column, or group of pixels acquired consecutively through successive acquisition windows.
FIG. 2 shows a circuit diagram of an image sensor pixel cell 200 in a prior-art rolling exposure read mode. This figure and exemplary pixel are provided to simplify the explanation of the operation of the pixels of the present invention. As shown in FIG. 2, each sensor pixel 200 includes a photodiode 210 (e.g., a photosensitive element) with a corresponding transfer transistor 215 and pixel support circuit 211. The photodiode 210 may be a "pinned" photodiode, as is commonly used in existing CMOS image sensors. In the example of FIG. 2, the pixel support circuit 211 includes a reset transistor 220, a source follower (SF) transistor 225, and a row select transistor 230 on the circuit chip, with the row select transistor 230 coupled to a transfer transistor 215 and photodiode 210 on the sensor chip. In the source-follower configuration, the amplifier transistor's signal is input on the gate and output on the source. In other embodiments not shown, the pixel support circuit 211 includes a row select transistor 230 coupled to a reset transistor 220, a source follower (SF) transistor 225, a transfer transistor 215, and a photodiode 210 on the sensor chip of a stacked-die system. During operation, the photosensitive element 210 generates photo-generated electrons in response to incident light during the exposure. Transfer transistor 215 is coupled to receive a transfer signal TX, whereupon it transfers the charge accumulated in photodiode 210 to a floating diffusion (FD) node 217; the floating diffusion node 217 is effectively the drain of transfer transistor 215, and photodiode 210 is its source. In one embodiment, transfer transistor 215 is a metal-oxide-semiconductor field-effect transistor (MOSFET).
A reset transistor 220 is coupled between a power supply VDD and the floating diffusion node 217 to reset the sensor pixel 200 (e.g., discharge or charge the floating diffusion node 217 and the photodiode 210 to a preset voltage) in response to a reset signal RST. The floating diffusion node 217 is coupled to control the gate of a source follower transistor 225, which is coupled between the power supply VDD and a row select transistor 230 to amplify the signal produced by the charge on the floating diffusion node 217. The row select transistor 230 couples the output of the pixel circuit from the source follower transistor 225 to a readout column or bit line 235 in response to a row select signal RS. The photodiode 210 and the floating diffusion node 217 are reset by temporarily asserting the reset signal RST and the transfer signal TX. An accumulation period or accumulation window (e.g., an exposure period) begins when the transfer signal TX is de-asserted, allowing incident light to be converted into photo-generated electrons in the photodiode 210. As photo-generated electrons accumulate in the photodiode 210, its voltage decreases (electrons being negative charge carriers); the voltage or charge on the photodiode 210 during exposure thus represents the intensity of the light incident upon it. After the exposure period ends, the reset signal RST is de-asserted, turning off the reset transistor 220 and isolating the floating diffusion node 217 from the power supply VDD. The transfer signal TX is then asserted, coupling the photodiode 210 to the floating diffusion node 217. The photo-generated electrons are transferred from the photodiode 210 through the transfer transistor 215 to the floating diffusion node 217, causing the voltage of the floating diffusion node 217 to drop by an amount proportional to the photo-generated electrons accumulated on the photodiode 210 during exposure.
Since the photodiode 210 continues to accumulate and transfer charge to the floating diffusion node 217 while the transfer transistor 215 is active, the accumulation period or exposure window effectively ends only when the transfer transistor 215 is turned off.
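The "voltage drop proportional to accumulated electrons" relation above can be sketched with a simplified charge-to-voltage model. The capacitance and reset voltage here are assumed example values, not the patent's circuit parameters.

```python
# Simplified model (assumed values, not the patent's circuit parameters) of
# the floating-diffusion readout: after charge transfer the node voltage
# drops in proportion to the photo-generated electrons, dV = N_e * q / C_fd.
Q_E = 1.602e-19    # electron charge in coulombs
C_FD = 1.6e-15     # assumed floating-diffusion capacitance in farads

def fd_voltage_after_transfer(v_reset, n_electrons, c_fd=C_FD):
    """Floating-diffusion voltage after n_electrons are transferred onto it."""
    return v_reset - n_electrons * Q_E / c_fd

v_signal = fd_voltage_after_transfer(v_reset=2.8, n_electrons=10_000)
# 10,000 electrons on 1.6 fF pull the node down by about 1 V; CDS subtracts
# the reset-level read from this signal-level read to cancel offset noise.
```

The difference between the reset-level and signal-level reads is what the source follower delivers to the column line.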
Image sensors now have many important applications that require capturing images of fast-moving objects without distortion, for example in autonomous vehicles, fast robots, and unmanned aerial vehicles, as well as video-based artificial intelligence (AI) applications. However, because pixel data can be read out of an image sensor only one value at a time, sensor readout is slow, especially for larger image arrays. The speed of conventional digital imaging systems is limited by the pixel transfer rate of the pixel data bus, which becomes a bottleneck for data transfer, resulting in high latency and limited frame rate. This is a particular problem when the pixel signals are converted and output as 10-bit or 12-bit high-dynamic-range signals. A further limitation is that, because the image sensor derives pixel data from a preloaded pixel access pattern, the access pattern cannot easily be changed without first stopping normal full-field pixel-array readout. The disclosed compact image sensor addresses this high latency and is customizable.
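The bus bottleneck described above can be quantified with a simple throughput calculation. The bus width and clock below are assumed example figures, not values from the patent.

```python
# Illustrative sketch of the readout bottleneck described above (bus width
# and clock are assumed example figures, not values from the patent).
def max_frame_rate(rows, cols, bits_per_pixel, bus_bits, bus_clock_hz):
    """Frames per second sustainable over a serial pixel data bus."""
    bits_per_frame = rows * cols * bits_per_pixel
    return (bus_bits * bus_clock_hz) / bits_per_frame

# A 1920x1080 array at 12 bits per pixel over a 12-bit-wide, 100 MHz bus:
fps_limit = max_frame_rate(1080, 1920, 12, 12, 100e6)
# roughly 48 fps; a wider bus, on-stack memory, or a single-bit
# representation of the image raises this ceiling
```

The same arithmetic shows why collapsing 12-bit pixels to a single-bit edge map, as described later, multiplies the achievable frame rate.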
FIG. 3 shows an image sensor system 300 according to a first embodiment of the present invention, comprising three stacked semiconductor integrated circuit chips: an image sensor array chip 310, a digital data memory chip 320, and a logic or image signal processor chip 330. The image sensor array chip 310 may be a backside-illuminated (BSI) image sensor, in which the captured image is incident on the back side of the image sensor substrate. The stacking direction is downward in the illustration. Digital data memory chip 320 is stacked on the front side of image sensor array chip 310; either its front (circuit) side or its back side may face the front side of image sensor array chip 310. The image signal processor chip 330 is stacked on top of the digital data memory chip 320 with its front or circuit side facing the digital data memory chip 320. The stacked chip arrangement makes the image sensor more compact, and in many cases confines the area occupied by the stacked system to that of the image sensor array. The close contact between the digital data memory chip 320 and the image sensor array chip 310 allows direct coupling between them, facilitating high-speed pixel reading and mitigating the data transmission bottleneck associated with remote memory. The digital data memory chip 320 may store at least one frame of image data from the image sensor array, and may at least temporarily store pixel signals for a plurality of frames. It may be a DRAM or SRAM, or any other memory capable of storing at least one frame of digitized image signal data. In one embodiment, digital data memory chip 320 may have a single-ended or double-ended memory interface; in another embodiment, such an interface may be included on image sensor array chip 310.
In yet another embodiment, the data memory chip 320 may include a row driver of the image sensor chip 310 to further reduce the area of the image sensor chip 310. In any case, stacked-chip image sensor system 300 allows for a compact single component, with image sensor array chip 310 closely and directly coupled to digital data memory chip 320 and image signal processor chip 330, facilitating high-speed pixel readout between image sensor array chip 310 and image signal processor chip 330.
The image signal processor chip 330 performs various signal processing, such as filtering, on the pixel signals stored in the memory chip, or may process pixel signals taken directly from the image sensor chip 310. In this manner the image sensor can provide high- or low-frame-rate imaging and switch quickly between them. A key feature of the image signal processor chip 330 is its on-chip programmable reconfiguration circuitry 340, which allows the filter parameters to be changed, for example by using an off-chip compiler. By modifying filter parameters appropriately through the on-chip programmable reconfiguration circuitry 340, edge or motion extraction filters such as isotropic Sobel, Roberts, Prewitt, Laplacian, and Canny can be applied to the image data. When motion detection is the primary application, performing edge detection on a single-bit representation of the image data significantly increases the achievable frame rate while reducing image-processing time and power consumption. The on-chip programmable reconfiguration circuitry 340 may be an edge or boundary detection filter block whose programmable parameters enable it to extract a single-bit mapped representation of the multi-bit data stored in the data memory chip 320.
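The reprogrammable filter block can be sketched in software. This is an illustrative model, not the patent's circuit: the kernel "registers" and function names are our assumptions, and reconfiguration amounts to rewriting the coefficient table, e.g. swapping a Sobel kernel for a Prewitt kernel.

```python
# Software sketch of a programmable 3x3 filter block (kernel registers and
# function names are illustrative assumptions, not the patent's circuit):
# the same convolution datapath is re-targeted by rewriting its coefficient
# registers, e.g. from a Sobel kernel to a Prewitt kernel.
SOBEL_X   = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to a 2D list-of-lists image (valid region only)."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            acc = sum(kernel[j][i] * image[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            row.append(acc)
        out.append(row)
    return out

# A vertical step edge: dark (10) on the left, bright (200) on the right.
img = [[10, 10, 10, 200, 200, 200]] * 4
grad = convolve3x3(img, SOBEL_X)
# Each output row is [0, 760, 760, 0]: the filter responds only at the
# columns straddling the edge.
```

In the hardware described above, changing `SOBEL_X` to `PREWITT_X` corresponds to loading new parameters into the reconfiguration circuitry 340 rather than redesigning the chip.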
FIG. 4 is a diagram of single-bit image data output by an edge or boundary detection filter in the image sensor system 300 of the present invention. In one embodiment, the edge or boundary detection filter has a 3 × 3 array structure; in other embodiments it may have a 3 × 5 or other array form. The multi-bit image data comprises a plurality of multi-bit pixel data. The edge or boundary detection filter operates on each multi-bit pixel datum based on the programming parameters, compares each resulting value with a threshold, and outputs one single-bit pixel datum per comparison; the plurality of single-bit pixel data are combined to form the single-bit image data. In one embodiment, when the result of the edge or boundary detection filter operation on the multi-bit pixel data is greater than the threshold, the single-bit pixel datum is "1", mapping a first feature of the multi-bit pixel data; when the result is smaller than the threshold, the single-bit pixel datum is "0", mapping a second feature of the multi-bit pixel data.
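As a rough sketch of the thresholding just described — an illustration, not the patented circuit — the following applies a 3 × 3 Sobel kernel pair (one of the filter types named in this disclosure; the kernel values and the threshold of 128 are assumptions for demonstration) to multi-bit pixel data and emits one single-bit pixel datum per comparison:

```python
# Illustrative 3x3 edge filter with single-bit thresholded output.
# Kernel coefficients and threshold are assumed example values.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_filter_single_bit(image, threshold):
    """Convolve each interior pixel with the 3x3 kernel pair and emit
    '1' when the gradient magnitude exceeds the threshold, else '0'."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = 1 if abs(gx) + abs(gy) > threshold else 0
    return out

# A vertical step edge in 8-bit data maps to '1's along the boundary.
image = [[10, 10, 200, 200]] * 4
bits = edge_filter_single_bit(image, threshold=128)
```

With different programming parameters the same structure would realize the other kernels (Roberts, Prewitt, and so on); only the coefficient table and the threshold change.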
In one embodiment, there may be several edge or boundary detection filters; the single-bit pixel data they output are fused, and the fused single-bit pixel data are then passed on for image signal processing. In one embodiment, the fusion is by voting: for example, if two of three edge or boundary detection filters output "1" and one outputs "0", the fused output is "1"; if two of the three output "0" and one outputs "1", the fused output is "0". In another embodiment, the fusion is performed by weighted calculation, i.e., each edge or boundary detection filter is given a certain weight, and the weighted combination yields the final single-bit pixel data. All the single-bit pixel data in the image signal are combined to constitute the single-bit image data.
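A minimal sketch of the two fusion schemes just described, majority voting and weighted combination; the filter count, the weights, and the 0.5 decision threshold are illustrative assumptions, not values from this disclosure:

```python
def fuse_by_vote(bits):
    """Majority vote over single-bit filter outputs: '1' wins if more
    than half of the filters output '1'."""
    return 1 if sum(bits) * 2 > len(bits) else 0

def fuse_by_weight(bits, weights, threshold=0.5):
    """Weighted fusion: each filter's bit is scaled by its weight and
    the normalized weighted sum is compared against a threshold."""
    score = sum(b * w for b, w in zip(bits, weights)) / sum(weights)
    return 1 if score > threshold else 0

# Two of three filters output '1' -> fused single-bit output is '1'.
vote_result = fuse_by_vote([1, 1, 0])
# A filter trusted with weight 0.6 can outvote two lighter ones.
weighted_result = fuse_by_weight([1, 0, 0], [0.6, 0.2, 0.2])
```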
Fig. 5 shows an image sensor system 500 according to a second embodiment of the invention. It uses the three-chip stack structure shown in fig. 3, including an image sensor array chip 510, a digital data memory chip 520, an image signal processor chip 530, and an on-chip programmable reconfiguration circuit 540, but with a trigger region 550 defined within the image sensor array chip 510. The trigger region is a special region of interest identified by the image sensor system application. For example, if the image sensor system is applied to an autonomous vehicle, and particular attention is paid to pedestrians or bicycles entering the field of view from the right side, the trigger region may be defined in the right portion of the image sensor array. When a moving object enters a preset trigger region, an alarm signal may be sent to the image processor, which gives priority to imaging that part of the scene and may switch to a high-frame-rate mode, such as a single-bit filtering mode.
FIG. 6 is a diagram of an image sensor system for detecting moving objects according to the present invention. In one embodiment, as shown in fig. 6, the image signal processor chip 530 rapidly determines whether a moving object, such as moving object 610, is present based on the multi-bit image data of consecutive frames, and further determines whether the moving object 610 has entered one or more predetermined trigger regions 650. When the moving object 610 enters the trigger region 650, the image signal processor chip 530 generates an alarm signal and performs a predefined action based on it; for example, the alarm signal is sent to the image sensor array chip 510 so that the image data of the trigger region 650 is acquired and processed preferentially, i.e., the trigger region 650 is given imaging priority.
Specifically, the multi-bit image data includes a plurality of multi-bit pixel data. Based on the programming parameters, the edge or boundary detection filter calculates the absolute value of the difference between corresponding multi-bit pixel data of the preceding and following frames in the digital data memory 520, compares each resulting value with a threshold, and outputs one single-bit pixel datum per comparison; the plurality of single-bit pixel data are combined to constitute the single-bit image data. In one embodiment, when the result of the edge or boundary detection filter operation is greater than the threshold, the single-bit pixel datum is "1", indicating that a moving object appears in the image signal; when the result is smaller than the threshold, the single-bit pixel datum is "0", indicating that no moving object appears in the image signal.
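The frame-difference test described above can be sketched as follows; the threshold of 30 and the tiny example frames are assumed values for illustration only:

```python
def motion_bitmap(prev_frame, curr_frame, threshold):
    """Per-pixel |curr - prev| > threshold -> single-bit motion map:
    '1' marks a pixel that changed sharply between frames, '0' none."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 10, 10], [10, 200, 10]]  # one pixel brightens sharply
motion_bits = motion_bitmap(prev, curr, threshold=30)
```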
Fig. 7 shows a flowchart of an image sensor high frame rate imaging method 700 according to a first embodiment of the invention, comprising the following steps.
Step 702: acquiring an image signal, wherein the image signal is multi-bit image data.
Step 704: storing the multi-bit image data.
In one embodiment, at least one frame of image data may be stored, and pixel signals of a plurality of frames may be at least temporarily stored, so that various signal processing, such as filtering, can subsequently be performed on the pixel signals. In one embodiment, the signal processing may also be performed directly on the pixel signals.
Step 706: programming parameters are provided to the edge or boundary detection filter.
In one embodiment, the filter parameters are changed by using an off-chip compiler, so that edge or motion extraction filters such as Isotropic Sobel, Roberts, Prewitt, Laplacian, and Canny can be applied to the image data.
Step 708: the edge or boundary detection filter converts the multi-bit pixel data into single-bit image data, the single-bit image data mapping representing the multi-bit image data.
The multi-bit image data comprises a plurality of multi-bit pixel data. The edge or boundary detection filter operates on each multi-bit pixel datum based on the programming parameters, compares each resulting value with a threshold, and outputs one single-bit pixel datum per comparison; the plurality of single-bit pixel data are combined to form the single-bit image data. In one embodiment, when the result of the edge or boundary detection filter operation on the multi-bit pixel data is greater than the threshold, the single-bit pixel datum is "1", mapping a first feature of the multi-bit pixel data; when the result is smaller than the threshold, the single-bit pixel datum is "0", mapping a second feature of the multi-bit pixel data.
In one embodiment, there may be several edge or boundary detection filters; the single-bit image data they output are fused, and the fused single-bit image data are then passed on for image signal processing. In one embodiment, the fusion is by voting: for example, if two of three edge or boundary detection filters output "1" and one outputs "0", the fused output is "1"; if two of the three output "0" and one outputs "1", the fused output is "0". In another embodiment, the fusion is performed by weighted calculation, i.e., each edge or boundary detection filter is given a certain weight, and the weighted combination yields the final single-bit image data. All the single-bit pixel data in the image signal are combined to constitute the single-bit image data.
Step 710: and performing image signal processing based on the single-bit image data.
In this manner, the image sensor can provide high or low frame rate imaging and can switch between the two quickly.
Fig. 8 shows a flowchart of an image sensor high frame rate imaging method 800 according to a second embodiment of the invention. In addition to the steps of the image sensor high frame rate imaging method 700, it includes the following steps.
Step 802: one or several trigger zones are defined.
The trigger region is a special region of interest identified by the image sensor system application. For example, if the image sensor system is applied to an autonomous vehicle, and particular attention is paid to pedestrians or bicycles entering the field of view from the right side, the trigger region may be defined in the right portion of the image sensor array chip.
Step 804: whether there is a moving object is judged based on the multi-bit image data of the preceding and following frames.
In one embodiment, the multi-bit image data of the preceding and following frames are subtracted and the difference is input to an edge or boundary detection filter; that is, based on the programming parameters, the edge or boundary detection filter calculates the absolute value of the difference between corresponding multi-bit pixel data of the two frames, compares each resulting value with a threshold, and outputs one single-bit pixel datum per comparison, the plurality of single-bit pixel data being combined to form the single-bit image data. In one embodiment, when the result of the edge or boundary detection filter operation is greater than the threshold, the single-bit pixel datum is "1", indicating that a moving object appears in the image signal; when the result is smaller than the threshold, the single-bit pixel datum is "0", indicating that no moving object appears in the image signal.
Step 806: and judging whether the moving object enters the trigger area.
In one embodiment, whether a moving object has entered the trigger region is determined by performing arithmetic processing on the image data of the trigger region.
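One simple form such arithmetic processing could take — a hypothetical sketch, with the rectangular region coordinates assumed for illustration — is to intersect the single-bit motion map with the trigger region and raise the alarm of step 808 when any moving pixel falls inside it:

```python
def object_in_trigger_region(motion_bits, region):
    """region = (x0, y0, x1, y1), half-open rectangle in pixel
    coordinates; returns True when the single-bit motion map contains
    any '1' inside the region."""
    x0, y0, x1, y1 = region
    return any(motion_bits[y][x]
               for y in range(y0, y1) for x in range(x0, x1))

motion = [[0, 0, 0, 0],
          [0, 0, 0, 1],   # moving pixel at column 3, row 1
          [0, 0, 0, 0]]
right_half = (2, 0, 4, 3)  # trigger region on the right of the array
alarm = object_in_trigger_region(motion, right_half)
```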
Step 808: when the moving object enters the trigger zone, an alarm signal is generated.
Step 810: based on the alarm signal, a predefined action is generated.
In one embodiment, the predefined action includes preferentially acquiring the image data of the trigger region for imaging, and may include switching to a high-frame-rate mode, such as a single-bit filtering mode. Thus, when motion detection is the primary application, edge detection on a single-bit representation of the image data significantly increases high-frame-rate capability while reducing image processing time and power consumption.
In summary, the present invention provides a stacked-chip image sensor system and an imaging method thereof. A computer-programmable digital register applied to the image processing filter, together with a single-bit edge or boundary extraction technique, yields a readout mode for very high frame rate operation. By providing an image sensor with a frame memory stacked directly on it, image frames can be captured and retained in memory more quickly, allowing an image feature detection filter to be applied to subsequently collected frames. And by including specific trigger regions within the imaging array, objects moving into the scene can be quickly identified, and edge- or boundary-defined images quickly transmitted.
Reference throughout this specification to "one embodiment," "an embodiment," "one example" or "an example" means that a particular feature, structure or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Thus, the appearances of phrases such as "in one embodiment" or "in an example" in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments or examples. Directional terms such as "top," "down," "above," and "below" are used with reference to the orientation of the drawings as described. Furthermore, the terms "having," "including," "containing," and similar terms are defined as meaning "comprising" unless specifically stated otherwise. The particular features, structures, or characteristics may be included in an integrated circuit, an electronic circuit, a combinational logic circuit, or other suitable components that provide the described functionality. Additionally, it should be understood that the drawings provided herein are for illustrative purposes only for those of ordinary skill in the art and are not necessarily drawn to scale.
The above description of illustrated examples of the present invention, including what is described in the abstract, is not intended to be exhaustive or to be limited to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications can be made without departing from the broader spirit and scope of the invention. Indeed, it should be understood that the specific example structures and materials are provided for purposes of explanation, and that other structures and materials may be used in other embodiments and examples in accordance with the teachings of the present invention. These modifications can be made to embodiments of the present invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
The present invention also provides an image sensor device including the pixel circuit described in each of the above embodiments. The image sensor device comprises a pixel circuit array which is arranged in a plurality of rows and columns and is provided in the plurality of embodiments. The image sensor device further comprises peripheral circuitry, which is mainly used for controlling and processing the output of the pixel circuits.
The examples given in the embodiments of the present invention include, but are not limited to, the explanation and illustration of the present invention as set forth herein. The above-described embodiments are for illustrative purposes only and are not to be construed as limiting the invention. Reasonable modifications and adaptations of the various embodiments of the invention are within the scope of the invention.

Claims (27)

1. A compact high frame rate image sensor system, comprising:
the image sensor array chip is used for acquiring an image signal, and the image signal is multi-bit image data;
an image signal processor chip stacked on and connected to the image sensor array chip, wherein the image signal processor chip includes an edge or boundary detection filter having field-programmable image features, and wherein programming parameters enable the edge or boundary detection filter to extract single-bit image data that is a mapped representation of the multi-bit image data, to reduce image processing time and power consumption; and
an on-chip programmable reconfiguration circuit for providing programming parameters to the edge or boundary detection filter to optimize the image sensor for a particular application.
2. The image sensor system of claim 1, further comprising a digital data memory chip stacked and connected between the image sensor array chip and the image signal processor chip for storing multi-bit image data from the image sensor array chip.
3. The image sensor system of claim 1, wherein the multi-bit image data includes a plurality of multi-bit pixel data, the edge or boundary detection filter calculates the plurality of multi-bit pixel data based on the programming parameter, compares the calculated result values with a threshold value, respectively, and outputs a plurality of single-bit pixel data according to the comparison result, the plurality of single-bit pixel data being combined to constitute the single-bit image data.
4. The image sensor system of claim 3, wherein the single-bit pixel data is "1" to map a first feature representing the multi-bit pixel data when a result value of the multi-bit pixel data after the edge or boundary detection filter operation is greater than the threshold value; and when the result value of the multi-bit pixel data after the operation of the edge or boundary detection filter is smaller than the threshold value, the single-bit pixel data is '0' to map a second feature representing the multi-bit pixel data.
5. The image sensor system of claim 1, wherein the edge or boundary detection filter is one of a Sobel, Isotropic Sobel, Roberts, Prewitt, Laplacian, or Canny edge filter.
6. The image sensor system according to claim 1, wherein the image signal processor chip comprises a plurality of the edge or boundary detection filters, and after the image signal processor chip performs the re-fusion calculation on the single-bit image data output from the plurality of the edge or boundary detection filters by means of voting or weighting calculation, the image signal processor chip outputs the final single-bit image data for image signal processing.
7. The image sensor system of claim 2, wherein one or more trigger regions are defined within the image sensor array chip such that when a moving object enters one of the trigger regions, an alarm signal is forwarded to the image signal processor chip to cause a predefined action within the image signal processor chip.
8. The image sensor system of claim 7, wherein the predefined action comprises extracting an image signal of the single-bit representation from the trigger region.
9. The image sensor system of claim 7, wherein the multi-bit image data includes a plurality of multi-bit pixel data, the edge or boundary detection filter calculates absolute values of subtraction of the multi-bit pixel data of previous and subsequent frames in the digital data memory based on the programmed parameters, compares the calculated result values with a threshold value, respectively, and outputs a plurality of single-bit pixel data according to the comparison result, the plurality of single-bit pixel data being combined to constitute the single-bit image data.
10. The image sensor system of claim 8, wherein when the result value of the multi-bit pixel data after the edge or boundary detection filter operation is greater than the threshold value, the single-bit pixel data is "1", indicating the occurrence of a moving object in the image signal; and when the result value of the multi-bit pixel data after the operation of the edge or boundary detection filter is smaller than the threshold value, the single-bit pixel data is "0", indicating that no moving object appears in the image signal.
11. The image sensor system of claim 1, wherein the image sensor system is disposed in one of an autonomous automobile, a robot, an unmanned aerial vehicle, and a machine vision-related system.
12. The image sensor system of claim 1, wherein the image signal processor chip outputs multi-bit image data at a standard video rate or single-bit image data at a high frame rate.
13. The image sensor system of claim 2, wherein the digital data memory chip comprises one of a Static Random Access Memory (SRAM) or a Dynamic Random Access Memory (DRAM) frame buffer circuit.
14. The image sensor system of claim 2, wherein the image sensor chip comprises one of a single-ended or double-ended memory interface.
15. The image sensor system of claim 1, wherein the image sensor array chip is a backside illuminated (BSI) image sensor.
16. The image sensor system of claim 1, wherein the image sensor array chip is a global shutter readout image sensor or a rolling shutter readout image sensor.
17. The image sensor system of claim 2, wherein a row driver and a control circuit supporting an image sensor array chip are provided on the digital data memory chip or the image signal processor chip to reduce an area of the image sensor array chip.
18. A compact high frame rate image sensor system imaging method, comprising:
acquiring an image signal, wherein the image signal is multi-bit image data;
providing programming parameters to an edge or boundary detection filter;
the edge or boundary detection filter converts the multi-bit pixel data into single-bit image data, the single-bit image data map representing the multi-bit image data; and
performing image signal processing based on the single-bit image data.
19. The method of claim 18, further comprising storing the multi-bit image data.
20. The image sensor system imaging method of claim 18, wherein the multi-bit image data includes a plurality of multi-bit pixel data, the edge or boundary detection filter operates on the plurality of multi-bit pixel data based on the programming parameters, compares the calculated result values with threshold values, and outputs a plurality of single-bit pixel data according to the comparison results, the plurality of single-bit pixel data being combined to constitute the single-bit image data.
21. The image sensor system imaging method of claim 20, wherein when a result value of the multi-bit pixel data after the edge or boundary detection filter operation is greater than the threshold value, the single-bit pixel data is "1" to map a first feature representing the multi-bit pixel data; and when the result value of the multi-bit pixel data after the operation of the edge or boundary detection filter is smaller than the threshold value, the single-bit pixel data is '0' to map a second feature representing the multi-bit pixel data.
22. The compact high frame rate image sensor system imaging method of claim 18, further comprising the steps of:
defining one or several trigger areas;
judging whether a moving object is present based on multi-bit image data of preceding and following frames in a memory;
judging whether the moving object enters the trigger area or not;
generating an alarm signal when the moving object enters the trigger zone;
based on the alarm signal, a predefined action is generated.
23. The compact high frame rate image sensor system imaging method of claim 22, wherein the predefined action comprises: preferentially extracting multi-bit image data or single-bit image data of the trigger region for imaging.
24. The image sensor system imaging method of claim 22, wherein the multi-bit image data includes a plurality of multi-bit pixel data, the edge or boundary detection filter calculates absolute values of subtraction of the multi-bit pixel data of previous and subsequent frames in the digital data memory based on the programmed parameters, compares the calculated result values with threshold values, respectively, and outputs a plurality of single-bit pixel data according to the comparison result, the plurality of single-bit pixel data being combined to constitute the single-bit image data.
25. The image sensor system imaging method of claim 24, wherein when the result value of the multi-bit pixel data after the edge or boundary detection filter operation is greater than the threshold value, the single-bit pixel data is "1" to indicate that a moving object appears in the image signal; and when the result value of the multi-bit pixel data after the operation of the edge or boundary detection filter is smaller than the threshold value, the single-bit pixel data is "0" to indicate that no moving object appears in the image signal.
26. The method of claim 18, wherein the edge or boundary detection filter is one of a Sobel, Isotropic Sobel, Roberts, Prewitt, Laplacian, or Canny edge filter.
27. The image sensor system imaging method of claim 18, further comprising: and after the single-bit image data output by the edge or boundary detection filters are subjected to fusion calculation again in a voting or weighting calculation mode, outputting the final single-bit image data for image signal processing.
CN201910806524.2A 2019-08-29 2019-08-29 Compact high-frame-rate image sensor system and imaging method thereof Pending CN112449127A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910806524.2A CN112449127A (en) 2019-08-29 2019-08-29 Compact high-frame-rate image sensor system and imaging method thereof

Publications (1)

Publication Number Publication Date
CN112449127A true CN112449127A (en) 2021-03-05

Family

ID=74741952

Country Status (1)

Country Link
CN (1) CN112449127A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022262640A1 (en) * 2021-06-16 2022-12-22 北京与光科技有限公司 Spectral analysis apparatus and spectral video recording method

Similar Documents

Publication Publication Date Title
CN108200367B (en) Pixel unit, method for forming pixel unit and digital camera imaging system assembly
US20240056695A1 (en) Solid-state imaging device, method of driving the same, and electronic apparatus
US10271037B2 (en) Image sensors with hybrid three-dimensional imaging
CN108305884B (en) Pixel unit, method for forming pixel unit and digital camera imaging system assembly
CN110113546B (en) Imaging system and method for combining and reading out adjacent pixel units in pixel array
EP2253017B1 (en) Circuit and photo sensor overlap for backside illumination image sensor
EP3171408B1 (en) Stacked-chip imaging systems
US10855939B1 (en) Stacked image sensor with programmable edge detection for high frame rate imaging and an imaging method thereof
JP3887420B2 (en) Active pixel sensor array with multi-resolution readout
US5949483A (en) Active pixel sensor array with multiresolution readout
TWI650021B (en) Image sensor with mixed heterostructure
CN108234910B (en) Imaging system and method of forming a stacked imaging system and digital camera imaging system assembly
US10070079B2 (en) High dynamic range global shutter image sensors having high shutter efficiency
CN108282625B (en) Pixel unit, method for forming pixel unit and digital camera imaging system assembly
CN108200366B (en) Pixel unit, method for forming pixel unit and digital camera imaging system
CN111430388A (en) Imaging pixel
US9729806B2 (en) Imaging systems with phase detection pixels
CN108269819B (en) Pixel cell, method for forming pixel cell and digital camera imaging system component
US10075663B2 (en) Phase detection pixels with high speed readout
US20180367747A1 (en) Four shared pixel with phase detection and full array readout modes
US10873716B2 (en) Dual row control signal circuit for reduced image sensor shading
CN211208448U (en) Stacked-chip image sensor and solid-state stacked-chip image sensor
CN112449127A (en) Compact high-frame-rate image sensor system and imaging method thereof
CN210518592U (en) Compact high frame rate image sensor system
Abdallah et al. A general overview of solid state imaging sensors types

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 612, 6th floor, No. 111 Building, Xiangke Road, Shanghai Pudong New Area Free Trade Pilot Area, 201203

Applicant after: Starway (Shanghai) Electronic Technology Co.,Ltd.

Address before: Room 612, 6th floor, No. 111 Building, Xiangke Road, Shanghai Pudong New Area Free Trade Pilot Area, 201203

Applicant before: Siteway (Shanghai) Electronic Technology Co.,Ltd.