CN117692811A - Visual sensor chip based on hybrid array - Google Patents


Info

Publication number
CN117692811A
Authority
CN
China
Prior art keywords
pixel
pixel unit
time difference
difference
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311420673.8A
Other languages
Chinese (zh)
Inventor
赵蓉
陈雨过
王韬毅
林逸晗
施路平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202311420673.8A
Publication of CN117692811A
Legal status: Pending

Landscapes

  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

The invention provides a visual sensor chip based on a hybrid array, comprising a pixel array and correspondingly arranged visual sensing paths; the visual sensing paths include an intensity path, a time differential path, and a space differential path. The pixel array comprises a plurality of multiplexed pixel units and a plurality of single pixel units; each multiplexed pixel unit multiplexes two of the three elements intensity, time differential, and space differential, and each single pixel unit carries the remaining element. The intensity path is used to determine a quantized value of the electric signal converted from the incident light intensity; the time differential path is used to determine a time differential value; the space differential path is used to determine a space differential value. This three-path vision sensor chip architecture based on a hybrid array can greatly improve the chip's perception of spatio-temporal dynamic information and realize a high-precision, high-frame-rate, high-dynamic-range, efficient and robust visual representation.

Description

Visual sensor chip based on hybrid array
Technical Field
The invention relates to the technical field of photoelectric imaging, in particular to a vision sensor chip based on a hybrid array.
Background
A vision sensor is a device for sensing visible light information in an environment and converting it into an electric signal, and is widely used in digital cameras and other electronic optical devices.
At present, the most common vision sensor is the CIS (CMOS Image Sensor). The CIS realizes video capture through a frame-based sampling principle and offers high pixel-array resolution, high color fidelity, and high image quality. However, the dynamic range of the image signal acquired by the CIS is small, and its capture speed is slow. Another, newer image sensor, the DVS (Dynamic Vision Sensor), senses the changes of a dynamic scene in the form of a sparse event stream, with faster capture and a larger dynamic range. However, such sensors suffer from low spatial resolution and excessive loss of effective information.
Accordingly, there is a need in the art to provide an improved vision sensor.
Disclosure of Invention
To overcome these problems, the invention provides a visual sensor chip based on a hybrid array. Through a three-path vision sensor chip architecture based on a hybrid array, it can greatly improve the chip's perception of spatio-temporal dynamic information and realize a high-precision, high-frame-rate, high-dynamic-range, efficient and robust visual representation.
The invention provides a visual sensor chip based on a hybrid array, comprising a pixel array and correspondingly arranged visual sensing paths; the visual sensing paths comprise an intensity path, a time differential path, and a space differential path; the pixel array comprises a plurality of multiplexed pixel units and a plurality of single pixel units; each multiplexed pixel unit multiplexes two of the three elements intensity, time differential, and space differential; each single pixel unit carries the element not covered by the multiplexed pixel units; the intensity path is used for determining a quantized value of the electric signal converted from the incident light intensity at the current pixel unit position at the current time; the time differential path is used for performing differential and quantization operations, in the charge, analog, or digital domain, on the signal of the current pixel unit position at the current time and the signal of the same position at the previous time, to obtain a time differential value; the space differential path is used for performing spatial differential and quantization operations, in the charge, analog, or digital domain, on the signal of the current pixel unit position at the current time and the signal of a spatially associated pixel unit position at the current time, to obtain a space differential value, the spatially associated pixel unit being any one or more pixel units in the pixel array other than the current pixel unit.
According to the vision sensor chip based on the hybrid array, the multiplexed pixel units are pixel units multiplexing time differential and space differential, and the single pixel units are intensity pixel units; the intensity path corresponds to the intensity pixel units, and the time differential path and the space differential path correspond to the time- and space-differential multiplexed pixel units.
According to the vision sensor chip based on the hybrid array, the intensity path comprises an intensity storage module and an intensity quantization module; the intensity storage module is used for storing the electric signal converted from the incident light intensity at the current pixel unit position at the current time; the intensity quantization module is used for performing analog-to-digital conversion on that electric signal to obtain its quantized value; the time differential path comprises a time differential storage module and a time differential and quantization module; the time differential storage module is used for storing the electric signals of the current pixel unit position at different times in a ping-pong buffer manner; the time differential storage module comprises a first time differential storage node and a second time differential storage node; the ping-pong buffer manner means that while the electric signal of the current pixel unit position at the previous time is held in the first (respectively second) time differential storage node, the electric signal of the current pixel unit position at the current time is stored in the second (respectively first) time differential storage node; the time differential and quantization module is used for performing time differential and quantization operations on the electric signal of the current pixel unit position at the current time and that at the previous time to obtain a time differential value; the space differential path comprises a space differential storage node, which multiplexes the first time differential storage node, and a space differential and quantization module; the space differential storage node is used for storing the electric signal of the current pixel unit position at the current time; the space differential and quantization module is used for performing differential and quantization operations on the electric signal of the current pixel unit position at the current time and the electric signal of the spatially associated pixel unit position at the current time to obtain a space differential value.
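The ping-pong buffering and the two differential readouts described above can be illustrated with a small software model (a behavioural sketch only, not the pixel circuit; the class and method names are invented for illustration):

```python
class PingPongPixel:
    """Software model of a time/space-differential multiplexed pixel.

    Two storage nodes alternate roles each frame: the node written at
    time t_{n-1} holds the previous signal while the other node receives
    the signal at t_n. The time differential is the difference between
    the two nodes; the spatial differential compares the newest sample
    against a neighbouring pixel's newest sample.
    """

    def __init__(self):
        self.nodes = [0.0, 0.0]  # first / second storage nodes
        self.write_idx = 0       # node that receives the next sample

    def sample(self, signal):
        """Store the current electric signal, alternating nodes."""
        self.nodes[self.write_idx] = signal
        self.write_idx ^= 1      # swap node roles for the next frame

    def current(self):
        """Most recently stored signal (the node written last)."""
        return self.nodes[self.write_idx ^ 1]

    def previous(self):
        """Signal stored one frame earlier."""
        return self.nodes[self.write_idx]

    def time_diff(self):
        return self.current() - self.previous()

    def space_diff(self, neighbour):
        return self.current() - neighbour.current()
```

Two nodes suffice because the time differential only ever needs the current and previous samples; writing them alternately means neither value is overwritten before it is consumed.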
According to the visual sensor chip based on the hybrid array, the intensity quantization module is arranged inside the intensity pixel unit, using a pixel-level signal readout mode; or the intensity quantization module is arranged outside the intensity pixel units and shared by the intensity pixel units of the same column, using a column-level signal readout mode; the time differential and quantization module is arranged inside the time- and space-differential multiplexed pixel unit, using a pixel-level signal readout mode; or it is arranged outside those pixel units and shared by the time- and space-differential multiplexed pixel units of the same column, using a column-level signal readout mode; likewise, the space differential and quantization module is arranged inside the time- and space-differential multiplexed pixel unit, using a pixel-level signal readout mode; or it is arranged outside those pixel units and shared by the time- and space-differential multiplexed pixel units of the same column, using a column-level signal readout mode.
According to the vision sensor chip based on the hybrid array, a pulse generation module is arranged inside each pixel unit of the pixel array; or all pixel units of the pixel array are connected to one common pulse generation module; or the pixel array is divided into a plurality of sub-regions, all pixel units of each sub-region being connected to one common pulse generation module; the pulse generation module is used for generating a trigger signal at fixed time intervals, or at adaptive, programmable variable time intervals, so as to control the exposure start time and exposure duration of the photosensitive module; pixel units connected to the same pulse generation module are exposed synchronously, and pixel units connected to different pulse generation modules are exposed synchronously or asynchronously; the photosensitive module is arranged inside the pixel unit and is used for converting the optical signal at the current pixel unit position into an analog electric signal.
According to the vision sensor chip based on the hybrid array, the exposure mode of each pixel unit in the pixel array is global exposure or rolling exposure.
According to the vision sensor chip based on the hybrid array, when a pixel unit is provided with a color filter, the output of that pixel unit's corresponding path is a color value; when a pixel unit is not provided with a color filter, the output of its corresponding path is a gray value.
The invention also provides a visual sensor chip based on a hybrid array, comprising a pixel array and correspondingly arranged visual sensing paths; the visual sensing paths comprise an intensity path, a time differential path, and a space differential path; the pixel array comprises intensity pixel units, time differential pixel units, and space differential pixel units; the intensity path corresponds to the intensity pixel units, the time differential path to the time differential pixel units, and the space differential path to the space differential pixel units; the intensity path is used for determining a quantized value of the electric signal converted from the incident light intensity at the current intensity pixel unit position at the current time; the time differential path is used for performing time differential and quantization operations, in the charge, analog, or digital domain, on the signal of the current time differential pixel unit position at the current time and the signal of the same position at the previous time, to obtain a time differential value; the space differential path is used for performing spatial differential and quantization operations, in the charge, analog, or digital domain, on the signal of the current space differential pixel unit position at the current time and the signal of a spatially associated space differential pixel unit position at the current time, to obtain a space differential value, the spatially associated pixel unit being any one or more pixel units in the pixel array other than the current space differential pixel unit.
According to the vision sensor chip based on the hybrid array, the intensity path comprises an intensity storage module and an intensity quantization module; the intensity storage module is used for storing the electric signal converted from the incident light intensity at the current intensity pixel unit position at the current time; the intensity quantization module is used for performing analog-to-digital conversion on that electric signal to obtain its quantized value; the time differential path comprises a time differential storage module and a time differential and quantization module; the time differential storage module is used for storing the electric signals of the current time differential pixel unit position at different times in a ping-pong buffer manner; the time differential storage module comprises a first time differential storage node and a second time differential storage node; the ping-pong buffer manner means that while the electric signal of the current time differential pixel unit position at the previous time is held in the first (respectively second) time differential storage node, the electric signal of the current time differential pixel unit position at the current time is stored in the second (respectively first) time differential storage node; the time differential and quantization module is used for performing time differential and quantization operations on the electric signal of the current time differential pixel unit position at the current time and that at the previous time to obtain a time differential value; the space differential path comprises a space differential storage node and a space differential and quantization module; the space differential storage node is used for storing the electric signal of the current space differential pixel unit position at the current time; the space differential and quantization module is used for performing differential and quantization operations on the electric signal of the current space differential pixel unit position at the current time and the electric signal of the spatially associated space differential pixel unit position at the current time to obtain a space differential value.
According to the visual sensor chip based on the hybrid array, the intensity quantization module is arranged inside the intensity pixel unit, using a pixel-level signal readout mode; or the intensity quantization module is arranged outside the intensity pixel units and shared by the intensity pixel units of the same column, using a column-level signal readout mode; the time differential and quantization module is arranged inside the time differential pixel unit, using a pixel-level signal readout mode; or it is arranged outside the time differential pixel units and shared by the time differential pixel units of the same column, using a column-level signal readout mode; the space differential and quantization module is arranged inside the space differential pixel unit, using a pixel-level signal readout mode; or it is arranged outside the space differential pixel units and shared by the space differential pixel units of the same column, using a column-level signal readout mode.
According to the vision sensor chip based on the hybrid array, a pulse generation module is arranged inside each pixel unit of the pixel array; or all pixel units of the pixel array are connected to one common pulse generation module; or the pixel array is divided into a plurality of sub-regions, all pixel units of each sub-region being connected to one common pulse generation module; the pulse generation module is used for generating a trigger signal at fixed time intervals, or at adaptive, programmable variable time intervals, so as to control the exposure start time and exposure duration of the photosensitive module; pixel units connected to the same pulse generation module are exposed synchronously, and pixel units connected to different pulse generation modules are exposed synchronously or asynchronously; the photosensitive module is arranged inside the pixel unit and is used for converting the optical signal at the current pixel unit position into an analog electric signal.
According to the vision sensor chip based on the hybrid array, the exposure mode of each pixel unit in the pixel array is global exposure or rolling exposure.
According to the vision sensor chip based on the hybrid array, when a pixel unit is provided with a color filter, the output of that pixel unit's corresponding path is a color value; when a pixel unit is not provided with a color filter, the output of its corresponding path is a gray value.
The invention provides a visual sensor chip based on a hybrid array, comprising a pixel array and correspondingly arranged visual sensing paths; the visual sensing paths comprise an intensity path, a time differential path, and a space differential path; the pixel array comprises a plurality of multiplexed pixel units and a plurality of single pixel units; each multiplexed pixel unit multiplexes two of the three elements intensity, time differential, and space differential; each single pixel unit carries the element not covered by the multiplexed pixel units; the intensity path is used for determining a quantized value of the electric signal converted from the incident light intensity; the time differential path is used for determining a time differential value; the space differential path is used for determining a space differential value. This three-path vision sensor chip architecture based on a hybrid array can greatly improve the chip's perception of spatio-temporal dynamic information and realize a high-precision, high-frame-rate, high-dynamic-range, efficient and robust visual representation.
Drawings
In order to more clearly illustrate the technical solutions of the invention or of the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. The drawings in the following description show some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a hybrid array-based visual sensor chip according to one embodiment of the present invention;
FIG. 2a is a schematic diagram of the readout principle of quantized values of electrical signals of light intensity conversion of incident light of an intensity pixel unit according to the present invention;
FIG. 2b is a schematic diagram of a time differential value readout principle of a space-time differential pixel unit according to the present invention;
FIG. 2c is a schematic diagram of a spatial differential value readout principle of a spatial differential pixel unit according to the present invention;
FIG. 2d is a schematic diagram of a second principle of reading out the spatial differential values of the spatial differential pixel units according to the present invention;
FIG. 3 is a schematic diagram of a specific principle of time difference value readout of a space-time differential pixel unit according to the present invention;
FIG. 4 is a schematic diagram of a specific principle of spatial differential value readout of a spatial differential pixel unit according to the present invention;
FIG. 5 is a schematic diagram of a hybrid array-based vision sensor chip according to a second embodiment of the present invention;
FIG. 6 is a third schematic diagram of a hybrid array-based visual sensor chip according to the present invention;
FIG. 7a is a schematic diagram of a third embodiment of the time difference value and the spatial difference value readout principle of the space-time difference pixel unit according to the present invention;
FIG. 7b is a schematic diagram of a time difference value and a space difference value readout principle of a space-time difference pixel unit according to the present invention;
FIG. 8 is a schematic diagram of a hybrid array-based visual sensor chip according to the present invention;
FIG. 9 is a schematic diagram of a hybrid array-based visual sensor chip according to the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
At present, the CMOS Image Sensor (CIS) mainly realizes video capture based on a frame-based sampling principle: each frame records the output of all pixels in the pixel array, and frames are equally spaced in time. The CIS integrates transistors inside each pixel to achieve high-performance charge-voltage conversion, and can sense visible light of different wavelengths through a color filter array covering the pixel array to obtain a color image. The CIS offers high pixel-array resolution, high color fidelity, and high image quality. The Dynamic Vision Sensor (DVS) is a new type of imaging system. Unlike a conventional camera, in which a shutter controls the frame rate and all pixels record light intensity frame by frame, the DVS is sensitive to the rate of change of light intensity: each pixel independently tracks the change of the logarithm of the light intensity at its position and, when the change exceeds a threshold, emits a positive or negative pulse. This asynchronous pulsing frees the DVS from frame-rate restrictions and gives it extremely high temporal resolution; combined with its sensitivity to change, this makes the DVS naturally suited to tasks such as motion monitoring. A further camera, called DAVIS, combines the conventional CIS with the DVS and can record both single-frame images and event information, combining the high spatial resolution of the conventional camera with the high temporal resolution of the DVS.
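The DVS behaviour described above — each pixel tracking the change of log light intensity and emitting a positive or negative pulse when a threshold is crossed — can be modelled as follows (a simplified software sketch; the function name and threshold value are illustrative, and real DVS pixels operate asynchronously rather than on frames):

```python
import numpy as np

def dvs_events(prev_log, intensity, threshold=0.2):
    """Emit +1/-1 events where the log-intensity change exceeds a threshold.

    prev_log  : per-pixel log intensity at the last event (2-D array)
    intensity : current linear light intensity (2-D array, values > 0)
    Returns (events, new_log): events is +1/0/-1 per pixel, and new_log
    is the updated log-intensity reference for pixels that fired.
    """
    log_i = np.log(intensity)
    delta = log_i - prev_log
    events = np.where(delta > threshold, 1,
                      np.where(delta < -threshold, -1, 0))
    # pixels that fired reset their reference to the current log intensity
    new_log = np.where(events != 0, log_i, prev_log)
    return events, new_log
```

A scene region whose intensity doubles produces a +1 event (log change ≈ 0.69), while a static region produces no output — which is also why a sustained large-scale flash can saturate every differential pixel at once.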
A vision sensor with only CIS and DVS paths is still incomplete from the perspective of visual primitives. For example, when a large-area flash or a strong light-intensity change occurs in the scene, the differential pixels output events continuously and saturate, so the DVS path cannot output effective information, while the CIS path cannot respond immediately because of its frame-rate limit. Such extreme scenes are very common in autonomous driving and are critical for driving safety, for example entering and exiting tunnels or night-time speed-camera flashes.
Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of a visual sensor chip based on a hybrid array according to the present invention.
Referring to fig. 2a, fig. 2a is a schematic diagram of a quantized value readout principle of an electrical signal for light intensity conversion of an incident light of an intensity pixel unit according to the present invention.
Referring to fig. 2b, fig. 2b is a schematic diagram of a time differential value readout principle of a space-time differential pixel unit according to the present invention.
Referring to fig. 2c, fig. 2c is a schematic diagram illustrating a principle of reading spatial differential values of a spatio-temporal differential pixel unit according to the present invention.
Referring to fig. 2d, fig. 2d is a schematic diagram illustrating a space-time differential value readout principle of the space-time differential pixel unit according to the present invention.
To solve the above technical problems in the prior art, the invention provides a visual sensor chip based on a hybrid array, comprising a pixel array and correspondingly arranged visual sensing paths; the visual sensing paths comprise an intensity path, a time differential path, and a space differential path; the pixel array comprises a plurality of multiplexed pixel units and a plurality of single pixel units; each multiplexed pixel unit multiplexes two of the three elements intensity, time differential, and space differential; each single pixel unit carries the element not covered by the multiplexed pixel units; the intensity path is used for determining a quantized value of the electric signal converted from the incident light intensity at the current pixel unit position at the current time; the time differential path is used for performing differential and quantization operations, in the charge, analog, or digital domain, on the signal of the current pixel unit position at the current time and the signal of the same position at the previous time, to obtain a time differential value; the space differential path is used for performing spatial differential and quantization operations, in the charge, analog, or digital domain, on the signal of the current pixel unit position at the current time and the signal of a spatially associated pixel unit position at the current time, to obtain a space differential value, the spatially associated pixel unit being any one or more pixel units in the pixel array other than the current pixel unit.
By contrast, the human visual system, whether at noon or at dusk, in an open scene or under partial occlusion, can rapidly recognize moving objects, achieving robustness and versatility far superior to any existing DAVIS or hybrid array system. Besides an intensity output path and a time differential path, the human eye also has a space differential path; the three paths are organically fused and combined into different primitives, forming an efficient and robust visual representation. Inspired by human vision, the invention adds the space differential path of the human retina to existing single-pixel-multiplexing or hybrid-pixel-array solutions. That is, the vision sensor has three simultaneous outputs: an intensity output, a time differential (TD, Temporal Difference) output, and a space differential (SD, Spatial Difference) output.
The intensity path outputs, at the current time t_n, a quantized value of the incident light intensity I(x, y, t_n) at the current pixel unit position (x, y), i.e.

A(x, y, t_n) = Q_A(I(x, y, t_n))

where Q_A is the quantization method of the intensity path.

The time differential path outputs the time differential value of the current pixel unit position (x, y) between adjacent times; the output of the time differential path is

TD(x, y, t_n) = Q_TD(I(x, y, t_n) - I(x, y, t_{n-1}))

where Q_TD is the quantization method of the time differential path.

The space differential path outputs, at the current time t_n, the spatial differential value between the current pixel unit position (x, y) and a spatially associated pixel unit position (e.g., along the diagonal or the x/y directions):

SD_i(x, y, t_n) = Q_SD(I(x, y, t_n) - I(x_i, y_i, t_n))

where Q_SD is the quantization method of the space differential path, and SD_i denotes the spatial differential value between the current pixel unit and the associated pixel unit (x_i, y_i).
All signals involved in the three visual sensing paths are three-dimensional quantities, comprising the two spatial dimensions x, y and the temporal dimension t.
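On discrete frames, the three path outputs defined above can be sketched as follows (a software model; the uniform quantizer standing in for Q_A, Q_TD, and Q_SD and the choice of spatially associated neighbour are illustrative assumptions):

```python
import numpy as np

def quantize(x, step=1.0):
    """Hypothetical uniform quantizer standing in for Q_A / Q_TD / Q_SD."""
    return np.round(x / step) * step

def three_path_outputs(frame_now, frame_prev, shift=(0, 1)):
    """Compute intensity, time-differential and space-differential outputs.

    frame_now, frame_prev : 2-D intensity arrays I(x, y, t_n), I(x, y, t_{n-1})
    shift : offset to the spatially associated pixel (dy, dx), e.g. a
            horizontal neighbour (0, 1) or a diagonal neighbour (1, 1).
    """
    a = quantize(frame_now)                                # A(x, y, t_n)
    td = quantize(frame_now - frame_prev)                  # TD(x, y, t_n)
    neighbour = np.roll(frame_now, shift=shift, axis=(0, 1))
    sd = quantize(frame_now - neighbour)                   # SD_i(x, y, t_n)
    return a, td, sd
```

With `shift=(1, 1)` the spatially associated pixel is a diagonal neighbour, matching the "diagonal or x/y direction" examples above.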
The pixel units of the pixel array may take the following forms:
The pixel array is a binary hybrid pixel array composed of a plurality of multiplexed pixel units and a plurality of single pixel units:
the multiplexed pixel units multiplex intensity and time differential, and the single pixel units are space differential pixel units;
or the multiplexed pixel units multiplex intensity and space differential, and the single pixel units are time differential pixel units;
or the multiplexed pixel units multiplex time differential and space differential, and the single pixel units are intensity pixel units.
Alternatively, the pixel array is a ternary hybrid pixel array composed of separate intensity pixel units, time differential pixel units, and space differential pixel units.
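The binary and ternary array forms listed above amount to assigning each pixel site one role; the following sketch builds two possible mosaics (the checkerboard and cyclic tilings are illustrative design choices, not patterns mandated by the text):

```python
# Role labels (illustrative): intensity, time differential, space differential,
# and a time/space-differential multiplexed pixel.
I, TD, SD, TDSD = "I", "TD", "SD", "TD+SD"

def binary_hybrid_array(rows, cols):
    """Checkerboard of intensity pixels and TD/SD-multiplexed pixels
    (one possible binary mosaic)."""
    return [[I if (x + y) % 2 == 0 else TDSD for x in range(cols)]
            for y in range(rows)]

def ternary_hybrid_array(rows, cols):
    """Separated intensity, TD and SD pixels tiled cyclically
    (one possible ternary mosaic)."""
    roles = [I, TD, SD]
    return [[roles[(x + y) % 3] for x in range(cols)]
            for y in range(rows)]
```

Each readout path then serves exactly the sites carrying its role, e.g. the intensity path reads only the "I" sites of either mosaic.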
The visual sensor chip provided by the invention, with its three-path architecture of intensity output, time differential output, and space differential output, can greatly improve the chip's perception of spatio-temporal dynamic information and realize a high-precision, high-frame-rate, high-dynamic-range, efficient and robust visual representation.
Based on the above embodiments:
as a preferred embodiment, the multiplexed pixel units are time-and space-differentially multiplexed pixel units, and the single pixel unit is an intensity pixel unit; the intensity channels correspond to intensity pixel cells and the time-differential channels and the space-differential channels correspond to time-differential and space-differential multiplexed pixel cells.
The present embodiment provides a three-channel vision sensor of mixed pixels, in which a pixel array includes an intensity pixel unit and pixel units (spatiotemporal differential pixel units) multiplexed by temporal and spatial differences, a quantized value of an intensity of incident light of the intensity pixel unit is read out through the intensity channel, a temporal differential value of the spatiotemporal differential pixel unit is read out through the temporal differential channel, and a spatial differential value is read out through the spatial differential channel.
As a preferred embodiment, the intensity path includes an intensity storage module and an intensity quantization module. The intensity storage module stores the electric signal converted from the incident light intensity at the current pixel unit position at the current moment. The intensity quantization module performs analog-to-digital conversion on that electric signal to obtain its quantized value. The time differential path includes a time-difference storage module and a time difference and quantization module. The time-difference storage module stores the electric signals of the current pixel unit position at different moments in a ping-pong buffer mode; it comprises a first time-difference storage node and a second time-difference storage node. In the ping-pong buffer mode, when the electric signal of the previous moment is stored in the first (respectively second) time-difference storage node, the electric signal of the current moment is stored in the second (respectively first) time-difference storage node. The time difference and quantization module performs a time difference and quantization operation on the electric signal of the current moment and that of the previous moment to obtain the time differential value. The space differential path includes a space-difference storage node, which multiplexes the first time-difference storage node, and a space difference and quantization module. The space-difference storage node stores the electric signal of the current pixel unit position at the current moment; the space difference and quantization module performs a difference and quantization operation on the electric signal of the current pixel unit position and that of a spatially associated pixel unit position at the current moment to obtain the space differential value.
The first and second time-difference storage nodes may be high-speed, low-precision storage nodes, while the intensity storage module may be a low-speed, high-precision storage node, so that the two kinds of nodes complement each other; matched with post-processing, this is expected to achieve high-speed, high-precision overall performance at a small cost. The first and second time-difference storage nodes of the same pixel alternately store information of different moments in ping-pong fashion: if the electric signal of the pixel unit at the current moment is stored in the first time-difference storage node, the signal at the next moment is stored in the second node, the one after that in the first node again, and so on, so that the two nodes always hold the signals of two moments, (I(x, y, t_n), I(x, y, t_{n-1})).
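The ping-pong alternation between the two storage nodes can be sketched as follows (class and method names are illustrative, not from the specification):

```python
class PingPongStore:
    """Two storage nodes holding the electric signal of the same pixel
    at consecutive sampling instants, written in alternation."""
    def __init__(self):
        self.nodes = [None, None]   # first / second time-difference storage node
        self.write_idx = 0          # node that receives the next sample

    def sample(self, value):
        self.nodes[self.write_idx] = value
        self.write_idx ^= 1         # alternate nodes on every sample

    def pair(self):
        # Returns (I(x, y, t_n), I(x, y, t_{n-1})) once both nodes are filled.
        newest = self.nodes[self.write_idx ^ 1]
        previous = self.nodes[self.write_idx]
        return newest, previous

store = PingPongStore()
store.sample(10)     # signal at t_{n-1}
store.sample(14)     # signal at t_n
print(store.pair())  # (14, 10) -> time difference 14 - 10 = 4
```

Because the two nodes are written alternately, the previous sample is never overwritten before the time difference is formed, which is the whole point of the ping-pong scheme.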
The intensity quantization module can be arranged outside the intensity pixel unit or inside the intensity pixel unit; the time difference and quantization module and the space difference and quantization module can be arranged outside the space-time difference pixel unit or inside the space-time difference pixel unit.
In this embodiment, the intensity quantization module is disposed outside the intensity pixel unit, and the time difference and quantization module and the space difference and quantization module are disposed outside the spatio-temporal differential pixel unit.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a time differential value readout principle of a space-time differential pixel unit according to the present invention.
In reading out the time differential values, several spatio-temporal differential pixel units share one time difference and quantization module, so one readout moment can be divided into several sub-moments t_1, t_2, t_3: at sub-moment t_1 the time differential values of the spatio-temporal differential pixel units of the first region are read out, at t_2 those of the second region, and at t_3 those of the third region; continuing in this way, the time differential values of all spatio-temporal differential pixel units in the pixel array can be read out. The bold squares in the figure represent the pixels being read out at the current sub-moment.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating a principle of reading spatial differential values of a space-time differential pixel unit according to the present invention.
In reading out the space differential values, and in view of the number of space difference and quantization modules, several spatio-temporal differential pixel units share one space difference and quantization module, so one readout moment can be divided into several sub-moments t_1, t_2, t_3, t_4: at sub-moment t_1 the space differential values of the spatio-temporal differential pixel units of the first region are read out, at t_2 those of the second region, at t_3 those of the third region, and at t_4 those of the fourth region; continuing in this way, the space differential values of all spatio-temporal differential pixel units in the pixel array can be read out. The bold squares in the figure represent the pixels being read out at the current sub-moment.
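The time-sliced sharing of one quantization module can be sketched as a readout schedule. The partition into equal row blocks is an illustrative choice; the specification only requires that each region is served at its own sub-moment:

```python
def region_readout_schedule(n_rows, n_regions):
    """Divide the pixel rows into regions that share one difference-and-
    quantization module; region k is read out at sub-moment t_{k+1}."""
    rows_per_region = -(-n_rows // n_regions)  # ceiling division
    schedule = {}
    for region in range(n_regions):
        start = region * rows_per_region
        stop = min(start + rows_per_region, n_rows)
        schedule[f"t_{region + 1}"] = list(range(start, stop))
    return schedule

# 6 pixel rows served by one shared quantizer over 3 sub-moments
print(region_readout_schedule(6, 3))
# {'t_1': [0, 1], 't_2': [2, 3], 't_3': [4, 5]}
```

The trade-off stated in the text follows directly: fewer shared modules mean less hardware, but more sub-moments per readout cycle.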
There are various ways of selecting the pixel units at the spatial correlation positions, and typically, the pixel units may be pixel units at adjacent positions in the xy direction of the pixel unit at the current position or pixel units at oblique adjacent positions of the pixel unit at the current position.
For the xy-direction difference, the expression of the spatial differential path output is
SD_x(x, y, t_n) = Q_SD(I(x, y, t_n) - I(x-1, y, t_n))
SD_y(x, y, t_n) = Q_SD(I(x, y, t_n) - I(x, y-1, t_n))
For the oblique difference, the difference is taken with a diagonally adjacent pixel, e.g. SD(x, y, t_n) = Q_SD(I(x, y, t_n) - I(x-1, y-1, t_n)).
Other choices are also possible: for example, only the adjacent pixel in one particular direction may be selected and differenced, yielding difference information in that direction only; or two or more associated pixels may be selected simultaneously to improve the accuracy of the spatial difference.
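The xy-direction and oblique differences above can be computed over a whole frame at once. In this sketch Q_SD is stood in for by a simple uniform quantizer (the step size is an assumption; the specification leaves the quantization method open):

```python
import numpy as np

def q_sd(d, step=4):
    """Illustrative uniform quantizer standing in for Q_SD."""
    return np.round(d / step).astype(int)

def spatial_differences(frame):
    """frame: 2D intensity array I(x, y, t_n) at one moment.
    Returns the quantized xy-direction differences SD_x, SD_y and an
    oblique (diagonal-neighbor) difference."""
    sd_x = q_sd(frame[1:, :] - frame[:-1, :])        # I(x,y) - I(x-1,y)
    sd_y = q_sd(frame[:, 1:] - frame[:, :-1])        # I(x,y) - I(x,y-1)
    sd_diag = q_sd(frame[1:, 1:] - frame[:-1, :-1])  # I(x,y) - I(x-1,y-1)
    return sd_x, sd_y, sd_diag

I = np.array([[0, 4], [8, 16]])
sd_x, sd_y, sd_diag = spatial_differences(I)
print(sd_x.tolist())     # [[2, 3]]
print(sd_y.tolist())     # [[1], [2]]
print(sd_diag.tolist())  # [[4]]
```

Note that each difference array is one row or column smaller than the frame, since border pixels lack the corresponding neighbor.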
The quantization method of the time difference and quantization module and of the space difference and quantization module may be multi-valued (> 1 bit) or single-valued (positive and negative pulses). The signal acquisition moments may be full-array synchronous with identical time intervals, full-array synchronous with variable time intervals, or full-array asynchronous.
In the field of digital signal processing, quantization mainly refers to the process of converting an analog signal into a digital signal. Sampling and quantization of the signal are typically achieved by an analog-to-digital converter (ADC).
As a preferred embodiment, the intensity quantization module is arranged inside the intensity pixel unit, adopting a pixel-level signal readout mode; or it is arranged outside the intensity pixel units and shared by the intensity pixel units of the same column, adopting a column-level signal readout mode. The time difference and quantization module is arranged inside a pixel unit in which time difference and space difference are multiplexed, adopting a pixel-level signal readout mode; or it is arranged outside such pixel units and shared by the time-and-space-differentially multiplexed pixel units of the same column, adopting a column-level signal readout mode. The space difference and quantization module is arranged inside a pixel unit in which time difference and space difference are multiplexed, adopting a pixel-level signal readout mode; or it is arranged outside such pixel units and shared by the time-and-space-differentially multiplexed pixel units of the same column, adopting a column-level signal readout mode.
In this embodiment, if the time difference and quantization module and the space difference and quantization module are disposed outside the spatio-temporal differential pixel unit, the total number of quantization modules is reduced and hardware resource consumption is lowered. If they are arranged inside the spatio-temporal differential pixel unit, flexibility is improved and output delay is reduced.
The present invention relates to hybrid arrays of two different pixel types or of three different pixel types, and the arrangement of the hybrid array is not unique; for example, besides the arrangement of fig. 1, the arrangement of fig. 5 may also be used.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating a second architecture of a visual sensor chip based on a hybrid array according to the present invention.
Referring to fig. 6, fig. 6 is a schematic diagram of a third architecture of a vision sensor chip based on a hybrid array according to the present invention.
Referring to fig. 7a, fig. 7a is a schematic diagram illustrating a third principle of time difference and space difference readout of the spatio-temporal difference pixel unit according to the present invention.
Referring to fig. 7b, fig. 7b is a schematic diagram illustrating the time difference value and the space difference value readout principle of the space-time difference pixel unit according to the present invention.
The intensity quantization module of this vision sensing chip is arranged outside the intensity pixel unit, and the time difference and quantization module and the space difference and quantization module are both arranged inside the spatio-temporal differential pixel unit.
Of course, in the vision sensing chip of the present invention, the arrangement of the intensity quantization module outside the intensity pixel unit and the arrangements of the time difference and quantization module and the space difference and quantization module inside or outside the spatio-temporal differential pixel unit may be arbitrarily combined; the present invention is not limited thereto.
As a preferred embodiment, a pulse generating module is arranged in each pixel unit in the pixel array;
or, all pixel units in the pixel array are commonly connected with a pulse generation module;
or dividing the pixel array into a plurality of subareas, wherein all pixel units in each subarea are commonly connected with a pulse generation module;
the pulse generation module is used for generating a trigger signal at fixed time intervals, or at adaptive, programmable variable time intervals, so as to control the exposure start time and exposure duration of the photosensitive module; pixel units connected to the same pulse generation module are exposed synchronously, while pixel units connected to different pulse generation modules are exposed synchronously or asynchronously;
the photosensitive module is arranged in the pixel unit and is used for converting the optical signal of the current pixel unit position into an analog electric signal.
In this embodiment, the vision sensor chip further includes a trigger pulse generator, which generates a trigger signal to control the exposure of the photosensitive module, i.e. to determine the signal acquisition moment t_n. If a trigger pulse generator is designed into each pixel unit, full-array asynchronous exposure can be adopted: each trigger pulse generator can then independently adjust the moment at which the computation of the spatio-temporal differential signal is triggered according to the light intensity sensed by its own pixel unit, so the trigger moments of the pixel units differ. A pixel unit can then output information at any time, which improves flexibility and reduces output delay. Of course, full-array synchronous exposure can also be configured in this case if needed. If several pixel units share the same trigger pulse generator, those pixel units are exposed synchronously.
The trigger pulse generator may generate the trigger signal at identical time intervals or at adaptive, programmable variable intervals.
If all pixel units in the array share a trigger pulse generator, all pixel units need to be exposed at the same moment, and the output of the pixel units needs to follow a certain rule.
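The two triggering modes can be sketched as follows. The adaptive policy shown (a brighter pixel shortens its own interval) is an assumption for illustration; the specification only requires that the variable intervals be adaptive and programmable:

```python
def trigger_times(t_end, interval):
    """Fixed-interval trigger signals: full-array synchronous, same spacing."""
    t, times = 0.0, []
    while t <= t_end:
        times.append(round(t, 6))
        t += interval
    return times

def adaptive_trigger_times(t_end, base_interval, light_level):
    """Illustrative adaptive policy: a brighter pixel shortens its own
    trigger interval (per-pixel asynchronous exposure). The scaling rule
    is an assumption, not taken from the specification."""
    interval = base_interval / max(light_level, 1e-6)
    return trigger_times(t_end, interval)

print(trigger_times(1.0, 0.25))               # [0.0, 0.25, 0.5, 0.75, 1.0]
print(adaptive_trigger_times(1.0, 0.5, 2.0))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

When every pixel runs its own generator, each pixel effectively owns its own `adaptive_trigger_times` sequence; when all pixels share one generator, the single fixed sequence applies array-wide.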
As a preferred embodiment, the exposure mode of each pixel unit in the pixel array is global exposure or rolling exposure.
Of course, the manner in which the intensity path, the time differential path, and the space differential path are subjected to global exposure or rolling exposure may be arbitrarily combined, and the present invention is not particularly limited herein.
As a preferred embodiment, in the case where the pixel unit is provided with a color filter, the output color type of the corresponding path of the pixel unit is a color value; in the case that the pixel unit is not provided with a color filter, the output color type of the corresponding channel of the pixel unit is a gray value.
If a pixel unit is covered with a color filter, the information acquired by that pixel belongs to one color channel only. Typical color filters are combinations of the three colors red, green and blue, called RGB type; other color channels may also be used, such as a CMY array of the three complementary colors (cyan, magenta, yellow). Pixels of the same type may differ in color channel, e.g. X-channel, Y-channel and Z-channel spatio-temporal differential pixels. For the spatial difference, the difference may be taken between pixels of the same color, or between pixels of different colors (e.g. subtracting a Y-color pixel from an X-color pixel).
In addition, an externally programmable demosaicing device can be embedded in the pixel unit; the output values of all other color channels at the position of an X-channel pixel are obtained through a demosaicing algorithm, i.e. by interpolation from selected surrounding pixel points.
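The interpolation step can be sketched with a minimal neighbor-averaging scheme; real demosaicing algorithms are considerably more elaborate, and the function below is an illustrative stand-in only:

```python
import numpy as np

def demosaic_channel(mosaic, mask):
    """Fill missing samples of one color channel by averaging the available
    3x3 neighbors (minimal nearest-neighbor interpolation).
    mosaic: 2D array with valid samples where mask is True."""
    out = mosaic.astype(float).copy()
    h, w = mosaic.shape
    for x in range(h):
        for y in range(w):
            if mask[x, y]:
                continue  # this position already carries a sample of the channel
            neighbors = [mosaic[i, j]
                         for i in (x - 1, x, x + 1)
                         for j in (y - 1, y, y + 1)
                         if 0 <= i < h and 0 <= j < w and mask[i, j]]
            out[x, y] = sum(neighbors) / len(neighbors) if neighbors else 0.0
    return out

mask = np.array([[True, False], [False, True]])   # where the channel was sampled
mosaic = np.array([[10, 0], [0, 20]])
print(demosaic_channel(mosaic, mask).tolist())    # [[10.0, 15.0], [15.0, 20.0]]
```

The same routine would be run once per color channel, each with its own sampling mask, to recover full-color output at every pixel position.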
Referring to fig. 8, fig. 8 is a schematic diagram of a hybrid array-based visual sensor chip according to the present invention.
Referring to fig. 9, fig. 9 is a schematic diagram of a hybrid array-based visual sensor chip according to the present invention.
The invention also provides a visual sensor chip based on the hybrid array, comprising a pixel array and a correspondingly arranged visual sensing path; the visual sensing path includes an intensity path, a time differential path and a space differential path; the pixel array comprises intensity pixel units, time difference pixel units and space difference pixel units; the intensity path corresponds to the intensity pixel units, the time differential path to the time difference pixel units, and the space differential path to the space difference pixel units; the intensity path is used for determining the quantized value of the electric signal converted from the incident light intensity at the current intensity pixel unit position at the current moment; the time differential path is used for performing a time difference and quantization operation, in the charge, analog or digital domain, on the signal of the current time difference pixel unit position at the current moment and its signal at the previous moment to obtain a time differential value; the space differential path is used for performing a space difference and quantization operation, in the charge, analog or digital domain, on the signal of the current space difference pixel unit position at the current moment and the signal of a spatially associated space difference pixel unit position at the current moment to obtain a space differential value, the spatially associated pixel unit being any one or more pixel units in the pixel array other than the current space difference pixel unit.
The main difference between this embodiment and the vision sensor chip of the above embodiment is that the pixel array is divided into three different types of pixel units, namely intensity pixel units, time difference pixel units and space difference pixel units. The three types of pixel units can be arbitrarily combined in the pixel array, and corresponding visual sensing paths can be arranged for the different arrangements, so as to quantize and read out the light intensity quantized value of the intensity pixel units, the time differential value of the time difference pixel units and the space differential value of the space difference pixel units in a high-precision multi-valued format (≥ 1 bit). Through this three-channel vision sensor chip architecture, the invention can greatly improve the perception of spatio-temporal dynamic information and achieve a high-precision, high-frame-rate, high-dynamic-range, efficient and robust visual representation.
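For this ternary mixed array, each pixel position is served by exactly one of the three paths. The 2x2 tiling pattern below is an illustrative choice, not one of the patent's figures:

```python
# Pixel-type map for a ternary mixed array: 'I' = intensity pixel,
# 'T' = time-differential pixel, 'S' = space-differential pixel.
PATTERN = [['I', 'T'],
           ['S', 'I']]

def route_to_paths(array_shape):
    """Assign every pixel position to the visual sensing path that reads
    it out (intensity / time differential / space differential)."""
    paths = {'I': [], 'T': [], 'S': []}
    rows, cols = array_shape
    for x in range(rows):
        for y in range(cols):
            kind = PATTERN[x % 2][y % 2]
            paths[kind].append((x, y))
    return paths

routing = route_to_paths((2, 2))
print(routing['I'])  # [(0, 0), (1, 1)]
print(routing['T'])  # [(0, 1)]
print(routing['S'])  # [(1, 0)]
```

Other tilings (or a non-periodic mix) simply change `PATTERN`; the per-path readout logic stays the same, which is what makes the arrangement freely combinable.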
As a preferred embodiment, the intensity path includes an intensity storage module and an intensity quantization module. The intensity storage module stores the electric signal converted from the incident light intensity at the current intensity pixel unit position at the current moment. The intensity quantization module performs analog-to-digital conversion on that electric signal to obtain its quantized value. The time differential path includes a time-difference storage module and a time difference and quantization module. The time-difference storage module stores the electric signals of the current time difference pixel unit position at different moments in a ping-pong buffer mode; it comprises a first time-difference storage node and a second time-difference storage node. In the ping-pong buffer mode, when the electric signal of the previous moment is stored in the first (respectively second) time-difference storage node, the electric signal of the current moment is stored in the second (respectively first) time-difference storage node. The time difference and quantization module performs a time difference and quantization operation on the electric signal of the current time difference pixel unit position at the current moment and that of the previous moment to obtain the time differential value. The space differential path includes a space-difference storage node and a space difference and quantization module. The space-difference storage node stores the electric signal of the current space difference pixel unit position at the current moment; the space difference and quantization module performs a difference and quantization operation on the electric signal of the current space difference pixel unit position and that of a spatially associated space difference pixel unit position at the current moment to obtain the space differential value.
The first and second time-difference storage nodes may be high-speed, low-precision storage nodes, and the intensity storage module may be a low-speed, high-precision storage node; the first and second time-difference storage nodes of the same pixel store in ping-pong alternation. Ping-pong buffering means that if the electric signal of the pixel unit at the current moment is stored in the first time-difference storage node, the signal at the next moment is stored in the second node, the one after that in the first node again, and so on alternately, so that the two nodes hold the signals of two moments, (I(x, y, t_n), I(x, y, t_{n-1})).
The intensity quantization module can be arranged outside the intensity pixel unit or inside the intensity pixel unit; the time difference and quantization module and the space difference and quantization module can be arranged outside the space-time difference pixel unit or inside the space-time difference pixel unit.
In this embodiment, the intensity quantization module is disposed outside the intensity pixel unit, the time difference and quantization module is disposed outside the time difference pixel unit, and the space difference and quantization module is disposed outside the space difference pixel unit.
There are various ways of selecting the pixel units at the spatial correlation positions, and typically, the pixel units may be pixel units at adjacent positions in the xy direction of the pixel unit at the current position or pixel units at oblique adjacent positions of the pixel unit at the current position.
For the xy-direction difference, the expression of the spatial differential path output is
SD_x(x, y, t_n) = Q_SD(I(x, y, t_n) - I(x-1, y, t_n))
SD_y(x, y, t_n) = Q_SD(I(x, y, t_n) - I(x, y-1, t_n))
For the oblique difference, the difference is taken with a diagonally adjacent pixel, e.g. SD(x, y, t_n) = Q_SD(I(x, y, t_n) - I(x-1, y-1, t_n)).
Other choices are also possible: for example, only the adjacent pixel in one particular direction may be selected and differenced, yielding difference information in that direction only; or two or more associated pixels may be selected simultaneously to improve the accuracy of the spatial difference.
The quantization method of the time difference and quantization module and of the space difference and quantization module may be multi-valued (> 1 bit) or single-valued (positive and negative pulses). The signal acquisition moments may be full-array synchronous with identical time intervals, full-array synchronous with variable time intervals, or full-array asynchronous.
In the field of digital signal processing, quantization mainly refers to the process of converting an analog signal into a digital signal. Sampling and quantization of the signal are typically achieved by an analog-to-digital converter (ADC).
As a preferred embodiment, the intensity quantization module is arranged inside the intensity pixel unit, adopting a pixel-level signal readout mode; or it is arranged outside the intensity pixel units and shared by the intensity pixel units of the same column, adopting a column-level signal readout mode. The time difference and quantization module is arranged inside a time difference pixel unit, adopting a pixel-level signal readout mode; or it is arranged outside the time difference pixel units and shared by the time difference pixel units of the same column, adopting a column-level signal readout mode. The space difference and quantization module is arranged inside a space difference pixel unit, adopting a pixel-level signal readout mode; or it is arranged outside the space difference pixel units and shared by the space difference pixel units of the same column, adopting a column-level signal readout mode.
In this embodiment, if the time difference and quantization module is disposed outside the time difference pixel unit and the space difference and quantization module is disposed outside the space difference pixel unit, the total quantization module number is reduced, and the hardware resource consumption is reduced.
If the time difference and quantization module is arranged in the time difference pixel unit and the space difference and quantization module is arranged in the space difference pixel unit, the flexibility is improved, and the output delay is reduced.
Of course, in the vision sensing chip of the present invention, the arrangement of the intensity quantization module outside the intensity pixel unit, of the time difference and quantization module inside or outside the time difference pixel unit, and of the space difference and quantization module inside or outside the space difference pixel unit may be arbitrarily combined; the present invention is not limited thereto.
As a preferred embodiment, a pulse generating module is arranged in each pixel unit in the pixel array;
or, all pixel units in the pixel array are commonly connected with a pulse generation module;
or dividing the pixel array into a plurality of subareas, wherein all pixel units in each subarea are commonly connected with a pulse generation module;
the pulse generation module is used for generating a trigger signal at fixed time intervals, or at adaptive, programmable variable time intervals, so as to control the exposure start time and exposure duration of the photosensitive module; pixel units connected to the same pulse generation module are exposed synchronously, while pixel units connected to different pulse generation modules are exposed synchronously or asynchronously;
The photosensitive module is arranged in the pixel unit and is used for converting the optical signal of the current pixel unit position into an analog electric signal.
In this embodiment, the vision sensor chip further includes a trigger pulse generator, which generates a trigger signal to control the exposure of the photosensitive module, i.e. to determine the signal acquisition moment t_n. If a trigger pulse generator is designed into each pixel unit, full-array asynchronous exposure can be adopted: each trigger pulse generator can then independently adjust the moment at which the computation of the spatio-temporal differential signal is triggered according to the light intensity sensed by its own pixel unit, so the trigger moments of the pixel units differ. A pixel unit can then output information at any time, which improves flexibility and reduces output delay. Of course, full-array synchronous exposure can also be configured in this case if needed. If several pixel units share the same trigger pulse generator, those pixel units are exposed synchronously.
The trigger pulse generator may generate the trigger signal at identical time intervals or at adaptive, programmable variable intervals.
If all pixel units in the array share a trigger pulse generator, all pixel units need to be exposed at the same moment, and the output of the pixel units needs to follow a certain rule.
As a preferred embodiment, the exposure mode of each pixel unit in the pixel array is global exposure or rolling exposure.
Of course, the manner in which the intensity path, the time differential path, and the space differential path are subjected to global exposure or rolling exposure may be arbitrarily combined, and the present invention is not particularly limited herein.
As a preferred embodiment, in the case where the pixel unit is provided with a color filter, the output color type of the corresponding path of the pixel unit is a color value; in the case that the pixel unit is not provided with a color filter, the output color type of the corresponding channel of the pixel unit is a gray value.
If a pixel unit is covered with a color filter, the information acquired by that pixel belongs to one color channel only. Typical color filters are combinations of the three colors red, green and blue, called RGB type; other color channels may also be used, such as a CMY array of the three complementary colors (cyan, magenta, yellow). Pixels of the same type may differ in color channel, e.g. X-channel, Y-channel and Z-channel spatio-temporal differential pixels. For the spatial difference, the difference may be taken between pixels of the same color, or between pixels of different colors (e.g. subtracting a Y-color pixel from an X-color pixel).
In addition, an externally programmable demosaicing device can be embedded in the pixel unit; the output values of all other color channels at the position of an X-channel pixel are obtained through a demosaicing algorithm, i.e. by interpolation from selected surrounding pixel points.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (13)

1. A vision sensor chip based on a hybrid array, characterized by comprising a pixel array and a correspondingly arranged visual sensing path; the visual sensing path comprises an intensity path, a time differential path and a space differential path; the pixel array comprises a plurality of multiplexing pixel units and a plurality of single pixel units; the multiplexing pixel units are pixel units in which two of the elements intensity, time difference and space difference are multiplexed; the single pixel unit is a pixel unit of the remaining element not multiplexed in the multiplexing pixel units; the intensity path is used for determining a quantized value of the electric signal converted from the light intensity of the incident light at the current pixel unit position at the current moment;
The time differential path is used for carrying out differential and quantization operation on a signal of the current pixel unit position at the current moment and a signal of the current pixel unit position at the previous moment in a charge, analog or digital domain to obtain a time differential value;
the space difference path is used for carrying out space difference and quantization operation on a signal of a current pixel unit position at the current moment and a signal of a space associated pixel unit position at the current moment in charge, analog or digital domain to obtain a space difference value, and the space associated pixel unit is any one or more pixel units except the current pixel unit in the pixel array.
2. The hybrid array-based vision sensor chip of claim 1, wherein the multiplexed pixel units are pixel units multiplexed for time difference and space difference, and the single pixel units are intensity pixel units; the intensity path corresponds to the intensity pixel units, and the time difference path and the space difference path correspond to the pixel units multiplexed for time difference and space difference.
3. The hybrid array-based vision sensor chip of claim 2, wherein:
the intensity path comprises an intensity storage module and an intensity quantization module;
the intensity storage module is used for storing the electric signal converted from the incident light intensity at the current pixel unit position at the current moment;
the intensity quantization module is used for performing analog-to-digital conversion on the electric signal converted from the incident light intensity at the current pixel unit position at the current moment to obtain its quantized value;
the time difference path comprises a time difference storage module and a time difference and quantization module;
the time difference storage module is used for storing the electric signals of the current pixel unit position at different moments in a ping-pong buffer mode; the time difference storage module comprises a first time difference storage node and a second time difference storage node; the ping-pong buffering is configured so that, while the electric signal of the current pixel unit position at the previous moment is held in the first/second time difference storage node, the electric signal of the current pixel unit position at the current moment is stored in the second/first time difference storage node;
the time difference and quantization module is used for performing a time difference and quantization operation on the electric signal of the current pixel unit position at the current moment and the electric signal of the current pixel unit position at the previous moment to obtain a time difference value;
the space difference path comprises a space difference storage node, which multiplexes the first time difference storage node, and a space difference and quantization module;
the space difference storage node is used for storing the electric signal of the current pixel unit position at the current moment;
the space difference and quantization module is used for performing a difference and quantization operation on the electric signal of the current pixel unit position at the current moment and the electric signal of the spatially associated pixel unit position at the current moment to obtain a space difference value.
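The ping-pong buffering of claim 3 alternates two storage nodes so that, at every frame, one node holds the previous sample while the other receives the current one. A minimal abstract model (names and the scalar samples are illustrative assumptions, not the claimed circuit):

```python
class PingPongStore:
    """Two storage nodes used alternately, modeling claim 3's first and
    second time difference storage nodes at one pixel position."""

    def __init__(self):
        self.nodes = [None, None]
        self.write_idx = 0  # node that receives the next sample

    def store(self, sample):
        self.nodes[self.write_idx] = sample
        self.write_idx ^= 1  # swap roles for the next frame

    def current_and_previous(self):
        # After a store, the just-written node holds the current sample
        # and the other node holds the previous one (None until two
        # samples have been stored).
        cur = self.nodes[self.write_idx ^ 1]
        prev = self.nodes[self.write_idx]
        return cur, prev

store = PingPongStore()
store.store(0.40)          # signal at the previous moment
store.store(0.65)          # signal at the current moment
cur, prev = store.current_and_previous()
time_diff = cur - prev     # 0.25: input to the difference-and-quantization step
print(round(time_diff, 2))
```

The design choice this models is that no copy is ever made between nodes: only the read/write roles swap each frame, which is why the same first node can also be multiplexed as the space difference storage node.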
4. The hybrid array-based vision sensor chip of claim 3, wherein the intensity quantization module is arranged inside the intensity pixel unit, using pixel-level signal readout; or the intensity quantization module is arranged outside the intensity pixel units and shared by the intensity pixel units of the same column, using column-level signal readout;
the time difference and quantization module is arranged inside the pixel unit multiplexed for time difference and space difference, using pixel-level signal readout; or it is arranged outside those pixel units and shared by the pixel units multiplexed for time difference and space difference in the same column, using column-level signal readout;
the space difference and quantization module is arranged inside the pixel unit multiplexed for time difference and space difference, using pixel-level signal readout; or it is arranged outside those pixel units and shared by the pixel units multiplexed for time difference and space difference in the same column, using column-level signal readout.
5. The hybrid array-based vision sensor chip of claim 1, wherein a pulse generation module is arranged in each pixel unit of the pixel array;
or all pixel units in the pixel array are connected to one common pulse generation module;
or the pixel array is divided into a plurality of subareas, and all pixel units in each subarea are connected to one common pulse generation module;
the pulse generation module is used for generating a trigger signal at fixed time intervals, or at adaptive, programmable variable time intervals, so as to control the exposure start time and exposure duration of the photosensitive module, such that pixel units connected to the same pulse generation module are exposed synchronously, and pixel units connected to different pulse generation modules are exposed synchronously or asynchronously;
the photosensitive module is arranged in the pixel unit and is used for converting the optical signal at the current pixel unit position into an analog electric signal.
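The two trigger modes of claim 5 — fixed intervals versus programmable variable intervals — can be sketched as trigger-time generators. Function names, time units, and the specific interval lists are illustrative assumptions:

```python
def trigger_times(start, count, interval):
    """Fixed-interval mode: one trigger signal every `interval` time units."""
    return [start + k * interval for k in range(count)]

def adaptive_trigger_times(start, intervals):
    """Programmable variable-interval mode: each entry in `intervals` is
    the gap before the *next* trigger, so exposure timing can adapt to
    scene dynamics (e.g. shorter gaps during fast motion)."""
    times, t = [], start
    for gap in intervals:
        times.append(t)
        t += gap
    return times

# Pixel units attached to one module fire on one shared schedule;
# units attached to different modules may use different schedules.
print(trigger_times(0.0, 4, 10.0))                    # [0.0, 10.0, 20.0, 30.0]
print(adaptive_trigger_times(0.0, [5.0, 5.0, 20.0]))  # [0.0, 5.0, 10.0]
```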
6. The hybrid array-based vision sensor chip of claim 1, wherein the exposure mode of each pixel unit in the pixel array is global exposure or rolling exposure.
7. The hybrid array-based vision sensor chip of any one of claims 1 to 6, wherein, in the case where a pixel unit is provided with a color filter, the output of the path corresponding to that pixel unit is a color value; and in the case where a pixel unit is not provided with a color filter, the output of the path corresponding to that pixel unit is a gray value.
8. A vision sensor chip based on a hybrid array, characterized by comprising a pixel array and correspondingly arranged visual sensing paths; the visual sensing paths comprise an intensity path, a time difference path and a space difference path; the pixel array comprises intensity pixel units, time difference pixel units and space difference pixel units; the intensity path corresponds to the intensity pixel units, the time difference path corresponds to the time difference pixel units, and the space difference path corresponds to the space difference pixel units;
the intensity path is used for determining a quantized value of the electric signal converted from the incident light intensity at the current intensity pixel unit position at the current moment;
the time difference path is used for performing a time difference and quantization operation, in the charge, analog or digital domain, on the signal of the current time difference pixel unit position at the current moment and the signal of the current time difference pixel unit position at the previous moment to obtain a time difference value;
the space difference path is used for performing a space difference and quantization operation, in the charge, analog or digital domain, on the signal of the current space difference pixel unit position at the current moment and the signal of a spatially associated space difference pixel unit position at the current moment to obtain a space difference value, the spatially associated pixel unit being any one or more pixel units in the pixel array other than the current space difference pixel unit.
9. The hybrid array-based vision sensor chip of claim 8, wherein:
the intensity path comprises an intensity storage module and an intensity quantization module;
the intensity storage module is used for storing the electric signal converted from the incident light intensity at the current intensity pixel unit position at the current moment;
the intensity quantization module is used for performing analog-to-digital conversion on the electric signal converted from the incident light intensity at the current intensity pixel unit position at the current moment to obtain its quantized value;
the time difference path comprises a time difference storage module and a time difference and quantization module;
the time difference storage module is used for storing the electric signals of the current time difference pixel unit position at different moments in a ping-pong buffer mode; the time difference storage module comprises a first time difference storage node and a second time difference storage node; the ping-pong buffering is configured so that, while the electric signal of the current time difference pixel unit position at the previous moment is held in the first/second time difference storage node, the electric signal of the current time difference pixel unit position at the current moment is stored in the second/first time difference storage node;
the time difference and quantization module is used for performing a time difference and quantization operation on the electric signal of the current time difference pixel unit position at the current moment and the electric signal of the current time difference pixel unit position at the previous moment to obtain a time difference value;
the space difference path comprises a space difference storage node and a space difference and quantization module;
the space difference storage node is used for storing the electric signal of the current space difference pixel unit position at the current moment;
the space difference and quantization module is used for performing a difference and quantization operation on the electric signal of the current space difference pixel unit position at the current moment and the electric signal of the spatially associated space difference pixel unit position at the current moment to obtain a space difference value.
10. The hybrid array-based vision sensor chip of claim 9, wherein the intensity quantization module is arranged inside the intensity pixel unit, using pixel-level signal readout; or the intensity quantization module is arranged outside the intensity pixel units and shared by the intensity pixel units of the same column, using column-level signal readout;
the time difference and quantization module is arranged inside the time difference pixel unit, using pixel-level signal readout; or it is arranged outside the time difference pixel units and shared by the time difference pixel units of the same column, using column-level signal readout;
the space difference and quantization module is arranged inside the space difference pixel unit, using pixel-level signal readout; or it is arranged outside the space difference pixel units and shared by the space difference pixel units of the same column, using column-level signal readout.
11. The hybrid array-based vision sensor chip of claim 8, wherein a pulse generation module is arranged in each pixel unit of the pixel array;
or all pixel units in the pixel array are connected to one common pulse generation module;
or the pixel array is divided into a plurality of subareas, and all pixel units in each subarea are connected to one common pulse generation module;
the pulse generation module is used for generating a trigger signal at fixed time intervals, or at adaptive, programmable variable time intervals, so as to control the exposure start time and exposure duration of the photosensitive module, such that pixel units connected to the same pulse generation module are exposed synchronously, and pixel units connected to different pulse generation modules are exposed synchronously or asynchronously;
the photosensitive module is arranged in the pixel unit and is used for converting the optical signal at the current pixel unit position into an analog electric signal.
12. The hybrid array-based vision sensor chip of claim 8, wherein the exposure mode of each pixel unit in the pixel array is global exposure or rolling exposure.
13. The hybrid array-based vision sensor chip of any one of claims 8 to 12, wherein, in the case where a pixel unit is provided with a color filter, the output of the path corresponding to that pixel unit is a color value; and in the case where a pixel unit is not provided with a color filter, the output of the path corresponding to that pixel unit is a gray value.
CN202311420673.8A 2023-10-30 2023-10-30 Visual sensor chip based on hybrid array Pending CN117692811A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311420673.8A CN117692811A (en) 2023-10-30 2023-10-30 Visual sensor chip based on hybrid array


Publications (1)

Publication Number Publication Date
CN117692811A (en) 2024-03-12

Family

ID=90130820




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination