CN117692809A - Visual sensor chip based on pixel fusion technology - Google Patents


Info

Publication number: CN117692809A
Application number: CN202311420671.9A
Authority: CN
Other languages: Chinese (zh)
Prior art keywords: fusion, pixel, module, spatial, signal
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: 赵蓉, 陈雨过, 王韬毅, 林逸晗, 施路平
Original and current assignee: Tsinghua University (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Application filed by Tsinghua University, with priority to CN202311420671.9A

Classifications

  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

The invention provides a vision sensor chip based on a pixel fusion technology, comprising a pixel array, an intensity path, a time differential path and a space differential path. The fusion technology fuses the signals of a plurality of pixel units within a fused-pixel range into one signal for output. The intensity path determines a quantized value of the electrical signal converted from the incident light intensity of the fused pixel. The time differential path performs differential, fusion and quantization operations, in the charge domain, analog domain or digital domain, on the signal of the current fused-pixel position at the current moment and the signal of the same position at the previous moment. The space differential path performs differential, fusion and quantization operations, in the charge domain, analog domain or digital domain, on the signal of the current fused-pixel position at the current moment and the signals of its spatially associated fused-pixel positions at the same moment. Introducing the pixel fusion technology into the three-path vision sensing chip architecture improves the output signal-to-noise ratio and reduces the transmitted data volume, relieving the bandwidth pressure.

Description

Visual sensor chip based on pixel fusion technology
Technical Field
The invention relates to the technical field of visual sensing, in particular to a visual sensor chip based on a pixel fusion technology.
Background
A vision sensor is an instrument that uses a photosensitive element to convert external optical information into an electrical-signal image, and the most commonly used vision sensor in the prior art is the CIS (CMOS Image Sensor). The CIS is an image sensor based on the frame-sampling principle and is widely applied in the camera modules of mobile phones and cameras; it has the advantages of high color fidelity and high image quality, but the dynamic range of the acquired image signal is small, and it is difficult to raise the shooting speed under a limited bandwidth. Another, novel image sensor, the DVS (Dynamic Vision Sensor), is characterized by sensing changes in a dynamic scene in the form of a sparse event stream; its shooting speed is high and the dynamic range of the acquired image signal is large, but it suffers from low resolution and excessive loss of effective information.
In the prior art, the frame rate of the video frames collected by the CIS is not high, so the number of frames collected within a given time period is limited; and when a large-area flash or a drastic change in light intensity occurs, the DVS may fail to output an image normally, so spatial changes in the scene cannot be obtained, and the DVS is moreover susceptible to noise interference. There is therefore an urgent need for an improved vision sensor.
Disclosure of Invention
The invention provides a visual sensor chip based on a pixel fusion technology, which is used for overcoming the defects of the prior-art sensors, namely the pronounced noise problem and the limited bandwidth: the pixel fusion technology is introduced into a three-path visual sensor chip architecture, so that the output signal-to-noise ratio can be effectively improved and the required transmission bandwidth further reduced.
The invention provides a visual sensor chip based on a pixel fusion technology, comprising a pixel array, an intensity path, a time differential path and a space differential path. The pixel array comprises a plurality of pixel units. The fusion technology fuses the signals of a plurality of pixel units within a fused-pixel range into one signal for output; the signals of a fused pixel are the signals of the pixel units within its fusion range. The intensity path determines a quantized value of the electrical signal converted from the incident light intensity of the fused pixel. The time differential path performs time-differential, fusion and quantization operations, in the charge domain, analog domain or digital domain, on the signal of the current fused-pixel position at the current moment and the signal of the same position at the previous moment, obtaining the time differential value of the fused pixel. The space differential path performs spatial-differential, fusion and quantization operations, in the charge domain, analog domain or digital domain, on the signal of the current fused-pixel position at the current moment and the signals of the spatially associated fused-pixel positions at the same moment, obtaining the spatial differential value of the fused pixel; the spatially associated fused pixels are any one or more fused pixels in the array other than the current fused pixel.
According to the visual sensor chip based on the pixel fusion technology, the intensity path comprises a first intensity fusion module and a first intensity quantization module; the first intensity fusion module is used for fusing the analog signals of the pixel units of the current fusion pixel position at the current moment to obtain a first intensity fusion signal; the first intensity quantization module is used for performing analog-digital conversion on the first intensity fusion signal to obtain a quantized value of an electric signal converted by the light intensity of the incident light of the fusion pixel.
According to the visual sensor chip based on the pixel fusion technology, the intensity path comprises a second intensity quantization module and a second intensity fusion module; the second intensity quantization module is used for performing analog-to-digital conversion on the analog signals of the plurality of pixel units at the current fused-pixel position at the current moment to obtain the quantized value of the electrical signal converted from the incident light intensity of each pixel unit; the second intensity fusion module is used for fusing the quantized incident-light intensities of the pixel units to obtain the quantized value of the electrical signal converted from the incident light intensity of the fused pixel.
According to the vision sensor chip based on the pixel fusion technology, the time differential path comprises a first time-differential fusion module, a first time differential module and a first time quantization module; the first time-differential fusion module is used for fusing the electrical signals of the pixel units to obtain the first time-differential fusion signal of the current fused-pixel position at each moment; the first time differential module is used for performing a time-differential operation on the first time-differential fusion signal of the current fused-pixel position at the current moment and that at the previous moment to obtain the time differential value of the fused pixel; during fusion and time differencing, the analog-to-digital conversion is completed by the first time quantization module.
According to the vision sensor chip based on the pixel fusion technology, the time difference path comprises a second time difference module, a second time quantization module and a second time difference fusion module; the second time difference module is used for determining a time difference value of each pixel unit; the time difference value of each pixel unit is obtained by performing time difference operation on the electric signal of the current pixel unit position at the current moment and the electric signal of the current pixel unit position at the previous moment; the second time difference fusion module is used for fusing the time difference values of the pixel units to obtain the time difference values of the fused pixels; and in the fusion and time difference process, the second time quantization module is used for completing the conversion of analog and digital signals.
According to the vision sensor chip based on the pixel fusion technology, the space difference channel comprises a first space difference fusion module, a first space difference module and a first space quantization module; the first spatial difference fusion module is used for fusing the electrical signals of the pixel units to obtain a first spatial difference fusion signal of the current fusion pixel position at the current moment; the first spatial difference module is used for performing spatial difference operation on a fusion signal for the first spatial difference of the current fusion pixel position at the current moment and a fusion signal for the first spatial difference of the current spatial association fusion pixel position at the current moment to obtain a spatial difference value of the fusion pixel; and in the fusion and space difference process, the first space quantization module is used for completing the conversion of analog and digital signals.
According to the vision sensor chip based on the pixel fusion technology, the space differential path comprises a second space differential module, a second space quantization module and a second space differential fusion module; the second spatial difference module is used for determining a spatial difference value of each pixel unit; the spatial difference value of each pixel unit is obtained by performing spatial difference operation on an electric signal of the current pixel unit position at the current moment and an electric signal of the spatial associated pixel position at the current moment; the second spatial difference fusion module is used for fusing the spatial difference values of the pixel units to obtain the spatial difference values of the fused pixels; and in the fusion and space difference process, the second space quantization module is used for completing the conversion of analog and digital signals.
According to the vision sensor chip based on the pixel fusion technology, a pulse generation module is arranged in each pixel unit of the pixel array; or all pixel units in the pixel array share one pulse generation module; or the pixel array is divided into a plurality of sub-areas, with all pixel units in each sub-area sharing one pulse generation module. The pulse generation module generates trigger signals at fixed time intervals, or at adaptive, programmable variable time intervals, to control the exposure start time and exposure duration of the photosensitive module; pixel units connected to the same pulse generation module are exposed synchronously, while pixel units connected to different pulse generation modules may be exposed synchronously or asynchronously. The photosensitive module is arranged in the pixel unit and converts the optical signal at the current pixel-unit position into an analog electrical signal.
According to the vision sensor chip based on the pixel fusion technology, the exposure mode of each pixel unit in the pixel array is global exposure or rolling exposure.
According to the visual sensor chip based on the pixel fusion technology, under the condition that the pixel units are provided with the color filters, the output color types of the corresponding passages of the pixel units are color values; and under the condition that the pixel unit is not provided with a color filter, the output color type of the corresponding passage of the pixel unit is a gray value.
The invention provides a vision sensor chip based on a pixel fusion technology, comprising a pixel array, an intensity path, a time differential path and a space differential path. The pixel array comprises a plurality of pixel units, and the fusion technology fuses the signals of a plurality of pixel units within a fused-pixel range into one signal for output. The intensity path determines a quantized value of the electrical signal converted from the incident light intensity of the fused pixel. The time differential path performs differential, fusion and quantization operations, in the charge domain, analog domain or digital domain, on the signal of the current fused-pixel position at the current moment and the signal of the same position at the previous moment. The space differential path performs differential, fusion and quantization operations, in the charge domain, analog domain or digital domain, on the signal of the current fused-pixel position at the current moment and the signals of the associated adjacent fused-pixel positions at the same moment. Introducing the pixel fusion technology into the three-path vision sensing chip architecture improves the output signal-to-noise ratio and reduces the transmitted data volume, relieving the bandwidth pressure.
Drawings
In order to illustrate the invention or the technical solutions of the prior art more clearly, the drawings used in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a visual sensor chip based on a pixel fusion technology according to the present invention;
FIG. 2a is a schematic diagram of pixel fusion according to the present invention;
FIG. 2b is a schematic diagram of a second embodiment of the pixel fusion method according to the present invention;
FIG. 3 is a schematic diagram of a three-way vision sensor with multiplexed pixels according to the present invention;
FIG. 4 is a schematic diagram of a hybrid pixel vision sensor according to the present invention;
FIG. 5 is a schematic diagram of a second embodiment of a visual sensor chip based on pixel fusion technology;
FIG. 6 is a schematic diagram of a third embodiment of a visual sensor chip based on pixel fusion technology;
FIG. 7 is a schematic diagram of the differential in the diagonal direction provided by the present invention;
fig. 8 is a schematic diagram of xy direction difference provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Most of today's CMOS Image Sensors (CIS) capture video based on the frame-sampling principle: each frame records the output of all pixels in the pixel array, and frames are equally spaced in time. The CIS is also known as an Active Pixel Sensor (APS) because it integrates transistors inside the pixel to achieve high-performance charge-voltage conversion. By covering the pixel array with a color filter array, the CIS can sense visible light of different wavelengths and thus obtain a color image. The CIS has the advantages of high pixel-array resolution, high color reproducibility and high image quality. The DVS is a new type of imaging system. Unlike a traditional camera, which uses a shutter to control the frame rate and in which all pixels record light intensity frame by frame, the DVS is sensitive to the rate of change of light intensity: each pixel independently records the change of the logarithm of the light intensity at that pixel, and generates a positive or negative pulse when the change exceeds a threshold. It is this asynchronous nature that gives the DVS extremely high temporal resolution, and its sensitivity to change gives it natural suitability for tasks such as motion monitoring. Another camera, known as DAVIS, combines a conventional Active Pixel Sensor (APS) with a DVS to record both single-frame images and event information, combining the high spatial resolution of a conventional camera with the high temporal resolution of a DVS camera.
A vision sensor with only CIS and DVS paths is incomplete for information acquisition from the visual-primitive point of view. For example, when a large-area flash or a drastic light-intensity change occurs in the picture, the differential pixels output events continuously and saturate, the DVS path cannot output effective information, and the noise problem becomes more pronounced; meanwhile the CIS path cannot respond in time due to the frame-rate limitation, and it is difficult to raise the shooting speed under a limited bandwidth. Such extreme scenes are very common in autonomous driving and are critical for driving safety, for example entering and exiting tunnels, or the flash of a speed camera at night.
Referring to fig. 1, fig. 1 is a schematic diagram of a visual sensor chip based on a pixel fusion technology according to the present invention.
Referring to fig. 2a, fig. 2a is a schematic diagram of pixel fusion according to the present invention.
Referring to fig. 2b, fig. 2b is a schematic diagram illustrating a second principle of pixel fusion according to the present invention.
In order to solve the technical problems of the prior art, the invention provides a visual sensor chip based on a pixel fusion technology, comprising a pixel array, an intensity path, a time differential path and a space differential path. The pixel array comprises a plurality of pixel units. The fusion technology fuses the signals of a plurality of pixel units within a fused-pixel range into one signal for output; the signals of a fused pixel are the signals of the pixel units within its fusion range. The intensity path determines a quantized value of the electrical signal converted from the incident light intensity of the fused pixel. The time differential path performs time-differential, fusion and quantization operations, in the charge domain, analog domain or digital domain, on the signal of the current fused-pixel position at the current moment and the signal of the same position at the previous moment, obtaining the time differential value of the fused pixel. The space differential path performs spatial-differential, fusion and quantization operations, in the charge domain, analog domain or digital domain, on the signal of the current fused-pixel position at the current moment and the signals of the spatially associated fused-pixel positions at the same moment, obtaining the spatial differential value of the fused pixel; the spatially associated fused pixels are any one or more fused pixels in the array other than the current fused pixel.
In contrast, the human visual system can rapidly recognize moving objects whether at noon or at dusk, in an open scene or under partial occlusion, achieving robustness and versatility far superior to existing DAVIS or hybrid-array systems. The human retina outputs not only a color channel and a temporal differential channel but also a spatial differential channel; the three channels are organically fused and combined into different primitives, forming an efficient and robust visual representation. Pixel fusion is mainly used for noise reduction and for reducing the data bandwidth. Inspired by human vision, the invention adds the spatial differential path of the human retina to the existing single-pixel-multiplexing or mixed-pixel-array solutions. That is, the vision sensor has three simultaneous outputs: an intensity output, a temporal difference (TD, Temporal Difference) output, and a spatial difference (SD, Spatial Difference) output.
The invention supports pixel spatial fusion (binning) to perform temporal and spatial differential sensing with a larger receptive field, a larger spatial scale and higher sensitivity. Binning is the operation of fusing the output values of several nearby pixels together by sharing the read-out switch and the storage node; typical fusion ranges include fusing the pixel units within a 2×2 range, within a 3×3 range, or within other ranges. The fused pixels are combined into one large fused-pixel output, whose value may be the sum of all the fused pixel output values, their average, or their median, maximum, minimum or some other functional relation.
That is,

I_fusion = f(I_1, I_2, ..., I_n)

where I_fusion is the result after fusion, I_i (i = 1...n) is the output value of each small pixel within the fused range, and f is the fusion function, most typically the sum or the average of the inputs.
The fusion process may be performed in the charge domain, analog domain, or digital domain. Pixel fusion reduces the amount of data that needs to be processed/transmitted, and in some cases can increase the frame rate. In addition, the signal-to-noise ratio of the fused pixels is improved.
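As an illustrative sketch only (our own model, not circuitry from the patent; the names `fuse_block`, `bin_frame` and the 4×4 test frame are hypothetical), the binning operation described above can be written in a few lines of Python:

```python
# Illustrative model of pixel binning: fuse the outputs of the pixel
# units inside a k x k block into one fused-pixel value.
# The fusion function f may be the sum, the average, the median, etc.

def fuse_block(frame, x0, y0, k=2, f=sum):
    """Apply fusion function f to the k x k block whose top-left
    corner is (x0, y0). frame is a 2-D list of pixel outputs."""
    block = [frame[y][x] for y in range(y0, y0 + k)
                         for x in range(x0, x0 + k)]
    return f(block)

def bin_frame(frame, k=2, f=sum):
    """Fuse every non-overlapping k x k block of the frame,
    reducing the amount of data by a factor of k*k."""
    h, w = len(frame), len(frame[0])
    return [[fuse_block(frame, x0, y0, k, f)
             for x0 in range(0, w, k)]
            for y0 in range(0, h, k)]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]

summed = bin_frame(frame, k=2, f=sum)                  # 2x2 sum binning
mean = bin_frame(frame, k=2, f=lambda b: sum(b) / len(b))
```

Summing a 2×2 block shrinks the data volume by a factor of four, matching the bandwidth argument above; swapping `f` for an average, a median, `max` or `min` reproduces the other fusion functions mentioned.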
The electrical signal may be a charge, current or voltage signal.
All signals involved in the three sensing paths are three-dimensional quantities, comprising the two spatial dimensions x and y and the temporal dimension t.
The 2×2 fused-pixel output value in the dashed box is represented as

TD_fusion = Σ_{i=1}^{N} n_i · TD_i

where TD_i is the TD-path output value of pixel (i) at the current time and n_i is its weight. Typical weight choices include n_{1~2} = 1, n_{3~8} = 0, or n_{1~2} = 6, n_{3~6} = 1, n_{7~8} = 0; the choice is not unique.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a three-way vision sensor with multiplexing pixels according to the present invention.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a hybrid pixel vision sensor according to the present invention.
The fused pixels may consist of a plurality of multiplexed pixels; or of a plurality of binary mixed pixels together with a plurality of single pixels; or of a plurality of ternary mixed pixels. A multiplexed pixel is a pixel multiplexed for intensity and for spatio-temporal difference; a binary mixed pixel is a pixel multiplexed for two of the three elements intensity, time difference and space difference; a single pixel is a pixel for the remaining element not covered by the binary mixed pixel; ternary mixed pixels are three different pixels corresponding respectively to intensity, time difference and space difference, arranged in a mixed layout.
In addition, the storage nodes may be flexibly set to store signals according to the signal transmission requirements, and the present invention is not particularly limited herein with respect to the number and setting positions of the storage nodes.
The intensity quantization module may be disposed outside the pixel unit; the time difference module, the time quantization module, the space difference module, the space quantization module and the fusion module may be disposed outside the pixel unit and shared by the pixels in the same column, or may be disposed inside the pixel unit, and the present invention is not limited in particular herein.
The fusion module may be designed for each pixel unit separately, or may be distributed in columns, or may be performed in a post-image processing module, which is not particularly limited herein.
Fusion modules having the same fusion logic may be multiplexed.
The pixel fusion technology is introduced into the three-way vision sensing chip architecture, so that the output signal-to-noise ratio can be improved, and the transmission data volume can be reduced to relieve the bandwidth pressure.
Based on the above embodiments:
As a preferred embodiment, the intensity path comprises a first intensity fusion module and a first intensity quantization module. The first intensity fusion module fuses the analog signals of the pixel units at the current fused-pixel position at the current moment to obtain a first intensity fusion signal; the first intensity quantization module performs analog-to-digital conversion on the first intensity fusion signal to obtain the quantized value of the electrical signal converted from the incident light intensity of the fused pixel:

I(x, y, t_j) = Q_A( Σ_{i=1}^{n} I(x_i, y_i, t_j) )

where (x, y) are the coordinates of the fused pixel, (x_i, y_i) are the coordinates of each pixel unit, and Q_A is the quantization of the intensity path.

Of course, besides summation, the pixel fusion may also take the average:

I(x, y, t_j) = Q_A( (1/n) Σ_{i=1}^{n} I(x_i, y_i, t_j) ).
As a preferred embodiment, the intensity path includes a second intensity quantization module and a second intensity fusion module; the second intensity quantization module is used for carrying out analog-digital conversion on analog signals of a plurality of pixel units which are fused with the pixel position at the current moment to obtain a quantized value of an electric signal converted by the light intensity of incident light of each pixel unit; the second intensity fusion module is used for fusing quantized values of the incident light intensities of the pixel units to obtain quantized values of the electric signals converted by the incident light intensities of the fused pixels.
The second intensity quantization module computes

I_digital(x_i, y_i, t_j) = Q_A( I(x_i, y_i, t_j) )

to obtain the quantized value of the electrical signal converted from the incident light intensity of each pixel unit; the second intensity fusion module then computes

I(x, y, t_j) = Σ_{i=1}^{n} I_digital(x_i, y_i, t_j)

to obtain the quantized value of the electrical signal converted from the incident light intensity of the fused pixel.
As a preferred embodiment, the time differential path includes a first time-differential fusion module, a first time differential module and a first time quantization module. The first time-differential fusion module fuses the electrical signals of the pixel units to obtain the first time-differential fusion signal of the current fused-pixel position at each moment; the first time differential module performs a time-differential operation on the fusion signal of the current fused-pixel position at the current moment and that at the previous moment to obtain the time differential value of the fused pixel; during fusion and time differencing, the analog-to-digital conversion is completed by the first time quantization module.

In all subscripts below, "analog" indicates that a signal is an analog signal and "digital" that it is a digital signal; I_i denotes the initial analog signal of pixel unit i, and TD and SD denote the final digital signals.

The first time-differential fusion module computes

I_fusion-analog(x, y, t_j) = Σ_{i=1}^{n} I_i(x_i, y_i, t_j)

to obtain the fusion signal for the first time difference; the first time differential module, combined with the first time quantization module, then obtains the time differential value of the fused pixel:

TD(x, y, t_n) = Q_TD( I_fusion-analog(x, y, t_n) - I_fusion-analog(x, y, t_{n-1}) ).

Or, the first time-differential fusion module combined with the first time quantization module obtains the digital fusion signal

I_fusion-digital(x, y, t_j) = Q_A( Σ_{i=1}^{n} I_i(x_i, y_i, t_j) ),

and the first time differential module obtains the time differential value of the fused pixel:

TD(x, y, t_n) = I_fusion-digital(x, y, t_n) - I_fusion-digital(x, y, t_{n-1}).

Or, the first time quantization module obtains the quantized incident-light intensity of each pixel unit,

I_i-digital(x_i, y_i, t_j) = Q_A( I_i(x_i, y_i, t_j) ),

the first time-differential fusion module obtains the digital fusion signal

I_fusion-digital(x, y, t_j) = Σ_{i=1}^{n} I_i-digital(x_i, y_i, t_j),

and the first time differential module obtains the time differential value of the fused pixel:

TD(x, y, t_n) = I_fusion-digital(x, y, t_n) - I_fusion-digital(x, y, t_{n-1}).
As a preferred embodiment, the time differential path includes a second time differential module, a second time quantization module, and a second time differential fusion module; the second time difference module is used for determining a time difference value of each pixel unit; the time difference value of each pixel unit is obtained by performing time difference operation on the electric signal of the current pixel unit position at the current moment and the electric signal of the current pixel unit position at the previous moment; the second time difference fusion module is used for fusing the time difference values of the pixel units to obtain the time difference values of fused pixels; and in the fusion and time difference process, the analog-digital signal conversion is completed through a second time quantization module.
The second time differential module determines the time differential value of each pixel unit:

TD_i-analog(x_i, y_i, t_n) = I_i(x_i, y_i, t_n) - I_i(x_i, y_i, t_{n-1});

the second time-differential fusion module, combined with the second time quantization module, then obtains the time differential value of the fused pixel:

TD(x, y, t_n) = Q_TD( Σ_{i=1}^{n} TD_i-analog(x_i, y_i, t_n) ).

Or, the second time differential module combined with the second time quantization module determines the time differential value of each pixel unit,

TD_i-digital(x_i, y_i, t_n) = Q_TD( I_i(x_i, y_i, t_n) - I_i(x_i, y_i, t_{n-1}) ),

and the second time-differential fusion module obtains the time differential value of the fused pixel:

TD(x, y, t_n) = Σ_{i=1}^{n} TD_i-digital(x_i, y_i, t_n).

Or, the second time quantization module obtains the quantized incident-light intensity of each pixel unit,

I_i-digital(x_i, y_i, t_j) = Q_A( I_i(x_i, y_i, t_j) ),

the second time differential module determines the time differential value of each pixel unit,

TD_i-digital(x_i, y_i, t_n) = I_i-digital(x_i, y_i, t_n) - I_i-digital(x_i, y_i, t_{n-1}),

and the second time-differential fusion module obtains the time differential value of the fused pixel:

TD(x, y, t_n) = Σ_{i=1}^{n} TD_i-digital(x_i, y_i, t_n).

In the above, Q_TD is the quantization of the time differential path.
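The orderings of differencing, quantization and fusion in the second time-differential path can also be sketched numerically (an assumed model with hypothetical signal values and quantizer steps, not the chip circuitry):

```python
# Three orderings for the second time-differential path: the per-pixel
# temporal difference I(t_n) - I(t_{n-1}) is fused over the pixel units
# of one fused pixel, with quantization applied at different stages.

Q_TD_STEP = 2

def q_td(v, step=Q_TD_STEP):      # model of the TD quantizer Q_TD
    return round(v / step) * step

def q_a(v):                       # model of the intensity quantizer Q_A
    return round(v)

prev = [3.0, 4.0, 5.0, 6.2]       # pixel-unit signals at t_{n-1}
curr = [4.6, 4.0, 8.2, 5.0]       # pixel-unit signals at t_n

# (a) difference per pixel in the analog domain, fuse, quantize once
td_fused_a = q_td(sum(c - p for c, p in zip(curr, prev)))

# (b) difference and quantize per pixel unit, then fuse digitally
td_fused_b = sum(q_td(c - p) for c, p in zip(curr, prev))

# (c) quantize intensities first, then difference and fuse digitally
td_fused_c = sum(q_a(c) - q_a(p) for c, p in zip(curr, prev))
```

With these values all three orderings agree (each yields 4); with coarser quantization steps they can diverge, which is the trade-off between charge/analog-domain and digital-domain processing.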
Referring to fig. 5, fig. 5 is a schematic diagram of a visual sensor chip based on a pixel fusion technique according to the second embodiment of the present invention.
Referring to fig. 6, fig. 6 is a schematic diagram of a visual sensor chip based on a pixel fusion technique according to the third embodiment of the present invention.
A hybrid pixel array involves two different kinds of pixels; the TD case is taken as an example below (the SD case is analogous).
Here N counts the pixel units within the selected range of the fused pixel plus one ring of adjacent pixel units around the fused pixel; only pixels with differential outputs are included in the selected range.
The output value of the 3×3 fused pixel in the dashed box of fig. 5 is represented as TD_fusion = Σ_i n_i · TD_i (i = 1, …, N),
where TD_i represents the TD-path output value of pixel i at the current time and n_i is its weight, for example n_{1~5} = 1 and n_{6~13} = 0.
The output value of the 3×3 fused pixel in the dashed box of fig. 6 is represented as TD_fusion = Σ_i n_i · TD_i (i = 1, …, N),
where TD_i represents the TD-path output value of pixel i at the current time and n_i is its weight, for example n_{1~4} = 1 and n_{5~12} = 0.
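The weighted fusion in the example above can be sketched as follows (Python; the TD output values and the 13-pixel indexing are illustrative only):

```python
import numpy as np

# Made-up TD-path outputs of 13 pixels: the fused pixel's own units
# plus its ring of neighbours (1-based indexing follows the text).
td = np.arange(1.0, 14.0)      # TD_1 .. TD_13

# Example weights from the text: n_1..n_5 = 1, n_6..n_13 = 0.
n = np.zeros(13)
n[:5] = 1.0

td_fusion = np.sum(n * td)     # weighted sum over the selected range
```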
Referring to fig. 7, fig. 7 is a schematic diagram of an oblique direction difference provided by the present invention.
Referring to fig. 8, fig. 8 is a schematic diagram of xy direction difference according to the present invention.
In addition to these two differential modes, other modes are possible, such as differencing in the x direction only, or differencing against more than two adjacent pixels.
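A minimal sketch of the differential directions mentioned here, assuming a toy intensity array (x-direction, y-direction, and oblique differences at one pixel):

```python
import numpy as np

# Toy intensity frame; (x, y) indexes the centre pixel.
I = np.array([[1.0, 2.0, 4.0],
              [3.0, 5.0, 8.0],
              [6.0, 9.0, 13.0]])
x, y = 1, 1

sd_x = I[y, x] - I[y, x - 1]         # x-direction difference
sd_y = I[y, x] - I[y - 1, x]         # y-direction difference
sd_diag = I[y, x] - I[y - 1, x - 1]  # oblique (diagonal) difference
```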
As a preferred embodiment, the spatial differential path includes a first spatial differential fusion module, a first spatial differential module, and a first spatial quantization module; the first spatial differential fusion module is used for fusing the electrical signals of the pixel units to obtain a first spatial-differential fusion signal of the current fused pixel position at the current time; the first spatial differential module is used for performing a spatial differential operation between the first spatial-differential fusion signal of the current fused pixel position at the current time and that of the spatially associated fused pixel position at the current time, to obtain the spatial differential value of the fused pixel; in the fusion and spatial differencing process, analog-to-digital conversion is completed by the first spatial quantization module.
The first spatial differential fusion module obtains the first fusion signal for spatial differencing as I_{Fusion-analog}(x, y, t_n) = Σ_i I_i(x_i, y_i, t_n), where the sum runs over the pixel units within the fused pixel range;
the first spatial differential module, combined with the first spatial quantization module, obtains the spatial differential values of the fused pixel:
SD_X(x, y, t_n) = Q_SD(I_{Fusion-analog}(x, y, t_n) - I_{Fusion-analog}(x-1, y, t_n))
SD_Y(x, y, t_n) = Q_SD(I_{Fusion-analog}(x, y, t_n) - I_{Fusion-analog}(x, y-1, t_n)).
Or, the first spatial differential fusion module, combined with the first spatial quantization module, obtains the first spatial-differential fusion signal as I_Fusion(x, y, t_n) = Q(Σ_i I_i(x_i, y_i, t_n)), where Q denotes the quantization performed by the first spatial quantization module;
the first spatial differential module then determines the spatial differential values of the fused pixel:
SD_X(x, y, t_n) = I_Fusion(x, y, t_n) - I_Fusion(x-1, y, t_n)
SD_Y(x, y, t_n) = I_Fusion(x, y, t_n) - I_Fusion(x, y-1, t_n).
Or, the first spatial quantization module obtains the quantized value of the incident light intensity of each pixel unit as I_{i-digital}(x_i, y_i, t_j) = Q_A(I_i(x_i, y_i, t_j));
the first spatial differential fusion module obtains the first fusion signal for spatial differencing as I_Fusion(x, y, t_n) = Σ_i I_{i-digital}(x_i, y_i, t_n);
the first spatial differential module then determines the spatial differential values of the fused pixel:
SD_X(x, y, t_n) = I_Fusion(x, y, t_n) - I_Fusion(x-1, y, t_n)
SD_Y(x, y, t_n) = I_Fusion(x, y, t_n) - I_Fusion(x, y-1, t_n).
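The fuse-first-then-difference ordering of this path can be sketched as follows (Python; the 4×6 frame, the 2×2 fused-pixel size, and the unit quantization step are all illustrative assumptions):

```python
import numpy as np

def q(v, step=1.0):
    """Hypothetical uniform quantizer standing in for Q_SD."""
    return np.round(v / step) * step

# Made-up 4x6 frame of pixel-unit intensities -> a 2x3 grid of
# 2x2 fused pixels.
frame = np.arange(24.0).reshape(4, 6)

def fuse(fx, fy, k=2):
    """Sum the k x k pixel units that make up fused pixel (fx, fy)."""
    return frame[fy * k:(fy + 1) * k, fx * k:(fx + 1) * k].sum()

# Fuse first, then difference adjacent fused pixels along x, then quantize.
sd_x = q(fuse(1, 0) - fuse(0, 0))
```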
As a preferred embodiment, the spatial differential path includes a second spatial differential module, a second spatial quantization module, and a second spatial differential fusion module; the second spatial difference module is used for determining a spatial difference value of each pixel unit; the spatial differential value of each pixel unit is obtained by performing spatial differential operation on an electric signal of the current pixel unit position at the current moment and an electric signal of the spatial associated pixel position at the current moment; the second fusion module for spatial difference is used for fusing the spatial difference values of the pixel units to obtain the spatial difference values of the fused pixels; and in the fusion and space difference process, the analog-digital signal conversion is completed through a second space quantization module.
The second spatial differential module determines the spatial differential values of each pixel unit as SD_{x-i}(x_i, y_i, t_n) = I_i(x_i, y_i, t_n) - I_i(x_i - 1, y_i, t_n) and SD_{y-i}(x_i, y_i, t_n) = I_i(x_i, y_i, t_n) - I_i(x_i, y_i - 1, t_n);
the second spatial differential fusion module, combined with the second spatial quantization module, obtains the spatial differential values of the fused pixel as SD_x(x, y, t_n) = Q_SD(Σ_i SD_{x-i}(x_i, y_i, t_n)) and SD_y(x, y, t_n) = Q_SD(Σ_i SD_{y-i}(x_i, y_i, t_n)).
Or, the second spatial differential module, combined with the second spatial quantization module, determines the spatial differential values of each pixel unit:
SD_x(x_i, y_i, t_n) = Q_SD(I(x_i, y_i, t_n) - I(x_i - 1, y_i, t_n))
SD_y(x_i, y_i, t_n) = Q_SD(I(x_i, y_i, t_n) - I(x_i, y_i - 1, t_n));
the second spatial differential fusion module then determines the spatial differential values of the fused pixel as SD_x(x, y, t_n) = Σ_i SD_x(x_i, y_i, t_n) and SD_y(x, y, t_n) = Σ_i SD_y(x_i, y_i, t_n).
Or, the second spatial quantization module obtains the quantized value of the incident light intensity of each pixel unit as I_{i-digital}(x_i, y_i, t_j) = Q_A(I_i(x_i, y_i, t_j));
the second spatial differential module determines the spatial differential values of each pixel unit:
SD_{x-i}(x_i, y_i, t_n) = I_{i-digital}(x_i, y_i, t_n) - I_{i-digital}(x_i - 1, y_i, t_n)
SD_{y-i}(x_i, y_i, t_n) = I_{i-digital}(x_i, y_i, t_n) - I_{i-digital}(x_i, y_i - 1, t_n);
the second spatial differential fusion module then determines the spatial differential values of the fused pixel as SD_x(x, y, t_n) = Σ_i SD_{x-i}(x_i, y_i, t_n) and SD_y(x, y, t_n) = Σ_i SD_{y-i}(x_i, y_i, t_n).
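The difference-first-then-fuse ordering of this path can be sketched as follows (Python; the 3×3 frame and the 2×2 fused-pixel block are illustrative assumptions):

```python
import numpy as np

# Made-up 3x3 frame of pixel-unit intensities at the current time.
frame = np.array([[1.0, 2.0, 4.0],
                  [2.0, 4.0, 7.0],
                  [4.0, 7.0, 11.0]])

# Per-pixel x-direction differences for the units of one fused pixel
# (the 2x2 block with top-left corner (1, 1); left neighbours exist).
sd_x_i = frame[1:3, 1:3] - frame[1:3, 0:2]

# Fuse the per-pixel differential values into one fused-pixel output.
sd_x_fused = sd_x_i.sum()
```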
As a preferred embodiment, a pulse generating module is arranged in each pixel unit in the pixel array;
or, all pixel units in the pixel array are commonly connected with a pulse generation module;
or dividing the pixel array into a plurality of subareas, wherein all pixel units in each subarea are commonly connected with a pulse generation module;
the pulse generation module is used for generating a trigger signal at fixed time intervals, or at adaptive, programmable variable time intervals, so as to control the exposure start time and exposure duration of the photosensitive module; pixel units connected to the same pulse generation module are exposed synchronously, while pixel units connected to different pulse generation modules may be exposed synchronously or asynchronously;
the photosensitive module is arranged in the pixel unit and is used for converting the optical signal of the current pixel unit position into an analog electric signal.
In this embodiment, the vision sensing chip further includes a trigger pulse generation module, which generates a trigger signal to control exposure of the photosensitive module, i.e. to determine the time t_n at which the signal is collected. If a trigger pulse generation module is designed into each pixel unit, full-array asynchronous exposure can be adopted; each trigger pulse generation module can then independently and adaptively adjust the moment at which computation of the spatiotemporal differential signal is triggered, according to the light intensity sensed by its own pixel unit, so the trigger moments of different pixel units differ. A pixel unit can then output information at any time, which improves flexibility and reduces output delay. Of course, full-array synchronous exposure can also be configured in this case. If several pixel units share the same trigger pulse generation module, those pixel units are exposed synchronously.
The trigger pulse generation module may generate the trigger signal at the same time interval or at an adaptive, programmable variable interval.
If all pixel units in the array share a trigger pulse generation module, all pixel units need to be exposed at the same moment, and the output of the pixel units needs to follow a certain rule.
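A toy model of the adaptive trigger-interval idea described above (the mapping from light level to interval and the bounds t_min/t_max are invented for illustration; they are not taken from the patent):

```python
def next_interval(light_level, t_min=1.0, t_max=16.0):
    """Shorter sampling interval for brighter pixels (toy rule)."""
    level = min(max(light_level, 0.0), 1.0)   # clamp to [0, 1]
    return t_max - level * (t_max - t_min)

# Dark, medium, and bright pixels get decreasing trigger intervals.
intervals = [next_interval(l) for l in (0.0, 0.5, 1.0)]
```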
The quantization methods of the time quantization modules and the spatial quantization modules may be either multi-valued (> 1 bit) or single-valued (positive and negative pulses). The signal acquisition times may be full-array synchronous with the same time interval, full-array synchronous with variable time intervals, or full-array asynchronous.
In the field of digital signal processing, quantization mainly refers to the process of converting an analog signal into a digital signal. Sampling and quantization of the signal are typically performed by an analog-to-digital converter (ADC).
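The two quantization styles mentioned above can be sketched as follows (Python; the bit width, full scale, and pulse threshold are illustrative assumptions):

```python
import numpy as np

def quantize_multibit(v, bits=4, full_scale=1.0):
    """Multi-valued (> 1 bit) quantization to 2**bits uniform levels."""
    levels = 2 ** bits
    codes = np.clip(np.round(v / full_scale * (levels - 1)), 0, levels - 1)
    return codes.astype(int)

def quantize_ternary(v, threshold=0.1):
    """Single-valued quantization: positive/negative pulse or no event."""
    return np.sign(v) * (np.abs(v) > threshold)

codes = quantize_multibit(np.array([0.0, 0.5, 1.0]))
pulses = quantize_ternary(np.array([-0.5, 0.05, 0.3]))
```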
As a preferred embodiment, the exposure mode of each unit pixel in the pixel array is global exposure or rolling exposure.
Of course, the manner in which the intensity path, the time differential path, and the space differential path are subjected to global exposure or rolling exposure may be arbitrarily combined, and the present invention is not particularly limited herein.
As a preferred embodiment, in the case where the pixel unit is provided with a color filter, the output color type of the corresponding path of the pixel unit is a color value; in the case where the pixel unit is not provided with a color filter, the output color type of the corresponding path of the pixel unit is a gray value.
If the pixel unit is covered with a color filter, the information acquired by that pixel belongs only to a single color channel. A typical color filter combination is red, green, and blue, called the RGB type, but other color channels may be used, such as a CMY array of the three complementary colors (cyan, magenta, yellow). Pixels of the same type may differ in color channel, e.g. X-channel, Y-channel, and Z-channel spatiotemporal differential pixels. For spatial differencing, the difference may be taken between pixels of the same color, or between pixels of different colors (e.g. subtracting a Y-channel pixel from an X-channel pixel).
In addition, an externally programmable demosaicing device can be embedded in the pixel unit to obtain the output values of all other color channels at an X-channel pixel position through a demosaicing algorithm, i.e. by interpolating from selected surrounding pixels.
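A minimal sketch of the interpolation-based demosaicing step described here (Python; the 4-neighbour averaging rule and the NaN marker for missing samples are simplifying assumptions):

```python
import numpy as np

def interpolate_missing(channel, y, x):
    """Estimate a missing colour sample at (y, x) by averaging the
    available same-channel 4-neighbours (a minimal demosaicing step)."""
    h, w = channel.shape
    vals = [channel[j, i]
            for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            if 0 <= j < h and 0 <= i < w and not np.isnan(channel[j, i])]
    return sum(vals) / len(vals)

# Hypothetical green channel with the centre sample missing (NaN).
g = np.array([[np.nan, 1.0, np.nan],
              [3.0, np.nan, 5.0],
              [np.nan, 7.0, np.nan]])
g_est = interpolate_missing(g, 1, 1)
```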
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. The visual sensor chip based on the pixel fusion technology is characterized by comprising a pixel array, an intensity path, a time differential path and a space differential path; the pixel array comprises a plurality of pixel units;
the fusion technology is used for fusing signals of a plurality of pixel units within a fused pixel range into one signal and outputting the signal; the intensity passage is used for determining a quantized value of an electric signal converted by the light intensity of the incident light of the fusion pixel;
the time difference path is used for carrying out time difference, fusion and quantization operation on a signal of the current fusion pixel position at the current moment and a signal of the current fusion pixel position at the previous moment in a charge domain, an analog domain or a digital domain to obtain a time difference value of the fusion pixel; the signals of the fusion pixels are signals of a plurality of pixel units in the fusion pixel range;
the spatial differential path is used for carrying out spatial differential, fusion and quantization operation on a signal of a current fusion pixel position at the current moment and a signal of a spatial correlation fusion pixel position at the current moment in a charge domain, an analog domain or a digital domain to obtain a spatial differential value of the fusion pixel; the spatial correlation fusion pixels are any one or more fusion pixels except the current fusion pixel in the array.
2. The pixel fusion technique-based vision sensor chip of claim 1, wherein the intensity path includes a first intensity fusion module and a first intensity quantization module;
the first intensity fusion module is used for fusing the analog signals of the pixel units of the current fusion pixel position at the current moment to obtain a first intensity fusion signal;
the first intensity quantization module is used for performing analog-digital conversion on the first intensity fusion signal to obtain a quantized value of an electric signal converted by the light intensity of the incident light of the fusion pixel.
3. The pixel fusion technique-based vision sensor chip of claim 1, wherein the intensity path includes a second intensity quantization module and a second intensity fusion module;
the second intensity quantization module is used for performing analog-digital conversion on analog signals of a plurality of pixel units which are fused with the pixel position at the current moment to obtain a quantized value of an electric signal converted by the light intensity of incident light of each pixel unit;
the second intensity fusion module is used for fusing quantized values of the incident light intensities of the pixel units to obtain quantized values of electric signals converted by the incident light intensities of the fused pixels.
4. The pixel fusion technique-based vision sensor chip of claim 1, wherein the time differential path comprises a first time differential fusion module, a first time differential module, and a first time quantization module;
the first time difference fusion module is used for fusing the electrical signals of the pixel units to obtain first time difference fusion signals of the current fusion pixel positions at different times;
the first time difference module is used for performing time difference operation on a fusion signal for the first time difference of the current fusion pixel position at the current moment and a fusion signal for the first time difference of the current fusion pixel position at the previous moment to obtain a time difference value of the fusion pixel; and in the fusion and time difference process, the analog-digital signal conversion is completed through the first time quantization module.
5. The pixel fusion technique-based vision sensor chip of claim 1, wherein the time differential path comprises a second time differential module, a second time quantization module, and a second time differential fusion module;
the second time difference module is used for determining a time difference value of each pixel unit; the time difference value of each pixel unit is obtained by performing time difference operation on the electric signal of the current pixel unit position at the current moment and the electric signal of the current pixel unit position at the previous moment;
the second time difference fusion module is used for fusing the time difference values of the pixel units to obtain the time difference values of the fused pixels; and in the fusion and time difference process, the second time quantization module is used for completing the conversion of analog and digital signals.
6. The pixel fusion technique-based vision sensor chip of claim 1, wherein the spatial differential path comprises a first spatial differential fusion module, a first spatial differential module, and a first spatial quantization module;
the first spatial difference fusion module is used for fusing the electrical signals of the pixel units to obtain a first spatial difference fusion signal of the current fusion pixel position at the current moment;
the first spatial difference module is used for performing spatial difference operation on a fusion signal for the first spatial difference of the current fusion pixel position at the current moment and a fusion signal for the first spatial difference of the current spatial association fusion pixel position at the current moment to obtain a spatial difference value of the fusion pixel; and in the fusion and space difference process, the first space quantization module is used for completing the conversion of analog and digital signals.
7. The pixel fusion technique-based vision sensor chip of claim 1, wherein the spatial differential path includes a second spatial differential module, a second spatial quantization module, and a second spatial differential fusion module;
the second spatial difference module is used for determining a spatial difference value of each pixel unit; the spatial difference value of each pixel unit is obtained by performing spatial difference operation on an electric signal of the current pixel unit position at the current moment and an electric signal of the spatial associated pixel position at the current moment;
the second spatial difference fusion module is used for fusing the spatial difference values of the pixel units to obtain the spatial difference values of the fused pixels; and in the fusion and space difference process, the second space quantization module is used for completing the conversion of analog and digital signals.
8. The vision sensor chip based on the pixel fusion technology according to claim 1, wherein a pulse generating module is arranged in each pixel unit in the pixel array;
or, all pixel units in the pixel array are commonly connected with a pulse generation module;
or dividing the pixel array into a plurality of subareas, wherein all pixel units in each subarea are commonly connected with a pulse generation module;
the pulse generation module is used for generating a trigger signal at fixed time intervals or generating a trigger signal at self-adaptive and programmable variable time intervals so as to control the starting exposure time and exposure time of the photosensitive module, synchronously exposing the pixel units connected to the same pulse generation module and synchronously exposing or asynchronously exposing the pixel units connected to different pulse generation modules;
the photosensitive module is arranged in the pixel unit and is used for converting the optical signal of the current pixel unit position into an analog electric signal.
9. The visual sensor chip of claim 1, wherein the exposure mode of each unit pixel in the pixel array is global exposure or rolling exposure.
10. The visual sensor chip based on the pixel fusion technique according to any one of claims 1 to 9, wherein, in the case where the pixel unit is provided with a color filter, the output color type of the pixel unit corresponding path is a color value; and under the condition that the pixel unit is not provided with a color filter, the output color type of the corresponding passage of the pixel unit is a gray value.
CN202311420671.9A 2023-10-30 2023-10-30 Visual sensor chip based on pixel fusion technology Pending CN117692809A (en)

Publications (1)

Publication Number Publication Date
CN117692809A true CN117692809A (en) 2024-03-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination