WO2017205829A1 - Method of reducing the error induced in image sensor projective measurements by pixel output control signals - Google Patents



Publication number
WO2017205829A1
Authority
WO
WIPO (PCT)
Prior art keywords: pixel, output, conductor, pixels, image sensor
Application number
PCT/US2017/034830
Other languages
English (en)
Inventor
John Mcgarry
Original Assignee
Cognex Corporation
Application filed by Cognex Corporation
Publication of WO2017205829A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/76Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/78Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters

Definitions

  • the disclosed technologies relate generally to machine vision, and more particularly to machine vision systems for sensing depth information of a scene illuminated by a plane of light.
  • A method for acquiring 3-dimensional (3D) range images includes providing a light source with line generating optics to illuminate a single plane of a scene, positioning a digital camera to view the light plane such that objects illuminated by the light source appear in the optical image formed by the camera lens, capturing a digital image of the scene, processing the digital image to extract the image coordinates of points in the scene illuminated by the light source, and processing the image coordinates according to the triangulation geometry of the optical system to form a set of physical coordinates suitable for measurement of objects in the scene.
  • a major limitation associated with such a conventional machine vision process is that a 2-dimensional intensity image of substantial size must be captured by the digital camera for each and every line of physical coordinates formed by the system. This can make the time to capture the 3D image of a scene as much as 100 times longer than the time required to acquire an intensity image of the same size scene, thereby rendering laser-line based 3D image formation methods too slow for many industrial machine-vision applications.
  • CMOS (Complementary Metal-Oxide Semiconductor)
  • This influence is related to capacitive coupling of the selected signal conductor with the floating-diffusion node, which may change its effective capacitance, and therefore the charge to voltage conversion factor.
  • this influence is generally not a problem because pixels only have one output and only one row of the image sensor can be selected at any given time. Therefore, even though activation of the pixel output select signal may influence the charge-to-voltage conversion factor of pixels on the row selected, the influence is, substantially, similar for each pixel selected for readout and the influence on the output image is uniform.
  • the disclosed technologies are related to image sensors that form coefficients of a projective measurement by selecting the output of a first plurality of pixels of a pixel array to a first conductor of a pixel output bus, while selecting the output of a second plurality of pixels to a second conductor of the pixel output bus, according to a set of pixel output control signals determined by information of a sampling matrix as described below.
  • the influence on the charge-to-voltage conversion factor, induced at a pixel's floating-diffusion node through capacitive coupling, may be different for each state of the output control signals in spatial proximity to the pixel.
  • This can occur, for example, with the 3:1 spatial interleaving described below in connection with FIGs. 3-4.
  • The sampling matrix Φ may be designed to increase the sparseness of the signal encoded in a measurement Y of an image signal X by rejecting certain aspects of the image signal that are unrelated to image features of interest.
  • An example of such unrelated image features is a constant background level in the sensed signal that may be related to biasing of pixel current sources.
  • non-uniform capacitive coupling of pixel output select conductors with the pixel floating-diffusion nodes can substantially alter the intended spatial frequency response characteristic of the image sensor, ultimately resulting in the need to acquire substantially more measurement coefficients to provide for the accurate encoding of a signal of interest.
  • the disclosed technologies can be implemented as an image sensor including a set of pixels; a first output conductor; and a second output conductor.
  • a first subset of the set of pixels are coupled to respective output control buses to receive a first pixel output control signal to switch pixel output to the first output conductor, and to receive a second pixel output control signal to switch pixel output to the second output conductor.
  • a second subset of the set of pixels are coupled to respective output control buses to receive the first pixel output control signal to switch pixel output to the second output conductor, and to receive the second pixel output control signal to switch pixel output to the first output conductor.
  • the image sensor can include a rectangular pixel-array.
  • the set of pixels is a column of the rectangular pixel-array, and pixels in every other row of the rectangular pixel-array belong to the second subset of the set of pixels.
  • each pixel of the set of pixels can include a crossover to couple the output of the set of pixels.
  • the second subset of pixels can receive pixel output control signals that are inverted by crossovers coupled to the pixel output control buses.
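To illustrate why the crossovers and inverted control signals leave the measurement unchanged, the following sketch (Python/NumPy; the current values, array model, and function names are illustrative assumptions, not from the patent) routes one column's pixel currents two ways, with uniform wiring, and with a crossover plus inverted control on every other row, and checks that the two output conductors receive identical currents either way:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 8                                   # pixels in one column
currents = rng.integers(1, 10, N).astype(float)  # per-pixel current signals
ctrl = rng.choice([-1, 0, 1], N)        # logical select: +1 -> conductor A, -1 -> B, 0 -> off

def route_plain(ctrl):
    """Every pixel wired identically: +1 selects conductor A, -1 selects B."""
    a = (ctrl == 1).astype(float)
    b = (ctrl == -1).astype(float)
    return a, b

def route_swapped(ctrl):
    """Odd rows have a crossover (first transistor -> B, second -> A) AND
    receive an inverted control signal, so the logical routing is unchanged."""
    a = np.zeros(len(ctrl))
    b = np.zeros(len(ctrl))
    for row, c in enumerate(ctrl):
        c_local = c if row % 2 == 0 else -c    # control inverted on odd rows
        first, second = (c_local == 1), (c_local == -1)
        if row % 2 == 0:
            a[row], b[row] = first, second     # normal coupling
        else:
            b[row], a[row] = first, second     # crossover coupling
    return a, b

a0, b0 = route_plain(ctrl)
a1, b1 = route_swapped(ctrl)
assert np.array_equal(a0, a1) and np.array_equal(b0, b1)
diff = currents @ a1 - currents @ b1    # what the column comparator sees
```

The logical measurement is thus preserved, while physically the select-line states in proximity to each pixel alternate from row to row, which is what balances the capacitive coupling onto the floating-diffusion nodes.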
  • the disclosed technologies can be implemented as an image sensor including a first output conductor of a pixel output bus; a second output conductor of the pixel output bus; and a set of pixels including (i) a first pixel comprising a first output select transistor coupled to a first pixel output control bus for switching pixel output to the first output conductor, and a second output select transistor coupled to the first pixel output control bus for switching pixel output to the second output conductor, and (ii) a second pixel comprising a first output select transistor coupled to a second pixel output control bus for switching pixel output to the second output conductor, and a second output select transistor coupled to the second pixel output control bus for switching pixel output to the first output conductor.
  • Implementations can include one or more of the following features.
  • the set of pixels can correspond to a column of a rectangular pixel-array. Further, each pixel of the set of pixels can be coupled to the pixel output bus through a respective crossover. Furthermore, the second pixel of the set of pixels can be coupled to the second pixel output control bus through crossovers.
  • the disclosed technologies can be implemented as an image sensor including a pixel array including pixels partitioned into rows and columns, wherein each pixel of the pixel array is coupled with (i) a first conductor and a second conductor of a pixel select line, of a given row of the pixel array, with which at least some other pixels from the same given row also are coupled, and (ii) a first conductor and a second conductor of a pixel output bus, of a given column of the pixel array, with which all other pixels from the same given column also are coupled, wherein the first conductor and the second conductor of the pixel output bus, of each respective column of the pixel array, are swapped on at least one row of the pixel array.
  • Implementations can include one or more of the following features.
  • the first conductor and the second conductor of the pixel output bus, of each respective column of the pixel array can be swapped on every row of the pixel array.
  • the first conductor and the second conductor of the pixel select line, of at least one row of the pixel array can be swapped in correspondence with the swapped first conductor and the second conductor of the pixel output bus on the at least one row of the pixel array.
  • the first conductor and the second conductor of the pixel output bus, of each respective column of the pixel array can be swapped on every row of the pixel array, and the first conductor and the second conductor of the pixel select line, of alternating rows of the pixel array, can be swapped.
  • the image sensor can include circuitry coupled with the pixel array and configured to provide select signals on the select lines for the pixels in the rows.
  • the select signals are provided in accordance with a sampling matrix comprising a product of a random basis function and a filtering function, such that coefficients associated with the sampling matrix have support from an equal number of even and odd rows of the pixel array.
  • current signals from a first set of pixels selected with respective pixel select signals provided on the first conductor of the pixel select lines are summed on the first conductor of the pixel output bus of the column, and current signals from a second set of pixels selected with respective select signals provided on the second conductor of the pixel select lines are summed on the second conductor of the pixel output bus of the column.
  • the image sensor can include comparators.
  • Each respective one of the comparators is coupled with the first and second conductors of the pixel output bus of each respective column of the pixel array, and each respective one of the comparators is configured to binarize, for each respective column of the pixel array, a difference between the summed current signals on the first conductor of the pixel output bus and the summed current signals on the second conductor of the pixel output bus.
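Numerically, the per-column summation and binarization amounts to the following (NumPy sketch; the sizes and pixel values are illustrative, and integer currents are used only so the two computations agree exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2 = 16, 6                         # pixel rows, pixel columns
X = rng.integers(1, 100, (N1, N2)).astype(float)  # pixel current signals
phi_i = rng.choice([-1, 0, 1], N1)     # one row of the sampling function

# Currents summed on the two conductors of each column's pixel output bus:
i_first = X[phi_i == 1].sum(axis=0)    # pixels switched onto the first conductor
i_second = X[phi_i == -1].sum(axis=0)  # pixels switched onto the second conductor

# One comparator per column binarizes the difference, giving one row of Y:
y_i = np.sign(i_first - i_second)
assert np.array_equal(y_i, np.sign(phi_i @ X))   # matrix form of the same row
```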
  • the random basis function is a sparse random basis function.
  • all pixels within each respective row can be coupled with the same pixel select line of the respective row. Furthermore, in the foregoing implementations, every k-th one of the pixels within each respective row can be coupled with a common one of multiple pixel select lines of the respective row.
  • FIG. 1 shows aspects of a machine vision system in an operational environment.
  • FIG. 2A is a flow diagram of an example of a process depicting computations performed by the machine vision system of FIG. 1.
  • FIG. 2B is a flow diagram of an example of another process depicting computations performed by the machine vision system of FIG. 1.
  • FIG. 3 is a high-level block-diagram of an image sensor architecture that can be configured to perform the processes of FIGs. 2A and 2B.
  • FIG. 4 is a circuit diagram showing more detailed aspects of the image sensor of FIG. 3.
  • FIG. 5 is a circuit diagram of a portion of an image sensor having a pixel array where, in certain pixels, the pattern of coupling to a pixel output bus is reversed and the state of a control signal provided to the pixel is inverted.
  • FIG. 1 is a diagram of a vision system 100 for implementing a method for capturing 3D range images.
  • The system of FIG. 1 comprises laser-line generator 101, object conveyor 102, object of interest 103, laser illuminated object plane 104, digital camera 105, digital communication channel 109, and digital computer 111 for storing, processing, interpreting and displaying 3D range data extracted from object of interest 103, which are graphically represented in FIG. 1 by result 110.
  • Digital camera 105 further comprises imaging lens 106, image sensor 107, and local image processor 108.
  • a narrow plane of illumination 112, formed by laser-line generator 101 intersects a 3D scene including conveyor 102 and object-of-interest 103.
  • the narrow plane of illumination formed by laser-line generator 101 is coincident with object plane 104 of imaging lens 106.
  • the imaging lens 106 collects light scattered by the 3D scene and focuses it on image sensor 107.
  • Image sensor 107, which comprises a rectangular array of photosensitive pixels, captures an electrical signal representative of the average light intensity signal formed by lens 106 over an exposure time period.
  • the electrical signal formed on image sensor 107 is converted into a digital information stream, which is received by local digital processor 108.
  • Digital processor 108 formats the digital image information for transmission to digital computer 111.
  • local digital processor 108 also processes the image to form an alternative representation of the image or to extract relevant features to arrive at a critical measurement or some other form of compact classification based on the information of the digital image.
  • the image captured by digital camera 105 is processed, either by local digital processor 108 or digital computer 111, to measure the displacement of the line formed by the intersection of the illumination-plane with the object in the scene.
  • Each displacement measurement represents an image coordinate that may be transformed into an object surface coordinate in object plane 104, according to a predetermined camera calibration.
  • Object 103 is moved through the plane of the laser-line generator 101 while images are successively captured and displacement coordinates are extracted at regular intervals.
  • Alternatively, the laser-line generator 101 is moved relative to a stationary object 103 while images are successively captured and displacement coordinates are extracted. In either of these ways, a map of the surface of object 103 that is visible to the vision system of FIG. 1 can be formed.
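As a toy illustration of the displacement-to-coordinate step (the linear calibration below is a placeholder; `row0` and `scale` are invented parameters, and a real system would apply the full triangulation geometry of a predetermined camera calibration):

```python
# Hypothetical linear calibration mapping a laser-line image row
# displacement to a height above the conveyor. A real system would use
# a predetermined (generally projective) camera calibration instead.
def displacement_to_height(row_displacement, row0=240.0, scale=0.05):
    return scale * (row_displacement - row0)

# One height profile per captured frame; stacking successive profiles as
# the object moves through the laser plane yields a surface map (a range
# image) of the object.
frames = [[240.0, 250.0, 260.0],
          [240.0, 252.0, 258.0]]
surface_map = [[displacement_to_height(d) for d in row] for row in frames]
```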
  • Uppercase symbols generally represent matrix quantities; row numbers of a matrix are identified by the subscript i, column numbers by the subscript j, and frame time by the subscript t.
  • the image signal X formed on the image sensor 107 includes three segments of a laser line, with a third segment being horizontally between and vertically offset from a first segment and a second segment, representative of, for example, the image of the intersection of illumination plane 112 with conveyor 102 and object 103.
  • the image signal X may also include unwanted off-plane illumination artifacts and noise (not shown).
  • the illumination artifacts may be light internally diffused from one portion of an object to another, for example, light of the laser line, and the noise may be introduced by ambient light or by the image sensor.
  • the function of the computations performed by the vision system of FIG. 1 is to extract row offset parameters associated with the image features of the curve formed of the intersection of a plane of illumination with objects of interest in a physical scene.
  • Conventional technologies for performing such computations include sampling the image signal, forming a digital image, filtering the digital image, and extracting image features from the filtered digital image.
  • the disclosed technologies use 1-bit compressive sensing techniques in which an image signal X is filtered as it is being encoded in a measurement signal Y.
  • In 1-bit compressive sensing, each measurement is quantized to 1 bit by the function sign(.), and only the signs of the measurements are stored in the measurement vectors y.
  • Y = sign(AX), where X ∈ ℝ^(N1×N2) and Y ∈ {−1,1}^(M×N2).
  • The image signal X is formed on an image sensor 107 having a pixel array with N1 rows and N2 columns, and A is a sampling matrix, where A ∈ {−1,0,1}^(M×N1) and M ≪ N1.
  • This aspect of the disclosed technologies represents a simplification of the analog-to-digital conversion process, and the fact that the number of samples M can be much smaller than the number of rows of the image signal X allows for the noted increase in processing speed.
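As a concrete sketch of this measurement model (NumPy; the dimensions and the sparsity of A are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N1, N2, M = 64, 32, 16               # pixel rows, pixel columns, measurements (M << N1)
X = rng.integers(0, 100, (N1, N2)).astype(float)        # image intensity signal
A = rng.choice([-1, 0, 1], size=(M, N1), p=[0.1, 0.8, 0.1])  # ternary sampling matrix

Y = np.sign(A @ X)                   # 1-bit measurement: only signs are kept
# Note: np.sign() yields 0 on an exact tie; a hardware comparator would
# resolve such ties to +1 or -1.
```

Only M×N2 sign bits are read out per frame, rather than N1×N2 multi-bit samples, which is the source of the speed advantage noted above.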
  • the original image signal X is not encoded in the measurement Y, because doing so would, necessarily, require the encoding of additional image information that is not directly relevant to extracting the offset parameters of the intersection of the illumination plane with objects of interest in the physical scene. Rather, a filtered image signal Z is encoded in the measurement Y.
  • For a signal of sparseness K, the number of samples required to embed all variation of the signal to a specific error tolerance ε is of order O(K log(N)).
  • The sparseness of Z increases, such that K_Z < K_X, and the number of samples required to robustly encode the filtered signal in the measurement Y will, in practice, always be less (often much less) than the number of samples required to encode the raw image signal X, assuming that the error tolerance ε remains the same.
  • FIG. 2A is a flow diagram of an example of a process depicting computations performed by the machine vision system 100.
  • The symbol Ψ, Ψ ∈ {−1,0,1}^(N1×N1), represents an image filtering function comprised of, and in some embodiments consisting of, coefficients used to compute a central difference approximation of the partial first derivative with respect to rows of the image signal X.
  • The symbol r, r ∈ {−1,0,1}^(N3), represents a sparse random sequence, which in some embodiments is based on a Markov chain of order m, where m > 1.
  • N3 is the size of a spatial filtering kernel ψ, as described below.
  • The symbol Φ, Φ ∈ {−1,0,1}^(M×N1), represents an image sampling function, formed from the product of the random basis Θ and the filtering function Ψ.
  • The symbol Y, Y ∈ {−1,1}^(M×N2), represents a measurement of the filtered image intensity signal, formed from the product of the sampling function Φ and the image signal X, quantized by sign(.) to two levels {−1,1}.
  • Block 215 represents information of the image signal X, which is information representative of light energy of a scene.
  • the information may be received by an image sensor, for example image sensor 107 of FIG. 1.
  • the light energy may be light scattered from the scene, with at least some of the light focused by a lens onto the image sensor.
  • the image may also include unwanted off-plane illumination artifacts and noise (not shown).
  • the illumination artifacts may be light internally diffused from one portion of an object to another, for example light of the laser line, and the noise may be introduced by ambient light or by the image sensor, for example.
  • Block 217 includes a representation of a process that generates a measurement Y of the image intensity signal X.
  • the measurement Y represents a product of the image signal X and the sampling function ⁇ , quantized to two levels.
  • the sampling function is a product of a random basis function and a spatial filtering function.
  • The random basis function is sparse, with its non-zero elements drawn from a Bernoulli distribution or some other generally random distribution.
  • the sampling function is expected to generally pass spatial frequencies associated with portions of an image forming a laser line and to substantially reject spatial frequencies associated with portions of an image including noise and other unwanted image information.
  • the process of block 217 extracts information of the image signal X by iteratively generating elements of a measurement Y. Generation of the information of the measurement Y may be performed, in some embodiments, by an image sensor device and/or an image sensor device in conjunction with associated circuitry.
  • Elements of Y are generated in M iterations, with, for example, each of the M iterations generating elements of a different y_i.
  • In each iteration, information of a different particular row of the sampling function is effectively applied to columns of the image sensor to obtain, after performing sign operations on a per-column basis, a y_i.
  • Elements of a y_i are obtained substantially simultaneously.
  • comparators are used to perform the sign operations.
  • In each iteration, information of a row φ_i of the sampling function is used to generate pixel output control signals (also referred to as select signals) applied to pixel elements of the image sensor, with each row of pixel elements receiving the same control signal or signals.
  • For example, in a first iteration, control signal(s) based on information of φ_1,1 may be applied to pixel elements of a first row, control signal(s) based on information of φ_1,2 may be applied to pixel elements of a second row, and so on. More generally, in an i-th iteration, control signal(s) based on information of φ_i,1 may be applied to pixel elements of the first row, control signal(s) based on information of φ_i,2 may be applied to pixel elements of the second row, and so on.
  • the image signal sampling information is provided from the sampling function generator block 260.
  • The sampling function generator block 260 is associated with an image processor 220, which in various embodiments may be the local digital processor 108 or digital computer 111 of FIG. 1. It should be recognized, however, that in various embodiments the sampling function generator 260, or portions thereof, may be included in the image sensor 211.
  • The image sensor 211, or memory or circuitry associated with the image sensor 211, provides storage for storing the image signal sampling information, for example as illustrated by block 216 of FIG. 2A.
  • neither the image sensor nor the image processor include a sampling function generator block, with instead pre-generated image signal sampling information being stored in storage of or associated with the image sensor.
  • the image signal sampling information may be stored in both of two storage elements, with a first storage element physically closer to some pixel elements and a second storage element physically closer to other pixel elements. For example, if columns of pixel elements forming the pixel array are considered to be arranged in a manner defining a square or rectangle, the first storage element may be about what may be considered one side of the pixel array, and the second storage element may be about an opposing side of the pixel array. In some such embodiments, pixel elements closer to the first storage element may receive pixel output control signals associated with the first storage element, and pixel elements closer to the second storage element may receive pixel output control signals associated with the second storage element.
  • The rows of the random basis function Θ are N1-element segments of r that are shifted by no less than m relative to each other.
  • Sampling functions Φ can be thought of as being formed from the convolution of the rows of Θ with a filtering kernel ψ, as follows: φ_i = θ_i ∗ ψ.
  • m should be of sufficient size to guarantee that the range of the sampling function Φ, which is limited by the image sensor hardware to discrete levels, is respected; that is, that the elements of Φ are all in range, i.e., φ_i,j ∈ {−1,0,1}, and that the rows of the sampling function Φ are sufficiently uncorrelated.
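Under these constraints, the construction can be sketched as follows (NumPy; the spacing rule used to keep Φ in range, and the kernel ψ = [1, 0, −1], are illustrative choices consistent with, but not quoted from, the text):

```python
import numpy as np

rng = np.random.default_rng(2)
N1, M, m = 32, 8, 3
N3 = N1 + (M - 1) * m                 # length of the sparse random sequence r

# Sparse ternary sequence: non-zeros at least 3 apart, so that the
# convolved sampling function stays within {-1, 0, 1}.
r = np.zeros(N3)
pos = 0
while pos < N3:
    if rng.random() < 0.5:
        r[pos] = rng.choice([-1, 1])
        pos += 3                      # enforce minimum spacing
    else:
        pos += 1

# Rows of Theta are N1-element segments of r, shifted by m per row.
Theta = np.stack([r[i * m : i * m + N1] for i in range(M)])

# Each row of Phi is a row of Theta convolved with the central-difference
# kernel psi (partial first derivative with respect to rows).
psi = np.array([1.0, 0.0, -1.0])
Phi = np.stack([np.convolve(row, psi, mode="same") for row in Theta])

assert set(np.unique(Phi)) <= {-1.0, 0.0, 1.0}   # range limited to hardware levels
```

Because non-zeros of r are at least three samples apart, the two taps of ψ never overlap two non-zeros at once, so every element of Φ stays within the discrete levels the hardware can realize.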
  • the process buffers a measurement Y of the image signal.
  • The measurement is comprised of the column vectors y_j of the measurement of the image intensity signal.
  • the measurement of the image signal is formed by circuitry of or associated with the image sensor 211, and the measurement may be stored in memory of or associated with the image processor 220.
  • the image sensor and the image processor for the embodiment of FIG. 2 A and the other embodiments may be coupled by a serial data link, in some embodiments, or a parallel data link, in other embodiments.
  • operations of blocks 225- 231, discussed below, may also be performed by circuitry of or associated with the image processor.
  • In block 225, the process forms W as a first estimate of the filtered image Z.
  • The estimate is determined by the product of the transpose of the random basis function Θ and the measurement Y.
  • In block 227, the process refines the estimate of the filtered image Z.
  • The estimate of the filtered image formed by the process of block 225 is refined by convolution with a kernel.
  • The laser-line may sometimes be modeled by a square pulse of finite width, where the width of the laser-line pulse is greater than (or equal to) the support of the filtering kernel ψ.
  • the refinement step of block 227 can be performed in block 225 by folding the kernel into the transpose of the random basis function ⁇ before computing its product with the measurement Y.
  • performing the operation by convolution in block 227 provides for a significant computational advantage in some embodiments where the matrix multiplication of block 225 is performed by methods of sparse matrix multiplication.
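The equivalence claimed here, refining by convolution after the multiply versus folding the kernel into the transpose of Θ first, follows from the linearity of convolution and can be checked numerically (NumPy sketch; the 3-tap box kernel is an arbitrary stand-in, since the excerpt does not specify the kernel):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N1, N2 = 16, 64, 8
Theta = rng.choice([-1, 0, 1], size=(M, N1), p=[0.15, 0.7, 0.15])
Y = rng.choice([-1, 1], size=(M, N2)).astype(float)   # stand-in 1-bit measurement

# Block 225: first estimate of the filtered image, W = Theta^T Y.
W = Theta.T @ Y

# Block 227: refine by convolving each column with a small kernel
# (a 3-tap box here, purely for illustration).
kernel = np.array([1.0, 1.0, 1.0])
Z = np.stack([np.convolve(W[:, j], kernel, mode="same") for j in range(N2)], axis=1)

# Equivalent: fold the kernel into Theta^T first, then multiply by Y.
Theta_f = np.stack([np.convolve(Theta.T[:, i], kernel, mode="same")
                    for i in range(M)], axis=1)
assert np.allclose(Z, Theta_f @ Y)
```

When Θ^T is stored and multiplied as a sparse matrix, keeping the convolution as a separate small-kernel pass (rather than densifying Θ^T by folding in the kernel) preserves the sparsity that makes the multiply cheap, which is the computational advantage noted above.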
  • Block 229 buffers a final estimate of the filtered image Z. Locations of edges of laser lines in the estimate are determined by the process in block 231, for example using a peak detection algorithm.
  • FIG. 2B is a flow diagram of an example of another process depicting computations performed by the machine vision system 100.
  • the process of FIG. 2B takes advantage of the a priori knowledge that the temporal image stream formed of an illumination plane passing over a 3-dimensional object of interest is more generally sparse than anticipated by methods of FIG. 2A; the image signal being sparse and/or compressible, not only with respect to the row dimension of the signal X, but also with respect to columns and with respect to time.
  • adjacent columns j of X are likely to be very similar, i.e., highly correlated with each other.
  • the image signal X is typically very similar from one frame time to another.
  • a frame time may be, for example, a time period in which M samples are obtained for each of the columns of the image signal.
  • FIG. 2B shows computations of a vision system, similar to that of FIG. 2A, except that the random basis function ⁇ and sampling function ⁇ are partitioned into multiple independent segments, and these segments are used in a spatiotemporally interleaved fashion.
  • The spatiotemporal interleaving guarantees that, in any given frame time t, no column j of the image is sampled with the same pattern as either of its spatial neighbors j−1 or j+1, and that the sampling pattern used in the current frame time is different from the sampling pattern of the previous frame time and the sampling pattern of the next frame time.
  • FIG. 2B shows, what may be thought of as, nine smaller sampling functions used over three frame times, three sampling functions being applied concurrently to X at any given time t.
  • this method allows the number of samples M per frame-time t to be reduced relative to the methods outlined in FIG. 2A, while maintaining the same error tolerance associated with the binary e-stable embedding of the signal Z, and thereby providing for significantly more computational efficiency relative to the vision system of FIG. 2A.
  • FIG. 2B shows the use of both spatial and temporal interleaving of the sampling function, in alternative embodiments, however, use of sampling functions may be interleaved in space only, or in time only.
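One way to realize the interleaving rule described above is a simple modular assignment (a sketch; the indexing scheme is an assumption, not quoted from the patent):

```python
def sampling_function_id(column_j, frame_t):
    """Which of the nine sampling functions Phi_{h,k} samples column j at
    frame time t: h is the spatial phase, k the temporal phase."""
    h = column_j % 3 + 1
    k = frame_t % 3 + 1
    return (h, k)

# No column shares a pattern with its spatial neighbors at any frame time,
# and no column repeats its own pattern on adjacent frame times:
for t in range(6):
    for j in range(1, 8):
        assert sampling_function_id(j, t) != sampling_function_id(j - 1, t)
        assert sampling_function_id(j, t) != sampling_function_id(j, t + 1)
```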
  • The symbol X_t, X ∈ ℝ^(N1×N2), represents an image intensity signal as it exists on the N1 pixel rows and N2 pixel columns of the pixel array at time t.
  • The symbol Ψ, Ψ ∈ {−1,0,1}^(N1×N1), represents an image filtering function comprised of, and in some embodiments consisting of, coefficients used to compute a central difference approximation of the partial first derivative.
  • The symbol r, r ∈ {−1,0,1}^(N3), represents a sparse random sequence, which in some embodiments is based on a Markov chain of order m, where m > 1.
  • The symbol Y_t, Y ∈ {−1,1}^(M×N2), represents a measurement of the filtered image intensity signal at time t, formed from the product of the sampling functions Φ_1,k, Φ_2,k and Φ_3,k and the image signal X_t, quantized by sign(.) to two levels {−1,1}.
  • The symbol W_t, W ∈ {−M,…,M}^(N1×N2), represents an estimate of the filtered image signal, formed from the product of the measurement Y_t and the transpose of the random basis functions Θ_1,k, Θ_2,k and Θ_3,k, convolved by the kernel.
  • The symbol Z_(t−1), Z ∈ {−M,…,M}^(N1×N2), represents an estimate of the product of the original image signal X and the filtering function Ψ, formed from the sum of W_t, W_(t−1) and W_(t−2).
  • The symbol Δ, Δ ∈ {0,1,2,…,N1}^(P×N2), represents image offset parameters of the local signal extremes, i.e., the P relevant signal peaks of the signal Z on each column at time t−1.
  • In block 255, the process of FIG. 2B receives information representative of light energy of a scene, and in block 256 the process iteratively generates vectors of a measurement of image intensity signals, based on the relative light energy of the image of the scene.
  • the functions provided by blocks 255 and 256 may be performed using an image sensor 251.
  • Sampling functions are used, interleaved spatially and temporally.
  • Three different sampling functions are used at any frame time t, with a prior frame time and a succeeding frame time using different sets of three sampling functions.
  • The nine sampling functions, or information to generate the sampling functions, may be dynamically generated, for example at blocks 259, 261 and/or 262, and/or stored in memory 291 of, or associated with, the image sensor 251.
  • the process buffers a measurement Y t of the image signal X at frame time t.
  • the measurement Y of the image signal is formed by circuitry of or associated with an image sensor and stored in memory of or associated with an image processor.
  • operations of blocks 265-281, discussed below, may also be performed by circuitry of or associated with the image processor.
  • In block 265, the process computes partial estimates of the filtered image signal Z.
  • The estimate W is determined by taking the product of the transpose of the corresponding random basis function Θ_h,k (293) and the measurement Y_t, with a new estimate W formed for each frame time t.
  • In block 267, the process convolves the partial sums emitted by block 265 with the kernel, which, in addition to refining the estimate of the filtered image as described earlier with respect to FIG. 2A, combines neighboring column vectors, such that each column vector is replaced by the sum of itself and its immediate neighbors on the left and right.
  • The process combines the partial sums output by block 267 over the previous three frame times (269) to form the final estimate of the filtered image signal Z at frame time t−1, storing the result in block 280.
  • parameters of the illumination plane are determined by the process in block 281, for example using a peak detection algorithm.
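The peak-detection step can be as simple as a per-column argmax over the estimate Z (NumPy sketch with synthetic data; a real implementation would add thresholding and subpixel refinement, e.g. a centroid around each peak):

```python
import numpy as np

# Toy filtered-image estimate Z: one dominant peak per column.
Z = np.zeros((12, 4))
true_rows = [3, 5, 5, 8]
for j, r in enumerate(true_rows):
    Z[r, j] = 10.0
    Z[r - 1, j] = Z[r + 1, j] = 4.0   # shoulders of the laser-line pulse

# Simplest peak detector: per-column argmax gives the row offset of the
# illumination plane in each column.
offsets = Z.argmax(axis=0)
assert offsets.tolist() == true_rows
```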
  • FIG. 3 is a high-level block-diagram depicting an image sensor architecture.
  • the image sensor of FIG. 3 can be used in the machine vision system 100 in conjunction with either of the processes described above in connection with FIGs. 2A-2B.
  • The image sensor of FIG. 3 includes sampling function storage buffer 300; sampling function shift register input buffers 311, 312, 313; sampling function shift registers 321, 322, 323; pixel array 301 with pixel columns 331, 332, 333, 334 included therein; analog signal comparator array 340, including analog signal comparators 341, 342, 343, and 344; 1-bit digital output signal lines 351, 352, 353, 354; and output data multiplexer 302.
  • Each of the pixel columns includes a plurality of pixel elements.
  • each pixel element includes a radiation sensitive sensor (light sensitive in most embodiments) and associated circuitry.
  • Pixel elements of pixel array 301 accumulate photo-generated electrical charge at local charge storage sites.
  • the photo-generated charge on the image sensor pixels may be considered an image intensity signal in some aspects.
  • each pixel element includes a fixed capacitance that converts accumulated charge into a pixel voltage signal.
  • Each pixel voltage signal controls a local current source, so as to provide for a pixel current signal.
  • the pixel current source can be selected and switched, under the control of a sampling function, onto one of the two signal output lines 314 available per pixel column.
  • a pair of output lines 314 associated with a column of the image sensor of FIG. 3 is also referred to as a pixel output bus.
  • a pixel output bus 314 is shared by all pixels on a column, such that each of the two current output signals formed on a column represent the summation of current supplied by selected pixels.
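The shared-bus readout just described, in which each selected pixel steers its current onto one of the two column output lines and each line carries the sum of its selected pixels, can be modeled as follows. The per-pixel currents, the ternary selection code (+1 for the first line, −1 for the second, 0 for deselected), and the sign-of-difference comparison are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rows = 32
pixel_current = rng.uniform(0.0, 1.0, size=n_rows)  # assumed per-pixel current signals

# Ternary selection per pixel: +1 -> line A, -1 -> line B, 0 -> deselected.
select = rng.choice([-1, 0, 1], size=n_rows)

# Each shared output line carries the sum of the currents steered onto it.
i_line_a = pixel_current[select == 1].sum()
i_line_b = pixel_current[select == -1].sum()

# Downstream circuitry (mirror plus comparator) effectively senses the sign
# of the difference between the two line currents, yielding a 1-bit result.
measurement_bit = 1 if i_line_a > i_line_b else 0
print(i_line_a, i_line_b, measurement_bit)
```

The differential structure is what lets a two-conductor bus realize a signed (ternary) measurement coefficient per pixel.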
  • As may be seen from the use of the three sampling function shift registers, the embodiment of FIG. 3 is suited for use in a system implementing spatial interleaving (and spatio-temporal interleaving) as discussed with respect to FIG. 2B.
  • the architecture of FIG. 3 may also be used for the non-interleaved embodiment of FIG. 2A, with either the three shift registers filled with identical information, or with the three shift registers replaced by a single register.
  • the rows of the sampling function Φ are dynamically formed from the contents of a memory buffer using shift registers.
  • Sampling function shift register 321, which contains Φ_1, provides the pixel output control signals for all pixels in columns {1, 4, 7, ...}.
  • Sampling function shift register 322, which contains Φ_2, provides the output control for all pixels in columns {2, 5, 8, ...}.
  • Sampling function shift register 323, which contains Φ_3, provides the pixel output control signals for all pixels in columns {3, 6, 9, ...}.
  • the sampling function storage buffer 300 is a digital memory buffer holding pixel control signals, each pixel control signal consisting of 2 bits representing which, if any, of the two current output lines 314 is to be selected.
  • the digital memory holding the pixel output control signals is accessed as words of 2m bits in length, where m > 2·supp(ψ).
  • m = 16 > 2·supp(ψ), and the memory data width is 32 bits.
  • sampling function shift registers 321, 322, 323 further comprise an N_x-element-long shadow register to provide a means of maintaining the state of the pixel output control signals applied to pixel array 301 while the next shift operation occurs.
  • sampling function memory buffer 300 is accessed in a cyclical pattern such that the process of filling shift registers 321,322,323 with the first row need only be performed once, on power-up initialization.
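The 2-bit-per-pixel packing described above (m = 16 pixel control codes per 32-bit memory word) can be sketched as a pack/unpack round trip. The code assignment (0 = deselected, 1 = first output line, 2 = second output line) is an assumption for illustration; the source only states that each control signal is 2 bits wide.

```python
# Pack m = 16 two-bit pixel output control codes into one 32-bit word,
# and unpack them again, mirroring the buffer layout described above.
# Assumed encoding: 0 = deselected, 1 = first line, 2 = second line.
M_CODES = 16

def pack(codes):
    assert len(codes) == M_CODES and all(0 <= c <= 3 for c in codes)
    word = 0
    for i, c in enumerate(codes):
        word |= c << (2 * i)   # two bits per pixel, least-significant first
    return word

def unpack(word):
    return [(word >> (2 * i)) & 0b11 for i in range(M_CODES)]

codes = [0, 1, 2, 0, 1, 1, 2, 0, 0, 2, 1, 0, 2, 2, 1, 0]
word = pack(codes)
print(f"0x{word:08X}")
```

One 32-bit memory access thus refreshes the control state of 16 pixels, which is what makes the cyclic shift-register fill described above efficient.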
  • FIG. 4 is a circuit diagram showing more detailed aspects of portions of an image sensor in accordance with aspects of the disclosed technologies.
  • the portions of the image sensor of FIG. 4 are, in some embodiments, portions of the image sensor of FIG. 3.
  • Each of the columns of the image sensor of FIG. 4 includes two current output lines 414 to which all the pixels of the column are coupled. Each of the two current output lines is a conductor.
  • the two current output lines 414 lead to a current comparator 404 through a current conveyor 401, a current limiter 402, and a current mirror 403.
  • Each of the rows of the image sensor of FIG. 4 includes pixel output control lines 405.
  • the pixel output control lines 405 include a pair of pixel output control lines connected with a pixel of col(j−1) and other pixels of every third column of the image sensor of FIG. 4; a pair of pixel output control lines connected with a pixel of col(j) and other pixels of every third column of the image sensor of FIG. 4; and a pair of pixel output control lines connected with a pixel of col(j+1) and other pixels of every third column of the image sensor of FIG. 4.
  • each pixel of the pixel array 400 includes a pinned photodiode 406, a reset transistor 407, a transfer gate 408, a transconductor 409, output select transistors 410, 411 and floating diffusion node 412.
  • the pinned photodiode 406 can be reset through reset transistor 407, allowed to accumulate photo-generated electric charge for an exposure period, with the charge transferred to the floating diffusion node 412 through transfer gate 408 for temporary storage.
  • the voltage V_FD at the floating diffusion node 412 controls transconductor 409 to provide a current source that is proportional to the voltage signal.
  • the current from a pixel can be switched through transistors 410 or 411 to one of the two current output lines 414 shared by all the pixels on a column. For this reason, the two current output lines 414(j−1) form a pixel output bus for the column.
  • the two current output lines 414(j) for each column j are also referred to as conductors of the pixel output bus.
  • the column output currents represent the simple sum of the currents from selected pixels, but in practice, there are additional factors.
  • a more realistic estimate includes offset and gain error introduced by the readout circuitry blocks and the non-linearity error introduced by transconductor 409, modeled by coefficients a, b and c.
  • the coefficients depend on the operating point of the transistor (V_dd, V_o+ and V_o−). Although the coefficients a, b and c are approximately equal for all pixels, some mismatch may need to be considered.
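The source text refers to an equation for the realistic column current that is not reproduced here. As a loudly labeled assumption, the sketch below uses a second-order polynomial per-pixel current model with offset a, gain b, and non-linearity c, with small per-pixel mismatch, to illustrate how such errors perturb the ideal sum of pixel signals.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 24
v_fd = rng.uniform(0.2, 1.0, size=n)   # floating-diffusion voltages (arbitrary units)

# Assumed model (NOT the patent's equation, which is not reproduced in the
# source): each pixel current is a + b*V + c*V**2, with per-pixel mismatch
# around nominal coefficients a0, b0, c0.
a0, b0, c0 = 0.01, 1.0, -0.05
a = a0 + rng.normal(0, 1e-3, n)
b = b0 + rng.normal(0, 1e-2, n)
c = c0 + rng.normal(0, 1e-3, n)

ideal_sum = v_fd.sum()                          # ideal: simple sum of pixel signals
real_sum = (a + b * v_fd + c * v_fd**2).sum()   # with offset, gain and non-linearity
print(ideal_sum, real_sum, real_sum - ideal_sum)
```

The point of the model is qualitative: offset terms accumulate with the number of selected pixels, while gain and non-linearity terms scale with the signal, which is why the text considers their mismatch separately.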
  • Voltages V_o+ and V_o− of each column are fixed using respective current conveyors 401.
  • the current conveyor 401 is based on a single p-channel metal-oxide semiconductor (PMOS) transistor, where V_o− is held one threshold voltage V_t above the conveyor's bias voltage.
  • Current conveyor 401 is biased with a current I_cc to ensure the minimum speed necessary to fulfill the settling requirements.
  • the positive and negative branches are balanced using a current mirror 403, and the sign is obtained using current comparator 404.
  • a current limiter 402 is included to avoid break-off problems caused by image columns having an excessive number of bright pixels driving the column output lines.
  • a first select transistor of a pixel on an even row of the pixel array is connected to the first conductor of its column output bus and the second output select transistor is connected to the second conductor of its column output bus, as described above in connection with the embodiment shown in FIG. 4.
  • the first select transistor of a pixel in odd rows of the pixel array is connected to the second output conductor and the second output select transistor is connected to the first output conductor, and, in odd rows, the state of the output select signals applied to the pixel is inverted.
  • this arrangement of pixel connections provides for measurement coefficients that are substantially free of the effects associated with capacitive coupling of pixel floating-diffusion nodes with pixel output select signal conductors.
  • each pixel must be in one of two output selection states, implying that there are a total of 8 possible multiplicative error coefficients corresponding to the 8 equally probable patterns of 3:1 interleaved pixel selection.
  • Given a pixel's output selection state, there exists a first subset of 4 equally probable error coefficients corresponding to one output selection state and a second, complementary subset of 4 equally probable error coefficients corresponding to the other output selection state.
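The counting argument above (three interleaved phases, each in one of two selection states, giving 2^3 = 8 equally probable patterns that split 4/4 once one pixel's own state is fixed) can be checked directly. The binary pattern encoding below is purely illustrative.

```python
from itertools import product

# Each of the three interleaved phases is in one of two output selection
# states, giving 2**3 = 8 equally probable selection patterns.
patterns = list(product((0, 1), repeat=3))
assert len(patterns) == 8

# Fix the selection state of the phase a given pixel belongs to (say phase 0):
# the patterns split into two complementary subsets of 4 error coefficients.
subset_state0 = [p for p in patterns if p[0] == 0]
subset_state1 = [p for p in patterns if p[0] == 1]
print(len(subset_state0), len(subset_state1))
```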
  • FIG. 5 is a circuit diagram showing portions of an image sensor in accordance with some embodiments of the disclosed technologies.
  • the image sensor shown in FIG. 5 can be used in the machine vision system 100 in conjunction with processes similar to the ones described above in connection with FIGs. 2A-2B.
  • each pixel (e.g., each of the pixels 501, 502, 503) includes a pinned photodiode 506, a reset transistor 507, a transfer gate 508, a transconductor 509, a first output select transistor 510, a second output select transistor 511, and a floating-diffusion node 512.
  • the pinned photodiode 506 can be reset through reset transistor 507, allowed to accumulate photo-generated electrical charge for an exposure period, with the charge transferred to floating-diffusion node 512 through transfer gate 508 for temporary storage.
  • the voltage at the floating-diffusion node controls transconductor 509 to supply output current that is proportional to the voltage signal sensed on floating-diffusion node 512.
  • a pixel output current can be switched to conductors 514a, 514b of pixel output bus 514 through activation of the first output select transistors 510 or the second output select transistors 511 according to the state of a pixel output control bus 505.
  • the pixel output control bus 505 is also referred to as a pixel select line.
  • the circuit diagram of FIG. 5 depicts parasitic capacitive coupling elements 521, 531, 541.
  • the state of the pixel select bus 505 may influence the effective capacitance of floating-diffusion node 512 and therefore the voltage that is supplied to transconductor 509, for a fixed amount of stored charge.
  • the pixels of the image sensor are arranged in a pixel-array having rows 535, 536, 537 and columns 545, 546, 547.
  • row 536 of the pixel-array 500 is arranged such that first output select transistor 510 switches the pixel output to the first conductor 514a of the pixel's column output bus 514, and second output select transistor 511 switches the pixel output to the second conductor 514b. On rows 535 and 537 the configuration is reversed, by means of crossovers 513, such that the first output select transistor 510 switches the pixel's output to the second conductor 514b, and the second output select transistor switches the pixel output to the first conductor 514a of the pixel's column output bus 514.
  • a first subset of pixels, including pixel 502, etc., are coupled to respective output control buses 505 to receive a first pixel output control signal (e.g. 1,0) to switch pixel output to the first output conductor 514a, and to receive a second pixel output control signal (e.g. 0,1) to switch pixel output to the second output conductor 514b.
  • each of the first and second subsets of pixels is a proper subset including one or more pixels.
  • each crossover 513 is formed by swapping the first conductor 514a and the second conductor 514b of a pixel output bus 514.
  • each crossover 515 is formed by swapping a pair of conductors of a pixel output control bus 505.
  • the pixel output configuration can be reversed on rows 535 and 537 without using crossovers 513 along the pixel output bus 514.
  • uncrossed first and second conductors are used for the pixel output bus, like in the case of the pixel output bus 414 shown in FIG. 4.
  • for pixels in row 536, an output terminal of the first output select transistor 510 extends, and is directly connected, to the first conductor of the pixel's column output bus, and an output terminal of the second output select transistor 511 extends, and is directly connected, to the second conductor of the pixel's column output bus; while for pixels in rows 535 and 537, an output terminal of the first output select transistor 510 extends, and is directly connected, to the second conductor of the pixel's column output bus, and an output terminal of the second output select transistor 511 extends, and is directly connected, to the first conductor of the pixel's column output bus.
  • the pixel output control signals supplied on pixel output control bus 505 can be inverted by other means, for example by negating coefficients (comprising ternary values +1, 0 or −1) on every other column of sampling matrix Φ.
  • the negating of the noted coefficients can be performed upon retrieval of the sampling matrix Φ from storage 216 or 291.
  • columns of the sampling matrix are related through the readout operation to rows of the image, and negating the coefficients on every other column of the sampling matrix Φ represents a functionally equivalent alternative to providing crossovers 515.
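The functionally equivalent alternative described above, negating the ternary coefficients on every other column of the sampling matrix instead of providing crossovers 515, can be sketched as a simple sign flip applied on retrieval. The matrix size and contents are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
Phi = rng.choice([-1, 0, 1], size=(8, 10))   # assumed ternary sampling matrix in storage

# Negate coefficients on every other column upon retrieval from storage
# (these columns correspond, through the readout operation, to rows of
# the image, i.e. to alternating pixel rows of the array).
Phi_alt = Phi.copy()
Phi_alt[:, 1::2] *= -1

# The magnitudes are unchanged; only the signs on alternate columns flip,
# so applying the flip twice recovers the original matrix.
assert np.array_equal(np.abs(Phi_alt), np.abs(Phi))
print(Phi_alt[:2])
```

This is attractive in practice because it moves the row-alternating inversion out of the pixel-array wiring and into a trivial digital operation on the stored matrix.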

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

An image sensor for forming projective measurements includes a pixel array in which each pixel is coupled to conductors of a pixel output control bus and to a pair of conductors of a pixel output bus. In some pixels, the pattern of coupling to the pixel output bus is reversed, which advantageously suppresses the image noise, induced by the pixel output control signals, that would otherwise be present in the vectors of the projective basis.
PCT/US2017/034830 2016-05-27 2017-05-26 Method of reducing error induced in image sensor projective measurements by pixel output control signals WO2017205829A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662342632P 2016-05-27 2016-05-27
US62/342,632 2016-05-27

Publications (1)

Publication Number Publication Date
WO2017205829A1 true WO2017205829A1 (fr) 2017-11-30

Family

ID=59153274

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/034830 WO2017205829A1 (fr) Method of reducing error induced in image sensor projective measurements by pixel output control signals

Country Status (1)

Country Link
WO (1) WO2017205829A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150062396A1 (en) * 2013-08-29 2015-03-05 Kabushiki Kaisha Toshiba Solid-state imaging device
US20160010990A1 (en) * 2013-03-20 2016-01-14 Cognex Corporation Machine Vision System for Forming a Digital Representation of a Low Information Content Scene

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11904996B2 (en) 2018-11-01 2024-02-20 Brunswick Corporation Methods and systems for controlling propulsion of a marine vessel to enhance proximity sensing in a marine environment
US12084160B2 (en) 2018-11-01 2024-09-10 Brunswick Corporation Methods and systems for controlling low-speed propulsion of a marine vessel
US11794865B1 (en) 2018-11-21 2023-10-24 Brunswick Corporation Proximity sensing system and method for a marine vessel
US11816994B1 (en) 2018-11-21 2023-11-14 Brunswick Corporation Proximity sensing system and method for a marine vessel with automated proximity sensor location estimation
US12046144B2 (en) 2018-11-21 2024-07-23 Brunswick Corporation Proximity sensing system and method for a marine vessel
US11862026B2 (en) 2018-12-14 2024-01-02 Brunswick Corporation Marine propulsion control system and method with proximity-based velocity limiting
US11804137B1 (en) 2018-12-21 2023-10-31 Brunswick Corporation Marine propulsion control system and method with collision avoidance override
US11600184B2 (en) 2019-01-31 2023-03-07 Brunswick Corporation Marine propulsion control system and method
US11702178B2 (en) * 2019-01-31 2023-07-18 Brunswick Corporation Marine propulsion control system, method, and user interface for marine vessel docking and launch
US12024273B1 (en) 2019-01-31 2024-07-02 Brunswick Corporation Marine propulsion control system, method, and user interface for marine vessel docking and launch
US12125389B1 (en) 2023-11-20 2024-10-22 Brunswick Corporation Marine propulsion control system and method with proximity-based velocity limiting

Similar Documents

Publication Publication Date Title
WO2017205829A1 (fr) Method of reducing error induced in image sensor projective measurements by pixel output control signals
US10630960B2 (en) Machine vision 3D line scan image acquisition and processing
US10677593B2 (en) Machine vision system for forming a digital representation of a low information content scene
US10284793B2 (en) Machine vision system for forming a one dimensional digital representation of a low information content scene
EP0135578B1 (fr) Resolution enhancement and zoom
Benosman et al. Asynchronous event-based Hebbian epipolar geometry
Kuo et al. DiffuserCam: diffuser-based lensless cameras
WO2018057063A1 (fr) Système de vision artificielle pour capturer une image numérique d'une scène faiblement éclairée
Lee et al. Dual-branch structured de-striping convolution network using parametric noise model
Luo et al. A novel integration of on-sensor wavelet compression for a CMOS imager
Yang Analog CCD processors for image filtering
Hong et al. On-chip binary image processing with CMOS image sensors
Meynants et al. Sensor for optical flow measurement based on differencing in space and time
Navarro et al. A block matching approach for movement estimation in a CMOS retina: principle and results
Cho Three-Dimensional Target Recognition Under Photon-Starved Conditions Using Photon Counting Axially Distributed Sensing and Nonlinear Correlation
Marcia et al. Fast disambiguation of superimposed images for increased field of view
Yang et al. MWIR image deep denoising reconstruction based on single-pixel imaging
Hasler et al. Low-power analog image processing using transform imagers
Schuler et al. Alias reduction and resolution enhancement by a temporal accumulation of registered data from focal plane array sensors
Pitsianis et al. The MONTAGE least gradient image reconstruction
Gruev et al. On-chip normal flow computation with aperture problem compensation circuitry
Yang The architecture and design of CCD processors for computer vision
Ghannoum et al. Image processing system dedicated to a visual intra‐cortical stimulator

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17732642

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17732642

Country of ref document: EP

Kind code of ref document: A1