WO2020075357A1 - Élément d'imagerie à semi-conducteur, dispositif d'imagerie, et procédé de contrôle d'élément d'imagerie à semi-conducteur - Google Patents


Info

Publication number
WO2020075357A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
correction
dark current
solid
value
Application number
PCT/JP2019/027623
Other languages
English (en)
Japanese (ja)
Inventor
中村 信男
正武 尾崎
Original Assignee
Sony Semiconductor Solutions Corporation (ソニーセミコンダクタソリューションズ株式会社)
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2020075357A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/63Noise processing, e.g. detecting, correcting, reducing or removing noise applied to dark current

Definitions

  • The present technology relates to a solid-state imaging device, an imaging device, and a method for controlling the solid-state imaging device. More specifically, it relates to a solid-state imaging device that removes dark current noise, an imaging device, and a method for controlling the solid-state imaging device.
  • In a conventional technique, the solid-state image sensor corrects light-dark distortion in block units using a correction coefficient.
  • However, since the dark current in the solid-state image sensor is not always constant and may change locally, sufficient correction accuracy may not be obtained with the above-described conventional technique. For example, when the dark current changes locally within a block of a certain shape, the per-block correction cannot completely remove the light-dark distortion due to dark current noise, and the correction accuracy deteriorates.
  • The present technology was created in view of such circumstances, and its purpose is to improve the correction accuracy in a solid-state image sensor that corrects light-dark distortion.
  • A first aspect of the present technology is a solid-state image sensor, and a method for controlling it, including: a region dividing unit that divides a pixel array unit, in which a plurality of pixels are arranged in a two-dimensional lattice shape, into a plurality of regions based on the distribution of dark current in the pixel array unit; a correction value holding unit that holds the value of the dark current at each boundary of the plurality of regions as a correction value for dark current correction; and a correction unit that performs the dark current correction on the pixel signal of each of the plurality of pixels using the correction value. This brings about the effect that the dark current correction is performed with a correction value for each region divided based on the distribution of the dark current.
  • Alternatively, a plurality of rows may be arranged in the pixel array section, with a predetermined number of pixels arranged in a predetermined direction in each of the plurality of rows, and the area dividing section may acquire the distribution for each of the plurality of rows and set the boundary based on the distribution. This brings about the effect that the pixel array section is divided into a plurality of regions by boundaries set for each row.
  • Alternatively, the pixel array section may be divided into a plurality of vertical divided areas at regular intervals in the direction perpendicular to the predetermined direction, and the area dividing section may acquire the distribution for each of the plurality of vertical divided areas and set the boundary based on the distribution. This brings about the effect that the pixel array section is divided into a plurality of areas by the boundary set for each vertical divided area.
  • An environment information acquisition unit that acquires environment information including at least one of temperature, power consumption, and voltage may further be provided, and the correction unit may perform the dark current correction based on the environment information and the correction value. This brings about the effect that the dark current correction is performed according to the dark current distribution and the environment information.
  • Alternatively, a plurality of pixel blocks, in each of which a plurality of pixels share a floating diffusion layer, may be arranged in the pixel array section, and the area dividing section may divide the pixel array section into the plurality of regions based on the distribution.
  • the area dividing unit may obtain, as the boundary, coordinates at which the amount of change in dark current in the distribution exceeds a predetermined threshold. This brings about the effect that the dark current correction is performed by the correction value for each area divided by the coordinates in which the change amount of the dark current exceeds the threshold value.
  • the area dividing unit may obtain the inflection point of the function indicating the distribution as the boundary. This brings about the effect that the dark current correction is performed by the correction value for each region divided at the inflection point.
  • The correction unit may interpolate the value of the dark current at coordinates that do not correspond to the boundary from the correction values and perform the dark current correction using the interpolated value. This brings about the effect that the capacity of the correction value holding unit is reduced.
  • the correction unit may obtain the interpolation value by linear interpolation. This brings about the effect that the dark current correction is performed by the correction value obtained by the linear interpolation.
  • A second aspect of the present technology is an image pickup apparatus including: an area dividing section that divides a pixel array section, in which a plurality of pixels are arranged in a two-dimensional lattice shape, into a plurality of areas based on the distribution of dark current in the pixel array section; a correction value holding unit that holds the value of the dark current at each boundary of the plurality of areas as a correction value for dark current correction; a correction unit that performs the dark current correction on each pixel signal of the plurality of pixels using the correction value; and a signal processing unit that processes the pixel signals subjected to the dark current correction. As a result, the dark current correction is performed with a correction value for each area divided based on the distribution of the dark current, and the pixel signals after the dark current correction are processed.
  • FIG. 3 is an example of a plan view of a pixel array section according to the first embodiment of the present technology. FIG. 4 is a block diagram showing a configuration example of a column signal processing unit according to the first embodiment. FIG. 5 is a diagram showing an example of the distribution of dark levels and correction values according to the first embodiment. FIG. 6 is a diagram showing an example of the correction value for each boundary according to the first embodiment.
  • 1. First embodiment (example of division based on dark current distribution)
  • 2. Second embodiment (example of dividing into rectangular areas based on dark current distribution)
  • 3. Third embodiment (example in which division is performed based on dark current distribution and correction is performed based on environment information)
  • 4. Fourth embodiment (example of dividing based on the distribution of dark current in a pixel array section in which pixels sharing a floating diffusion layer are arranged)
  • FIG. 1 is a block diagram showing a configuration example of an imaging device 100 according to the first embodiment of the present technology.
  • the image pickup apparatus 100 is an apparatus for picking up image data, and includes an optical unit 110, a solid-state image pickup element 200, and a DSP circuit 120. Furthermore, the imaging device 100 includes a display unit 130, an operation unit 140, a bus 150, a frame memory 160, a storage unit 170, and a power supply unit 180.
  • As the imaging device 100, a digital camera such as a digital still camera, a smartphone having an imaging function, a personal computer, an in-vehicle camera, or the like is assumed.
  • the optical unit 110 collects light from a subject and guides it to the solid-state imaging device 200.
  • The solid-state image sensor 200 generates image data by photoelectric conversion in synchronization with the vertical synchronization signal VSYNC.
  • the vertical synchronization signal VSYNC is a periodic signal of a predetermined frequency that indicates the timing of image capturing.
  • the solid-state imaging device 200 supplies the generated image data to the DSP circuit 120 via the signal line 209.
  • the DSP circuit 120 executes predetermined signal processing on the image data from the solid-state image sensor 200.
  • the DSP circuit 120 outputs the processed image data to the frame memory 160 or the like via the bus 150.
  • the DSP circuit 120 is an example of the signal processing unit described in the claims.
  • the display unit 130 displays image data.
  • As the display unit 130, a liquid crystal panel or an organic EL (Electro Luminescence) panel is assumed.
  • the operation unit 140 generates an operation signal according to a user operation.
  • the bus 150 is a common path for the optical unit 110, the solid-state image sensor 200, the DSP circuit 120, the display unit 130, the operation unit 140, the frame memory 160, the storage unit 170, and the power supply unit 180 to exchange data with each other.
  • the frame memory 160 holds image data.
  • the storage unit 170 stores various data such as image data.
  • the power supply unit 180 supplies power to the solid-state imaging device 200, the DSP circuit 120, the display unit 130, and the like.
  • FIG. 2 is a block diagram showing a configuration example of the solid-state imaging device 200 according to the first embodiment of the present technology.
  • the solid-state imaging device 200 includes a row selection unit 210, a timing control unit 220, a DAC (Digital to Analog Converter) 230, a pixel array unit 250, a column signal processing unit 260, and a horizontal transfer scanning unit 270. Further, in the pixel array unit 250, a plurality of pixels are arranged in a two-dimensional lattice shape.
  • A set of pixels arranged in a predetermined direction is called a "row", and a set of pixels arranged in a direction perpendicular to the row is called a "column".
  • the number of rows of the pixel array section 250 is M (M is an integer), and the number of columns is N (N is an integer).
  • the row selection unit 210 sequentially selects and drives rows in synchronization with a horizontal sync signal having a frequency higher than that of the vertical sync signal VSYNC.
  • Pixel signals of each column are transmitted from the N columns of pixels in the pixel array unit 250 to the column signal processing unit 260 via N vertical signal lines 259.
  • the DAC 230 generates a ramp signal that changes over time by DA (Digital to Analog) conversion of the digital signal from the timing control unit 220.
  • the DAC 230 supplies the ramp signal to the column signal processing unit 260.
  • the timing control unit 220 controls the operation timing of each of the row selection unit 210, the DAC 230, the column signal processing unit 260, and the horizontal transfer scanning unit 270 in synchronization with the vertical synchronization signal VSYNC.
  • the column signal processing unit 260 executes predetermined signal processing on pixel signals for each column.
  • This signal processing includes AD (Analog to Digital) conversion processing for converting an analog pixel signal into a digital signal, and dark current correction processing for correcting light-dark distortion due to dark current.
  • the column signal processing unit 260 outputs the signal-processed digital signal to the DSP circuit 120 under the control of the horizontal transfer scanning unit 270.
  • Image data is generated by M ⁇ N digital signals.
  • the horizontal transfer scanning unit 270 controls the column signal processing unit 260 to sequentially output digital signals.
  • FIG. 3 is an example of a plan view of the pixel array section according to the first embodiment of the present technology.
  • the pixel array section 250 is provided with an effective area 253 and a light shielding area 251.
  • The area surrounded by a one-dot chain line in the figure is the effective area 253, and the area around the effective area 253 is the light shielding area 251.
  • the effective area 253 is an area that is not shielded from light, and a plurality of effective pixels 254 are arranged in a two-dimensional grid pattern.
  • the light shielding area 251 is a light shielded area, and a plurality of light shielding pixels 252 are arranged.
  • the effective pixel 254 is a pixel that is not shielded from light and generates and outputs a pixel signal by photoelectric conversion of incident light.
  • the light-shielded pixel 252 is a light-shielded pixel and outputs a signal corresponding to the dark current as a pixel signal.
  • FIG. 4 is a block diagram showing a configuration example of the column signal processing unit 260 according to the first embodiment of the present technology.
  • the column signal processing unit 260 includes an analog / digital conversion unit 261, an area dividing unit 262, a correction value setting unit 263, a correction value holding unit 264, and a shading correction unit 265.
  • a mode signal MODE is also input to the column signal processing unit 260.
  • the mode signal MODE is a signal for setting any one of a plurality of modes including a correction mode and an imaging mode.
  • the correction mode is a mode in which the solid-state imaging device 200 generates and holds a correction value for dark current correction. In this correction mode, the image data is captured as a dark image with the entire pixel array section 250 shielded from light.
  • the imaging mode is a mode in which the solid-state imaging device 200 performs dark current correction on a pixel signal for each pixel and generates image data. In this imaging mode, the pixel array section 250 is not shielded during imaging.
  • the correction mode is set, for example, at the time of factory shipment of the image pickup apparatus 100.
  • the analog-to-digital converter 261 converts the analog pixel signals Ain1 to AinN into digital signals Dout1 to DoutN using the ramp signal.
  • the pixel signal Ainj (j is an integer from 1 to N) is the pixel signal in the j-th column.
  • the analog-digital conversion unit 261 is realized by, for example, an ADC provided for each column.
  • the AD conversion for N columns is sequentially executed for each of M rows. These AD conversions generate M ⁇ N digital signals.
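The column-parallel conversion described here can be illustrated with a toy single-slope (ramp) ADC model. This is an illustrative sketch only; the function name `ramp_adc`, the step size, and the counter width are assumptions, not values from the document:

```python
# Hypothetical sketch of single-slope (ramp) AD conversion: a counter runs
# while the DAC ramp is still below the analog pixel level; the final count
# is the digital value.
def ramp_adc(analog_level, ramp_step=0.5, max_count=1023):
    """Count ramp steps until the ramp crosses the analog pixel level."""
    ramp = 0.0
    count = 0
    while ramp < analog_level and count < max_count:
        ramp += ramp_step
        count += 1
    return count

# Each of the N columns converts in parallel; rows are converted sequentially,
# yielding M x N digital values per frame.
row = [1.2, 3.7, 2.0]                  # analog pixel levels of one row
digital = [ramp_adc(a) for a in row]
```

With a coarser ramp step the conversion is faster but less precise, which mirrors the usual resolution/speed trade-off of single-slope ADCs.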
  • the analog-digital conversion unit 261 supplies the digital signal to the area division unit 262, the correction value setting unit 263, and the shading correction unit 265.
  • the area dividing section 262 divides the effective area 253 in the pixel array section 250 into a plurality of divided areas based on the distribution of dark current in the pixel array section 250.
  • The area dividing unit 262 acquires the distribution of the dark current in each row and sets the boundaries of the plurality of areas based on the distribution.
  • The area dividing unit 262 sets the values of the digital signals of the N columns indicating the dark current as objective variables y_1 to y_N, and sets the X coordinates of the N columns as explanatory variables x_1 to x_N. Then, the area dividing unit 262 obtains a function, such as a polynomial, of the following form by curve fitting:
  • y_j = f(x_j)
  • For the fitting, the least squares method is used, which finds the fitting parameters of the function f(x_j) so that the error J = Σ_j {y_j − f(x_j)}² is minimized. If the function f(x_j) is a polynomial, each coefficient of its terms is found as a fitting parameter.
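As an illustration of the least-squares curve fitting described above, the following minimal sketch fits a polynomial by solving the normal equations. The helper `polyfit`, the degree, and the sample data are hypothetical, not taken from the document:

```python
# Minimal least-squares polynomial fit: the coefficients of f(x) minimizing
# J = sum_j (y_j - f(x_j))^2 are obtained by solving the normal equations.
def polyfit(xs, ys, degree):
    n = degree + 1
    # Normal equations A c = b, with A[i][k] = sum x^(i+k), b[i] = sum y*x^i
    A = [[sum(x ** (i + k) for x in xs) for k in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= m * A[col][k]
            b[r] -= m * b[col]
    # Back substitution
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (b[i] - sum(A[i][k] * c[k] for k in range(i + 1, n))) / A[i][i]
    return c  # f(x) = c[0] + c[1]*x + c[2]*x^2 + ...

xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.0, 7.0, 13.0, 21.0]   # exactly y = 1 + x + x^2
coeffs = polyfit(xs, ys, 2)
```

In practice a library routine (e.g. a polynomial fitting function) would be used; the explicit normal-equation solve is shown only to make the least-squares step concrete.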
  • The area dividing unit 262 first-order differentiates the obtained function f(x_j) and obtains a differential value for each pixel. This differential value indicates the amount of change in dark current.
  • The area division unit 262 determines, for each pixel, whether or not the differential value (change amount) exceeds a predetermined threshold value. Further, the area dividing unit 262 second-order differentiates the function f(x_j) and obtains the pixels (that is, inflection points) where that differential value is "0".
  • the area division unit 262 sets the coordinates of the pixel and the inflection point whose differential value exceeds the threshold value as the boundary coordinates which are the coordinates of the boundary of the divided area.
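The boundary search described above can be sketched on discrete dark-level samples by using finite differences in place of analytic derivatives. The function `find_boundaries`, the threshold, and the sample data are illustrative assumptions, not from the document:

```python
# Sketch of boundary detection: a column becomes a boundary coordinate when
# the first difference (change amount) exceeds a threshold, or when the
# second difference changes sign (a discrete inflection point).
def find_boundaries(levels, threshold):
    bounds = set()
    d1 = [levels[i + 1] - levels[i] for i in range(len(levels) - 1)]
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]
    for i, d in enumerate(d1):
        if abs(d) > threshold:          # large change in dark current
            bounds.add(i + 1)
    for i in range(len(d2) - 1):
        if d2[i] * d2[i + 1] < 0:       # sign change -> inflection point
            bounds.add(i + 2)
    return sorted(bounds)

dark = [10, 10, 11, 20, 21, 21, 22, 30, 30]   # dark levels of one row
boundaries = find_boundaries(dark, threshold=5)
```

Columns inside a divided area then show only small changes, which is what makes interpolation from the boundary values accurate.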
  • the area dividing unit 262 obtains, for each row, the boundary coordinates within the row.
  • As a result, the pixel array section 250 is divided into a plurality of divided areas whose boundaries are curves or straight lines composed of the boundary coordinates.
  • the area division unit 262 supplies the boundary coordinates for each row to the correction value setting unit 263.
  • the area division unit 262 also supplies the coordinates of pixels adjacent to the effective area 253 in the light-shielded area 251 to the correction value setting unit 263 as boundary coordinates.
  • the area division unit 262 is an example of the area division unit described in the claims.
  • Note that the area division unit 262 obtains, as boundary coordinates, both the coordinates of pixels whose differential value exceeds the threshold value and the inflection points, but it may be configured to obtain only one of the two.
  • the correction value setting unit 263 acquires, for each boundary coordinate, the value of the digital signal Doutj corresponding to the coordinate from the analog-digital conversion unit 261 and causes the correction value holding unit 264 to hold the value as the correction value for the coordinate. These correction values are used for dark current correction.
  • the correction value holding unit 264 holds the correction value for each coordinate in the light-shielded area and for each boundary coordinate.
  • the shading correction unit 265 performs dark current correction on the digital signal Doutj when the imaging mode is set by the mode signal MODE.
  • the shading correction unit 265 reads, for each row, a correction value corresponding to the boundary coordinates in the row from the correction value holding unit 264.
  • the shading correction unit 265 interpolates the correction value of the coordinates not corresponding to the boundary coordinates by linear interpolation or the like based on the read correction value.
  • the interpolated correction value will be referred to as "interpolation value" below.
  • For example, D(X1, Y1) is the correction value of the boundary coordinate (X1, Y1), and D(X2, Y1) is the correction value of the boundary coordinate (X2, Y1).
  • the shading correction unit 265 subtracts the corresponding correction value from the digital signal Doutj. Since the correction value is a value according to the dark current, the subtraction removes the dark current noise from the digital signal. As a result, the light and dark distortion of the digital signal Doutj due to the dark current is corrected (that is, dark current correction).
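The interpolation and subtraction steps described above can be sketched as follows. The helper `interpolate_row`, the boundary coordinates, and the signal values are illustrative assumptions, not from the document:

```python
# Sketch of imaging-mode correction for one row: correction values held only
# at boundary columns are linearly interpolated to every column, then
# subtracted from the digital signal to remove the dark current component.
def interpolate_row(boundary_cols, boundary_vals, width):
    """Linearly interpolate per-column correction values from the values
    held at boundary columns (assumes boundaries cover [0, width-1])."""
    out = []
    for x in range(width):
        # find the pair of boundaries bracketing column x
        for i in range(len(boundary_cols) - 1):
            x0, x1 = boundary_cols[i], boundary_cols[i + 1]
            if x0 <= x <= x1:
                v0, v1 = boundary_vals[i], boundary_vals[i + 1]
                out.append(v0 + (x - x0) / (x1 - x0) * (v1 - v0))
                break
    return out

cols, vals = [0, 4, 8], [2.0, 10.0, 6.0]      # boundary columns and values
corr = interpolate_row(cols, vals, 9)
signal = [12.0] * 9                            # raw digital signal of one row
corrected = [s - c for s, c in zip(signal, corr)]
```

Only the three boundary values need to be stored for this nine-column row, which is the memory saving the correction value holding unit relies on.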
  • the shading correction unit 265 supplies the corrected digital signal Doutj to the DSP circuit 120 as Doutj ′.
  • the shading correction unit 265 is an example of the correction unit described in the claims.
  • Note that part or all of the processing in the column signal processing unit 260 (area division, setting of correction values, and shading correction) may be executed by a circuit outside the solid-state imaging device 200, such as the DSP circuit 120.
  • FIG. 5 is a diagram showing an example of the distribution of dark levels and correction values according to the first embodiment of the present technology.
  • In the figure, a shows an example of the dark level distribution of a certain row, b shows an example of the distribution of the dark level when the area is divided, and c shows an example of the distribution of the correction values in a certain row.
  • The vertical axes of a and b show the dark level, which is the level of the digital signal in the correction mode, and the horizontal axes show the column address. The vertical axis of c shows the correction value, and its horizontal axis also shows the column address.
  • the area division unit 262 acquires digital signals of all column addresses in each row in the correction mode.
  • the level of the pixel signal corresponds to the dark level which is the level of dark current.
  • the spatial distribution of the dark level is not always constant, but may vary locally.
  • The area dividing unit 262 obtains a function f(x_i) representing the distribution of dark current by curve fitting, and computes the first and second derivatives of the function.
  • The area division unit 262 sets, as boundary coordinates, the coordinates of pixels whose first derivative is equal to or greater than the threshold value and of pixels whose second derivative is "0" (inflection points).
  • The area division unit 262 also sets the coordinates of pixels adjacent to the effective area 253 in the light-shielded area 251 as boundary coordinates. For example, X1 to X8 are set as the X coordinates of the boundary coordinates, each of which consists of an X coordinate and a Y coordinate, as illustrated in b in the figure. This row is divided into divided areas A1 to A7 by these boundary coordinates.
  • the correction value setting unit 263 causes the correction value holding unit 264 to hold the dark level of each boundary coordinate such as X1 to X8 as a correction value.
  • the correction value holding unit 264 can be realized with a memory having a smaller capacity than in the case where the correction values of all pixels are held.
  • the shading correction unit 265 reads the correction value for each boundary coordinate from the correction value holding unit 264, and interpolates the correction value for coordinates other than the boundary coordinate by linear interpolation.
  • The black circles in c in the figure show the correction values at the boundary coordinates, and the portions of the solid line other than the black circles show the interpolated values.
  • FIG. 6 is a diagram showing an example of a correction value for each boundary according to the first embodiment of the present technology.
  • the correction value holding unit 264 holds the correction value for each boundary coordinate. For example, in the row of the coordinate Y1, when the X coordinate of X1 to X8 is set as the boundary, the boundary coordinates (X1, Y1) to (X8, Y1) are held. Further, the correction values D (X1, Y1) to D (X8, Y1) are held in association with those boundary coordinates.
  • FIG. 7 is a diagram illustrating an example of a result of area division according to the first embodiment of the present technology.
  • the effective area 253 is divided into a plurality of divided areas such as the divided areas A1 to A7.
  • The black circles on the dotted line indicate the boundary coordinates of that row.
  • the distribution of dark levels in this row is represented by, for example, the graph of a in FIG.
  • The area division unit 262 obtains the boundary coordinates of each of the M rows.
  • A curved line or straight line composed of all the boundary coordinates of the M rows corresponds to a boundary of the divided areas in FIG. 7.
  • the dark level is not always constant but changes locally.
  • the position where the amount of change is large is set as the boundary coordinate. Therefore, the amount of change is lower than the threshold value inside the divided area, and the correction value within the divided area can be obtained from the boundary correction value by interpolation such as linear interpolation.
  • the shading correction unit 265 can detect a corner for each divided area and change the interpolation method to quadratic function interpolation or the like within a certain range centered on the corner. Thereby, the interpolation accuracy can be further improved.
  • FIG. 8 is a diagram showing an example of the correction value and the signal levels before and after the correction according to the first embodiment of the present technology.
  • In the figure, a shows an example of the distribution of the correction values in a certain row, b shows an example of the distribution of the digital signal before correction in that row, and c shows an example of the distribution of the digital signal after correction.
  • the shading correction unit 265 obtains a correction value for each column in a row by linear interpolation, as illustrated in a in the figure. Then, in the imaging mode, it is assumed that the amount of received light for each pixel in a certain row is constant. Even in this case, when dark current noise occurs, the digital signal in the row is not constant, as illustrated in b in the figure. If the dark current noise component is not removed by the dark current correction, shading occurs and the image quality deteriorates. Therefore, the shading correction unit 265 subtracts the corresponding correction value from the value of the digital signal for each column in the row. As a result, the digital signal in the row becomes constant and shading is corrected, as illustrated in c in the figure.
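The behavior illustrated in the figure can be reproduced with a toy numerical example (all values are illustrative, not from the document): a uniform light level plus a spatially varying dark component yields a non-uniform raw signal, and subtracting the per-column correction restores a constant row:

```python
# Toy sketch of Fig. 8: uniform illumination plus a dark-current ramp gives
# a shaded (non-constant) digital signal; subtracting the correction values
# removes the ramp and flattens the row again.
light = 50                          # constant received-light component
dark = [0, 1, 2, 3, 4, 5, 6, 7]     # locally varying dark-current component
raw = [light + d for d in dark]     # before correction: shading is visible
correction = dark                   # held/interpolated correction values
after = [r - c for r, c in zip(raw, correction)]
```

Since the correction values track the dark component exactly in this toy case, the corrected row is perfectly constant; with interpolated values the residual error depends on how well interpolation matches the true distribution.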
  • FIG. 9 is a flowchart showing an example of the operation of the solid-state image sensor 200 according to the first embodiment of the present technology. This operation is started, for example, when the correction mode is set.
  • the solid-state image sensor 200 captures image data as a dark image in a light-shielded state (step S901).
  • the solid-state imaging device 200 obtains the function f (x i ) for each row, performs the first-order differentiation, and sets the boundary by comparison with the threshold value (step S902).
  • the solid-state imaging device 200 performs second-order differentiation for each row and sets an inflection point as a boundary (step S903). Then, the solid-state image sensor 200 holds the value of the dark current at the boundary as a correction value (step S904).
  • the solid-state imaging device 200 determines whether or not the imaging mode is set (step S905).
  • the imaging mode is set (step S905: Yes)
  • the solid-state imaging device 200 images the image data (step S906).
  • the solid-state imaging device 200 interpolates correction values of coordinates that do not correspond to the boundary by linear interpolation (step S907), and performs shading correction using these correction values (step S908).
  • If the imaging mode is not set (step S905: No), or after step S908, the solid-state imaging device 200 repeatedly executes step S905 and subsequent steps.
  • the column signal processing unit 260 divides the pixel array unit 250 into a plurality of regions based on the distribution of the dark current, and performs the correction corresponding to the boundaries of those regions. Dark current correction is performed using the value. As a result, even if the dark current locally changes in the dark current distribution, the dark current noise can be sufficiently removed by the correction value according to the distribution. Therefore, the correction accuracy of dark current correction can be improved.
  • In the first embodiment described above, boundary coordinates are set by calculation for each row, and the correction value holding unit 264 holds a correction value for each boundary coordinate; as the number of rows increases, however, the total number of correction values and the amount of calculation increase. The column signal processing unit 260 of the second embodiment differs from that of the first embodiment in that it reduces the calculation amount and the total number of correction values by dividing the pixel array into rectangular divided areas.
  • FIG. 10 is a diagram showing an example of a result of vertical area division according to the second embodiment of the present technology. As illustrated in the figure, the effective area 253 in the pixel array unit 250 is divided at regular intervals in the vertical direction. Hereinafter, these areas will be referred to as “vertical division areas”.
  • FIG. 11 is a diagram showing an example of a distribution of average values of dark levels according to the second embodiment of the present technology.
  • In the figure, a shows an example of the distribution of the average value of the dark level before area division, and b shows an example of the distribution after area division. The vertical axes show the average value of the dark level for each column in a certain vertical division area, and the horizontal axes show the column address.
  • The area division unit 262 of the second embodiment calculates, for each column in a vertical division area, the average value of the digital signals (dark levels) of all the pixels in that column. Then, the area dividing unit 262 sets boundary coordinates in the horizontal direction by the same method as in the first embodiment, as illustrated in b in FIG. 11. As a result, the vertical divided area is divided into a plurality of divided areas in the horizontal direction. Hereinafter, these areas will be referred to as "horizontal divided areas". This process of dividing in the horizontal direction is executed for each vertical divided area.
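The averaging step described here can be sketched as follows; `column_averages` and the sample area are illustrative assumptions, not from the document:

```python
# Sketch of the second embodiment's averaging: within one vertical division
# area, each column's dark level is the mean over that area's rows, and the
# boundary search is then run on this averaged one-dimensional profile.
def column_averages(area):
    """area: list of rows (each a list of per-column dark levels)."""
    rows, cols = len(area), len(area[0])
    return [sum(area[r][c] for r in range(rows)) / rows for c in range(cols)]

vertical_area = [          # two rows of one vertical division area
    [10, 12, 30, 32],
    [12, 14, 32, 34],
]
avg = column_averages(vertical_area)
```

Because only one averaged profile per vertical division area is processed (instead of one profile per row), both the fitting workload and the number of stored correction values shrink.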
  • FIG. 12 is a diagram showing an example of a result of horizontal area division according to the second embodiment of the present technology. As illustrated in the figure, the vertical division area V1 in the pixel array section 250 is divided in the horizontal direction into horizontal division areas H11 to H16. The thick dashed lines in the figure show the boundaries of the horizontal divided areas, each parallel to the column direction. Vertical division areas other than V1 are similarly divided into a plurality of horizontal division areas.
  • the effective area 253 is divided into a plurality of rectangular horizontal divided areas by the vertical and horizontal area divisions illustrated in FIGS. 10 to 12. Then, for each vertical division area, the average value of the digital signals of the columns in the area is held as a correction value. As a result, it is possible to reduce the total number of correction values as compared with the first embodiment in which the correction value is calculated for each pixel within the boundary. Accordingly, the correction value holding unit 264 can be realized by a memory having a small capacity.
  • the shading correction unit 265 performs a linear interpolation only in the row direction for each vertical division area to calculate a correction value for each column, and performs shading correction. As described above, the shading correction unit 265 only needs to perform the linear interpolation for each vertical division area, and thus the amount of calculation can be reduced as compared with the first embodiment in which the linear interpolation is performed for each row.
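The per-column linear interpolation and subtraction described above might look like the following sketch for a single vertical division area. The names and the NumPy layout are assumptions for illustration; the patent does not specify an implementation.

```python
import numpy as np

def shading_correct(image, boundary_cols, boundary_corrections):
    """Correct one vertical division area using per-boundary correction values.

    boundary_cols: column addresses of the area boundaries.
    boundary_corrections: dark-level correction value held for each boundary.
    Columns between boundaries receive linearly interpolated correction
    values, which are then subtracted from every pixel signal in the column.
    """
    cols = np.arange(image.shape[1])
    # Linear interpolation in the row direction only (per column).
    per_column = np.interp(cols, boundary_cols, boundary_corrections)
    return image - per_column[np.newaxis, :]
```

Because only one interpolation per column is needed for the whole vertical division area, the amount of calculation is smaller than interpolating separately for every row.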
  • the area dividing unit 262 obtains the distribution of the average dark level of each column for each vertical division area, so the amount of calculation can be reduced compared with the case of obtaining the dark-level distribution for each row.
  • In the first embodiment, the column signal processing unit 260 holds the value of the dark current as the correction value for each boundary coordinate without measuring the temperature or the voltage.
  • However, the value of the dark current generally fluctuates depending on environmental conditions such as voltage and temperature. For this reason, under voltage and temperature conditions different from those at the time the correction value was held, the dark current value may fluctuate and the dark current noise may not be removed sufficiently.
  • the image pickup apparatus 100 of the third embodiment is different from that of the first embodiment in that dark current correction is performed based on environment information.
  • FIG. 13 is a block diagram showing a configuration example of the column signal processing unit 260 according to the third embodiment of the present technology.
  • the column signal processing unit 260 of the third embodiment differs from that of the first embodiment in that the area dividing unit 262, the correction value setting unit 263, the correction value holding unit 264, and the shading correction unit 265 are not provided.
  • FIG. 14 is a block diagram showing a configuration example of the DSP circuit 120 according to the third embodiment of the present technology.
  • the DSP circuit 120 includes an area division unit 121, a correction value setting unit 122, a correction value holding unit 123, an environment information acquisition unit 124, and a shading correction unit 125.
  • the configurations of the area dividing unit 121, the correction value setting unit 122, and the correction value holding unit 123 are the same as those of the area dividing unit 262, the correction value setting unit 263, and the correction value holding unit 264 in the column signal processing unit 260 according to the first embodiment. However, the correction value holding unit 123 further holds the environment information in the correction mode.
  • the environment information is information about the environment of the imaging device 100 and includes at least one of temperature, power consumption, and voltage.
  • the environment information acquisition unit 124 acquires environment information.
  • the environment information acquisition unit 124 is realized by a temperature sensor, a circuit that calculates power consumption, a voltmeter, and the like.
  • the environment information acquisition unit 124 causes the correction value holding unit 123 to hold the environment information in association with the correction value when the correction mode is set.
  • the environment information acquisition unit 124 supplies the environment information and the correction value to the shading correction unit 125.
  • the shading correction unit 125 performs shading correction based on the held correction value and environment information. For example, the shading correction unit 125 calculates a correction value using a function indicating the relationship between environmental information and dark current. Generally, the higher the temperature, the larger the value of dark current. Therefore, for example, the shading correction unit 125 calculates a correction value corresponding to the acquired environment information using a function that increases the original correction value as the temperature increases.
  • the shading correction unit 125 interpolates the correction value corresponding to the environment information in the imaging mode by linear interpolation or the like based on the correction value corresponding to the environment information in the correction mode.
  • in the correction mode, correction values are obtained under a plurality of environmental conditions, and the correction value holding unit 123 holds a correction value for each piece of environment information corresponding to those conditions.
  • For example, assume that the correction values D(X1, Y1, T1) and D(X2, Y1, T1) are held at the boundary coordinates (X1, Y1) and (X2, Y1) for the temperature T1, and that the correction values D(X1, Y1, T2) and D(X2, Y1, T2) are held at the same boundary coordinates for the temperature T2.
  • For a coordinate (X, Y1) between the boundary coordinates (X1, Y1) and (X2, Y1), the shading correction unit 125 first calculates the correction value D(X, Y1, T1) by linear interpolation: D(X, Y1, T1) = D(X1, Y1, T1) + {D(X2, Y1, T1) − D(X1, Y1, T1)} × (X − X1) / (X2 − X1). The correction value D(X, Y1, T2) is calculated in the same manner, and the correction value for the acquired temperature is then interpolated between D(X, Y1, T1) and D(X, Y1, T2).
  • the shading correction unit 125 can perform linear interpolation by the same method when power consumption and voltage are acquired in addition to temperature.
  • the shading correction unit 125 performs shading correction after linear interpolation and outputs the corrected image data.
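A sketch of the environment-dependent correction described above: correction values held at two temperatures in the correction mode are interpolated spatially per column, then over temperature for the acquired value. The linear temperature model and all names are illustrative assumptions; the patent only states that linear interpolation, or a function relating environment information to dark current, is used.

```python
import numpy as np

def correct_with_environment(image, boundary_cols, corr_t1, corr_t2,
                             t1, t2, temperature):
    """Third-embodiment-style correction for one vertical division area.

    corr_t1 / corr_t2: correction values at the boundary columns, measured
    at temperatures t1 and t2 in the correction mode.
    """
    cols = np.arange(image.shape[1])
    # Spatial linear interpolation at each held temperature.
    d_t1 = np.interp(cols, boundary_cols, corr_t1)
    d_t2 = np.interp(cols, boundary_cols, corr_t2)
    # Linear interpolation (or extrapolation) over the acquired temperature.
    w = (temperature - t1) / (t2 - t1)
    per_column = d_t1 + w * (d_t2 - d_t1)
    return image - per_column[np.newaxis, :]
```

The same scheme applies when power consumption or voltage is acquired instead of temperature.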
  • Although the area dividing unit 121, the correction value setting unit 122, and the correction value holding unit 123 are arranged in the DSP circuit 120, some or all of them can also be arranged within the column signal processing unit 260, as in the first embodiment.
  • the area dividing unit can also divide the pixel array into rectangular divided areas, as in the second embodiment.
  • FIG. 15 is a flowchart showing an example of the operation of the solid-state imaging device 200 according to the third embodiment of the present technology.
  • the operation of the solid-state image sensor 200 of the third embodiment is different from that of the first embodiment in that step S910 is further executed.
  • the solid-state imaging device 200 captures image data (step S906) and acquires environmental information such as temperature and voltage (step S910). Then, the solid-state imaging device 200 performs linear interpolation using Equations 2 to 4 (step S907) and performs shading correction (step S908).
  • since the shading correction unit 125 performs the dark current correction based on environmental information such as temperature, the accuracy of the dark current correction can be kept sufficiently high even when the temperature changes.
  • the circuit scale of the pixel array section 250 increases as the number of pixels increases.
  • the solid-state imaging device 200 of the fourth embodiment is different from that of the first embodiment in that dark current correction is performed in an FD (Floating Diffusion) sharing type configuration in which a floating diffusion layer is shared.
  • FIG. 16 is a circuit diagram showing a configuration example of a pixel block 280 according to the fourth embodiment of the present technology.
  • a plurality of pixel blocks 280 are arranged in a two-dimensional grid pattern in each of the effective area 253 and the light shielding area 251 in the pixel array section 250 of the fourth embodiment.
  • a plurality of pixels sharing the floating diffusion layer are arranged in each pixel block 280.
  • four pixels of Gr (Green) pixels, R (Red) pixels, Gb (Green) pixels and B (Blue) pixels are arrayed in the pixel block 280.
  • the pixel block 280 includes photodiodes 282, 284, 286 and 288, and transfer transistors 281, 283, 285 and 287.
  • the pixel block 280 also includes a floating diffusion layer 289, a reset transistor 290, an amplification transistor 291, and a selection transistor 292.
  • the photodiodes 282, 284, 286, and 288 generate electric charges by photoelectric conversion. However, the photodiodes 282 and 288 receive green light, the photodiode 284 receives red light, and the photodiode 286 receives blue light.
  • the transfer transistor 281 transfers charges from the photodiode 282 to the floating diffusion layer 289 according to the transfer signal TX_Gr from the row selection unit 210.
  • the transfer transistor 283 transfers charges from the photodiode 284 to the floating diffusion layer 289 according to the transfer signal TX_R from the row selection unit 210.
  • the transfer transistor 285 transfers charges from the photodiode 286 to the floating diffusion layer 289 according to the transfer signal TX_B from the row selection unit 210.
  • the transfer transistor 287 transfers charges from the photodiode 288 to the floating diffusion layer 289 according to the transfer signal TX_Gb from the row selection unit 210.
  • the floating diffusion layer 289 accumulates the transferred charges and generates a voltage according to the amount of charges.
  • the reset transistor 290 extracts electric charge from the floating diffusion layer 289 and initializes it according to a reset signal RST from the row selection unit 210.
  • the amplification transistor 291 amplifies the voltage of the floating diffusion layer 289.
  • the selection transistor 292 outputs the signal of the voltage amplified according to the selection signal SEL from the row selection unit 210 to the column signal processing unit 260 as a pixel signal.
  • the floating diffusion layer 289, the reset transistor 290, the amplification transistor 291 and the selection transistor 292 are shared by the four pixels.
  • a circuit including the transfer transistor 281, the photodiode 282, the shared floating diffusion layer 289, and the like functions as a Gr pixel.
  • a circuit including the transfer transistor 283, the photodiode 284, the shared floating diffusion layer 289, and the like functions as an R pixel.
  • a circuit including the transfer transistor 285, the photodiode 286, the shared floating diffusion layer 289, and the like functions as a B pixel.
  • a circuit including the transfer transistor 287, the photodiode 288, and the shared floating diffusion layer 289 and the like functions as a Gb pixel.
  • the area dividing unit 262 of the fourth embodiment divides each pixel in the pixel block 280 having the same relative position into a plurality of divided areas based on the dark current distribution. For example, when K (K is an integer) pixel blocks 280 are arranged, K Gr pixels, R pixels, Gb pixels, and B pixels are also arranged K each. The area dividing unit 262 divides the image composed of K Gr pixels into a plurality of divided areas. Similarly, an image composed of K R pixels, an image composed of K Gb pixels, and an image composed of K B pixels are also individually divided.
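Separating the planes of pixels that share the same relative position in an FD-shared Bayer block, so that each plane can be divided into areas independently, can be sketched as below. The mapping of Gr/R/Gb/B to positions within the 2×2 block is an assumption for illustration; the patent only specifies that the block contains Gr, R, Gb, and B pixels.

```python
import numpy as np

def split_shared_pixel_channels(dark_frame):
    """Split a Bayer-arranged dark frame into the four per-position planes.

    In the FD-sharing configuration, the dark current may differ between
    pixels at different relative positions inside the pixel block, so each
    plane (Gr, R, Gb, B) is divided into areas individually.
    """
    return {
        "Gr": dark_frame[0::2, 0::2],  # assumed top-left of each block
        "R":  dark_frame[0::2, 1::2],  # assumed top-right
        "B":  dark_frame[1::2, 0::2],  # assumed bottom-left
        "Gb": dark_frame[1::2, 1::2],  # assumed bottom-right
    }
```

For K pixel blocks, each returned plane contains exactly K pixels, and the area division of the earlier embodiments is then applied to each plane separately.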
  • the configurations of the correction value setting unit 263, the correction value holding unit 264, and the shading correction unit 265 are the same as those in the first embodiment.
  • Although the floating diffusion layer 289 is shared by four pixels in this example, the number of sharing pixels is not limited to four and may be eight or the like. Further, the pixels may be arranged in an arrangement other than the Bayer arrangement. Further, the configurations of the first to third embodiments can be applied to the fourth embodiment.
  • since the floating diffusion layer 289 is shared by a plurality of pixels, the circuit scale of the pixel array section 250 can be reduced compared with the case where the floating diffusion layer 289 is not shared.
  • the technology (the present technology) according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be realized as a device mounted on any type of mobile body such as an automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility device, airplane, drone, ship, or robot.
  • FIG. 17 is a block diagram showing a schematic configuration example of a vehicle control system which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, a voice image output unit 12052, and an in-vehicle network I / F (interface) 12053 are shown as a functional configuration of the integrated control unit 12050.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating the braking force of the vehicle.
  • the body control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a head lamp, a back lamp, a brake lamp, a blinker, and a fog lamp.
  • radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body control unit 12020.
  • the body control unit 12020 receives the input of these radio waves or signals and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
  • Out-of-vehicle information detection unit 12030 detects information external to the vehicle on which vehicle control system 12000 is mounted.
  • an imaging unit 12031 is connected to the outside-of-vehicle information detection unit 12030.
  • the out-of-vehicle information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle, and receives the captured image.
  • the out-of-vehicle information detection unit 12030 may perform an object detection process or a distance detection process of a person, a vehicle, an obstacle, a sign, a character on a road surface, or the like based on the received image.
  • the image pickup unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of received light.
  • the imaging unit 12031 can output an electric signal as an image or can output the information as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light or non-visible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects information in the vehicle.
  • the in-vehicle information detection unit 12040 is connected to, for example, a driver status detection unit 12041 that detects the status of the driver.
  • the driver state detection unit 12041 includes, for example, a camera that captures an image of the driver. Based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
  • the microcomputer 12051 can calculate a control target value of the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output a control command to the drive system control unit 12010.
  • the microcomputer 12051 can perform cooperative control for the purpose of realizing the functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or impact mitigation, follow-up traveling based on the inter-vehicle distance, vehicle speed maintenance traveling, vehicle collision warning, vehicle lane departure warning, and the like.
  • the microcomputer 12051 can perform cooperative control for automatic driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on information on the outside of the vehicle acquired by the outside information detection unit 12030.
  • the microcomputer 12051 can perform cooperative control for the purpose of preventing glare, such as controlling the headlamps according to the position of a preceding vehicle or oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
  • the sound image output unit 12052 transmits at least one of a sound signal and an image signal to an output device capable of visually or audibly notifying a passenger of the vehicle or the outside of the vehicle of information.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
  • the display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 18 is a diagram showing an example of the installation position of the imaging unit 12031.
  • the image pickup unit 12031 includes image pickup units 12101, 12102, 12103, 12104, and 12105.
  • the imaging units 12101, 12102, 12103, 12104, 12105 are provided at positions such as the front nose of the vehicle 12100, the side mirrors, the rear bumper, the back door, and the upper portion of the windshield in the vehicle interior.
  • the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided above the windshield in the passenger compartment mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 included in the side mirrors mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100.
  • the imaging unit 12105 provided above the windshield in the passenger compartment is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, and the like.
  • FIG. 18 shows an example of the shooting range of the imaging units 12101 to 12104.
  • the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose
  • the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively
  • the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door.
  • a bird's-eye view image of the vehicle 12100 viewed from above is obtained by superimposing image data captured by the imaging units 12101 to 12104.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements or an imaging element having pixels for detecting a phase difference.
  • based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the nearest three-dimensional object on the traveling path of the vehicle 12100 that is traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Further, the microcomputer 12051 can set an inter-vehicle distance to be secured in front of the preceding vehicle, and perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for automatic driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
  • based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data relating to three-dimensional objects into motorcycles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see. Then, the microcomputer 12051 determines a collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can perform driving assistance for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration and avoidance steering via the drive system control unit 12010.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian exists in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether or not the object is a pedestrian.
  • the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is displayed superimposed on the recognized pedestrian. Further, the audio image output unit 12052 may control the display unit 12062 so as to display an icon or the like indicating the pedestrian at a desired position.
  • the technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above.
  • the image pickup apparatus 100 in FIG. 1 can be applied to the image pickup unit 12031.
  • the present technology may have the following configurations.
  • (1) A solid-state image sensor comprising: a region dividing unit that divides a pixel array unit, in which a plurality of pixels are arranged in a two-dimensional lattice pattern, into a plurality of regions based on a distribution of dark current in the pixel array unit; a correction value holding unit that holds the value of the dark current at each boundary of the plurality of regions as a correction value for dark current correction; and a correction unit that performs the dark current correction on each pixel signal of the plurality of pixels using the correction value.
  • (2) The solid-state image sensor according to (1), wherein a plurality of rows are arranged in the pixel array unit, a predetermined number of pixels are arranged in a predetermined direction in each of the plurality of rows, and the region dividing unit acquires the distribution for each of the plurality of rows and sets the boundary based on the distribution.
  • (3) The solid-state image sensor according to (1), wherein the pixel array unit is divided into a plurality of vertical division areas at regular intervals in a direction perpendicular to a predetermined direction, and the region dividing unit acquires the distribution for each of the plurality of vertical division areas and sets the boundary based on the distribution.
  • (4) The solid-state image sensor according to any one of (1) to (3), further comprising an environment information acquisition unit that acquires environment information including at least one of temperature, power consumption, and voltage, wherein the correction unit performs the dark current correction based on the environment information and the correction value.
  • (5) The solid-state image sensor according to any one of (1) to (4), wherein a plurality of pixel blocks, in each of which a plurality of pixels sharing a floating diffusion layer are arranged, are arrayed in the pixel array unit, and the region dividing unit divides the pixels into the plurality of regions based on the distribution for each pixel having the same relative position in the pixel block.
  • (6) The solid-state image sensor according to any one of (1) to (5), wherein the region dividing unit obtains, as the boundary, a coordinate at which the amount of change in the dark current in the distribution exceeds a predetermined threshold value.
  • (7) The solid-state image sensor according to any one of (1) to (6), wherein the region dividing unit obtains an inflection point of a function indicating the distribution as the boundary.
  • (8) The solid-state image sensor according to any one of (1) to (7), wherein the correction unit interpolates the value of the dark current at a coordinate that does not correspond to the boundary from the correction value and performs the dark current correction using the interpolated value.
  • (9) An image pickup apparatus comprising: a region dividing unit that divides a pixel array unit, in which a plurality of pixels are arranged in a two-dimensional lattice pattern, into a plurality of regions based on a distribution of dark current in the pixel array unit; a correction value holding unit that holds the value of the dark current at each boundary of the plurality of regions as a correction value for dark current correction; a correction unit that performs the dark current correction on each pixel signal of the plurality of pixels using the correction value; and a signal processing unit that processes the pixel signal that has been subjected to the dark current correction.
  • image pickup device 110 optical unit 120 DSP circuit 121 area dividing unit 122 correction value setting unit 123 correction value holding unit 124 environment information acquisition unit 125 shading correction unit 130 display unit 140 operation unit 150 bus 160 frame memory 170 storage unit 180 power supply unit 200 Solid-state image sensor 210 Row selection unit 220 Timing control unit 230 DAC 250 pixel array unit 251 light-shielding area 252 light-shielding pixel 253 effective area 254 effective pixel 260 column signal processing unit 261 analog-to-digital conversion unit 262 area dividing unit 263 correction value setting unit 264 correction value holding unit 265 shading correction unit 270 horizontal transfer scanning unit 280 Pixel block 281, 283, 285, 287 Transfer transistor 282, 284, 286, 288 Photodiode 289 Floating diffusion layer 290 Reset transistor 291 Amplifying transistor 292 Select transistor 12031 Imaging unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

Provided is a solid-state imaging element that corrects light and dark distortion, with improved correction accuracy. This solid-state imaging element comprises an area division unit, a correction value holding unit, and a correction unit. The area division unit divides a pixel array unit, in which a plurality of pixels are arranged in a two-dimensional grid pattern, into a plurality of areas on the basis of a distribution of dark current in the pixel array unit. The correction value holding unit holds the value of the dark current at the boundary of each of the plurality of areas as a correction value for dark current correction. The correction unit performs dark current correction on the pixel signal of each of the plurality of pixels using the correction value.
PCT/JP2019/027623 2018-10-11 2019-07-12 Élément d'imagerie à semi-conducteur, dispositif d'imagerie, et procédé de contrôle d'élément d'imagerie à semi-conducteur WO2020075357A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018192233A JP2020061669A (ja) 2018-10-11 2018-10-11 固体撮像素子、撮像装置、および、固体撮像素子の制御方法
JP2018-192233 2018-10-11

Publications (1)

Publication Number Publication Date
WO2020075357A1 true WO2020075357A1 (fr) 2020-04-16

Family

ID=70164121

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/027623 WO2020075357A1 (fr) 2018-10-11 2019-07-12 Élément d'imagerie à semi-conducteur, dispositif d'imagerie, et procédé de contrôle d'élément d'imagerie à semi-conducteur

Country Status (2)

Country Link
JP (1) JP2020061669A (fr)
WO (1) WO2020075357A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022184422A (ja) * 2021-06-01 2022-12-13 ソニーセミコンダクタソリューションズ株式会社 信号処理方法、信号処理装置および撮像装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011071709A (ja) * 2009-09-25 2011-04-07 Nikon Corp 電子カメラ
JP2012049947A (ja) * 2010-08-30 2012-03-08 Mitsubishi Electric Corp 画像処理装置
JP2018019227A (ja) * 2016-07-27 2018-02-01 キヤノン株式会社 撮像センサおよび暗電流ノイズ除去方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011071709A (ja) * 2009-09-25 2011-04-07 Nikon Corp 電子カメラ
JP2012049947A (ja) * 2010-08-30 2012-03-08 Mitsubishi Electric Corp 画像処理装置
JP2018019227A (ja) * 2016-07-27 2018-02-01 キヤノン株式会社 撮像センサおよび暗電流ノイズ除去方法

Also Published As

Publication number Publication date
JP2020061669A (ja) 2020-04-16

Similar Documents

Publication Publication Date Title
US11394913B2 (en) Solid-state imaging element, electronic device, and method for controlling correction of luminance in the solid-state imaging element
WO2020066803A1 (fr) Élément d'imagerie à semi-conducteurs et dispositif d'imagerie
WO2020105314A1 (fr) Élément d'imagerie à semi-conducteurs et dispositif d'imagerie
WO2018190126A1 (fr) Dispositif d'imagerie à semi-conducteur et appareil électronique
US11196956B2 (en) Solid-state image sensor, imaging device, and method of controlling solid-state image sensor
US11418741B2 (en) Solid-state imaging device and electronic device
US20200057149A1 (en) Optical sensor and electronic device
JP7230824B2 (ja) 画像処理装置、画像処理方法およびプログラム
US11172173B2 (en) Image processing device, image processing method, program, and imaging device
WO2022019026A1 (fr) Dispositif de traitement d'informations, système de traitement d'informations, procédé de traitement d'informations et programme de traitement d'informations
WO2020075357A1 (fr) Élément d'imagerie à semi-conducteur, dispositif d'imagerie, et procédé de contrôle d'élément d'imagerie à semi-conducteur
WO2020137198A1 (fr) Dispositif de capture d'image et élément de capture d'image à semi-conducteurs
US11770641B2 (en) Solid-state image capturing element, image capturing apparatus, and method of controlling solid-state image capturing element
WO2023112480A1 (fr) Élément d'imagerie à semi-conducteurs, dispositif d'imagerie, et procédé de contrôle d'élément d'imagerie à semi-conducteurs
WO2017169079A1 (fr) Dispositif de traitement d'image, procédé de traitement d'image et élément de capture d'image
WO2022137790A1 (fr) Élément de capture d'image à semi-conducteurs, dispositif de détection et procédé permettant de commander un élément de capture d'image à semi-conducteurs
WO2023286297A1 (fr) Élément d'imagerie à semi-conducteurs, dispositif d'imagerie, et procédé de contrôle d'élément d'imagerie à semi-conducteurs
WO2023100640A1 (fr) Dispositif à semi-conducteur, procédé de traitement de signal et programme
JP2024000625A (ja) 固体撮像装置および電子機器
WO2023021780A1 (fr) Dispositif d'imagerie, appareil électronique et procédé de traitement d'informations
WO2023021774A1 (fr) Dispositif d'imagerie et appareil électronique l'intégrant
WO2023136093A1 (fr) Élément d'imagerie et appareil électronique
WO2020100399A1 (fr) Élément d'imagerie à semi-conducteurs, dispositif d'imagerie, et procédé de contrôle d'élément d'imagerie à semi-conducteurs
TW202416721A (zh) 固態成像裝置及電子裝置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19870211

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19870211

Country of ref document: EP

Kind code of ref document: A1