CN115499604A - Image sensor, image processing method, depth camera, and storage medium

Info

Publication number: CN115499604A
Application number: CN202210976999.8A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: pixel, image frame, pixels, original image, value
Inventor: 程向伟
Current Assignee: Orbbec Inc
Original Assignee: Orbbec Inc
Legal status: Pending
Application filed by Orbbec Inc
Priority to CN202210976999.8A
Publication of CN115499604A

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an image sensor, an image processing method, a depth camera, and a storage medium. The image sensor comprises a control and data transmission interface, an exposure control circuit, a pixel array, a readout circuit, and an image signal processor. After an image acquisition instruction is received, an original image frame is exposed in a global exposure mode and an ambient light image frame is exposed in a rolling shutter exposure mode. The voltage signals of every two rows of pixels in the original image frame and the voltage signals of the corresponding rows of pixels in the ambient light image frame are then processed to obtain corresponding pixel values, and a background subtraction calculation is performed, which eliminates the influence of interference light in the ambient light that has the same wavelength as the laser of the depth camera and improves the accuracy of the depth image. Combining global exposure with rolling shutter exposure, and processing the pixels two rows at a time, shortens the exposure interval between the original image frame and the ambient light image frame and improves the efficiency of background subtraction.

Description

Image sensor, image processing method, depth camera, and storage medium
Technical Field
The invention relates to the technical field of computer vision, in particular to an image sensor, an image processing method, a depth camera and a storage medium.
Background
With the development of science and technology, and of computer vision in particular, depth images are widely used in various scenarios (e.g., 3D face recognition and 3D target tracking). At present, the depth of a target object is usually obtained with a depth camera: the depth camera is controlled to emit laser toward the target object, receives the laser reflected by the target object, and obtains a depth image of the target object by comparing the difference between the received laser pattern and a reference pattern.
The problem in the prior art is that the ambient light may contain light of the same wavelength as the laser emitted by the depth camera (i.e., interference light). While the image sensor of the depth camera receives the laser and computes the depth image, this interference light disturbs the depth camera, degrading the accuracy of the acquired depth image. This in turn may degrade the accuracy of subsequent operations based on the depth image, such as face recognition, and harm the user experience.
Disclosure of Invention
The invention mainly aims to provide an image sensor, an image processing method, a depth camera, and a storage medium, so as to solve the problem that, in prior-art schemes where the depth camera directly emits laser and acquires a depth image, interference light in the ambient light with the same wavelength as the laser disturbs the depth camera and limits the acquisition accuracy of the depth image.
In order to achieve the above object, a first aspect of the present invention provides an image sensor comprising: a control and data transmission interface for receiving and outputting an image acquisition instruction; an exposure control circuit for, after receiving the image acquisition instruction, controlling the pixel array to expose an original image frame in a global exposure mode and an ambient light image frame in a rolling shutter exposure mode, wherein the original image frame is an image frame obtained by projecting the target space with both ambient light and speckle laser, and the ambient light image frame is an image frame obtained by projecting the target space with ambient light alone; a pixel array for outputting, in an interleaved manner, the voltage signals of every two rows of pixels in the original image frame and the voltage signals of the corresponding rows of pixels in the ambient light image frame to the readout circuit; a readout circuit for processing the interleaved row-by-row voltage signals to obtain corresponding pixel values and outputting them to the image signal processor; and an image signal processor for performing a background subtraction calculation on the pixel values of every two rows of pixels in the original image frame and the pixel values of the corresponding rows of pixels in the ambient light image frame among the interleaved pixel values, and outputting the result through the control and data transmission interface to form a background-subtracted image frame.
In one embodiment, the readout circuit includes: a programmable gain amplifier for applying gain control to the interleaved row-by-row voltage signals to obtain a gain-summed voltage for each row pair and outputting it to the analog-to-digital conversion circuit; an analog-to-digital conversion circuit for converting the gain-summed voltages of the interleaved rows into voltage values and outputting them to the data conversion circuit; and a data conversion circuit for converting the voltage values of the interleaved rows into pixel values and outputting them to the image signal processor.
In some embodiments, the image signal processor is further configured to obtain pixel information of the original image frame and compare it with a preset pixel information threshold, and, according to the comparison result, to determine whether the next original image frame undergoes the background subtraction calculation or whether the pixel values of each row of the original image frame among the interleaved pixel values are output directly through the control and data transmission interface. The pixel information of the original image frame is gathered by window statistics and/or histogram statistics. The pixel information comprises a pixel value mean, a saturated pixel number, a saturated pixel mean, a gradient mean, and/or a target luminance pixel number; the pixel information threshold comprises the corresponding pixel value mean threshold, saturated pixel number threshold, saturated pixel mean threshold, gradient mean threshold, and/or target luminance pixel number threshold. The saturated pixel number is the number of pixels in the original image frame whose pixel values exceed a preset saturation threshold, the saturated pixel mean is the mean value of those pixels, and the target luminance pixel number is the number of pixels in the original image frame whose pixel values fall within a preset pixel value interval.
In some embodiments, the background subtraction calculation is based on the following preset background subtraction formula: F(x, y) = |TG(x, y) - BG(x, y)|, where TG(x, y) is the pixel value of the pixel with coordinates (x, y) in the original image frame, BG(x, y) is the pixel value of the pixel with coordinates (x, y) in the ambient light image frame, and F(x, y) is the pixel value of the pixel with coordinates (x, y) in the background-subtracted image frame. Alternatively, the background subtraction calculation is based on the following preset background subtraction formulas: Δ = TG(x, y) - g × BG(x, y) + offset, with F(x, y) = Δ if Δ ≥ 0 and F(x, y) = 0 if Δ < 0, where Δ is the compensated pixel difference, TG(x, y) is the pixel value of the pixel with coordinates (x, y) in the original image frame, BG(x, y) is the pixel value of the pixel with coordinates (x, y) in the ambient light image frame, g is a preset fixed coefficient compensation parameter, offset is a preset fixed offset compensation parameter, and F(x, y) is the pixel value of the pixel with coordinates (x, y) in the background-subtracted image frame.
A second aspect of the present invention provides an image processing method in which, after an image acquisition instruction is received, an original image frame is exposed in a global exposure mode and an ambient light image frame is exposed in a rolling shutter exposure mode, wherein the original image frame is an image frame obtained by projecting the target space with both ambient light and speckle laser, and the ambient light image frame is an image frame obtained by projecting the target space with ambient light alone; the voltage signals of every two rows of pixels in the original image frame and the voltage signals of the corresponding rows of pixels in the ambient light image frame are processed row by row to obtain corresponding pixel values; and a background subtraction calculation is performed on the pixel values of every two rows of pixels in the original image frame and the pixel values of the corresponding rows of pixels in the ambient light image frame, and the result is output to form a background-subtracted image frame.
A third aspect of the invention provides a depth camera comprising: a transmitting module, a depth calculation chip, and a receiving module comprising the image sensor described above.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method described above.
In the scheme of the invention, the image sensor comprises a control and data transmission interface, an exposure control circuit, a pixel array, a readout circuit, and an image signal processor. After an image acquisition instruction is received, an original image frame is exposed in a global exposure mode and an ambient light image frame is exposed in a rolling shutter exposure mode; the voltage signals of every two rows of pixels in the original image frame and the voltage signals of the corresponding rows of pixels in the ambient light image frame are then processed to obtain corresponding pixel values, and a background subtraction calculation is performed on the pixel values of every two rows of pixels in the original image frame and the pixel values of the corresponding rows of pixels in the ambient light image frame to form a background-subtracted image frame.
Compared with the prior-art scheme in which the depth image is obtained directly from the image captured while the depth camera emits laser, performing the background subtraction calculation on the pixel values of the original image frame and of the ambient light image frame before output helps to eliminate the influence of interference light in the ambient light that has the same wavelength as the laser of the depth camera, and improves the accuracy of the depth image. Combining global exposure with rolling shutter exposure, and processing the pixels two rows at a time, shortens the exposure interval between the original image frame and the ambient light image frame and improves the efficiency of background subtraction.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a depth camera according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an image sensor according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a shared pixel in a pixel array according to an embodiment of the invention;
FIG. 4 is a pixel value histogram provided by an embodiment of the present invention;
FIG. 5 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating the step S200 in FIG. 5 according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating the additional steps in FIG. 5 according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings of the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, the present invention may be practiced in ways other than those specifically described here, as will be readily apparent to those of ordinary skill in the art, without departing from its spirit; therefore, the present invention is not limited to the specific embodiments disclosed below.
In the prior art, most depth cameras emit laser at a near-infrared wavelength, such as 940 nanometers (nm) or 850 nm. Infrared light at the same near-infrared wavelength may also be present in different environments (indoors or outdoors), and the optical receiver of the depth camera receives this environmental infrared light along with the laser the depth camera emitted, so the near-infrared light in the environment (i.e., interference light) affects the accuracy of the acquired depth image.
Therefore, while the depth camera receives the laser and computes the depth image, it may be undisturbed when the ambient light is weak; but when the ambient light is strong, the accuracy of the acquired depth image suffers, which in turn may degrade subsequent operations based on the depth image, such as face recognition, and harm the user experience.
In order to solve the above problems, the present invention provides an image sensor and an image processing method in which a shared pixel array exposes an original image frame in a global exposure mode and an ambient light image frame in a rolling shutter exposure mode, and a background subtraction calculation is performed on the pixel values of every two rows of the original image frame and the pixel values of the corresponding rows of the ambient light image frame among the interleaved pixel values. The resulting background-subtracted image frame eliminates the influence of interference light in the ambient light that has the same wavelength as the laser of the depth camera, improving the accuracy of the depth image. Combining global exposure with rolling shutter exposure, and processing the pixels two rows at a time, shortens the exposure interval between the original image frame and the ambient light image frame and improves the efficiency of background subtraction.
Furthermore, the pixel information of the current original image frame can be compared with a preset pixel information threshold, and whether the next frame undergoes background subtraction or is output directly is determined by the comparison result, improving the flexibility of background subtraction. For example, when outdoor ambient light is strong, the pixel information of the current original image frame exceeds the threshold and the next frame is background-subtracted; when the light is weak, the pixel information falls below the threshold and the next frame is output directly without background subtraction.
To facilitate the description that follows, the related terms used herein are first defined:
Global exposure (Global Shutter): all pixels in a frame start and end their exposure at the same time, i.e., all pixels are exposed simultaneously.
Rolling shutter exposure (Rolling Shutter): the pixels in a frame do not start and end exposure at the same time; instead, they are exposed row by row in sequence.
Group exposure: original image frames are captured in the global exposure mode and ambient light image frames in the rolling shutter exposure mode, with at least one original image and at least one ambient light image forming a group; the capture interval between groups is determined by the total output frame rate.
Original image frame: an image frame captured while the target space is illuminated by both ambient light and the speckle laser.
Ambient light image frame: an image frame captured while the target space is illuminated by ambient light alone.
The image sensor with the background subtraction function in the present embodiment can be applied to a depth camera (a structured light depth camera in this embodiment). As shown in FIG. 1, the depth camera includes a transmitting module 1, a receiving module 2, and a depth calculation chip 4; some depth cameras also carry other cameras, such as a color camera. The transmitting module 1 emits light beams into the target space, the receiving module 2 receives the beams reflected back from the target space, and the depth calculation chip 4 performs depth calculation on the received beam information to obtain a depth image. The receiving module 2 includes an image sensor 21 for capturing the original image frames and the ambient light image frames. In one embodiment, the processor 3 and the memory 5 are provided within a peripheral control device; in another embodiment, the processor 3 may be provided within the depth camera itself. The processor 3 sends control instructions and receives the data output by the depth camera, while the memory 5 stores the processor's control instructions, configuration information, and data. In one embodiment, the processor is connected to the depth calculation chip 4 through the control and data transmission interface (including an MIPI interface and an IIC interface) and sends control instructions to the depth calculation chip 4, which forwards them to the transmitting module 1 to control its operation. In another embodiment, the processor sends control instructions to the transmitting module 1 through the control and data transmission interface and controls it directly.
In one embodiment, the depth camera is a structured light depth camera. The transmitting module 1 includes a laser that projects a coded structured light pattern beam into the target space; the receiving module 2 collects the structured light pattern and outputs it to the depth calculation chip, which performs the depth calculation on the pattern to obtain a depth image of the target space. The transmitting module 1 may include a light source, a lens, and a diffractive optical element. The emitted structured light pattern is an infrared speckle pattern whose particles are distributed relatively uniformly but with low spatial correlation, and the receiving module 2 is an infrared camera containing the image sensor 21. The structured light pattern may also take the form of stripes, two-dimensional patterns, and the like. With the laser on, the image sensor 21 captures an original image frame formed by the projected speckle laser together with the infrared light in the environment; with the laser off, the image sensor 21 captures an ambient light image frame formed by the ambient light projection alone. The target space is whatever is to be photographed for the depth image, and may be a person, an object, or a scene; it is not specifically limited here.
In one embodiment, the depth camera is a TOF depth camera, a binocular structured light depth camera, or the like. In the following description, a structured light depth camera will be taken as an example, and the principles thereof may be applied to other types of depth cameras.
The processor 3 controls the entire depth camera. It may be a single processor or a plurality of processing units, including but not limited to a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a neural network processor (NPU), an image signal processor (ISP), and the like. In some embodiments, the processor 3 may be an integrated system on chip (SoC) or an application-specific integrated circuit (ASIC) that includes a processor such as a CPU, on-chip memory, a controller, a communication interface, and so on.
The memory 5 is used for storing data such as system data, application data, parameter data, etc. The system data comprises operating systems such as Android and Linux, and the parameter data comprises internal and external parameters of the depth camera.
The processor 3 can read the exposure time, exposure mode, and other information from the memory 5, send an image acquisition instruction through the control and data transmission interface of the image sensor to control the image sensor to capture images and perform the background subtraction calculation, and have the background subtraction result sent to the depth calculation chip 4 for depth calculation.
It should be noted that the specific process of performing the background subtraction calculation and obtaining the background-subtracted image frame may be executed inside the image sensor; alternatively, the processor 3 may control the image sensor to transmit the pixel values of the original image frame and the ambient light image frame to the depth calculation chip 4, which then performs the background subtraction calculation itself. This is not limited here.
In one embodiment, the image sensor comprises a control and data transmission interface, an exposure control circuit, a pixel array, a readout circuit, and an image signal processor. The control and data transmission interface receives and outputs the image acquisition instruction. The exposure control circuit, after receiving the image acquisition instruction, controls the pixel array to expose an original image frame in the global exposure mode and an ambient light image frame in the rolling shutter exposure mode. The pixel array outputs, in an interleaved manner, the voltage signals of every two rows of pixels in the original image frame and the voltage signals of the corresponding rows of pixels in the ambient light image frame to the readout circuit. The readout circuit processes the interleaved row-by-row voltage signals to obtain the corresponding pixel values and outputs them to the image signal processor. The image signal processor performs the background subtraction calculation on the pixel values of every two rows of pixels in the original image frame and the pixel values of the corresponding rows of pixels in the ambient light image frame among the interleaved pixel values, and outputs the result through the control and data transmission interface to form a background-subtracted image frame. The readout circuit may include a programmable gain amplifier, an analog-to-digital conversion circuit, and a data conversion circuit; in one embodiment it further includes a static memory for storing the digital signals output by the analog-to-digital conversion circuit. In one embodiment, the image sensor further comprises a controller for receiving the input clock, the image acquisition instruction, and other data, and for controlling the internal outputs. The data-stream processing and background subtraction calculation described above take every two rows of pixels as a unit; they may also take a single row as a unit, or two or more rows, such as three rows, as a unit, and are not specifically limited here.
Referring to FIG. 2, an embodiment of the image sensor includes: a pixel array 101, a row exposure control circuit 1021, a column exposure control circuit 1022, a controller 103, an image signal processor 104, an analog circuit 105, a programmable gain amplifier 106, an analog-to-digital conversion circuit 107, a static memory 108, a data conversion circuit 109, a control and data transmission interface 110, and a power management module 111. The functions of each part are introduced below.
Specifically, the control and data transmission interface 110 receives the image acquisition instruction from the depth camera processor 3. The instruction may specify an exposure mode, such as global exposure, rolling shutter exposure, or group exposure, and an exposure time; one or more exposure times may be set to match the different exposure modes. The control and data transmission interface may include, for example, an IIC (Inter-Integrated Circuit) bus, an MIPI (Mobile Industry Processor Interface) interface, an LVDS (Low Voltage Differential Signaling) interface, and the like.
In one embodiment, the image sensor receives and parses the image acquisition instruction via the controller 103, which provides the parsed exposure time and exposure mode to the exposure control circuit. In another embodiment, the exposure control circuit itself receives the image acquisition instruction from the control and data transmission interface 110 and parses out the exposure time and exposure mode.
The exposure control circuit controls the exposure of the pixel array based on the exposure time and exposure mode; in this embodiment, the original image frame is exposed in the global exposure mode and the ambient light image frame in the rolling shutter exposure mode. Each pixel in the ambient light image frame corresponds one-to-one with a pixel in the original image frame.
The exposure control circuit includes a row exposure control circuit 1021 and a column exposure control circuit 1022. The column exposure control circuit 1022 scans the amount of light received by each pixel unit of the pixel array 101 sequentially in the column direction, row by row, to obtain the light signal of each row of pixels, which yields the voltage signal of each row of pixels after photoelectric conversion. The row exposure control circuit 1021 sequentially selects the voltage signal of each row of pixels, performs black level adjustment and various digital signal processing, and then outputs the signals row by row.
As shown in FIG. 3, because the pixels in the pixel array 101 are shared in pairs, every two rows of the pixel array have the same exposure time. The specific exposure process is as follows. In the global exposure mode, all pixels of the pixel array 101 expose the original image frame under the control of the row exposure control circuit 1021 and the column exposure control circuit 1022; the exposure charge generated by each pixel, i.e., its voltage signal, is transferred from the photodiode to the storage diode, the photodiode is reset, and the ambient light image frame is then exposed row by row in the rolling shutter mode. After the voltage signals in the storage diodes of the first and second rows of the original image frame are read out row by row, the voltage signals of the first and second rows of the ambient light image frame are transferred into the storage diodes of those rows. While the voltage signals of the first and second rows of the ambient light image frame are read out of the storage diodes row by row, the third and fourth rows of the ambient light image frame are exposed by the photodiodes of the pixel array 101 and generate their voltage signals. The voltage signals of the third and fourth rows of the original image frame are then read out of the storage diodes row by row, while the voltage signals of the third and fourth rows of the ambient light image frame are transferred into the storage diodes of those rows to await readout. That is, the pixel array 101 outputs, in sequence: the voltage signal of the first row of the original image frame, the second row of the original image frame, the first row of the ambient light image frame, the second row of the ambient light image frame, the third row of the original image frame, the fourth row of the original image frame, the third row of the ambient light image frame, the fourth row of the ambient light image frame, and so on. In other words, the voltage signals of every two rows of pixels in the original image frame and the voltage signals of the corresponding rows of pixels in the ambient light image frame are output in an interleaved manner. For convenience of description, "first row", "second row", and so on merely distinguish a preceding row from a following row; constrained by the pixel array structure, the voltage signal output does not necessarily begin from the first physical row of the pixel array, and the invention is not limited in this respect. Because the pixels in FIG. 3 are connected in pairs, the switches between the diodes are saved, reducing the size and cost of the pixel array 101 compared with an array of individual pixels.
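The interleaved output order described above can be made concrete with a short sketch. The Python snippet below is illustrative only (the function name and the eight-row array are assumptions, not part of the patent); it merely enumerates the order in which the row voltage signals leave the pixel array:

```python
def interleaved_readout_order(num_rows):
    """Yield (frame, row) labels in the order the pixel array outputs them:
    two rows of the original frame, then the same two rows of the
    ambient light frame, then the next pair, and so on."""
    for pair in range(0, num_rows, 2):
        for row in (pair, pair + 1):
            yield ("original", row)
        for row in (pair, pair + 1):
            yield ("ambient", row)

# For an 8-row array this prints: original 0, original 1, ambient 0,
# ambient 1, original 2, original 3, ambient 2, ambient 3, ...
for frame, row in interleaved_readout_order(8):
    print(frame, "frame, row", row)
```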
It should be noted that any given frame may be exposed in either the global or the rolling shutter mode; in the embodiment of the present application, in order to minimize the exposure interval between the original image frame and the ambient light image frame, the original image frame is exposed in the global exposure mode and the ambient light image frame in the rolling shutter exposure mode.
A programmable gain amplifier (PGA) 106 receives and amplifies the interleaved row-by-row voltage signals, sums the amplified voltages of the pixels within each two-row pixel group to obtain a gain-summed voltage, and outputs it.
The analog-to-digital conversion circuit 107 receives the gain-summed voltage and converts it into a digital voltage value, which is stored promptly in a static random-access memory (SRAM) 108.
The static memory 108 holds the voltage values of the interleaved rows until the voltage values of the next row arrive, and then outputs the voltage values of the previous row to the data conversion circuit 109. The static memory 108 thus buffers the data, ensuring that no data goes unprocessed and that the processing order is preserved.
The data conversion circuit 109 obtains the voltage values of the interleaved row groups from the static memory 108, converts them into the corresponding pixel values, and outputs the pixel values through the control and data transmission interface 110.
The image signal processor 104 receives the interleaved row-by-row pixel values and performs the background subtraction calculation on the pixel values of every two rows of the original image frame and the pixel values of the corresponding rows of the ambient light image frame. For example, it receives the pixel values of the first row of the original image frame and then of the second row of the original image frame, row by row; when the pixel values of the first row of the ambient light image frame arrive, the background subtraction calculation can be performed pixel by pixel between the first rows of the two frames, and likewise, when the pixel values of the second row of the ambient light image frame arrive, between the second rows of the two frames. The background-subtracted results are then output row by row. In one embodiment, all pixels of an original image frame may be output after the background subtraction calculation to form a complete background-subtracted image frame.
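As a rough illustration of this pairing logic, the sketch below (an assumption for exposition, not the patent's implementation) buffers original-frame rows as they arrive in the interleaved order and emits a background-subtracted row, using formula (1) below, as soon as the matching ambient-light row arrives:

```python
import numpy as np

def background_subtract_stream(interleaved_rows):
    """interleaved_rows yields (frame, row_index, pixels) items in the order
    produced by the pixel array; pixels is a 1-D array of pixel values."""
    pending = {}  # row_index -> buffered original-frame row
    for frame, idx, pixels in interleaved_rows:
        row = np.asarray(pixels, dtype=np.int32)
        if frame == "original":
            pending[idx] = row           # hold until the ambient row arrives
        else:
            tg = pending.pop(idx)        # matching original-frame row
            yield idx, np.abs(tg - row)  # formula (1): |TG - BG|
```

Only two original rows ever need to be buffered at once, which mirrors the two-row interleaving of the pixel array output.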
In one embodiment, the background subtraction calculation may be performed based on the following preset background subtraction calculation formula:
F(x,y)=|TG(x,y)-BG(x,y)| (1)
wherein TG(x, y) is the pixel value of the pixel with coordinates (x, y) in the original image frame, BG(x, y) is the pixel value of the pixel with coordinates (x, y) in the ambient light image frame, and F(x, y) is the pixel value of the pixel with coordinates (x, y) in the background-subtracted image frame.
In one embodiment, the background subtraction calculation may be performed based on the following preset background subtraction calculation formula:
Δ=TG(x,y)-g*BG(x,y)+offset (2)
F(x, y) = Δ, if Δ ≥ 0;  F(x, y) = 0, if Δ < 0 (3)
where Δ is the compensated pixel difference, TG(x, y) is the pixel value of the pixel with coordinates (x, y) in the original image frame, BG(x, y) is the pixel value of the pixel with coordinates (x, y) in the ambient light image frame, g is a preset fixed coefficient compensation parameter, offset is a preset fixed offset compensation parameter, and F(x, y) is the pixel value of the pixel with coordinates (x, y) in the background-subtracted image frame.
In general, the pixel value of a pixel in the original image frame is greater than that of the corresponding pixel in the ambient light image frame. If Δ is less than 0, the small negative difference is likely caused by other errors, and the pixel value may simply be set to 0, as in formula (3). It should be noted that the embodiments of the present invention are not limited to these two background subtraction formulas; other methods, such as directly taking the difference between the pixel values of corresponding pixels in the original image frame and the ambient light image frame as the background subtraction result, may also be used.
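For clarity, a vectorized sketch of formulas (1) to (3) follows. It assumes NumPy arrays of matching shape for TG and BG; the function names and the dtype handling are illustrative assumptions, not part of the patent:

```python
import numpy as np

def subtract_background_abs(tg, bg):
    """Formula (1): F(x, y) = |TG(x, y) - BG(x, y)|."""
    return np.abs(tg.astype(np.int32) - bg.astype(np.int32))

def subtract_background_compensated(tg, bg, g=1.0, offset=0.0):
    """Formulas (2) and (3): delta = TG - g * BG + offset, clamped at 0."""
    delta = tg.astype(np.float64) - g * bg.astype(np.float64) + offset
    return np.where(delta >= 0.0, delta, 0.0)  # negative differences -> 0

# With g = 1 and offset = 0 the two methods agree wherever TG >= BG.
tg = np.array([[120, 90], [60, 30]])
bg = np.array([[100, 95], [20, 40]])
print(subtract_background_abs(tg, bg))          # [[20  5] [40 10]]
print(subtract_background_compensated(tg, bg))  # [[20. 0.] [40. 0.]]
```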
In one application scenario, each pixel is processed individually according to the background subtraction formulas (2) and (3) above. Specifically, a pixel in the original image frame and the co-located pixel of the pixel array 101 in the ambient light image frame are obtained, the fixed coefficient compensation parameter and the fixed offset compensation parameter are read, the pixel difference is calculated with formula (2), and the result is corrected with formula (3) to obtain the pixel value of each pixel in the background-subtracted image frame. In this embodiment, the same depth camera captures both the original image frame and the ambient light image frame without adjusting its position or other parameters, so taking the pixels at the same position in the two images is sufficient to guarantee that the two pixels correspond.
It should be noted that, for the same background light, the pixel values acquired in the two captures may deviate from each other, and if the difference is too large it hampers the background subtraction of the original image frame. Therefore, in this embodiment the pixel values of the ambient light image frame are adjusted with the compensation parameters to obtain brightness-compensated pixel values. The compensation parameters in formula (2) may be input by the user or preset by the user for the current area. Specifically, formula (2) introduces a fixed coefficient compensation parameter g and a fixed offset compensation parameter offset, whose values can be set and adjusted according to actual requirements; in application scenarios where the pixel values are not significantly offset, g = 1 and offset = 0. If the pixel value of the pixel with coordinates (x, y) in the ambient light image frame is BG(x, y), the compensated, corrected pixel value is g × BG(x, y) - offset. Here x and y denote the abscissa and ordinate of the image, respectively.
It should be noted that in this embodiment the background subtraction calculation proceeds pixel by pixel. In actual use it may also proceed block by block, for example directly treating the two rows of pixels obtained each time as one block for the background subtraction operation; alternatively, the background subtraction may be computed and optimized over an entire frame or over multiple frames. This is not specifically limited here.
In an application scenario, the control and data transmission interface 110 may also directly output the original image frame and the ambient light image frame.
In one embodiment, the image signal processor 104 is further configured to obtain the pixel information of the original image frame and compare it with a preset pixel information threshold, and, according to the comparison result, to determine whether the next original image frame undergoes the background subtraction calculation or whether the pixel values of every two rows of the original image frame among the interleaved pixel values are output directly through the control and data transmission interface.
Specifically, the image signal processor 104 obtains the pixel information of the current original image frame and compares it with the preset pixel information threshold. If the pixel information exceeds the threshold, the next received frame undergoes the background subtraction operation; otherwise, the next frame skips background subtraction and the original image frame is output directly.
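A minimal sketch of this decision follows. It assumes the statistics and thresholds are supplied as dictionaries keyed by the same names, and that triggering when any statistic exceeds its threshold is one plausible combination rule; both are illustrative conventions, not the patent's interface:

```python
def needs_background_subtraction(stats, thresholds):
    """Return True if any measured statistic of the current original image
    frame (pixel value mean, saturated pixel number, gradient mean, ...)
    exceeds its configured threshold, so the next frame is subtracted."""
    return any(stats[name] > limit
               for name, limit in thresholds.items() if name in stats)

mode = ("subtract_background"
        if needs_background_subtraction({"avg_w": 180.0}, {"avg_w": 128.0})
        else "direct_output")
print(mode)  # strong ambient light -> "subtract_background"
```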
The pixel information comprises a pixel value mean, a saturated pixel number, a saturated pixel mean, a gradient mean, and/or a target luminance pixel number; the pixel information threshold comprises the corresponding pixel value mean threshold, saturated pixel number threshold, saturated pixel mean threshold, gradient mean threshold, and/or target luminance pixel number threshold. The saturated pixel number is the number of pixels in the original image frame whose pixel values exceed a preset saturation threshold, the saturated pixel mean is the mean value of those pixels, and the target luminance pixel number is the number of pixels in the original image frame whose pixel values fall within a preset pixel value interval. The pixel information of the original image frame may be gathered by window statistics and/or histogram statistics.
When the interference light in the ambient light (i.e., light with the same wavelength as the speckle laser) strongly affects the captured original image frame, the pixel information of that frame can reveal whether the ambient light is too strong and whether the background subtraction operation is needed to eliminate the influence of interference light with the same wavelength as the depth camera's laser.
It should be noted that the brightness of the original image frame may be judged from any one or several items of its pixel information, and the pixel information threshold contains whichever specific thresholds match the pixel information in use. For example, when the pixel information includes the pixel value mean, the threshold includes the corresponding pixel value mean threshold; when it also includes the saturated pixel number, the threshold includes the corresponding saturated pixel number threshold; and when it also includes the saturated pixel mean, the threshold includes the corresponding saturated pixel mean threshold.
Specifically, the image signal processor 104 is first configured with a (weighted) pixel value mean threshold avg_w_t that separates weak indoor ambient light from strong outdoor ambient light; the (weighted) pixel value mean avg_w of the original image frame obtained by statistics (window statistics and/or histogram statistics) is then compared with this threshold, and if avg_w is greater than avg_w_t, the background subtraction calculation is performed. The pixel value mean is the average of the pixel values of all pixels in the original image frame; when it exceeds the pixel value mean threshold, the original image frame is too bright and image processing is needed to eliminate the influence of interference light with the same wavelength as the depth camera's laser.
It should be noted that the gradient is the absolute value of the difference between a pixel value and the preceding pixel value, and the gradient mean of a region is the average of the gradient values at all points in the region; it reflects how sharply the image values change there. A large gradient mean for the original image frame therefore indicates strong variation in its image values, i.e., part of the frame may be affected by the interference light in the ambient light, so background subtraction is needed to eliminate the influence of interference light with the same wavelength as the depth camera's laser.
In one embodiment, the image information statistics are gathered over windows. The original image frame is divided into several windows, each containing one or more pixels; the pixel information statistics are computed per window and then combined into the statistics for the whole frame, which improves the efficiency of obtaining the pixel information. Furthermore, each window can be given its own weight; for example, windows covering regions the user cares about more (such as the window containing the main subject of the target space) can be weighted more heavily, so that the statistics better match the user's needs and improve the user experience.
Specifically, the original image frame is divided into several windows, and the pixel value mean avg, saturated point number sat_cnt, saturated point luminance mean sat_avg, and gradient mean grad_avg of each window are counted separately. Then, using the weight of each window, the (weighted) pixel value mean avg_w, (weighted) saturated-point pixel value mean sat_avg_w, (weighted) gradient mean grad_avg_w, and so on of the frame are computed, and whether the frame is too bright is judged against the corresponding thresholds. In this embodiment a saturation threshold is preset in a register; a pixel whose value exceeds it is a saturated point. A saturated point is a point with an excessively large pixel value, i.e., one possibly subject to relatively strong ambient light; the more saturated points there are, or the larger their mean pixel value, the greater the influence on the current original image frame and the greater the need to eliminate it. In one embodiment, the original image frame is divided into n windows (n ≥ 1, an integer), and the (weighted) pixel value mean is avg_w = (a1 × L1 + a2 × L2 + … + an × Ln)/(L1 + L2 + … + Ln), where a1 to an are the pixel value means of the windows and L1 to Ln are their weights. Optionally, the maximum value max and minimum value min of the pixel values of each window can also be counted, and the difference between them used to further judge whether the current original image is affected by light in the ambient light with the same wavelength as the laser emitted by the depth camera (i.e., interference light).
The weight of each window may be input by the user or preset; the specific method of setting it is not limited. For example, all windows may be given the same weight; or the windows in the middle of the image may be weighted heavily and those at the edges lightly; or the window containing the main subject of the target space may be weighted heavily and the others lightly. In this embodiment, the background-subtracted image is to be used for face recognition, and the face is usually in the middle of the captured image, so the windows in the middle are weighted heavily, the windows at the upper edge lightly, and the windows at the lower edge in between (i.e., more than the upper-edge windows and less than the middle windows). This makes it easier to judge whether the face subject is affected by light in the ambient light with the same wavelength as the laser emitted by the depth camera (i.e., interference light).
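The window statistics and the weighted mean avg_w can be sketched as below. This assumes the frame divides evenly into the window grid and that one weight is given per window; the grid size, saturation threshold, and names are illustrative assumptions:

```python
import numpy as np

def window_statistics(frame, n_rows, n_cols, weights, sat_threshold=240):
    """Split the frame into n_rows x n_cols windows, gather per-window
    statistics, and return the weighted pixel value mean avg_w together
    with the per-window saturated point counts sat_cnt."""
    h, w = frame.shape
    wh, ww = h // n_rows, w // n_cols
    avg, sat_cnt = [], []
    for r in range(n_rows):
        for c in range(n_cols):
            win = frame[r * wh:(r + 1) * wh, c * ww:(c + 1) * ww]
            avg.append(float(win.mean()))
            sat_cnt.append(int((win > sat_threshold).sum()))
    weights = np.asarray(weights, dtype=np.float64)
    # avg_w = (a1*L1 + ... + an*Ln) / (L1 + ... + Ln)
    avg_w = float(np.dot(avg, weights) / weights.sum())
    return avg_w, sat_cnt

# Center-heavy weighting on a 3x3 grid, as in the face recognition example.
frame = np.random.default_rng(0).integers(0, 256, size=(90, 90))
weights = [1, 1, 1, 2, 4, 2, 1, 2, 1]
print(window_statistics(frame, 3, 3, weights))
```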
In one embodiment, histogram statistics may be computed over the pixel values of the original image frame. As shown in FIG. 4, the statistical histogram gives the number of pixels whose values fall in a given luminance interval (the preset pixel value interval), from which it can be judged whether the image is too bright or too dark. For example, in this embodiment the preset pixel value interval may be the interval of pixel values from 235 to 245. Optionally, the pixel luminance mean of the original image frame may also be calculated from the histogram.
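A histogram-statistics sketch in the same vein, counting pixels in the 235-245 interval mentioned above (the 8-bit depth and function name are assumptions for illustration):

```python
import numpy as np

def histogram_statistics(frame, low=235, high=245):
    """Return the number of pixels in the preset interval [low, high] and
    the pixel luminance mean, both derived from a 256-bin histogram."""
    hist, _ = np.histogram(frame, bins=256, range=(0, 256))
    in_interval = int(hist[low:high + 1].sum())
    mean = float(np.dot(np.arange(256), hist) / hist.sum())
    return in_interval, mean

frame = np.random.default_rng(1).integers(0, 256, size=(480, 640))
print(histogram_statistics(frame))
```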
In one embodiment, the image signal processor may also perform auto exposure (AE) based on the pixel information of the original image frame: after obtaining the pixel information, it judges whether the shooting brightness is appropriate and, if not, adjusts the aperture size, exposure time, and so on according to the specific pixel values to reach the optimal shooting brightness.
The image sensor further comprises a power management module 111 and an analog circuit 105. The power management module 111 supplies power to the entire image sensor, and the analog circuit 105 contains the basic analog-domain circuits, such as a DC-DC boost circuit and clock signal control, that keep the image sensor operating normally.
In one embodiment, the image sensor supports several background subtraction modes, including a fixed background subtraction mode, a no-background-subtraction mode, and an automatic background subtraction mode. In the fixed mode, the image sensor captures the original image frame and the ambient light image frame, performs the background subtraction operation directly, and outputs the result. In the no-background-subtraction mode, the image sensor captures and outputs the original image frame only. In the automatic mode, the image sensor captures the original image frame and the ambient light image frame and decides, from the pixel information of the original image frame, whether the next original image frame is background-subtracted or output directly.
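The three modes can be summarized in a small dispatch sketch; the enum names and the frame_is_too_bright flag are illustrative assumptions, not register values defined by the patent:

```python
from enum import Enum
import numpy as np

class BgMode(Enum):
    FIXED = 1  # always subtract and output the result
    OFF = 2    # output the original image frame unchanged
    AUTO = 3   # decide from the pixel information of the original frame

def handle_frame(mode, tg, bg, frame_is_too_bright):
    """frame_is_too_bright is the result of comparing the previous original
    frame's pixel information with its threshold (see the earlier sketch)."""
    if mode is BgMode.OFF or (mode is BgMode.AUTO and not frame_is_too_bright):
        return tg                                              # direct output
    return np.abs(tg.astype(np.int32) - bg.astype(np.int32))  # formula (1)
```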
In one application scenario, the fixed or automatic background subtraction mode is enabled under strong outdoor ambient light to eliminate the influence of light in the ambient light with the same wavelength as the laser emitted by the depth camera (i.e., interference light), while background subtraction is turned off under weak indoor ambient light and the image processing described above is skipped. The control logic for the background subtraction mode can be added to the depth camera processor 3 and transmitted to the image signal processor 104 through the control and data transmission interface 110.
In summary, with the scheme of this embodiment the background subtraction mode can be selected flexibly according to the ambient light conditions. Background subtraction removes the disturbance that interference light in the ambient light with the same wavelength as the depth camera's laser causes to the reflected laser received by the depth camera, improving the accuracy of the depth image obtained by the depth camera and, in turn, the accuracy of subsequent operations that use the depth image (for example, 3D face recognition based on the depth image).
As shown in FIG. 5, an embodiment of the present invention provides an image processing method applied in an image sensor. Specifically, the image processing method includes the following steps:
s100, after receiving an image acquisition instruction, exposing an original image frame by adopting a global exposure mode and exposing an ambient light image frame by adopting a rolling shutter exposure mode;
s200, processing voltage signals of every two lines of pixels in an original image frame and voltage signals of pixels in corresponding lines in an ambient light image frame line by line to obtain corresponding pixel values;
and S300, performing background subtraction calculation on the pixel values of every two lines of pixels in the original image frame and the pixel values of the corresponding lines of pixels in the ambient light image frame in the pixel values, and outputting to form a background subtraction image frame.
In an embodiment, step S200 specifically includes the following steps (a numeric sketch follows the list):
S201, performing gain control, two rows at a time, on the voltage signals of the pixels in the original image frame and the voltage signals of the pixels in the corresponding rows of the ambient light image frame to obtain the gain-summed voltages of the row-by-row pixels;
S202, converting the gain-summed voltages of the row-by-row pixels into voltage values of the row-by-row pixels through analog-to-digital conversion;
S203, converting the voltage values of the row-by-row pixels into pixel values of the row-by-row pixels.
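Numerically, steps S201 to S203 amount to amplification followed by quantization. The sketch below assumes a 10-bit converter, a 1 V full-scale reference, and a direct code-to-pixel-value mapping; all three are illustrative values, not values from the patent.

```python
import numpy as np

V_FULL_SCALE = 1.0  # assumed ADC reference voltage, in volts
ADC_BITS = 10       # assumed converter resolution


def readout(voltages, gain=2.0):
    """voltages: per-pixel voltage signals for the rows being read out."""
    voltages = np.asarray(voltages, dtype=np.float64)
    amplified = np.clip(voltages * gain, 0.0, V_FULL_SCALE)           # S201: gain control
    codes = np.round(amplified / V_FULL_SCALE * (2 ** ADC_BITS - 1))  # S202: A/D conversion
    return codes.astype(np.uint16)                                    # S203: pixel values
```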
In one embodiment, the image processing method further comprises the following steps (a decision sketch follows the list):
S401, obtaining pixel information of the original image frame, and comparing the pixel information of the original image frame with a preset pixel information threshold;
S402, determining, according to the comparison result, whether the next original image frame undergoes the background subtraction calculation or whether the pixel values of every two rows of pixels in the original image frame, among the interleaved-output pixel values, are output directly.
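One plausible reading of S401/S402 is that strong ambient light, evidenced by a high pixel mean or many saturated pixels, argues for subtracting the background from the next frame. The policy and the threshold keys below are assumptions for illustration; the statistics themselves are sketched below.

```python
def subtract_next_frame(stats, thresholds):
    """stats, thresholds: dicts keyed e.g. by 'mean' and 'saturated_count'.
    Returns True if the next original frame should be background-subtracted."""
    return (stats["mean"] > thresholds["mean"]
            or stats["saturated_count"] > thresholds["saturated_count"])
```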
In one embodiment, the pixel information of the original image frame is gathered by window statistics and/or histogram statistics.
In one embodiment, the pixel information includes a pixel value mean, a saturated pixel point number, a saturated pixel point mean, a gradient mean, and/or a target brightness pixel number;
the pixel information threshold comprises a pixel value mean threshold, a saturated pixel point number threshold, a saturated pixel point mean threshold, a gradient mean threshold, and/or a target brightness pixel number threshold corresponding to the pixel information;
where the saturated pixel point number is the number of pixels in the original image frame whose pixel values are greater than a preset saturation threshold, and the target brightness pixel number is the number of pixels in the original image frame whose pixel values fall within a preset pixel value interval.
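The statistics just defined can be computed over the full frame as below; window or histogram statistics would restrict or bin the same quantities. The saturation threshold and target interval are illustrative placeholders for the preset values.

```python
import numpy as np


def frame_statistics(frame, saturation_threshold=1000, target_interval=(200, 800)):
    """Compute the pixel information listed above for one original image frame."""
    gy, gx = np.gradient(frame.astype(np.float64))
    saturated = frame > saturation_threshold
    lo, hi = target_interval
    return {
        "mean": float(frame.mean()),
        "saturated_count": int(saturated.sum()),
        "saturated_mean": float(frame[saturated].mean()) if saturated.any() else 0.0,
        "gradient_mean": float(np.hypot(gx, gy).mean()),
        "target_brightness_count": int(((frame >= lo) & (frame <= hi)).sum()),
    }
```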
In one embodiment, the background subtraction calculation is based on the following preset background subtraction calculation formula:
F(x,y)=|TG(x,y)-BG(x,y)| (1)
where TG(x, y) is the pixel value of the pixel at coordinates (x, y) in the original image frame, BG(x, y) is the pixel value of the pixel at coordinates (x, y) in the ambient light image frame, and F(x, y) is the pixel value of the pixel at coordinates (x, y) in the background subtraction image frame.
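In numpy, formula (1) is a one-liner; the signed intermediate type guards against unsigned wraparound before the absolute value is taken.

```python
import numpy as np


def background_subtract(tg, bg):
    """Formula (1): F = |TG - BG|, element-wise over the two frames."""
    return np.abs(tg.astype(np.int32) - bg.astype(np.int32)).astype(tg.dtype)
```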
In one embodiment, the background subtraction calculation is based on the following preset background subtraction calculation formula:
Δ=TG(x,y)-g*BG(x,y)+offset (2)
[Equation (3), which maps the compensated pixel difference Δ to F(x, y), is rendered only as an image in the source text.]
where Δ is the compensated pixel difference, TG(x, y) is the pixel value of the pixel at coordinates (x, y) in the original image frame, BG(x, y) is the pixel value of the pixel at coordinates (x, y) in the ambient light image frame, g is a preset fixed-coefficient compensation parameter, offset is a preset fixed-offset compensation parameter, and F(x, y) is the pixel value of the pixel at coordinates (x, y) in the background subtraction image frame.
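A sketch of the compensated variant follows. Formula (2) is as stated; how Δ is mapped to F(x, y) is given by equation (3), which survives only as an image here, so clamping negative differences to zero is an assumption, not the patent's stated mapping.

```python
import numpy as np


def background_subtract_compensated(tg, bg, g=1.0, offset=0):
    """Formula (2): Δ = TG - g*BG + offset, with preset parameters g and offset."""
    delta = tg.astype(np.int32) - np.round(g * bg).astype(np.int32) + offset
    # Assumed equation (3): F = max(Δ, 0); the original formula is an image.
    return np.clip(delta, 0, None).astype(tg.dtype)
```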
It should be noted that the background subtraction calculation and the production of the background subtraction image frame may be executed in the image sensor or in the corresponding depth calculation chip; this is not limited here. When background subtraction is performed in the depth calculation chip (i.e., off-chip background subtraction), the image sensor transmits the original image frame and the ambient light image frame to the depth calculation chip; the depth calculation chip selects the background subtraction mode according to the preset background subtraction formula and the pixel information of the original image frame, performs the background subtraction calculation, and uses the resulting background subtraction image frame for depth calculation.
For details of the above image processing method embodiments, reference may be made to the description of the image sensor embodiments; they are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing an image processing program which, when executed by a processor, implements the steps of any of the image processing methods provided in the embodiments of the present invention.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes is determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
It should be clear to those skilled in the art that, for convenience and brevity of description, the foregoing division into functional units and modules is illustrative only; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from one another and do not limit the protection scope of the present invention. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments; they are not repeated here.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art would appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one type of logical function division, and the actual implementation may be implemented by another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
If the integrated modules/units described above are implemented in the form of software functional units and sold or used as separate products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the contents of the computer-readable storage medium may be expanded or restricted as required by legislation and patent practice in the relevant jurisdiction.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention and should be construed as falling within them.

Claims (16)

1. An image sensor, comprising:
the control and data transmission interface is used for receiving and outputting an image acquisition instruction;
the exposure control circuit is used for receiving the image acquisition instruction and then controlling the pixel array to expose an original image frame in a global exposure mode and expose an environment light image frame in a rolling shutter exposure mode, wherein the original image frame comprises an image frame obtained by projecting a target space through environment light and speckle laser, and the environment light image frame comprises an image frame obtained by projecting the target space through the environment light;
the pixel array is used for outputting, in an interleaved manner, the voltage signals of every two rows of pixels in the original image frame and the voltage signals of the corresponding rows of pixels in the ambient light image frame to a reading circuit;
the reading circuit is used for processing the voltage signals of the interleaved-output pixels to obtain corresponding pixel values and outputting the pixel values to the image signal processor;
the image signal processor is configured to perform background subtraction calculation on pixel values of every two rows of pixels in the original image frame and pixel values of corresponding rows of pixels in the ambient light image frame in the pixel values output in an interleaved manner, and output the calculation result through the control and data transmission interface to form a background subtraction image frame.
2. The image sensor of claim 1, wherein the readout circuit comprises:
the programmable gain amplifier is used for performing gain control on the voltage signals of the interleaved-output row-by-row pixels to obtain the gain-summed voltages of the interleaved-output row-by-row pixels, and outputting them to the analog-to-digital conversion circuit;
the analog-to-digital conversion circuit is used for converting the gain-summed voltages of the interleaved-output row-by-row pixels into voltage values of the interleaved-output row-by-row pixels through analog-to-digital conversion, and outputting them to the data conversion circuit;
and the data conversion circuit is used for converting the voltage values of the interleaved-output row-by-row pixels into pixel values of the interleaved-output row-by-row pixels and outputting them to the image signal processor.
3. The image sensor of claim 1, wherein the image signal processor is further configured to obtain pixel information of the original image frame and compare the pixel information of the original image frame with a preset pixel information threshold;
and to determine, according to the comparison result, whether the next original image frame undergoes the background subtraction calculation or whether the pixel values of the row-by-row pixels in the original image frame, among the pixel values output in an interleaved manner, are output directly through the control and data transmission interface.
4. The image sensor of claim 3, wherein the pixel information of the original image frame is gathered by window statistics and/or histogram statistics.
5. The image sensor of claim 3,
the pixel information comprises a pixel value mean, a saturated pixel point number, a saturated pixel point mean, a gradient mean, and/or a target brightness pixel number;
the pixel information threshold comprises a pixel value mean threshold, a saturated pixel point number threshold, a saturated pixel point mean threshold, a gradient mean threshold, and/or a target brightness pixel number threshold corresponding to the pixel information;
wherein the saturated pixel point number is the number of pixels in the original image frame whose pixel values are greater than a preset saturation threshold, and the target brightness pixel number is the number of pixels in the original image frame whose pixel values fall within a preset pixel value interval.
6. The image sensor of any of claims 1-3, wherein the background subtraction calculation is based on the following preset background subtraction calculation formula:
F(x,y)=|TG(x,y)-BG(x,y)|
wherein TG (x, y) is a pixel value of a pixel with coordinates (x, y) in the original image frame, BG (x, y) is a pixel value of a pixel with coordinates (x, y) in the ambient light image frame, and F (x, y) is a pixel value of a pixel with coordinates (x, y) in the background subtraction image frame.
7. The image sensor of any of claims 1-3, wherein the background subtraction calculation is based on the following preset background subtraction calculation formula:
Δ=TG(x,y)-g*BG(x,y)+offset
[Equation mapping Δ to F(x, y), rendered only as an image in the source text.]
wherein Δ is the compensated pixel difference, TG (x, y) is the pixel value of the pixel with coordinates (x, y) in the original image frame, BG (x, y) is the pixel value of the pixel with coordinates (x, y) in the ambient light image frame, g is a preset fixed coefficient compensation parameter, offset is a preset fixed offset compensation parameter, and F (x, y) is the pixel value of the pixel with coordinates (x, y) in the background subtraction image frame.
8. An image processing method, comprising:
after receiving an image acquisition instruction, exposing an original image frame by adopting a global exposure mode and exposing an ambient light image frame by adopting a rolling shutter exposure mode, wherein the original image frame comprises an image frame obtained by projecting a target space by ambient light and speckle laser, and the ambient light image frame comprises an image frame obtained by projecting the target space by the ambient light;
processing voltage signals of every two lines of pixels in the original image frame and voltage signals of pixels in corresponding lines in the ambient light image frame line by line to obtain corresponding pixel values;
and performing background subtraction calculation on the pixel values of every two lines of pixels in the original image frame and the pixel values of the corresponding lines of pixels in the ambient light image frame and outputting to form a background subtraction image frame.
9. The image processing method according to claim 8, wherein the processing the voltage signals of every two rows of pixels in the original image frame and the voltage signals of the corresponding rows of pixels in the ambient light image frame line by line to obtain corresponding pixel values comprises:
performing gain control, two rows at a time, on the voltage signals of the pixels in the original image frame and the voltage signals of the pixels in the corresponding rows of the ambient light image frame to obtain the gain-summed voltages of the row-by-row pixels;
performing analog-to-digital conversion on the gain-summed voltages of the row-by-row pixels to obtain voltage values of the row-by-row pixels;
and converting the voltage values of the row-by-row pixels into pixel values of the row-by-row pixels.
10. The image processing method according to claim 8, further comprising:
acquiring pixel information of the original image frame, and comparing the pixel information of the original image frame with a preset pixel information threshold value;
and determining, according to the comparison result, whether the next original image frame undergoes the background subtraction calculation or whether the pixel values of every two rows of pixels in the original image frame are output directly.
11. The image processing method according to claim 10, wherein the statistical manner of the pixel information of the original image frame is window statistics or histogram statistics.
12. The image processing method according to claim 10,
the pixel information comprises a pixel value mean value, a saturated pixel point number, a saturated pixel point mean value, a gradient mean value and/or a target brightness pixel number;
the pixel information threshold comprises a pixel value mean threshold, a saturated pixel point number threshold, a saturated pixel point mean threshold, a gradient mean threshold, and/or a target brightness pixel number threshold corresponding to the pixel information;
wherein the number of saturated pixel points is the number of pixels of which the pixel values are greater than a preset saturation threshold value in the original image frame, and the target brightness pixel number is the number of pixels of which the pixel values are within a preset pixel value interval in the original image frame.
13. The image processing method according to any one of claims 8 to 10, wherein the background subtraction calculation is based on the following preset background subtraction calculation formula:
F(x,y)=|TG(x,y)-BG(x,y)|
wherein TG(x, y) is the pixel value of the pixel at coordinates (x, y) in the original image frame, BG(x, y) is the pixel value of the pixel at coordinates (x, y) in the ambient light image frame, and F(x, y) is the pixel value of the pixel at coordinates (x, y) in the background subtraction image frame.
14. The image processing method according to any one of claims 8 to 10, wherein the background subtraction calculation is based on the following preset background subtraction calculation formula:
Δ=TG(x,y)-g*BG(x,y)+offset
[Equation mapping Δ to F(x, y), rendered only as an image in the source text.]
wherein Δ is the compensated pixel difference, TG (x, y) is the pixel value of the pixel with coordinates (x, y) in the original image frame, BG (x, y) is the pixel value of the pixel with coordinates (x, y) in the ambient light image frame, g is a preset fixed coefficient compensation parameter, offset is a preset fixed offset compensation parameter, and F (x, y) is the pixel value of the pixel with coordinates (x, y) in the background subtraction image frame.
15. A depth camera, comprising: a transmitting module, a depth calculation chip, and a receiving module comprising an image sensor according to any one of claims 1 to 7.
16. A computer-readable storage medium, having stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method according to any one of claims 8 to 14.
CN202210976999.8A 2022-08-15 2022-08-15 Image sensor, image processing method, depth camera, and storage medium Pending CN115499604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210976999.8A CN115499604A (en) 2022-08-15 2022-08-15 Image sensor, image processing method, depth camera, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210976999.8A CN115499604A (en) 2022-08-15 2022-08-15 Image sensor, image processing method, depth camera, and storage medium

Publications (1)

Publication Number Publication Date
CN115499604A true CN115499604A (en) 2022-12-20

Family

ID=84465684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210976999.8A Pending CN115499604A (en) 2022-08-15 2022-08-15 Image sensor, image processing method, depth camera, and storage medium

Country Status (1)

Country Link
CN (1) CN115499604A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116419074A (en) * 2023-03-08 2023-07-11 哈尔滨市科佳通用机电股份有限公司 Railway vehicle image acquisition method and system for eliminating sunlight interference
CN116419074B (en) * 2023-03-08 2024-04-19 哈尔滨市科佳通用机电股份有限公司 Railway vehicle image acquisition method and system for eliminating sunlight interference

Similar Documents

Publication Title
CN110036630B (en) Solid-state imaging device, and mirror of transport apparatus
US8976279B2 (en) Light receiver, method and transmission system with time variable exposure configurations
CN213279832U (en) Image sensor, camera and terminal
US7884868B2 (en) Image capturing element, image capturing apparatus, image capturing method, image capturing system, and image processing apparatus
US9230310B2 (en) Imaging systems and methods for location-specific image flare mitigation
US10593055B2 (en) Method and apparatus for capturing images and associated 3D model based on a single image sensor and structured-light patterns in the visible spectrum
US9516295B2 (en) Systems and methods for multi-channel imaging based on multiple exposure settings
US8259179B2 (en) Compensating for non-uniform illumination of object fields captured by a camera
US20020163583A1 (en) System and method for capturing color images that extends the dynamic range of an image sensor
US9838625B2 (en) Image processing apparatus and control method for image processing apparatus for controlling correction of a black level in a combined image signal
CN112118378A (en) Image acquisition method and device, terminal and computer readable storage medium
US10638072B2 (en) Control apparatus, image pickup apparatus, and control method for performing noise correction of imaging signal
US20180338096A1 (en) Image processing appartatus
CN102783135A (en) Method and apparatus for providing a high resolution image using low resolution
JP2009520402A (en) Method and apparatus for setting black level of imaging device using optical black pixel and voltage fixed pixel
US11818462B2 (en) Phase detection autofocus sensor apparatus and method for depth sensing
US20180284576A1 (en) Imaging apparatus, imaging method, and program
CN107995396B (en) Two camera modules and terminal
CN115499604A (en) Image sensor, image processing method, depth camera, and storage medium
US7773805B2 (en) Method and apparatus for flare cancellation for image contrast restoration
JP4523629B2 (en) Imaging device
CN112750087A (en) Image processing method and device
CN110602420B (en) Camera, black level adjusting method and device
CN112866596B (en) Anti-strong light three-dimensional capturing method and system based on CMOS sensor
CN111885281B (en) Image Processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination