CN118077214A - Noise reduction circuit for image sensor

Publication number: CN118077214A
Authority: CN (China)
Prior art keywords: noise reduction, image, reference frame, reduction circuit, noise
Legal status: Pending
Application number: CN202280068060.XA
Other languages: Chinese (zh)
Inventors: 莱尔·大卫·班布里奇, 蔡宗勋
Current Assignee: Meta Platforms Technologies LLC
Original Assignee: Meta Platforms Technologies LLC
Priority claimed from US17/950,199 (US20230105527A1)
Application filed by Meta Platforms Technologies LLC
Priority claimed from PCT/US2022/045505 (WO2023059538A1)
Publication of CN118077214A

Abstract

Some examples described herein include a noise reduction circuit for an image sensor. The noise reduction circuit may include a reference frame generator configured to generate a reference frame based on a set of image frames received from the image sensor during a calibration phase. The noise reduction circuit may also include a memory coupled to the reference frame generator. The memory may receive the reference frame from the reference frame generator and store it for subsequent use during a noise reduction stage. The noise reduction circuit may also include a processor coupled to the memory. The processor may retrieve the reference frame from the memory during the noise reduction stage and use it to reduce noise in image frames received from the image sensor.

Description

Noise reduction circuit for image sensor
Technical Field
The present disclosure relates generally to image sensors. More particularly, but not by way of limitation, the present disclosure relates to noise reduction circuitry for image sensors (e.g., digital pixel sensors).
Background
The image sensor may include an array of pixel cells. Each pixel cell may include a photodiode to sense light by converting photons into charge (e.g., electrons or holes). An analog-to-digital converter (ADC) can then convert the amount of charge produced by the photodiode array to a digital value to produce a digital image. The digital image may be transmitted from the image sensor to another system for use by the other system.
In some types of image sensors, each pixel may have its own ADC for converting the charge of the pixel to a digital value. In some cases, the ADCs may be fabricated on the same chip as the pixel array. As a result, many ADCs may be packed within the small footprint of the image sensor. To fit within that footprint, these ADCs must be very small, which can make it difficult to manufacture them in a consistent, high-quality manner. Due to these manufacturing inconsistencies, a single image sensor may contain ADCs that behave differently (sometimes significantly differently) from one another, which can create noise in the digital images it produces.
The noise may include multiple noise components. Two examples of such components are fixed pattern noise and temporal noise. Fixed pattern noise is a spatial variation in pixel output values that occurs under uniform illumination due to small differences in the properties of individual pixels. It is "fixed" in the sense that it is constant over time, so the same pattern of pixel brightness variations appears in images captured under the same illumination conditions. Temporal noise, in contrast, is random noise that varies independently from image to image and over time. The overall noise observed in a digital image may arise from a combination of fixed pattern noise and temporal noise. Removing both from digital images can be challenging: existing noise reduction techniques may remove one type of noise or the other, but not both, leaving residual noise in the resulting images.
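The distinction can be illustrated with a simple per-pixel model (a minimal Python sketch with assumed noise magnitudes, not taken from the disclosure): fixed pattern noise is an offset that repeats in every frame, while temporal noise is re-drawn each frame, so averaging many frames suppresses only the temporal component.

import numpy as np

rng = np.random.default_rng(0)
height, width = 4, 4
scene = np.full((height, width), 100.0)                  # uniform illumination
fixed_pattern = rng.normal(0.0, 5.0, (height, width))    # constant from frame to frame

def capture():
    temporal = rng.normal(0.0, 3.0, (height, width))     # re-drawn for every frame
    return scene + fixed_pattern + temporal

average = np.mean([capture() for _ in range(100)], axis=0)

# Averaging suppresses the temporal component but leaves the fixed pattern intact.
print(np.std(average - (scene + fixed_pattern)))  # small residual temporal noise
print(np.std(average - scene))                    # roughly the 5.0 fixed-pattern sigma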
It may be desirable to overcome one or more of the above problems, for example by providing a noise reduction circuit capable of reducing both fixed pattern noise and temporal noise in image frames captured by an image sensor.
Disclosure of Invention
According to a first aspect of the present disclosure, a noise reduction circuit is provided. The noise reduction circuit includes a reference frame generator configured to generate a reference frame based on a plurality of image frames received from the image sensor during a calibration phase. The noise reduction circuit also includes a memory coupled to the reference frame generator, the memory configured to receive the reference frame from the reference frame generator and store the reference frame for subsequent use during a noise reduction stage that follows the calibration stage. The noise reduction circuit also includes a processor coupled to the memory, the processor configured to retrieve the reference frame from the memory during the noise reduction stage and use the reference frame to reduce noise in image frames received from the image sensor.
In one embodiment, the noise reduction circuit is part of the image sensor.
In one embodiment, the noise reduction circuit is separate from the image sensor.
In an embodiment, the reference frame generator is configured to generate the reference frame by averaging the plurality of image frames together.
In an embodiment, the plurality of image frames includes more than five image frames.
In an embodiment, the processor is configured to use the reference frame to reduce noise in the image frame by subtracting the reference frame from the image frame to generate a corrected image frame.
In an embodiment, the noise comprises temporal noise generated by one or more analog-to-digital converters of the image sensor.
In one embodiment, the image sensor is a digital pixel sensor having a pixel array configured to be coupled to noise reduction circuitry.
In an embodiment, the reference frame generator is further configured to perform a recalibration phase after the noise reduction phase, the recalibration phase comprising: receiving a new image frame from an image sensor; receiving a reference frame from a memory; generating a new reference frame by combining the new image frame with the reference frame; and storing the new reference frame in memory.
In an embodiment, the processor is configured to use the new reference frame to reduce noise in the received image frame from the image sensor during another noise reduction stage following the recalibration stage.
In an embodiment, the reference frame generator is configured to generate a new reference frame by applying a weighted average scheme to the reference frame and a new image frame, the new image frame having a lower weight than the reference frame in the weighted average scheme.
According to a second aspect of the present disclosure, there is provided a method comprising: during a calibration phase, receiving, by a noise reduction circuit, a plurality of image frames from an image sensor; generating, by a noise reduction circuit, a reference frame based on the plurality of image frames; during a noise reduction stage subsequent to the calibration stage, receiving, by the noise reduction circuit, image frames from the image sensor; and reducing noise in the image frame by the noise reduction circuit using the reference frame.
In an embodiment, the noise reduction circuit is part of the image sensor.
In one embodiment, the noise reduction circuit is separate from and coupled to the image sensor.
In an embodiment, generating the reference frame includes averaging the plurality of image frames together.
In an embodiment, using the reference frame to reduce noise in the image frame includes subtracting the reference frame from the image frame to generate a corrected image frame.
In one embodiment, the noise includes fixed pattern noise and temporal noise, the fixed pattern noise and temporal noise being generated by one or more analog-to-digital converters of the image sensor.
In an embodiment, the noise reduction stage is a first noise reduction stage, and the method further comprises: performing a recalibration phase after the first noise reduction phase, the recalibration phase comprising: receiving a new image frame from the image sensor; and generating a new reference frame by combining the new image frame with the reference frame; and performing a second noise reduction stage after the recalibration stage, the second noise reduction stage comprising: receiving one or more image frames from the image sensor; and using the new reference frame to reduce noise in the one or more image frames.
In an embodiment, generating the new reference frame includes applying a weighted average scheme to the reference frame and the new image frame.
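As a concrete illustration of this weighted-average recalibration, a minimal Python sketch is given below; the specific weight is an illustrative assumption, not a value given in the disclosure.

def recalibrate(reference_frame, new_frame, new_weight=0.1):
    # Weighted-average recalibration: the new frame is given a lower weight than
    # the stored reference frame, so the reference adapts slowly over time.
    # The 0.1 / 0.9 split is an illustrative assumption, not a value from the disclosure.
    return (1.0 - new_weight) * reference_frame + new_weight * new_frame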
According to a third aspect of the present disclosure, there is provided an artificial reality system comprising an image sensor and a noise reduction circuit coupled to the image sensor. The noise reduction circuit is configured to: receiving a plurality of image frames from an image sensor during a calibration phase; generating a reference frame based on the plurality of image frames; receiving image frames from the image sensor during a noise reduction stage following the calibration stage; and generating a corrected image frame by reducing noise in the image frame using the reference frame. The artificial reality system further includes a computer system coupled to the noise reduction circuit and the display device, the computer system configured to generate an artificial reality environment for display on the display device based on the corrected image frames generated by the noise reduction circuit.
Some examples of the present disclosure may overcome one or more of the problems described above by providing a noise reduction circuit capable of reducing both fixed pattern noise and temporal noise in image frames captured by an image sensor. For example, the noise reduction circuit may implement a calibration phase that includes acquiring N image frames from the image sensor, where N is greater than 1. One example of N may be 10 image frames, although other numbers of image frames are possible. The noise reduction circuit may then generate a reference frame based on the N image frames, for example by averaging the N image frames together. Generating the reference frame from multiple image frames can help address temporal noise, whereas a single image frame may not be sufficient to do so. After generating the reference frame, the noise reduction circuit may store it in a memory (e.g., a static random access memory (SRAM) frame buffer). This completes the calibration phase.
Next, the noise reduction circuit may initiate a noise reduction stage. In the noise reduction stage, the noise reduction circuit may receive additional image frames from the image sensor and use the stored reference frame to reduce noise in those frames. For example, the noise reduction circuit may subtract the stored reference frame from each additional image frame to generate a corresponding corrected image frame. The corrected image frames may have less fixed pattern noise and/or temporal noise than was present prior to noise correction (e.g., noise cancellation).
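A minimal sketch of these two phases in Python, assuming frames arrive as NumPy arrays; the function names and the read_frame() helper are illustrative, not part of the disclosure.

import numpy as np

def calibrate(frames):
    # Calibration phase: average the N captured frames into a single reference frame.
    return np.mean(np.stack(frames), axis=0)

def correct(image_frame, reference_frame):
    # Noise reduction phase: subtract the stored reference frame from each new frame.
    return image_frame - reference_frame

# Hypothetical usage with a read_frame() helper standing in for the image sensor:
# reference = calibrate([read_frame() for _ in range(10)])   # e.g., N = 10
# corrected = correct(read_frame(), reference)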
After generating the corrected image frames, the noise reduction circuitry may transmit the corrected image frames to the computing system for subsequent use. For example, the computing system may use the corrected image frames for various purposes, such as object recognition and tracking, location tracking, augmented reality (AR), and virtual reality (VR). By using the noise-corrected digital images, the computing system may be able to perform its intended functions in an improved manner.
The above description is provided by way of example only and does not limit or define the boundaries of the subject matter. Various other examples are described herein, and variations to these examples will be understood by those skilled in the art. The advantages provided by the various examples may be further understood by examining this specification and/or by practicing one or more examples of the claimed subject matter.
These illustrative examples are mentioned not to limit or define the scope of the disclosure, but to provide examples to aid in understanding the disclosure. Illustrative examples are discussed in the detailed description, which provides further description. The advantages provided by the various examples may be further understood by examining this specification.
Drawings
A number of illustrative embodiments are described with reference to the following figures.
Fig. 1A, 1B, 1C, and 1D are schematic diagrams of a near-eye display.
Fig. 2 is a cross-sectional side view of a near-eye display.
Fig. 3 shows an isometric view of a waveguide display with a single source assembly.
Fig. 4 shows a cross section of a waveguide display.
Fig. 5 is a block diagram of a system including a near-eye display.
Fig. 6 shows an example of an image sensor and its operation.
Fig. 7A, 7B, and 7C illustrate examples of an image processing system and its operation.
Fig. 8A, 8B, and 8C illustrate example components of the image processing system of fig. 7A-7C.
Fig. 9 illustrates an example of a system including an image sensor with noise reduction circuitry in accordance with some aspects of the present disclosure.
Fig. 10 illustrates an example of a system in which an image sensor is separate from noise reduction circuitry, in accordance with some aspects of the present disclosure.
Fig. 11 illustrates an example of a noise reduction circuit according to some aspects of the present disclosure.
Fig. 12 illustrates an example of operational phases associated with an image sensor and noise reduction circuitry in accordance with aspects of the present disclosure.
Fig. 13 illustrates an example of a reference frame generator in accordance with aspects of the present disclosure.
Fig. 14 illustrates an example of a process for switching between multiple modes of operation of a noise correction circuit in accordance with some aspects of the present disclosure.
For purposes of illustration only, the drawings depict some examples of the disclosure. Those skilled in the art will readily recognize from the following description that alternative examples of the illustrated structures and methods may be employed without departing from the principles or the claimed benefits of the present disclosure.
In the drawings, similar components and/or features may have the same reference numerals. Furthermore, individual components of the same type may be distinguished by following the reference label by a lower case letter that distinguishes among the similar components. If only reference numerals are used in the specification, the description may be applied to any one of a plurality of similar components having the same reference numerals without regard to the corresponding lower case letters.
Detailed Description
In the following description, for purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain inventive embodiments. It will be apparent, however, that the various embodiments may be practiced without these specific details. These drawings and descriptions are not meant to be limiting.
Fig. 1A and 1B are schematic diagrams of an embodiment of a near-eye display 100. The near-eye display 100 presents media to a user. Examples of media presented by near-eye display 100 include one or more images, video, and/or audio. In some embodiments, the audio is presented by an external device (e.g., a speaker and/or a headset) that receives audio information from the near-eye display 100, the console, or both the near-eye display and the console, and presents audio data based on the audio information. The near-eye display 100 is generally configured to operate as a Virtual Reality (VR) display. In some embodiments, the near-eye display 100 is adapted to operate as an Augmented Reality (AR) display and/or a Mixed Reality (MR) display.
Near-eye display 100 includes a frame 105 and a display 110. The frame 105 is coupled to one or more optical elements. The display 110 is configured for the user to view content presented by the near-eye display 100. In some embodiments, display 110 includes a waveguide display assembly for directing light from one or more images to the eyes of a user.
Near-eye display 100 also includes image sensors 120a, 120b, 120c, and 120d. Each of the image sensors 120a, 120b, 120c, and 120d may include an array of pixels configured to generate image data representing different fields of view in different directions. For example, sensors 120a and 120b may be configured to provide image data representing two fields of view along the Z-axis towards direction A, while sensor 120c may be configured to provide image data representing a field of view along the X-axis towards direction B, and sensor 120d may be configured to provide image data representing a field of view along the X-axis towards direction C.
In some embodiments, the sensors 120a to 120d may be configured as input devices for controlling or affecting the display content of the near-eye display 100 to provide an interactive VR/AR/MR experience to a user wearing the near-eye display 100. For example, the sensors 120a to 120d may generate physical image data of the physical environment in which the user is located. The physical image data may be provided to a position tracking system to track the user's position and/or path of movement in the physical environment. The system may then update the image data provided to the display 110 based on, for example, the user's location and orientation to provide an interactive experience. In some embodiments, the position tracking system may run a simultaneous localization and mapping (SLAM) algorithm to track a set of objects in the physical environment and within the user's field of view as the user moves in the physical environment. The position tracking system may construct and update a map of the physical environment based on the set of objects and may track the user's location within the map. By providing image data corresponding to multiple fields of view, the sensors 120a to 120d may give the position tracking system a more comprehensive view of the physical environment, which may allow more objects to be included in the construction and updating of the map. With this arrangement, the accuracy and robustness of tracking the user's location within the physical environment can be improved.
In some embodiments, the near-eye display 100 may also include one or more active illuminators 140 to project light into the physical environment. The projected light may be associated with different spectrums (e.g., visible light, infrared light, ultraviolet light) and may be used for various purposes. For example, illuminator 140 may project light in a dark environment (or in an environment with low intensity infrared light, ultraviolet light, etc.) to assist image sensors 120 a-120 d in capturing images of different objects within the dark environment, thereby enabling, for example, location tracking of a user. The illuminator 140 may project certain markers onto objects within the environment to assist the position tracking system in identifying the objects for mapping/updating.
In some embodiments, illuminator 140 may also enable stereoscopic imaging. For example, one or more of the sensors 120a or 120b may include a first pixel array for visible light sensing and a second pixel array for infrared (IR) light sensing. The first pixel array may be covered with a color filter (e.g., a Bayer filter), wherein each pixel in the first pixel array is configured to measure the intensity of light associated with a particular color (e.g., one of red, green, or blue). The second pixel array (for IR light sensing) may also be covered with a filter that allows only IR light to pass, wherein each pixel in the second pixel array is configured to measure the intensity of the IR light. These pixel arrays may generate a red-green-blue (RGB) image and an IR image of an object, where each pixel in the IR image is mapped to each pixel in the RGB image. Illuminator 140 may project a set of IR markers onto the object, images of which may be captured by the IR pixel array. Based on the distribution of the IR markers of the object shown in the image, the system may estimate distances between different portions of the object and the IR pixel array and generate a stereoscopic image of the object based on the distances. Based on the stereoscopic image of the object, the system may determine, for example, a relative position of the object with respect to the user, and may update the image data provided to the display 110 based on the relative position information to provide an interactive experience.
As discussed above, the near-eye display 100 may operate in an environment associated with a very wide range of light intensities. For example, the near-eye display 100 may operate in an indoor environment or an outdoor environment, and/or at different times of the day. The near-eye display 100 may also operate with or without the active illuminator 140 being turned on. Accordingly, the image sensors 120 a-120 d may need to have a wide dynamic range to be able to operate normally (e.g., to generate an output related to the intensity of incident light) over a very wide range of light intensities associated with different operating environments of the near-eye display 100.
Fig. 1C and 1D are schematic diagrams of another embodiment of a near-eye display 100. Fig. 1C and 1D illustrate a side of the near-eye display 100 facing one or more eyeballs 135 of a user wearing the near-eye display 100. As shown in fig. 1C and 1D, the near-eye display 100 may further include a plurality of illuminators 140a, 140b, 140c, 140d, 140e, and 140f. The near-eye display 100 also includes a plurality of image sensors 150a and 150b. The illuminators 140a, 140b, and 140c may emit light of a particular frequency range (e.g., near infrared (NIR)) toward direction D (direction D is opposite to direction A in fig. 1A and 1B). The emitted light may be associated with a particular pattern and may be reflected by the left eye of the user. The sensor 150a may include an array of pixels for receiving the reflected light and generating an image of the reflected pattern. Similarly, illuminators 140d, 140e, and 140f may emit NIR light carrying a pattern. The NIR light may be reflected by the user's right eye and may be received by the image sensor 150b. Sensor 150b may also include an array of pixels that are used to generate an image of the reflected pattern. The system may determine a gaze point of the user based on the images of the reflected patterns from sensors 150a and 150b and update the image data provided to the display 110 based on the determined gaze point to provide an interactive experience for the user.
As discussed above, to avoid damaging the eyes of the user, the illuminators 140a, 140b, 140c, 140d, 140e, and 140f are typically configured to output very low intensity light. In the case where the image sensors 150a and 150b comprise the same sensor devices as the image sensors 120a to 120d in fig. 1A and 1B, the image sensors 120a to 120d may need to be able to generate an output related to the intensity of the incident light when that intensity is very low, which may further increase the dynamic range requirements of these image sensors.
Further, the image sensors 120a to 120d may need to be able to generate outputs at high speed to track the movement of the eyeballs. For example, the user's eye may make very rapid movements (e.g., saccadic movements) in which there may be a rapid jump from one eye position to another. In order to track the rapid movement of the eyes of the user, the image sensors 120a to 120d need to generate images of the eyes at high speed. For example, the rate at which the image sensor generates image frames (frame rate) needs to be matched at least to the moving speed of the eyeballs. The high frame rate requires a short total exposure time of all pixel cells involved in generating the image frames and a high speed of converting the sensor output to digital values for image generation. Furthermore, as discussed above, the image sensor also needs to be able to operate in environments with lower light intensities.
Fig. 2 is an embodiment of a cross-section 200 of the near-eye display 100 shown in fig. 1A-1D. The display 110 includes at least one waveguide display assembly 210. The exit pupil 230 is the position at which a single eyeball 220 of a user is positioned in the eyebox region when the user wears the near-eye display 100. For illustration purposes, fig. 2 shows the cross-section 200 associated with eyeball 220 and a single waveguide display assembly 210, while a second waveguide display is used for the second eye of the user.
Waveguide display assembly 210 is configured to direct image light to an eyebox located at exit pupil 230 and to eye 220. Waveguide display assembly 210 may be constructed of one or more materials (e.g., plastic or glass) having one or more refractive indices. In some embodiments, near-eye display 100 includes one or more optical elements located between waveguide display assembly 210 and eyeball 220.
In some embodiments, waveguide display assembly 210 includes a stack of one or more waveguide displays including, but not limited to, stacked waveguide displays, zoom waveguide displays, and the like. A stacked waveguide display is a multi-color display (e.g., an RGB display) produced by stacking multiple waveguide displays whose respective monochromatic sources are of different colors. A stacked waveguide display is also a multi-color display that can be projected on multiple planes (e.g., a multi-planar color display). In some configurations, the stacked waveguide display is a single-color display that can be projected on multiple planes (e.g., a multi-planar monochromatic display). A zoom waveguide display is a display in which the focal position of image light emitted from the waveguide display can be adjusted. In alternative embodiments, waveguide display assembly 210 may include both a stacked waveguide display and a zoom waveguide display.
Fig. 3 shows an isometric view of an embodiment of a waveguide display 300. In some embodiments, waveguide display 300 is a component in near-eye display 100 (e.g., waveguide display assembly 210). In some embodiments, waveguide display 300 is part of some other near-eye display or other system that directs image light to a particular location.
Waveguide display 300 includes a source assembly 310, an output waveguide 320, and a controller 330. For illustration purposes, fig. 3 shows waveguide display 300 associated with a single eyeball 220, but in some embodiments another waveguide display separate or partially separate from waveguide display 300 provides image light to the other eye of the user.
The source assembly 310 generates image light 355 and outputs it to a coupling element 350, which is located on a first side 370-1 of the output waveguide 320. The output waveguide 320 is an optical waveguide that outputs expanded image light 340 to an eyeball 220 of the user. The output waveguide 320 receives the image light 355 at one or more coupling elements 350 located on the first side 370-1 and directs the received input image light 355 to a guiding element 360. In some embodiments, the coupling element 350 couples the image light 355 from the source assembly 310 into the output waveguide 320. The coupling element 350 may be, for example, a diffraction grating, a holographic grating, one or more cascaded reflectors, one or more prismatic surface elements, and/or a holographic reflector array.
The guiding element 360 redirects the received input image light 355 to the decoupling element 365 such that the received input image light 355 is coupled out of the output waveguide 320 through the decoupling element 365. The guiding element 360 is part of, or attached to, the first side 370-1 of the output waveguide 320, and the decoupling element 365 is part of, or attached to, the second side 370-2 of the output waveguide 320, such that the guiding element 360 is opposite the decoupling element 365. The guiding element 360 and/or the decoupling element 365 may be, for example, a diffraction grating, a holographic grating, one or more cascading reflectors, one or more prismatic surface elements, and/or a holographic reflector array.
The second side 370-2 represents a plane along the x-dimension and the y-dimension. Output waveguide 320 may be composed of one or more materials that promote total internal reflection of image light 355. Output waveguide 320 may be constructed of, for example, silicon, plastic, glass, and/or polymer. The output waveguide 320 has a relatively small form factor. For example, the width of output waveguide 320 along the x-dimension may be about 50mm (millimeters), the length along the y-dimension may be about 30mm, and the thickness along the z-dimension may be about 0.5mm to 1mm.
The controller 330 controls the scanning operation of the source assembly 310. The controller 330 determines scan instructions for the source assembly 310. In some embodiments, the output waveguide 320 outputs the expanded image light 340 to the user's eye 220 with a large field of view (FOV). For example, the expanded image light 340 is provided to the user's eye 220 with a diagonal FOV (in x and y) of 60 degrees or greater and/or 150 degrees or less. The output waveguide 320 is configured to provide an eyebox with a length of 20 mm or greater and/or equal to or less than 50 mm, and/or a width of 10 mm or greater and/or equal to or less than 50 mm.
In addition, the controller 330 also controls the image light 355 generated by the source assembly 310 based on image data provided by the image sensor 370. The image sensor 370 may be located on the first side 370-1 and may include, for example, the image sensors 120a to 120d in fig. 1A and 1B to generate image data of a physical environment in front of the user (e.g., for location determination). The image sensor 370 may also be located on the second side 370-2 and may include the image sensors 150a and 150b of fig. 1C and 1D to generate image data of the user's eye 220 (e.g., for gaze point determination). The image sensor 370 may interface with a remote console that is not within the waveguide display 300. The image sensor 370 may provide image data to the remote console, which may determine, for example, the location of the user or the gaze point of the user and determine the content of the images to be displayed to the user. The remote console may send instructions related to the determined content to the controller 330. The controller 330 may control the generation and output of the image light 355 by the source assembly 310 based on these instructions.
Fig. 4 illustrates an embodiment of a cross section 400 of a waveguide display 300. Section 400 includes source assembly 310, output waveguide 320, and image sensor 370. In the example of fig. 4, the image sensor 370 may include a set of pixel cells 402 located on the first side 370-1 for generating an image of the physical environment in front of the user. In some embodiments, there may be a mechanical shutter 404 interposed between the set of pixel cells 402 and the physical environment for controlling the exposure of the set of pixel cells 402. In some embodiments, as will be discussed below, the mechanical shutter 404 may be replaced by an electronic shutter switch. Each of the pixel cells 402 may correspond to a pixel in an image. Although not shown in fig. 4, it will be appreciated that each of the pixel cells 402 may also be covered with a filter for controlling the frequency range of light to be sensed by the pixel cells.
Upon receiving an instruction from the remote console, the mechanical shutter 404 may open and expose the set of pixel cells 402 for an exposure period. During the exposure period, the image sensor 370 may obtain samples of light incident on the set of pixel cells 402 and may generate image data based on the intensity distribution of the incident light samples detected by the set of pixel cells 402. The image sensor 370 may then provide the image data to a remote console that determines the display content and provides the display content information to the controller 330. Then, the controller 330 may determine the image light 355 based on the display content information.
The source assembly 310 generates image light 355 according to instructions from the controller 330. The source assembly 310 includes a source 410 and an optical system 415. The source 410 is a light source that generates coherent or partially coherent light. The source 410 may be, for example, a laser diode, a vertical cavity surface emitting laser, and/or a light emitting diode.
Optical system 415 includes one or more optical components that condition the light from source 410. Conditioning the light from the source 410 may include, for example, expanding, collimating, and/or adjusting the orientation of the light in accordance with instructions from the controller 330. The one or more optical components may include one or more lenses, one or more liquid lenses, one or more mirrors, one or more apertures, and/or one or more gratings. In some embodiments, optical system 415 includes a liquid lens having a plurality of electrodes that allows a light beam to be scanned at a threshold scan angle to shift the light beam to an area outside the liquid lens. Light exiting the optical system 415 (and hence the source assembly 310) is referred to as image light 355.
The output waveguide 320 receives image light 355. The coupling element 350 couples the image light 355 from the source assembly 310 into the output waveguide 320. In embodiments where the coupling element 350 is a diffraction grating, the pitch of the diffraction grating is selected such that total internal reflection occurs in the output waveguide 320 and the image light 355 propagates inside the output waveguide 320 (e.g., by total internal reflection) toward the decoupling element 365.
The guiding element 360 redirects the image light 355 to the decoupling element 365 for coupling out of the output waveguide 320. In embodiments where the guide element 360 is a diffraction grating, the pitch of the diffraction grating is selected such that the incident image light 355 exits the output waveguide 320 at one or more oblique angles relative to the surface of the decoupling element 365.
In some embodiments, the guiding element 360 and/or the decoupling element 365 are structurally similar. The expanded image light 340 exiting the output waveguide 320 is expanded in one or more dimensions (e.g., may be elongated in the x-dimension). In some embodiments, waveguide display 300 includes a plurality of source assemblies 310 and a plurality of output waveguides 320. Each source assembly 310 emits monochromatic image light of a particular wavelength band corresponding to a primary color (e.g., red, green, or blue). The output waveguides 320 may be stacked together with a separation distance between them to output multi-colored expanded image light 340.
Fig. 5 is a block diagram of an embodiment of a system 500 including a near-eye display 100. The system 500 includes a near-eye display 100, an imaging device 535, an input/output interface 540, and image sensors 120 a-120 d and 150 a-150 b, each coupled to a control circuit 510. The system 500 may be configured as a head-mounted device, a wearable device, or the like.
Near-eye display 100 is a display that presents media to a user. Examples of media presented by near-eye display 100 include one or more images, video, and/or audio. In some embodiments, the audio is presented via an external device (e.g., speaker and/or headset) that receives audio information from the near-eye display 100 and/or control circuit 510 and presents audio data to the user based on the audio information. In some embodiments, the near-eye display 100 may also function as AR glasses. In some embodiments, the near-eye display 100 utilizes computer-generated elements (e.g., images, video, sound) to enhance a view of a physical, real-world environment.
The near-eye display 100 includes a waveguide display assembly 210, one or more position sensors 525, and/or an inertial measurement unit (inertial measurement unit, IMU) 530. Waveguide display assembly 210 includes a source assembly 310, an output waveguide 320, and a controller 330.
The IMU 530 is an electronic device that generates fast calibration data indicative of an estimated position of the near-eye display 100 relative to an initial position of the near-eye display 100 based on measurement signals received from one or more position sensors 525.
The imaging device 535 may generate image data for various applications. For example, the imaging device 535 may generate image data based on calibration parameters received from the control circuit 510 to provide slow calibration data. The imaging device 535 may include, for example, the image sensors 120a to 120d in fig. 1A and 1B for generating image data of the physical environment in which the user is located in order to perform position tracking of the user. The imaging device 535 may also include image sensors 150 a-150 b, such as in fig. 1C and 1D, for generating image data for determining a gaze point of a user to identify objects of interest to the user.
The input/output interface 540 is a device that allows a user to send an action request to the control circuit 510. An action request is a request to perform a particular action. For example, the action request may be to start or end an application, or to perform a particular action within an application.
Control circuitry 510 provides media to near-eye display 100 for presentation to a user based on information received from one or more of imaging device 535, near-eye display 100, and input/output interface 540. In some examples, the control circuit 510 may be housed within the system 500 configured as a headset. In some examples, control circuit 510 may be a stand-alone console device communicatively coupled with other components in system 500. In the example shown in fig. 5, the control circuitry 510 includes an application store 545, a tracking module 550, and an engine 555.
The application store 545 stores one or more application programs for execution by the control circuit 510. An application is a set of instructions that, when executed by a processor, generate content for presentation to a user. Examples of applications include gaming applications, conferencing applications, video playback applications, or other suitable applications.
The tracking module 550 calibrates the system 500 using one or more calibration parameters and may adjust the one or more calibration parameters to reduce errors when determining the position of the near-eye display 100.
The tracking module 550 uses the slow calibration information from the imaging device 535 to track movement of the near-eye display 100. The tracking module 550 also uses the position information from the quick calibration information to determine the position of the reference point of the near-eye display 100.
The engine 555 executes applications within the system 500 and receives position information, acceleration information, velocity information, and/or predicted future positions of the near-eye display 100 from the tracking module 550. In some embodiments, the information received by engine 555 may be used to generate a signal (e.g., display instructions) to waveguide display assembly 210 that determines the type of content presented to the user. For example, to provide an interactive experience, engine 555 may determine the content to present to the user based on the location of the user (e.g., provided by the tracking module 550), the gaze point of the user (e.g., based on image data provided by the imaging device 535), and/or the distance between an object and the user (e.g., based on image data provided by the imaging device 535).
Fig. 6 shows an example of an image sensor 600 and its operation. As shown in fig. 6, an image sensor 600 may include an array of pixel cells (including pixel cell 601) and may generate digital intensity data corresponding to image pixels. The pixel cell 601 may be part of the pixel cells 402 in fig. 4. As shown in fig. 6, the pixel cell 601 may include one or more photodiodes 602, an electronic shutter switch 603, a transfer switch 604, a reset switch 605, a charge storage device 606, and a quantizer 607. The quantizer 607 may be a pixel-level ADC that is only accessible by the pixel cell 601. The photodiode 602 may include, for example, a P-N diode, a P-I-N diode, or a pinned diode, and the charge storage device 606 may be a floating diffusion node of the transfer switch 604. The photodiode 602 may generate and accumulate charge when light is received during an exposure period, and the amount of charge generated during the exposure period may be proportional to the intensity of the light.
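As a rough illustration of this signal chain (a toy model with assumed constants, not the circuit described here), the relationship between light intensity, accumulated charge, and the quantized output can be sketched as:

def pixel_output(light_intensity, exposure_s, sensitivity=1.0, full_well=1000.0, bits=10):
    # Toy model of one pixel cell (illustrative assumptions only): charge accumulates
    # in proportion to light intensity over the exposure period, the charge storage
    # device saturates at a full-well value, and the pixel-level ADC quantizes the result.
    charge = min(sensitivity * light_intensity * exposure_s, full_well)
    return round((charge / full_well) * (2 ** bits - 1))

print(pixel_output(light_intensity=500.0, exposure_s=1.0))   # mid-range digital code
print(pixel_output(light_intensity=5000.0, exposure_s=1.0))  # saturated at 1023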
Fig. 7A, 7B, and 7C illustrate an example of an image processing system 700 and its operation. The image processing system 700 includes a host device 702 and an image sensor 704. The host device 702 can include a host processor 706 that runs an application 708, which can perform operations on images 710 generated by the image sensor 704, such as fusion of two-dimensional (2D) and three-dimensional (3D) sensing, object recognition and tracking, or position tracking. In some examples, the image processing system 700 may be located in a wearable device. In some examples, the image processing system 700 may be divided into multiple separate devices. For example, the host device 702 may be a personal computer (PC), a smart phone, a camera base station, or an integrated circuit (e.g., a central processing unit (CPU), a field-programmable gate array (FPGA), or a microcontroller unit (MCU)). The host device 702 and the image sensor 704 may be electrically connected via an interconnect (not shown in fig. 7A), such as an interconnect compatible with the mobile industry processor interface (MIPI) standard.
Referring to fig. 7A, the image sensor 704 includes a pixel cell array control circuit 716 and a pixel cell array 718. Each pixel cell in the pixel cell array 718 may include similar components (e.g., photodiode 602, electronic shutter switch 603, and transfer switch 604) as the pixel cell 601 in fig. 6 to perform light measurement operations to generate pixel data. In some examples, pixel cell array 718 and pixel cell array control circuit 716 may form a stack structure to maximize the light receiving surface of image sensor 704, which allows pixel cell array 718 to include more pixel cells to improve resolution.
Each pixel cell in pixel cell array 718 may include a configuration memory, which may be part of the pixel cell or external to the pixel cell, for storing programming data for configuring/programming light measurement operations at each pixel cell or at a block of pixel cells. The configuration memory of each pixel cell may be individually addressable, which allows light measurement operations at each pixel cell or at a block of pixel cells to be individually programmed by pixel cell array control circuit 716 based on a pixel array programming map 720. In some examples, pixel array programming map 720 may be generated by host processor 706 as a result of an object tracking operation on image 710. In some examples, pixel cell array control circuit 716 may also include a programming map generator 721 to generate a pixel array programming map 720 based on image 710. Pixel cell array control circuit 716 may extract programming data from pixel array programming map 720 and send the programming data to pixel cell array 718 in the form of control signals 722 and 724. Programming data may be read from the configuration memory to configure the light measurement operation.
As will be described in more detail below, the configuration of the light measurement operation at the pixel cell may include, for example, setting the power states of the different circuit components (e.g., quantization circuit 620) accessed/associated by the pixel cell. The configuration may also include other aspects of the light measurement operation, such as setting the exposure period of the light measurement operation, or setting the quantization resolution/bit depth, etc.
The pixel array programming map 720 may include programming data for each pixel cell in the pixel cell array. Fig. 7B shows an example of a pixel array programming map 720. As shown in fig. 7B, pixel array programming map 720 may include a two-dimensional (2D) array of programming data, where each programming data entry in the two-dimensional array is for a pixel cell in pixel cell array 718. For example, where pixel cell array 718 has a width of M pixels (e.g., M columns of pixels) and a height of N pixels (e.g., N rows of pixels), pixel array programming map 720 also has a width of M entries (e.g., M columns of entries) and a height of N entries (e.g., N rows of entries). Programming data A00 at entry (0, 0) of pixel array programming map 720 is for pixel cell P00 at pixel location (0, 0) of pixel cell array 718, while programming data A01 at entry (0, 1) of pixel array programming map 720 is for pixel cell P01 at pixel location (0, 1) of pixel cell array 718. In some examples, the programming data for each entry of pixel array programming map 720 may be sequentially transmitted to form a serial data stream following a predetermined scan pattern, such as traversing one row from left to right (e.g., A00, A01, ..., A0i), followed by traversing the next row from left to right (e.g., A10, A11, ..., A1i). The programming data for each entry may be extracted and identified from the serial data stream based on the scan pattern and the order in which the entries were received. In some examples, pixel array programming map 720 may be sent only when specific programming data needs to be updated between frames, and only the programming data that needs to be updated is included in pixel array programming map 720. In some examples, each entry of pixel array programming map 720 may also be for a block of pixel cells (e.g., a 2 x 2 array of pixel cells or a 4 x 4 array of pixel cells).
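A compact sketch of this serialization and recovery under the raster scan pattern described above (illustrative Python code, not the patent's implementation):

def serialize_programming_map(prog_map):
    # Flatten the 2D programming map in the raster scan order described above:
    # one row at a time, left to right.
    return [entry for row in prog_map for entry in row]

def parse_stream(stream, width):
    # Recover each entry's (row, column) position from its order in the stream,
    # using the same known scan pattern.
    return {(index // width, index % width): value for index, value in enumerate(stream)}

prog_map = [[1, 0, 1],
            [0, 1, 0]]
stream = serialize_programming_map(prog_map)        # [1, 0, 1, 0, 1, 0]
assert parse_stream(stream, width=3)[(0, 2)] == 1   # entry A02 for pixel cell P02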
Depending on the configuration operation, each entry of pixel array programming map 720 may include binary programming data or non-binary programming data. Fig. 7C shows an example of pixel array programming maps 720a and 720b. As shown in fig. 7C, pixel array programming map 720a includes binary programming data 0 and 1. In some examples, binary programming data at each entry of pixel array programming map 720a may enable (e.g., programming data set to 1) or disable (e.g., programming data set to 0) generation of pixel data at the pixel cell corresponding to that entry. The binary programming data may also set the power state of quantization circuit 620 used by the pixel cell. For example, if the programming data indicates that a pixel cell will not generate pixel data, the processing circuitry and memory included in the quantization circuitry used by the pixel cell may be powered down.
In addition, pixel array programming map 720b may include non-binary programming data such as -1, 0, or 1, or other values. As shown in fig. 7C, the non-binary programming data of pixel array programming map 720b may be used, for example, to set an exposure period or to set a quantization resolution. For example, a programming value of -1 may indicate that the pixel cell and quantization circuit are disabled during a frame period; a programming value of 0 may indicate that the pixel cell and quantization circuit operate in a low resolution mode; and a programming value of 1 may indicate that the pixel cell and quantization circuit operate in full resolution mode. The pixel cell can then set the power states of the processing circuits and the memory in the quantization circuit accordingly.
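One way to read these non-binary programming values, sketched in Python (the mode names are illustrative assumptions, not terms from the disclosure):

PROGRAMMING_MODES = {
    -1: "disabled",         # pixel cell and quantization circuit off for the frame period
     0: "low_resolution",   # pixel cell and quantization circuit in low-resolution mode
     1: "full_resolution",  # pixel cell and quantization circuit in full-resolution mode
}

def describe_entry(programming_value):
    # Map a non-binary programming value to the operating mode it selects; a real pixel
    # cell would set the power states of its processing circuits and memory accordingly.
    return PROGRAMMING_MODES[programming_value]

print(describe_entry(-1))  # disabled
print(describe_entry(1))   # full_resolution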
Fig. 8A, 8B, and 8C illustrate example components of pixel cell array control circuit 716 and pixel cell array 718 in image sensor 704. As shown in fig. 8A, the pixel cell array control circuit 716 may include a programming map parser 802, a column control circuit 804, a row control circuit 806, and a pixel data output circuit 807. The programming map parser 802 may parse a pixel array programming map 720 (which may be in a serial data stream) to identify the programming data for each pixel cell (or block of pixel cells). For example, the identification of the programming data may be based on the predetermined scan pattern by which the two-dimensional pixel array programming map is converted into a serial format, as well as the order in which the programming data is received from the serial data stream by the programming map parser 802. The programming map parser 802 may create a mapping between row addresses of pixel cells, column addresses of pixel cells, and one or more configuration signals based on the programming data for the pixel cells. Based on the mapping, the programming map parser 802 may send control signals 808, including column addresses and configuration signals, to the column control circuit 804, and control signals 810, including row addresses mapped to the column addresses and configuration signals, to the row control circuit 806. In some examples, the configuration signals may also be distributed between control signal 808 and control signal 810, or sent to the row control circuit 806 as part of control signal 810.
Column control circuit 804 and row control circuit 806 are configured to forward the configuration signals received from programming map parser 802 to the configuration memory of each pixel cell of pixel cell array 718. In fig. 8A, each box labeled Pij (e.g., P00, P01, P10, P11) may represent a pixel cell or a block of pixel cells (e.g., a 2 x 2 array of pixel cells or a 4 x 4 array of pixel cells) and may include or be associated with quantization circuitry. As shown in fig. 8A, the column control circuit 804 drives a plurality of sets of column buses C0, C1, ..., Ci. Each set of column buses includes one or more buses and may be used to transmit the control signals 722 of fig. 7A (which may include column select signals and/or other configuration signals) to a column of pixel cells. For example, column bus(es) C0 may transmit a column select signal 808a to select a column of pixel cells (or a column of pixel cell blocks) p00, p01, ..., p0j, column bus(es) C1 may transmit a column select signal 808b to select a column of pixel cells (or pixel cell blocks) p10, p11, ..., p1j, and so on.
Further, row control circuit 806 drives sets of row buses labeled R0, R1, ..., Rj. Each set of row buses also includes one or more buses and may be used to transmit the control signals 724 of fig. 7A (which may include row select signals and/or other configuration signals) to a row of pixel cells or a block of pixel cells. For example, row bus(es) R0 may transmit a row select signal 810a to select a row of pixel cells (or pixel cell blocks) p00, p10, ..., pi0, row bus(es) R1 may transmit a row select signal 810b to select a row of pixel cells (or pixel cell blocks) p01, p11, ..., pi1, and so on. Any pixel cell (or block of pixel cells) in the pixel cell array 718 may be selected to receive the configuration signals based on a combination of the row select signal and the column select signal. As described above, the row select signals, column select signals, and configuration signals (if present) are synchronized based on control signals 808 and 810 from programming map parser 802. Each column of pixel cells may share a set of output buses to transfer pixel data to pixel data output module 807. For example, a column of pixel cells (or pixel cell blocks) p00, p01, ..., p0j may share an output bus D0, a column of pixel cells (or pixel cell blocks) p10, p11, ..., p1j may share an output bus D1, and so on.
Pixel data output module 807 may receive pixel data from the bus, convert the pixel data into one or more serial data streams (e.g., using shift registers), and transmit the data streams to host device 702 in accordance with a predetermined protocol such as MIPI. The data stream may come from quantization circuitry associated with each pixel cell (or pixel cell block) as part of a sparse image frame. In addition, pixel data output module 807 may also receive control signals 808 and 810 from programming map parser 802 to, for example, determine which pixel cells do not output pixel data or determine the bit width of the pixel data output by each pixel cell and then adjust the generation of the serial data stream accordingly. For example, pixel data output module 807 may control the shift register to skip multiple bits when generating the serial data stream to account for, for example, different bit widths of the output pixel data between pixel cells or disabling of the pixel data output at certain pixel cells.
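A simplified sketch of that bit-packing step (illustrative only; the real output module follows a protocol such as MIPI rather than building a Python string):

def pack_pixel_data(pixel_values, bit_widths):
    # Pack pixel values into one bitstream, skipping pixels whose output is disabled
    # (bit width 0) and honoring the per-pixel bit widths, as the output module must
    # do when generating a sparse serial data stream.
    bitstream = ""
    for value, width in zip(pixel_values, bit_widths):
        if width == 0:
            continue  # pixel data output disabled at this pixel cell
        bitstream += format(value, f"0{width}b")
    return bitstream

print(pack_pixel_data([5, 200, 7], [4, 0, 8]))  # "0101" + "00000111"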
In addition, the pixel cell array control circuit 716 further includes a global power state control circuit 820, a column power state control circuit 822, a row power state control circuit 824, and a local power state control circuit 826 (not shown in fig. 8A) at each pixel cell or each pixel cell block, which form a hierarchical power state control circuit. Global power state control circuit 820 may have a highest level in the hierarchy followed by row power state control circuit 824/column power state control circuit 822, while local power state control circuit 826 is at a lowest level in the hierarchy.
The hierarchical power state control circuitry may provide different granularity in controlling the power state of the image sensor 704. For example, global power state control circuit 820 may control the global power state of all of the circuits of image sensor 704, including the processing circuits and memory of all pixel cells, the DACs, the counters, and so on. The row power state control circuit 824 may individually control the power states of the processing circuits and memories of each row of pixel cells (or pixel cell blocks), while the column power state control circuit 822 may individually control the power states of the processing circuits and memories of each column of pixel cells (or pixel cell blocks). Some examples may include the row power state control circuit 824 without the column power state control circuit 822, or vice versa. In addition, the local power state control circuit 826 may be part of a pixel cell or part of a pixel cell block and may control the power state of the processing circuits and memory of that pixel cell or pixel cell block.
Fig. 8B shows an example of the internal components of the hierarchical power state control circuit and the operation thereof. In particular, global power state control circuit 820 may output a global power state signal 832, which may be in the form of a bias voltage, bias current, power voltage, or programming data, that sets the global power state of image sensor 704. In addition, the column power state control circuit 822 (or the row power state control circuit 824) may output a column/row power state signal 834 that sets the power state of a column/row of pixel cells (or pixel cell blocks) of the image sensor 704. The column/row power status signal 834 may be transmitted to the pixel cells as a row signal 810 and a column signal 808. In addition, the local power state control circuit 826 may output a local power state signal 836 that sets the power state of the pixel cell (or block of pixel cells), including the power state of the associated processing circuitry and memory. The local power state signal 836 may be output to the processing circuitry and memory of the pixel cell to control the power state of the processing circuitry and memory.
In the hierarchical power state control circuit 838, an upper-level power state signal may set an upper limit on the lower-level power state signal. For example, global power state signal 832 may be the upper-level power state signal for column/row power state signal 834 and may set an upper limit on column/row power state signal 834. Similarly, column/row power state signal 834 may be the upper-level power state signal for local power state signal 836 and may set an upper limit on local power state signal 836. For example, if global power state signal 832 indicates a low power state, column/row power state signal 834 and local power state signal 836 may also indicate a low power state.
Each of the global power state control circuit 820, the column power state control circuit 822/row power state control circuit 824, and the local power state control circuit 826 may include a power state signal generator, while the column power state control circuit 822/row power state control circuit 824 and the local power state control circuit 826 may also include gating logic to enforce the upper bound imposed by the upper-level power state signal. In particular, global power state control circuit 820 may include a global power state signal generator 821 to generate a global power state signal 832. Global power state signal generator 821 may generate global power state signal 832 based on, for example, an external configuration signal 840 (e.g., from host device 702) or a predetermined time sequence of global power states.
In addition, column power state control circuit 822/row power state control circuit 824 may include column/row power state signal generator 823 and gating logic 825. The column/row power state signal generator 823 may generate an intermediate column/row power state signal 833 based on, for example, an external configuration signal 842 (e.g., from the host device 702) or a predetermined time sequence of row/column power states. Gating logic 825 may select one of global power state signal 832 or intermediate column/row power state signal 833 that represents a lower power state as column/row power state signal 834.
Further, the local power state control circuit 826 may include a local power state signal generator 827 and gating logic 829. The local power state signal generator 827 may generate an intermediate local power state signal 835 based on, for example, an external configuration signal 844 (which may be from a pixel array programming map), a predetermined time sequence of row/column power states, and the like. Gating logic 829 may select one of intermediate local power state signal 835 or column/row power state signal 834, which represents a lower power state, as local power state signal 836.
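By way of illustration only, the following Python sketch (not part of the disclosure; the integer power-state encoding and function names are assumptions) models the gating behavior described above: each gating stage passes through whichever of its two inputs represents the lower power state, so an upper-level signal always caps the levels beneath it.

def gate(upper_state: int, intermediate_state: int) -> int:
    # Gating logic: select the signal representing the lower power state.
    return min(upper_state, intermediate_state)

def resolve_local_state(global_state: int, row_col_intermediate: int,
                        local_intermediate: int) -> int:
    # Column/row gating (cf. gating logic 825): bounded by the global signal.
    row_col_state = gate(global_state, row_col_intermediate)
    # Local gating (cf. gating logic 829): bounded by the column/row signal.
    return gate(row_col_state, local_intermediate)

# A global low-power state (0) caps every level beneath it.
assert resolve_local_state(0, 2, 3) == 0
# With the global level high (3), the local request (1) takes effect.
assert resolve_local_state(3, 2, 1) == 1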
Fig. 8C shows additional details of pixel cell array 718, including local power state control circuitry 826 (e.g., 826a, 826b, 826C, and 826d labeled "PWR" in fig. 8C) and configuration memory 850 (e.g., 850a, 850b, 850C, and 850d labeled "Config" in fig. 8C) for each pixel cell (or each pixel cell block). The configuration memory 850 may store first programming data to control the light measurement operation (e.g., exposure period duration, or quantization resolution) of the pixel cells (or pixel cell blocks). In addition, configuration memory 850 may also store second programming data that local power state control circuit 826 may use to set the power states of the processing circuitry and memory. The configuration memory 850 may be implemented as a Static Random Access Memory (SRAM). Although fig. 8C shows the local power state control circuit 826 and the configuration memory 850 located inside each pixel cell, it will be appreciated that the configuration memory 850 may also be located outside each pixel cell, such as when the local power state control circuit 826 and the configuration memory 850 are used for blocks of pixel cells.
As shown in fig. 8C, the configuration memory 850 of each pixel cell is coupled to the column bus C and the row bus R via a transistor S (e.g., S00, S01, S10, S11, etc.). In some examples, each set of column buses (e.g., C0 and C1) and each set of row buses (e.g., R0 and R1) may include multiple bits. For example, in FIG. 8C, each set of column buses and each set of row buses may carry n+1 bits. It will be appreciated that in some examples, each set of column buses and each set of row buses may also carry a single data bit. Each pixel cell is also electrically connected to a transistor T (e.g., T00, T01, T10, or T11) to control the transmission of configuration signals to the pixel cell (or pixel cell block). The row and column select signals may drive the transistor S of each pixel cell to enable (or disable) the corresponding transistor T to transmit a configuration signal to the pixel cell. In some examples, column control circuitry 804 and row control circuitry 806 may be programmed by a single write instruction (e.g., from host device 702) to write to the configuration memories 850 of multiple pixel cells simultaneously. The column control circuitry 804 and the row control circuitry 806 can then drive the row buses and column buses to write to the configuration memories of those pixel cells.
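As a rough illustration of the row/column addressing described above (hypothetical Python; the array size, data word, and function name are assumptions), a single write can reach the configuration memories of every pixel cell whose row select and column select are both enabled:

import numpy as np

ROWS, COLS = 4, 4
config_memory = np.zeros((ROWS, COLS), dtype=np.uint16)  # per-cell "Config" (850)

def write_config(row_select, col_select, data):
    # Write the same programming data to every pixel cell whose row AND column
    # are selected, mimicking one write instruction reaching multiple cells.
    for r, r_enabled in enumerate(row_select):
        for c, c_enabled in enumerate(col_select):
            if r_enabled and c_enabled:  # transistor S enables transistor T for this cell
                config_memory[r, c] = data

# Example: one write updates the configuration memory of a 2x2 block of pixel cells.
write_config([True, True, False, False], [True, True, False, False], data=0b1011)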
In some examples, local power state control circuit 826 may also directly receive configuration signals from transistor T without storing these configuration signals in configuration memory 850. For example, as described above, the local power state control circuit 826 may receive row/column power state signals 834, which may be analog signals such as voltage bias signals or supply voltages, to control the power states of the pixel cells and the processing circuitry and/or memory used by the pixel cells.
In addition, each pixel cell also includes a transistor O (e.g., O00, O01, O10, or O11) to control the sharing of the output bus D among a column of pixel cells. The transistors O in each row may be controlled by a read signal (e.g., read_r0, read_r1) to enable row-by-row readout of pixel data, such that one row of pixel cells outputs pixel data through output buses D0, D1, ..., Di, followed by the next row of pixel cells.
In some examples, the circuit components of the pixel cell array 718 (including the processing circuitry and memory), the counters, the DACs, the buffer networks (including the buffers), and so on may be organized into hierarchical power domains managed by the hierarchical power state control circuit 838. The hierarchical power domains may include a hierarchy of multiple power domains and power sub-domains. The hierarchical power state control circuit may individually set the power state of each power domain, and the power state of each power sub-domain under each power domain. This arrangement allows fine-grained control of the power consumption of the image sensor 704 and supports various spatial and temporal power state control operations, further improving the power efficiency of the image sensor 704.
Although some pixel-level ADCs or block-level ADCs are disabled, high-speed control signals, such as clocks, analog ramp signals, or digital ramp signals, may still be transmitted to each pixel-level ADC or block-level ADC via the buffer network, which may consume a significant amount of power and increase the average power consumed to generate each pixel. This inefficiency may be further exacerbated as the sparsity of the image frame increases (e.g., the frame contains fewer pixels) while the high-speed control signals are still transmitted to every pixel cell: the power consumed in transmitting the high-speed control signals remains the same, but the average power consumed to generate each pixel increases because fewer pixels are generated.
Fig. 9 illustrates an example of a system 900 including an image sensor 902 with noise reduction circuitry 904, in accordance with some aspects of the present disclosure. In some examples, image sensor 902 may correspond to any of the image sensors described above with reference to fig. 1-8, such as image sensors 120 a-120 d, 150a, 150b, 600, and 704.
Image sensor 902 may include an array 918 of pixel cells (e.g., pixel cell 901) for generating digital intensity data corresponding to the digital pixels of an image. The pixel cell 901 may include one or more photodiodes (PDs); an anti-blooming gate (AB) that may prevent charge from the photodiodes from overflowing into the node FD while the node FD holds a signal for ADC conversion; a transfer gate (TG) for transferring charge from the PD to the FD; a reset gate (RST) that resets the voltage at the FD to a higher level; a source follower (SF) that may be used as a unity-gain buffer; and/or a bias transistor (VBN) that may provide a bias current to the SF. The photodiodes may include, for example, P-N diodes, P-I-N diodes, or pinned diodes. A photodiode may generate and accumulate charge upon receiving light during an exposure period, and the amount of charge generated during the exposure period may be proportional to the intensity of the light. In some examples, the exposure period may be defined based on the timing of the AB signal.
The pixel array 918 can be coupled to one or more quantizers 920 that include analog-to-digital converters (ADCs) 922. Each quantizer may include a capacitor coupled to the analog-to-digital converter. In some examples, each pixel cell may be coupled to its own quantizer. For example, each quantizer may be a pixel-level quantizer that is only accessible by individual pixel units. In other examples, a single quantizer may be accessed by multiple pixel units. The quantizer 920 may convert the charge from the pixel cells into digital values representing the original image frame 914. The original image frame 914 may include noise, such as fixed pattern noise and temporal noise. This noise may be caused by manufacturing inconsistencies of the ADCs 922 or may be caused by other reasons. Some examples of the present disclosure may include noise reduction circuitry 904 to help reduce (e.g., remove) this noise.
Noise reduction circuit 904 may be coupled to quantizer 920. The noise reduction circuit 904 may receive the original image frame 914 and apply a reference frame to the original image frame 914 to generate a corrected image frame 916. For example, the noise reduction circuit 904 may subtract a reference frame from the original image frame 914 to generate a corrected image frame 916. Due to this noise correction process, the corrected image frame 916 may have less noise than the original image frame 914. The noise reduction circuit 904 may then send the corrected image frame 916 to the image processor 906.
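To make the subtraction step concrete, here is a minimal Python/NumPy sketch (illustrative only; the frame shape, bit depth, and clamping to the ADC range are assumptions not specified in the disclosure) of generating a corrected image frame by subtracting a reference frame from a raw image frame:

import numpy as np

def correct_frame(raw_frame, reference_frame, bit_depth=10):
    # Subtract the reference frame from the raw frame; clamp to the assumed ADC range.
    corrected = raw_frame.astype(np.int32) - reference_frame.astype(np.int32)
    return np.clip(corrected, 0, 2 ** bit_depth - 1).astype(np.uint16)

# Usage: raw = quantizer output, ref = stored reference frame of the same shape.
raw = np.random.randint(0, 1024, size=(480, 640), dtype=np.uint16)
ref = np.random.randint(0, 32, size=(480, 640), dtype=np.uint16)
corrected = correct_frame(raw, ref)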
An image processor 906 may be coupled to the noise reduction circuit 904. The image processor 906 may receive the corrected image frame 916 and perform one or more image processing operations on the corrected image frame 916 to generate an output digital image 910. Examples of image processing operations may include filtering, feature extraction, and cropping. The image processor 906 may then transmit the digital image 910 to a computing system 912, which may include the image sensor 902 or may be separate from the image sensor 902. The image processor 906 may transmit the digital image 910 to the computing system 912 in any suitable manner (e.g., via a wireless connection or a wired connection).
Computing system 912 may include one computing device or multiple computing devices configured to perform operations using digital image 910. Examples of such computing devices may include laptop computers, desktop computers, servers, mobile phones, tablet computers, electronic readers, and wearable devices (e.g., smart watches or headsets). Computing system 912 may be, for example, a viewing system for viewing digital image 910, a processing system for interpreting digital image 910, or a compiling system for compiling a set of digital images. In some examples, computing system 912 may be an artificial reality system. The artificial reality system may be configured to generate the artificial reality environment 908 using the digital image 910. The artificial reality environment 908 may be output on a display device 932 (e.g., a liquid crystal display (LCD), a light-emitting diode (LED) display, and/or a head-mounted display).
Artificial reality is a form of reality that has been adjusted in some manner before being presented to a user, and may include, for example, virtual reality (VR), augmented reality (AR), mixed reality (MR), hybrid reality, or some combination and/or derivative thereof. The artificial reality content may include entirely generated content, or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (e.g., stereoscopic video that produces a three-dimensional effect for the viewer). Further, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, used, for example, to create content in the artificial reality and/or to otherwise be used in the artificial reality (e.g., to perform an activity in the artificial reality). The artificial reality system providing the artificial reality content may be implemented on a variety of platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
While fig. 9 shows one particular arrangement of the image sensor 902 and the noise reduction circuit 904, other arrangements are possible and contemplated herein. For example, fig. 10 shows a system 1000 in which an image sensor 902 is separate from a noise reduction circuit 904. In this example, the image sensor 902 includes a pixel array 918, a quantizer 920, and an image processor 906, but the image sensor 902 does not include noise reduction circuitry 904.
As shown in fig. 10, the quantizer 920 may generate an original image frame 914 and transmit the original image frame 914 to the image processor 906. The image processor 906 may perform one or more image processing operations on the original image frame 914 to generate a preprocessed image frame 1002. The image sensor 902 may then transmit the preprocessed image frame 1002 to the noise reduction circuit 904, which may be coupled to the image sensor 902 via a wireless connection or a wired connection. The noise reduction circuit 904 may generate a corrected image frame 916 by applying the reference frame to the preprocessed image frame 1002. After generating the corrected image frame 916, the noise reduction circuit 904 may transmit the corrected image frame 916 to the computing system 912 for subsequent use. The noise reduction circuit 904 may transmit the corrected image frame 916 to the computing system 912 in any suitable manner (e.g., via a wired connection or a wireless connection).
Fig. 11 shows one example of the noise reduction circuit 904. The noise reduction circuit 904 may be capable of operating in three modes (calibration mode, noise reduction mode, and recalibration mode). Each of these modes is described below.
In the calibration mode, the noise reduction circuit 904 may implement a calibration phase. During the calibration phase, the noise reduction circuit 904 may obtain calibration image frames 1110a-1110n from the image sensor 902. When generating the calibration image frames 1110a-1110n, the image sensor 902 may configure the pixels not to capture any light signal while the quantizers operate normally (e.g., by using a zero exposure time). As a result, each calibration image frame 1110a-1110n may contain only the noise components on top of a "dark" frame. The noise reduction circuit 904 may acquire any number of calibration image frames 1110a-1110n greater than one. The noise reduction circuit 904 may then operate the reference frame generator 1102 to generate a reference frame 1106 based on the calibration image frames 1110a-1110n. The reference frame generator 1102 may derive the reference frame 1106 from the calibration image frames 1110a-1110n using any number of techniques and any combination of such techniques. For example, the reference frame generator 1102 may average the calibration image frames 1110a-1110n to generate the reference frame 1106. The averaging process may be weighted or unweighted. The averaging approach may produce a reference frame 1106 in which temporal noise is reduced, such that the reference frame 1106 exhibits primarily fixed-pattern noise. After generating the reference frame 1106, the reference frame generator 1102 may store the reference frame 1106 in the memory 1104.
The memory 1104 may include one memory or multiple memories. The memory 1104 may be volatile memory or nonvolatile memory. Examples of the memory 1104 include random access memory (RAM) such as static random access memory (SRAM); read-only memory (ROM) such as electrically erasable programmable read-only memory (EEPROM); and flash memory. In some examples, memory 1104 may correspond to any of the memories described above with reference to fig. 1-8, such as a configuration memory. The reference frame generator 1102 may store the reference frame 1106 in the memory 1104. Storing the reference frame 1106 in the memory 1104 may conclude the calibration phase.
After the calibration phase is complete, the noise reduction circuit 904 may switch to a noise reduction mode to implement the noise reduction stage. The noise reduction stage may be considered the normal operating stage, in which the noise reduction circuit 904 performs its normal function of reducing noise in the image frames 1112a-1112n received from the image sensor 902. During the noise reduction stage, the noise reduction circuit 904 may receive image frames 1112a-1112n (e.g., raw image frames 914 or preprocessed image frames 1002) from the image sensor 902. The noise reduction circuit 904 may receive the image frames 1112a-1112n at the processor 1108. The processor 1108 may include one processor or multiple processors. Examples of the processor 1108 include a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or a microprocessor. In some examples, the processor 1108 may correspond to any of the processors described above with reference to fig. 1-8. The processor 1108 may also receive the reference frame 1106 from the memory 1104. The processor 1108 may then apply the reference frame 1106 to the image frames 1112a-1112n to reduce noise in those image frames, thereby generating corrected image frames 916a-916n. For example, the processor 1108 may generate corrected image frame 916a by subtracting the reference frame 1106 from image frame 1112a to reduce noise in image frame 1112a. The noise reduction circuit 904 may remain in the noise reduction mode for the majority of its operation.
At some point, it may become necessary to recalibrate the noise reduction circuit 904. For example, the pixel array in the image sensor 902 may be susceptible to degradation, environmental forces (e.g., temperature changes or physical impacts), and other factors that may affect the quality of the digital images generated by the image sensor 902. If the reference frame 1106 was generated before these factors affected the image sensor 902, the ability of the noise reduction circuit 904 to sufficiently reduce noise in the image frames 1112a-1112n may decrease over time. To account for these variations, in some examples, the noise reduction circuit 904 may enter a recalibration mode in response to a trigger event.
In the recalibration mode, the noise reduction circuit 904 may implement a recalibration phase. During the recalibration phase, the noise reduction circuit 904 may update the reference frame 1106 based on one or more additional calibration image frames 1114a through 1114n received from the image sensor 902. For example, the reference frame generator 1102 may receive additional calibration image frames 1114a through 1114n from the image sensor 902. The reference frame generator 1102 may also receive existing reference frames 1106 from the memory 1104. The reference frame generator 1102 may then update the existing reference frames 1106 based on the additional calibration image frames 1114a through 1114n. The reference frame generator 1102 may store the updated reference frame 1116 in the memory 1104, for example, by overwriting the existing reference frame 1106 with the updated reference frame 1116. Once the recalibration phase is completed, the noise reduction circuit 904 may reenter the noise reduction mode to continue its normal operation.
Fig. 12 shows an example of the above-described operating modes/phases. Referring to fig. 11 and 12 together, the noise reduction circuit 904 may begin in the calibration mode. The noise reduction circuit 904 may automatically enter the calibration mode upon initialization (e.g., at power-up). The calibration mode is configured to implement a calibration phase 1202.
During the calibration phase 1202, the noise reduction circuit 904 may receive a predetermined number of calibration image frames 1110a-1110n from the image sensor 902. In fig. 12, the predetermined number of calibration image frames 1110a-1110n is 16. This number of calibration image frames 1110a-1110n may be advantageous because only some of the most significant bits of the accumulated reference frame 1106 are needed when it is consumed by the processor 1108 for noise correction (e.g., the 4 least significant bits may be ignored when 16 calibration frames are accumulated). This approach may allow frame averaging without performing division operations, avoiding the cost of complex division circuitry. In other examples, more or fewer calibration image frames 1110a-1110n may be used. The predetermined number may be a configurable setting stored in the noise reduction circuit 904 (e.g., in memory 1104).
The noise reduction circuit 904 (e.g., the reference frame generator 1102) may combine the predetermined number of calibration image frames 1110a-1110n to produce the reference frame 1106. For example, the noise reduction circuit 904 may accumulate the calibration image frames 1110a-1110n to produce the reference frame 1106. An example of this process is shown in the right column 1210 of fig. 12. As shown, the reference frame 1106 is initially blank. As each calibration image frame is received, the noise reduction circuit 904 adds it to the reference frame 1106. After the predetermined number of calibration image frames 1110a-1110n have been received and accumulated, the result may be a reference frame that is a combination (e.g., a sum) of those calibration image frames 1110a-1110n. The noise reduction circuit 904 may store the reference frame 1106 in the memory 1104. The noise reduction circuit 904 may then transition to the noise reduction mode.
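A minimal sketch of the accumulate-and-truncate idea described above (illustrative Python/NumPy; the frame size and variable names are assumptions): because 16 is a power of two, dropping the 4 least significant bits of the accumulated sum yields the per-pixel average without any division circuitry.

import numpy as np

NUM_CALIBRATION_FRAMES = 16  # a power of two, so averaging reduces to a bit shift

def accumulate_reference(calibration_frames):
    # Accumulate the dark calibration frames into a single reference frame (a sum).
    acc = np.zeros_like(calibration_frames[0], dtype=np.uint32)
    for frame in calibration_frames:
        acc += frame  # each received calibration frame is added to the reference frame
    return acc

def reference_average(accumulated):
    # Dividing by 16 is equivalent to ignoring the 4 least significant bits.
    return accumulated >> 4

dark_frames = [np.random.randint(0, 64, size=(480, 640)).astype(np.uint32)
               for _ in range(NUM_CALIBRATION_FRAMES)]
reference_sum = accumulate_reference(dark_frames)
reference_avg = reference_average(reference_sum)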
In the noise reduction mode, the noise reduction circuit 904 may implement the noise reduction stage 1204. In the noise reduction stage 1204, the noise reduction circuit 904 (e.g., the processor 1108) may receive the reference frame 1106. For example, the processor 1108 may retrieve the reference frame 1106 from the memory 1104. The noise reduction circuit 904 may also receive any number of image frames 1112a-1112n from the image sensor 902. The image frames 1112a-1112n may not have previously undergone any noise correction. The noise reduction circuit 904 may apply the reference frame 1106 to the image frames 1112a-1112n to reduce noise in those image frames and generate corrected image frames 916. It will be appreciated that during the noise reduction stage 1204, the reference frame 1106 remains fixed; that is, it does not change. This example is shown in the right column 1210 of fig. 12, where the reference frame 1106 holds the accumulation of the 16 calibration image frames 1110a-1110n received during the calibration phase 1202.
In some examples, an event may occur that triggers the recalibration mode. The event may be detected by the noise reduction circuit 904. Examples of such events may include physical effects on image sensor 902, an ambient temperature near image sensor 902 exceeding a predetermined threshold, a temperature of a hardware component of image sensor 902 exceeding a predetermined threshold, a predetermined period of time elapsing, an update to software (e.g., firmware) of image sensor 902, or any combination of these. In response to detecting such an event, the noise reduction circuit 904 may automatically enter a recalibration mode.
To assist in event detection, in some examples, the image sensor 902 may include one or more sensors configured to generate sensor signals and transmit the sensor signals to the noise reduction circuit 904. Examples of sensors may include temperature sensors, accelerometers, gyroscopes, voltmeters, ammeters, inclinometers, or any combination thereof. The sensor signal may comprise a measurement made by the sensor. The noise reduction circuit 904 (e.g., the processor 1108) may receive the sensor signals and analyze the sensor signals to detect predefined events. For example, the noise reduction circuit 904 may include an algorithm or a look-up table that may be used to detect predefined events based on one or more characteristics of the sensor signal. Examples of such characteristics may include amplitude, waveform, digital value, and/or frequency associated with one or more sensor signals. In response to detecting a predefined event, the noise reduction circuit 904 may automatically enter a recalibration mode.
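The following sketch (hypothetical Python; the thresholds, sensor readings, and function name are assumptions rather than details from the disclosure) illustrates one way a recalibration trigger could be derived from sensor signals such as a temperature measurement or an accelerometer reading:

TEMP_DELTA_THRESHOLD_C = 5.0   # assumed threshold for a temperature-change event
SHOCK_THRESHOLD_G = 2.0        # assumed threshold for a physical-impact event

def should_recalibrate(current_temp_c, calibration_temp_c, accel_magnitude_g):
    # Return True if a predefined event that should trigger recalibration is detected.
    temperature_event = abs(current_temp_c - calibration_temp_c) >= TEMP_DELTA_THRESHOLD_C
    impact_event = accel_magnitude_g >= SHOCK_THRESHOLD_G
    return temperature_event or impact_event

# Usage: compare the current readings against those captured at calibration time.
if should_recalibrate(current_temp_c=41.0, calibration_temp_c=25.0, accel_magnitude_g=0.1):
    pass  # enter the recalibration mode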
In the recalibration mode, the noise reduction circuit 904 may implement the recalibration phase 1206. In the recalibration phase 1206, the noise reduction circuit 904 (e.g., the reference frame generator 1102) may receive one or more additional calibration image frames, such as the calibration image frame 1114a. The noise reduction circuit 904 may update the existing reference frame 1106 with one or more additional calibration image frames to generate an updated reference frame 1116. An example of this is shown in the right column 1210 of fig. 12, where the reference frame 1106 is updated based on the 17 th calibration image frame 1114a received during the recalibration phase 1206. Updating the reference frame 1106 may include incorporating one or more additional calibration image frames 1114a through 1114n into the existing reference frame 1106. For example, the noise reduction circuit 904 can generate the updated reference frame 1116 by averaging the calibration image frame 1114a and the existing reference frame 1106.
In some examples, the noise reduction circuit 904 may perform a weighted average process during the recalibration phase 1206 to generate the updated reference frame 1116. For example, the noise reduction circuit 904 may generate the updated reference frame 1116 (URF) by performing a weighted rolling update according to the following equation:
URF = (existing reference frame) * Wo + (new calibration image) * F * (1 - Wo)
where (existing reference frame) represents the existing reference frame 1106, (new calibration image) represents the additional calibration image frame 1114a captured during the recalibration phase 1206, Wo is a weighting factor, and F is the total number of calibration image frames acquired during the calibration phase 1202. The factor F scales the single new calibration image frame to the magnitude of the existing reference frame, which is an accumulation of F frames. The weighting factor Wo may be selected to assign sufficient weight to the additional calibration image frame 1114a. The weighting factor may be adjusted based on the severity of the change in environmental factors (e.g., temperature) or based on other factors. If the magnitude of the detected event (e.g., temperature change) is small and the image sensor 902 has been in operation for a long period of time, a larger weight (e.g., 7/8 or 15/16) may be selected. Conversely, if the magnitude of the detected event is larger, a smaller weight (e.g., 1/2 or 1/4) may be selected, which gives more representation to the additional calibration image frame 1114a. In some examples, the weight may be dynamically adjusted during operation of the noise reduction circuit 904 based on how long the noise reduction circuit has been operating and/or the magnitude of the detected event (e.g., temperature change).
In the example shown in fig. 12, F is 16 (the 16 calibration image frames captured during the calibration phase 1202), so the above equation can be written as:
URF = (#1-#16) * 0.75 + (#17) * 16 * (1 - 0.75)
where (#1-#16) represents the existing reference frame 1106, which is the accumulation of the original 16 calibration image frames 1110a-1110n captured during the calibration phase 1202; (#17) represents the additional calibration image frame 1114a captured during the recalibration phase 1206; and the weighting factor Wo is 0.75. This means that the original reference frame 1106 contributes 75% of the updated reference frame 1116, while the additional calibration image frame 1114a contributes the remaining 25%.
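A minimal numeric sketch of the weighted rolling update matching the equations above (illustrative Python/NumPy; the frame shape and data types are assumptions). With Wo = 0.75 and F = 16, the existing accumulated reference frame contributes 75% of the result and the new calibration frame, scaled by F, contributes the remaining 25%:

import numpy as np

def rolling_update(existing_reference, new_calibration_frame, wo=0.75, f=16):
    # URF = existing_reference * Wo + new_calibration_frame * F * (1 - Wo).
    # The factor F scales the single new frame to the magnitude of the existing
    # reference frame, which is an accumulation of F calibration frames.
    urf = existing_reference.astype(np.float64) * wo \
          + new_calibration_frame.astype(np.float64) * f * (1.0 - wo)
    return urf.astype(np.uint32)

existing = np.full((480, 640), 16 * 20, dtype=np.uint32)  # accumulation of 16 dark frames
new_frame = np.full((480, 640), 24, dtype=np.uint32)      # the 17th calibration frame
updated = rolling_update(existing, new_frame)             # 75% old, 25% new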
After completing the recalibration phase, the noise reduction circuit 904 may re-enter the noise reduction mode to begin a second noise reduction stage 1208. The second noise reduction stage 1208 may be similar to the first noise reduction stage 1204, except that the updated reference frame 1116, rather than the original reference frame 1106, is used to reduce noise in subsequently captured image frames.
Various aspects of the above process may be repeated as needed. For example, the noise reduction circuit 904 may repeatedly and automatically switch between the recalibration mode and the noise reduction mode during its operation. For instance, after initiating the second noise reduction stage 1208, the noise reduction circuit 904 may detect another event. In response to detecting this event, the noise reduction circuit 904 may automatically re-enter the recalibration mode and initiate a second recalibration phase. After the second recalibration phase is complete, the noise reduction circuit 904 may then automatically re-enter the noise reduction mode and initiate a third noise reduction stage, and so on.
As described above, the reference frame generator 1102 may be configured to generate the reference frame 1106 during a calibration phase and to generate the updated reference frame 1116 during a recalibration phase. The reference frame generator 1102 may be implemented using any combination of software and/or hardware. An example embodiment of a reference frame generator 1102 is shown in fig. 13. In this example, the reference frame generator 1102 includes hardware components (e.g., integrated circuits, logic gates, processors, transistors, capacitors, resistors, and inductors) configured to implement basic arithmetic operations. In particular, reference frame generator 1102 includes an adder 1302, multipliers 1306a and 1306b, and dividers 1304a and 1304b, each of which may be hardware components configured to perform respective arithmetic operations. Multipliers 1306a and 1306b may be coupled to dividers 1304a and 1304b, and dividers 1304a and 1304b may be coupled to adder 1302. Adder 1302 can be coupled to memory 1104 and image sensor 902 (e.g., an output of quantizer 920). Adder 1302 may be used to perform addition/accumulation of calibration image frames 1110 a-1110 n during a calibration phase. Multipliers 1306a and 1306b and dividers 1304a and 1304b may be used to implement weighted rolling updates during the recalibration phase. Of course, in other examples, reference frame generator 1102 may include more components, fewer components, different components or different combinations of components than shown in fig. 13. For example, in another example, the reference frame generator may include a processor programmed to perform some or all of the arithmetic operations described above.
Fig. 14 illustrates an example of a process for switching between multiple modes of operation of the noise reduction circuit 904, in accordance with some aspects of the present disclosure. Other examples may include more steps, fewer steps than shown in fig. 14, different steps than shown in fig. 14, or a different order of steps. The steps in fig. 14 are described above with reference to the components of fig. 11.
At block 1400, the noise reduction circuit 904 determines whether to initiate a calibration mode. For example, the processor 1108 or the reference frame generator 1102 may determine whether to initiate a calibration mode. The noise reduction circuit 904 may automatically initiate a calibration mode in response to detecting one or more events. Examples of such events may include an initiation event (e.g., turning on the noise reduction circuit 904), a noise level in the digital image exceeding a predetermined threshold, receipt of a particular input from a user, receipt of a particular input from a hardware component (e.g., an external processor) coupled to the noise reduction circuit 904, or any combination of these.
At block 1402, the noise reduction circuit 904 receives calibration image frames 1110 a-1110 n from the image sensor 902. For example, the reference frame generator 1102 may receive calibration image frames 1110 a-1110 n from the image sensor 902. The noise reduction circuit 904 may receive any number of calibration image frames 1110a through 1110n. This number may be selected by a user or manufacturer of the noise reduction circuit 904. In some examples, the number may be customized or dynamically adjusted based on one or more factors (e.g., environmental conditions associated with the image sensor 902).
At block 1404, the noise reduction circuit 904 generates a reference frame 1106 based on the calibration image frames 1110 a-1110 n. For example, the reference frame generator 1102 may generate the reference frame 1106 based on the calibration image frames 1110 a-1110 n. The reference frame 1106 may be generated by combining some or all of the calibration image frames 1110 a-1110 n together. In some such examples, the noise reduction circuit 904 may combine the calibration image frames 1110 a-1110 n together by performing a pixel-by-pixel averaging of the calibration image frames 1110 a-1110 n.
At block 1406, the noise reduction circuit 904 stores the reference frame 1106 in the memory 1104. For example, the reference frame generator 1102 may store the reference frames 1106 in the memory 1104. In some examples, memory 1104 may be a volatile memory in which stored data is erased when power is turned off. For example, memory 1104 may include an SRAM buffer that may not retain any data when powered down.
At block 1408, the noise reduction circuit 904 determines whether to initiate a noise reduction mode. For example, processor 1108 may determine whether to initiate the noise reduction mode. The noise reduction circuit 904 may initiate the noise reduction mode in response to completing the calibration phase (e.g., in response to storing the reference frame 1106 in the memory 1104). If the noise reduction circuit 904 determines that the noise reduction mode is to be initiated, the process may proceed to block 1410. Otherwise, the process may proceed to block 1416.
At block 1410, the noise reduction circuit 904 receives an image frame 1112a from the image sensor 902. In some examples, image frame 1112a may be an original image frame that has not previously undergone any noise correction or other preprocessing. In other examples, image frame 1112a may have undergone some limited preprocessing before being received by noise reduction circuitry 904.
At block 1412, the noise reduction circuit 904 receives the reference frame 1106 from the memory 1104. For example, processor 1108 may retrieve reference frame 1106 from memory 1104.
At block 1414, the noise reduction circuit 904 uses the reference frame 1106 to reduce noise in the image frame 1112 a. For example, processor 1108 may use reference frame 1106 to reduce noise in image frame 1112 a. This may include subtracting reference frame 1106 from image frame 1112a to generate corrected image frame 916a. Subtracting the reference frame 1106 from the image frame 1112a may be a type of noise cancellation.
At block 1416, the noise reduction circuit 904 determines whether to initiate a recalibration mode. For example, processor 1108 may determine whether to initiate a recalibration mode. The noise reduction circuit 904 may initiate a recalibration mode in response to detecting an event (e.g., in response to detecting a temperature change greater than or equal to a threshold amount). If the noise reduction circuit 904 determines that a recalibration mode is to be initiated, the process may proceed to block 1418. Otherwise, the process may return to block 1408.
At block 1418, the noise reduction circuit 904 receives one or more new image frames (e.g., additional calibration image frames 1114a through 1114 n) from the image sensor 902. For example, the reference frame generator 1102 may receive one or more new image frames from the image sensor 902.
At block 1420, the noise reduction circuit 904 generates a new reference frame (e.g., updated reference frame 1116) based on the existing reference frame 1106 and the one or more new image frames. For example, the reference frame generator 1102 may generate a new reference frame by combining the existing reference frame 1106 and one or more new image frames. This may be performed by applying a weighting scheme to the existing reference frame 1106 and the one or more new image frames.
At block 1422, the noise reduction circuit 904 stores the new reference frame in the memory 1104. For example, the reference frame generator 1102 may store the new reference frame in the memory 1104 for subsequent use. Depending on the size of the memory 1104, the noise reduction circuit 904 may store one or both of the original reference frame 1106 and the new reference frame in the memory 1104. For example, if the memory 1104 is small, the noise reduction circuit 904 can overwrite the original reference frame 1106 with the new reference frame in the memory 1104. If the memory 1104 is large, the noise reduction circuit 904 can keep copies of both the original reference frame 1106 and the new reference frame in the memory 1104. After storing the new reference frame in memory 1104, the process may then return to block 1408, where the noise reduction circuit 904 may again initiate the noise reduction mode.
Some or all of the above processes may be repeated any number of times. For example, steps 1408-1416 may be repeated multiple times during operation of the noise reduction circuit 904.
In some examples, the process may return to block 1400 and restart the calibration process from the beginning. For example, the noise reduction circuit 904 may detect an event. In response to detecting the event, the noise reduction circuit 904 may restart the calibration process (at block 1400) by deleting some or all of the stored reference frames from the memory 1104 and re-entering the calibration mode. The noise reduction circuit 904 may then perform the initial calibration phase again. This may occur, for example, if the event corresponds to a significant change in the configuration of the image sensor 902 that warrants completely restarting the calibration process.
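Tying the mode transitions of fig. 14 together, the sketch below (illustrative Python only; the enum names and trigger predicates are assumptions) shows one way the switching among the calibration, noise reduction, and recalibration modes could be organized:

from enum import Enum, auto

class Mode(Enum):
    CALIBRATION = auto()
    NOISE_REDUCTION = auto()
    RECALIBRATION = auto()

def next_mode(current, calibration_done, recalibration_event, restart_event):
    # Select the next operating mode based on the current mode and detected events.
    if restart_event:
        return Mode.CALIBRATION          # e.g., a major configuration change (block 1400)
    if current is Mode.CALIBRATION:
        return Mode.NOISE_REDUCTION if calibration_done else Mode.CALIBRATION
    if current is Mode.NOISE_REDUCTION:
        return Mode.RECALIBRATION if recalibration_event else Mode.NOISE_REDUCTION
    # After recalibration completes, normal operation resumes (block 1408).
    return Mode.NOISE_REDUCTION

# Example: an event detected during normal operation triggers recalibration.
assert next_mode(Mode.NOISE_REDUCTION, True, True, False) is Mode.RECALIBRATION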
Some portions of this description describe embodiments of the present disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to effectively convey the substance of their work to others skilled in the art. These operations, although described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent circuits, or microcode, or the like. Furthermore, it has proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, and/or hardware.
The described steps, operations, or processes may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In some embodiments, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code executable by a computer processor to perform any or all of the steps, operations, or processes described.
Embodiments of the present disclosure may also relate to an apparatus for performing the described operations. The apparatus may be specially constructed for the required purposes, and/or the apparatus may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory tangible computer readable storage medium, which may be coupled to a computer system bus, or any type of medium suitable for storing electronic instructions. In addition, any computing system referred to in the specification may comprise a single processor or may be an architecture employing multiple processor designs to increase computing capability.
Embodiments of the present disclosure may also relate to an article of manufacture produced by the computing process described herein. Such an article of manufacture may comprise information generated from a computing process, wherein the information is stored on a non-transitory tangible computer-readable storage medium and may comprise any embodiment of a computer program product or other data combination described herein.
The language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. It is intended, therefore, that the scope of the disclosure be limited not by this detailed description, but rather by any claims issued based on the applications herein. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.

Claims (15)

1. A noise reduction circuit, the noise reduction circuit comprising:
A reference frame generator configured to generate a reference frame based on a plurality of image frames received from an image sensor during a calibration phase;
a memory coupled to the reference frame generator, the memory configured to receive the reference frame from the reference frame generator and store the reference frame for subsequent use during a noise reduction stage, the noise reduction stage subsequent to the calibration stage; and
A processor coupled to the memory, the processor configured to retrieve the reference frame from the memory and use the reference frame to reduce noise in received image frames from the image sensor during the noise reduction stage.
2. The noise reduction circuit of claim 1, wherein the noise reduction circuit is part of the image sensor or is separate from the image sensor.
3. The noise reduction circuit of claim 1, wherein the reference frame generator is configured to generate the reference frame by averaging the plurality of image frames together; and
Optionally, wherein the plurality of image frames comprises more than five image frames.
4. The noise reduction circuit of claim 1, wherein the processor is configured to reduce noise in the image frame using the reference frame by subtracting the reference frame from the image frame to generate a corrected image frame.
5. The noise reduction circuit of claim 1, wherein the noise comprises temporal noise generated by one or more analog-to-digital converters of the image sensor.
6. The noise reduction circuit of claim 1, wherein the image sensor is a digital pixel sensor having a pixel array configured to be coupled to the noise reduction circuit.
7. The noise reduction circuit of claim 1, wherein the reference frame generator is further configured to perform a recalibration phase after the noise reduction phase, the recalibration phase comprising:
receiving a new image frame from the image sensor;
receiving the reference frame from the memory;
generating a new reference frame by combining the new image frame with the reference frame; and
The new reference frame is stored in the memory.
8. The noise reduction circuit of claim 7, wherein the processor is configured to use the new reference frame to reduce noise in received image frames from the image sensor during another noise reduction stage subsequent to the recalibration stage; and/or the reference frame generator is configured to generate the new reference frame by applying a weighted average scheme to the reference frame and the new image frame, the new image frame having a lower weight than the reference frame in the weighted average scheme.
9. A method, the method comprising:
during a calibration phase, receiving, by a noise reduction circuit, a plurality of image frames from an image sensor;
generating, by the noise reduction circuit, a reference frame based on the plurality of image frames;
During a noise reduction stage subsequent to the calibration stage, receiving, by the noise reduction circuit, image frames from the image sensor; and
The reference frame is used by the noise reduction circuit to reduce noise in the image frame.
10. The method of claim 9, wherein the noise reduction circuit is part of the image sensor or is separate from and coupled to the image sensor.
11. The method of claim 9, wherein generating the reference frame comprises averaging the plurality of image frames together.
12. The method of claim 9, wherein using the reference frame to reduce noise in the image frame comprises:
the reference frame is subtracted from the image frame to generate a corrected image frame.
13. The method of claim 9, wherein the noise comprises fixed pattern noise and temporal noise, the fixed pattern noise and the temporal noise being generated by one or more analog-to-digital converters of the image sensor.
14. The method of claim 9, wherein the noise reduction stage is a first noise reduction stage, and the method further comprises:
performing a recalibration phase after the first noise reduction phase, the recalibration phase comprising:
Receiving a new image frame from the image sensor; and
Generating a new reference frame by combining the new image frame with the reference frame; and
Performing a second noise reduction stage after the recalibration stage, the second noise reduction stage comprising:
Receiving one or more image frames from the image sensor; and
Using the new reference frame to reduce noise in the one or more image frames; and
Optionally wherein generating the new reference frame comprises:
a weighted average scheme is applied to the reference frame and the new image frame.
15. An artificial reality system, the artificial reality system comprising:
An image sensor;
A noise reduction circuit coupled to the image sensor, the noise reduction circuit configured to:
receiving a plurality of image frames from the image sensor during a calibration phase;
generating a reference frame based on the plurality of image frames;
receiving image frames from the image sensor during a noise reduction stage subsequent to the calibration stage; and
Generating a corrected image frame by reducing noise in the image frame using the reference frame; and
A computer system coupled to the noise reduction circuit and to a display device, the computer system configured to generate an artificial reality environment for display on the display device based on the corrected image frames generated by the noise reduction circuit.
CN202280068060.XA 2021-10-05 2022-10-03 Noise reduction circuit for image sensor Pending CN118077214A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/252,420 2021-10-05
US17/950,199 2022-09-22
US17/950,199 US20230105527A1 (en) 2021-10-05 2022-09-22 Noise-reduction circuit for an image sensor
PCT/US2022/045505 WO2023059538A1 (en) 2021-10-05 2022-10-03 Noise-reduction circuit for an image sensor

Publications (1)

Publication Number Publication Date
CN118077214A true CN118077214A (en) 2024-05-24

Family

ID=91102643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280068060.XA Pending CN118077214A (en) 2021-10-05 2022-10-03 Noise reduction circuit for image sensor

Country Status (1)

Country Link
CN (1) CN118077214A (en)

Similar Documents

Publication Publication Date Title
US20210368124A1 (en) Programmable pixel array having multiple power domains
EP3900319B1 (en) Dynamically programmable image sensor
CN112585950B (en) Pixel sensor with adaptive exposure time
US11948089B2 (en) Sparse image sensing and processing
EP3900324B1 (en) Programmable pixel array
JP2021528890A (en) Pixel sensor with multi-photodiode
US11910114B2 (en) Multi-mode image sensor
US20210044742A1 (en) Dynamically programmable image sensor
CN114586331A (en) Distributed sensor system
CN118077214A (en) Noise reduction circuit for image sensor
US20230105527A1 (en) Noise-reduction circuit for an image sensor
WO2023059538A1 (en) Noise-reduction circuit for an image sensor
JP7515477B2 (en) Programmable Pixel Array
US12034015B2 (en) Programmable pixel array
US20220217295A1 (en) Image sub-sampling with a color grid array
TWI810304B (en) Apparatus and method for generating image frame

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination