CN113037989B - Image sensor, camera module and control method - Google Patents


Info

Publication number
CN113037989B
CN113037989B (application CN201911256196.XA)
Authority
CN
China
Prior art keywords
branch
mode
pixel
hyperspectral
image sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911256196.XA
Other languages
Chinese (zh)
Other versions
CN113037989A (en)
Inventor
谢勇
刘宇
董泳江
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201911256196.XA
Publication of CN113037989A
Application granted
Publication of CN113037989B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/671 Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith
    • H04N25/703 SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N25/705 Pixels for depth measurement, e.g. RGBZ

Abstract

The embodiment of the application provides an image sensor, a camera module and a control method. The camera module comprises an image sensor and a module controller. The module controller is used for controlling the image sensor to work in a first mode or a second mode. When the image sensor works in the first mode, it receives a first optical signal and converts it into a first electrical signal for acquiring the distance from the image sensor to a target object; when the image sensor works in the second mode, it receives a second optical signal and converts it into a second electrical signal for acquiring a hyperspectral image of the target object. The camera module integrates the ranging function and the hyperspectral imaging function into one module, so that both functions can be realized simultaneously, the volume of the camera module is reduced, and its range of application is enlarged.

Description

Image sensor, camera module and control method
Technical Field
The present disclosure relates to the field of electronic and communication technologies, and in particular, to an image sensor, a camera module and a control method thereof.
Background
In recent years, demand for time-of-flight (ToF) cameras has grown steadily. ToF cameras are widely used in imaging, film, gaming, computing, user interfaces, face recognition, object recognition, autonomous driving and other fields. A ToF camera continuously emits light pulses toward a target object, receives the light returned from the target with a sensor, and obtains the distance between the target object and the camera from the round-trip flight time of the light pulses. By adding distance information to the image acquired by an ordinary optical camera, a 3D image can be generated, improving the recognizability and stereoscopic quality of the image.
At present, ToF cameras fall into two types by operating principle: direct time-of-flight (d-ToF) cameras and indirect time-of-flight (i-ToF) cameras. A d-ToF camera directly measures the flight time from light emission to light reception with a counter and then converts it into the distance from the camera to the target object. An i-ToF camera samples the charge obtained by photoelectric conversion of the reflected light with at least two square waves having a phase difference, calculates the phase difference between the reflected and emitted light from these charges, and thereby computes the distance from the camera to the target object.
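The two ranging principles can be sketched numerically. The following is a minimal illustration, not the patent's implementation: for i-ToF it assumes the common four-phase demodulation scheme (the text only requires at least two phase-shifted square waves), and all function names are made up.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def dtof_distance(round_trip_s: float) -> float:
    # d-ToF: a counter measures the round-trip flight time directly,
    # so the one-way distance is half of c * t.
    return C * round_trip_s / 2.0

def itof_distance(q0: float, q90: float, q180: float, q270: float,
                  f_mod_hz: float) -> float:
    # i-ToF (illustrative 4-phase variant): charges accumulated under four
    # phase-shifted square waves encode the phase delay of the reflected
    # light, which maps linearly to distance.
    phase = math.atan2(q270 - q90, q0 - q180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod_hz)
```

For example, a 20 ns round trip gives `dtof_distance(20e-9)` of about 3 m, and with a 20 MHz modulation frequency the unambiguous i-ToF range is c / (2 f), about 7.5 m.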
However, both d-ToF and i-ToF cameras can only provide camera-to-target distance information, so existing ToF cameras have a low level of integration and deliver relatively limited information. When a ToF camera is used together with other cameras to obtain more comprehensive information, synchronization is poor and errors are large; moreover, multiple cameras occupy considerable space, which is unsuitable for current smart devices and, in particular, cannot meet the demand for miniaturized camera modules in smart devices.
Disclosure of Invention
The application provides an image sensor, a camera module and a control method. The image sensor and the camera module add the function of capturing hyperspectral images on the basis of a ToF sensor and camera module.
Because different components absorb different spectra, a hyperspectral camera can obtain information about the internal physical structure and chemical composition of a target object. Hyperspectral cameras are therefore applied in agriculture and the food industry, for example sugar-content and ripeness testing of fruit, and freshness and pesticide-residue testing of vegetables, as well as in fields closely related to human life such as the medical industry, for facial pigment lesions, fundus examination and inflammation diagnosis. Hyperspectral imaging is an image-data technique based on a large number of narrow spectral bands: it combines imaging with spectroscopy, images the target object continuously in dozens or hundreds of spectral bands within the spectral coverage range, and obtains both the spatial features of the target object and its spectral information.
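The combination of spatial features and per-band spectral information described above is commonly held as a data cube: one image per spectral band. A minimal sketch, with assumed illustrative dimensions (a 64x64 spatial grid and 100 bands, none of which come from the patent):

```python
import numpy as np

# Hypothetical sizes: a 64x64-pixel sensor captured over 100 spectral bands.
HEIGHT, WIDTH, BANDS = 64, 64, 100

# A hyperspectral capture is a data cube: two spatial axes plus one
# spectral axis, i.e. one full image per band.
cube = np.zeros((BANDS, HEIGHT, WIDTH), dtype=np.uint16)

# Spatial feature of the scene: any single band is an ordinary grayscale image.
band_image = cube[42]        # shape (64, 64)

# Spectral information: one pixel's values across all bands form its signature.
spectrum = cube[:, 10, 20]   # shape (100,)
```

A pixel's column through the cube is its spectral signature, which is what reveals composition-dependent absorption.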
At present, the ToF camera and the hyperspectral camera are two separate devices, and ranging and hyperspectral measurement of a target object must be performed by different cameras. Two devices occupy a large space, cost more, and suffer from poor synchronization and large errors: the correspondence between the 3D image shot by the ToF camera and the hyperspectral image shot by the hyperspectral camera carries a large error. The camera module of the present application integrates the ranging function and the hyperspectral imaging function on one image sensor and can perform distance measurement and spectral capture simultaneously. Because the two functions are not realized by separate, isolated devices, the captured 3D image and the obtained hyperspectral image coincide, synchronization is good, and the spectral information of a particular location on the captured 3D image can be positioned accurately. In addition, integrating the ToF camera and the hyperspectral camera into one camera module reduces the volume, improves the user experience, and broadens the range of application.
The present application is described below in a number of aspects, it being readily understood that implementations of the following aspects may be referred to one another.
In a first aspect, an embodiment of the present application provides a camera module including an optical lens, an optical filter, an image sensor, and a module controller. The optical lens is used for projecting the light wave reflected by the target object onto the image sensor through the optical filter. The image sensor and the optical filter are located on the main optical axis of the optical lens. The module controller is used for controlling the image sensor and the optical filter to work in a first mode (ranging mode) or a second mode (hyperspectral imaging mode). When the image sensor and the optical filter operate in the first mode, the image sensor receives a first optical signal filtered by the optical filter and converts it into a first electrical signal for obtaining the distance from the image sensor to the target object. When they operate in the second mode, the image sensor receives a second optical signal filtered by the optical filter and converts it into a second electrical signal for acquiring a hyperspectral image of the target object. The application thus integrates the ranging and hyperspectral imaging functions into one camera module, which reduces the volume of the camera module, improves the user experience, lowers the cost of the camera module, and meets the demand for miniaturized camera modules in smart devices.
In a possible implementation form according to the first aspect, the module controller adjusts the wavelength of the light waves allowed to pass through the optical filter: when operating in the first mode, light waves with the wavelength used for ranging pass through the optical filter to form the first optical signal; when operating in the second mode, light waves with the wavelengths used for capturing the hyperspectral image pass through the optical filter to form the second optical signal. Switching between the two modes can thus be achieved by adjusting the wavelength transmitted by the optical filter. In the ranging mode, the wavelength passed by the optical filter is 810 nm-940 nm; in the hyperspectral imaging mode, the wavelengths passed range from visible light to infrared. Optionally, in the ranging mode the wavelength passed by the optical filter is 940 nm, because 940 nm light waves in the atmosphere cause less interference to the camera module.
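The mode-to-passband mapping described here can be sketched as follows; the numeric bounds chosen for "visible to infrared" (380-2500 nm) are an assumption for illustration, as are all names.

```python
from enum import Enum

class Mode(Enum):
    RANGING = 1        # first mode
    HYPERSPECTRAL = 2  # second mode

# Illustrative passbands in nanometres, following the ranges in the text.
RANGING_BAND = (810.0, 940.0)
HYPERSPECTRAL_BAND = (380.0, 2500.0)  # assumed visible-to-infrared span

def filter_passband(mode: Mode) -> tuple:
    """Return the (min, max) wavelength the tunable optical filter should pass."""
    if mode is Mode.RANGING:
        return RANGING_BAND
    return HYPERSPECTRAL_BAND
```

In the ranging mode the controller could further narrow the band toward the optional 940 nm line to suppress atmospheric interference.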
In a possible implementation manner, the module controller is configured to send control parameters to the image sensor to control it to operate in the first mode or the second mode. The control parameters include at least one of the start and end working times of the image sensor, the integration time, the exposure time, and the amplification factor of the electrical signal; with these parameters, the image sensor can be made to convert a received optical signal into the electrical signal required in each mode.
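A hedged sketch of such a control-parameter message; the field names, units and example values are illustrative only, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ControlParams:
    """Parameters the module controller sends to the image sensor."""
    start_time_us: float        # working start time
    end_time_us: float          # working end time
    integration_time_us: float
    exposure_time_us: float
    gain: float                 # amplification factor of the electrical signal

# Hypothetical settings: short integration for ranging pulses,
# longer integration and higher gain for hyperspectral capture.
RANGING_PARAMS = ControlParams(0.0, 100.0, 50.0, 50.0, gain=1.0)
HYPERSPECTRAL_PARAMS = ControlParams(0.0, 10_000.0, 5_000.0, 5_000.0, gain=4.0)
```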
According to the first aspect, in a possible implementation manner, the camera module further includes a light source configured to provide light waves to the target object. When the camera module operates in the first mode, the light source emits light waves with the wavelength used for ranging; when the camera module operates in the second mode, the light source emits light waves with the wavelengths used for capturing the hyperspectral image. In the ranging mode, the wavelength range of the light source is 810 nm-940 nm; in the hyperspectral imaging mode, the wavelength range of the light source spans visible light to infrared light. Optionally, depending on the working mode and the application scenario, the light source may be a laser, an LED, a point light source with two-dimensional scanning capability, a sheet light source with scanning capability, or the like. Optionally, there may be one or more light sources.
According to a first aspect, in a possible implementation, the camera module further comprises a MEMS scanning mirror centered on the principal optical axis of the optical lens; the module controller is also used for controlling the scanning angle of the MEMS scanning mirror. Adopting the MEMS scanning mirror reduces the number of pixel circuits needed in the image sensor and thus the volume of the whole camera module. In addition, by controlling the scanning angle of the MEMS scanning mirror, the resolution of the acquired distance map or hyperspectral image can be adjusted.
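The trade-off between scan step and resolution can be shown with a toy calculation; the angular range, step sizes and function name are assumptions, not values from the patent:

```python
# With a MEMS scanning mirror, a single pixel (or a small pixel array) is swept
# across the scene, so image resolution comes from the scan, not the pixel count.

def scan_resolution(angle_range_deg: float, step_deg: float) -> int:
    """Number of sample positions along one scan axis."""
    return int(angle_range_deg / step_deg) + 1

# Halving the angular step roughly doubles the samples per axis, so the
# module controller can trade frame rate for resolution.
coarse = scan_resolution(40.0, 0.5)
fine = scan_resolution(40.0, 0.25)
```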
According to the first aspect, in a possible implementation manner, the light waves provided by the light source to the target object are reflected by the target object and then projected onto the image sensor through the MEMS scanning mirror, the optical lens and the optical filter in sequence. Using the MEMS scanning mirror reduces the number of pixel circuits needed in the image sensor and thus the volume of the whole camera module.
According to the first aspect, in a possible implementation manner, the camera module further includes a prism. The light wave provided by the light source passes through the prism to the MEMS scanning mirror, reaches the target object after being reflected by the MEMS scanning mirror, and, after being reflected by the target object, passes through the optical lens and the optical filter in sequence and is projected onto the image sensor. Passing the light wave through the prism to the MEMS scanning mirror increases the intensity of the light irradiating the target object, extends the reach of the light source, and allows the camera module to image target objects at a greater distance.
In a second aspect, embodiments of the present application provide an image sensor including a pixel controller and a pixel circuit. The pixel controller is used for controlling the pixel circuit to work in a first mode or a second mode. When working in the first mode, the pixel circuit receives a first optical signal reflected by a target object and converts it into a first electrical signal for acquiring the distance from the pixel circuit to the target object; when working in the second mode, it receives a second optical signal reflected by the target object and converts it into a second electrical signal for acquiring a hyperspectral image of the target object. By integrating the ranging and hyperspectral imaging functions on one image sensor, two functions are realized while the size and cost of the image sensor are reduced.
According to the second aspect, in a possible implementation manner, the pixel circuit includes a first pixel branch and a second pixel branch, and the first pixel branch and the second pixel branch have the same topology. When the pixel circuit works in the first mode, the pixel controller is used for controlling the first pixel branch and the second pixel branch to work; when the pixel circuit works in the second mode, the pixel controller is used for controlling any one of the first pixel branch circuit or the second pixel branch circuit to work. The pixel circuit realizes two functions of ranging and hyperspectral image shooting through the control of the pixel controller.
According to the second aspect, in a possible implementation manner, the first pixel branch and the second pixel branch respectively include a switching tube; when the pixel circuit works in the first mode, the pixel controller is used for controlling the switching tubes of the first pixel branch and the second pixel branch to be in a working state; when the pixel circuit works in the second mode, the pixel controller is used for controlling the switching tube of the first pixel branch circuit to be in an off state, and the switching tube of the second pixel branch circuit is in a working state; or the switching tube of the second pixel branch is controlled to be in an off state, and the switching tube of the first pixel branch is controlled to be in a working state. The working state of the first pixel branch circuit and the second pixel branch circuit can be controlled by controlling the switching tubes of the first pixel branch circuit and the second pixel branch circuit. The switch tube comprises a charge switch tube, a reset switch tube, a voltage and current conversion switch tube or an analog-digital conversion switch tube. Optionally, the first pixel branch and the second pixel branch are in a working state or an off state by controlling the same switching tube of the first pixel branch and the second pixel branch. Optionally, the first pixel branch and the second pixel branch are in a working state or an off state by controlling different switching tubes of the first pixel branch and the second pixel branch.
According to a second aspect, in one possible implementation, the pixel circuit comprises a time-of-flight (TOF) branch and a hyperspectral branch; when the pixel circuit works in the first mode, the pixel controller is used for controlling the TOF branch circuit to work, and the hyperspectral branch circuit does not work; when the pixel circuit works in the second mode, the pixel controller is used for controlling the hyperspectral branch circuit to work, and the TOF branch circuit does not work. It can be seen that two modes can be realized by the pixel circuit and the pixel controller in one image sensor. Alternatively, one or more pixel circuits in one image sensor may be provided.
According to the second aspect, in a possible implementation manner, the TOF branch and the hyperspectral branch each include a switching tube; when the pixel circuit works in the first mode, the pixel controller is used for controlling a switching tube of the TOF branch circuit to be in a working state, and the switching tube of the hyperspectral branch circuit is in a turn-off state; when the pixel circuit works in the second mode, the pixel controller is used for controlling the switch tube of the hyperspectral branch circuit to be in a working state, and the switch tube of the TOF branch circuit is in a turn-off state. Specifically, the switching of the two modes is realized by respectively controlling the switching tubes in the TOF branch and the hyperspectral branch.
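The switching-tube control of the two branches can be sketched as a small state machine; the class and attribute names are illustrative, not from the patent:

```python
from enum import Enum

class Mode(Enum):
    RANGING = 1        # first mode
    HYPERSPECTRAL = 2  # second mode

class PixelCircuit:
    """Sketch of a pixel circuit with a ToF branch and a hyperspectral
    branch, each gated by its own switching tube."""
    def __init__(self):
        self.tof_switch_on = False
        self.hyperspectral_switch_on = False

    def set_mode(self, mode: Mode) -> None:
        # Exactly one branch's switching tube conducts at a time:
        # the other branch is held in the off state.
        self.tof_switch_on = (mode is Mode.RANGING)
        self.hyperspectral_switch_on = (mode is Mode.HYPERSPECTRAL)

pixel = PixelCircuit()
pixel.set_mode(Mode.RANGING)        # ToF branch works, hyperspectral branch off
pixel.set_mode(Mode.HYPERSPECTRAL)  # hyperspectral branch works, ToF branch off
```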
According to a second aspect, in one possible implementation, the hyperspectral branch comprises a charge transfer module and a charge output module. The charge transfer module is used for transmitting the corresponding electric signals to the charge output module; the charge output module is used for converting the electric signal passing through the charge transmission module into a digital signal. The charge transfer module of the hyperspectral branch comprises a hyperspectral branch switching tube, and the charge output module of the hyperspectral branch comprises an analog-to-digital converter.
According to the second aspect, in a possible implementation manner, the hyperspectral branch includes an amplifier, the amplifier is located between the photoelectric converter and the analog-to-digital converter of the hyperspectral branch or located within the analog-to-digital converter of the hyperspectral branch, and the amplifier is configured to amplify an electrical signal converted by the photoelectric converter. Therefore, the amplifier can improve the signal quality so as to improve the reliability of the obtained electric signal for shooting the hyperspectral image.
According to a second aspect, in a possible implementation manner, the pixel circuit further includes a switching circuit. The switching circuit includes a first switching tube connected to the switching tubes of the TOF branch and the hyperspectral branch respectively, and the pixel controller controls, through the first switching tube, whether the TOF branch or the hyperspectral branch operates. Controlling the switching tubes of the TOF branch and the hyperspectral branch through the switching circuit improves control reliability and reduces the error rate of mode switching.
According to the second aspect, in a possible implementation manner, the switching circuit further includes a second switching tube for turning the first switching tube on and off: when the pixel circuit operates in the second mode, the second switching tube turns the first switching tube off; when the pixel circuit operates in the first mode, the second switching tube turns the first switching tube on. The second switching tube further improves the accuracy of control.
In a possible implementation manner, the image sensor further includes a processing circuit, and the pixel controller controls the pixel circuit to operate in the first mode or the second mode through the processing circuit. Optionally, the processing circuitry may include a row decoder or driver. Optionally, when the pixel circuits are multiple columns or multiple rows, the processing circuit is configured to control the pixel circuits row by row or column by column or by specific row or column according to an instruction of the pixel controller, so as to improve flexibility of control.
According to the second aspect, in one possible implementation, the processing circuit is a separate entity or the same entity as the pixel controller.
In a third aspect, an embodiment of the present application provides an electronic device including a sensor, a processor, and a camera module. The sensor is used for acquiring the working mode information of the camera module. The processor is used for controlling the camera module to work in a first mode or a second mode according to the working mode information. When working in the first mode, the camera module receives a first optical signal reflected by a target object and converts it into a first electrical signal for acquiring the distance from the camera module to the target object; when working in the second mode, it receives a second optical signal reflected by the target object and converts it into a second electrical signal for acquiring a hyperspectral image of the target object. Because one camera module on the electronic device supports two working modes, the integration level of the camera module is improved, the size of the electronic device is reduced, the user experience is improved, and cost is reduced.
According to the third aspect, in a possible implementation manner, the electronic device further includes a light source, the light source is configured to provide light waves to the target object, and when the camera module operates in the first mode, the processor is configured to control the light source to emit light waves with a wavelength for ranging; when the camera module works in the second mode, the processor is used for controlling the light source to emit light waves with the wavelength for shooting the hyperspectral image. Wherein the light source emits light in a wavelength range from visible light to infrared light. Optionally, the light source is inside the camera module. And the electronic equipment realizes the switching of two modes by controlling the light source. Optionally, the number of the light sources is one or more.
According to a third aspect, in one possible implementation, the camera module includes a module controller, an optical filter, and an image sensor. The processor is configured to control the image sensor and the optical filter to operate in the first mode or the second mode through the module controller; when the image sensor and the optical filter are operated in the first mode, the image sensor receives the first optical signal filtered by the optical filter and converts the first optical signal into the first electrical signal for acquiring the distance from the image sensor to the target object; when the image sensor and the optical filter operate in the second mode, the image sensor receives the second optical signal filtered by the optical filter and converts the second optical signal into the second electrical signal for obtaining a hyperspectral image of the target object. Alternatively, the processor, the module controller and the pixel controller of the image sensor may be separate entities, or two of them may be integrated into one entity, or three may be integrated into one entity.
According to a third aspect, in one possible implementation, the camera module further includes a MEMS scanning mirror. The processor is used for controlling the angle range scanned by the MEMS scanning mirror and the resolution of the angle of each scanning through the module controller. The number of pixel circuits in the image sensor can be reduced by adopting the MEMS scanning mirror so as to reduce the volume of the whole camera module. In addition, by controlling the scan angle of the MEMS scan mirror, the distance acquired or the resolution of the hyperspectral image can be adjusted.
According to the third aspect, in one possible implementation, the sensor includes at least one of a photosensitive sensor, a sound-sensitive sensor, a pressure-sensitive sensor, and a temperature-sensitive sensor. Through such a sensor, the working mode of the camera module indicated by the user can be obtained.
In a fourth aspect, a method for controlling a camera module provided in an embodiment of the present application includes: receiving an instruction, wherein the instruction is used for indicating the working modes of an image sensor and an optical filter of the camera module; controlling the image sensor and the optical filter to work in a first mode or a second mode according to the instruction: when the image sensor and the optical filter work in the first mode, controlling the image sensor to receive a first optical signal filtered by the optical filter and converting the first optical signal into a first electric signal for acquiring the distance from the image sensor to the target object; when the image sensor and the optical filter work in the second mode, the image sensor is controlled to receive a second optical signal filtered by the optical filter, and the second optical signal is converted into a second electric signal used for acquiring a hyperspectral image of the target object. Therefore, according to the working mode indicated by the received instruction, the image sensor and the optical filter are controlled, and the switching between the first mode and the second mode can be realized, so that the size of the camera module can be reduced, the user experience can be improved, the cost of the camera module can be reduced, and the miniaturization requirement of the camera module of the intelligent device in the prior art can be met.
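Putting the fourth-aspect steps together, a hedged sketch of the control method: on receiving an instruction, the controller configures the filter, the light source and the sensor branches consistently for the indicated mode. The 380-2500 nm bounds for "visible to infrared" and all names are assumptions.

```python
def apply_mode(mode: str) -> dict:
    """Return a consistent configuration for the indicated working mode.
    Wavelengths follow the ranges stated in the text."""
    if mode == "ranging":                   # first mode
        return {
            "filter_passband_nm": (810, 940),
            "light_source_nm": 940,          # optional choice: less atmospheric interference
            "pixel_branches": "both (ToF)",
        }
    return {                                # second mode: hyperspectral imaging
        "filter_passband_nm": (380, 2500),   # assumed visible-to-infrared span
        "light_source_nm": (380, 2500),
        "pixel_branches": "hyperspectral only",
    }
```

Keeping the three settings in one function is one way to avoid the mismatches (e.g. a ranging filter with a hyperspectral light source) that separate per-component commands could produce.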
According to a fourth aspect, in a possible implementation manner, the wavelength of the light waves that the optical filter passes is controlled according to the instruction: when operating in the first mode, light waves with the wavelength used for ranging are allowed through the optical filter; when operating in the second mode, light waves with the wavelengths used for capturing the hyperspectral image are allowed through. In the ranging mode, the wavelength passed by the optical filter is 810 nm-940 nm; in the hyperspectral imaging mode, the wavelengths passed range from visible light to infrared. Optionally, in the ranging mode the wavelength passed by the optical filter is 940 nm, because 940 nm light waves in the atmosphere cause less interference to the camera module. The first mode and the second mode may thus be achieved by controlling the wavelength passed by the optical filter.
According to the fourth aspect, in a possible implementation manner, the wavelength of the light wave emitted by the light source of the camera module is controlled according to the instruction: when working in the first mode, the light source is adjusted to emit light waves with the wavelength used for ranging; when working in the second mode, the light source is adjusted to emit light waves with the wavelengths used for capturing the hyperspectral image. In the ranging mode, the wavelength range of the light source is 810 nm-940 nm; in the hyperspectral imaging mode, the wavelength range of the light source spans visible light to infrared light. Optionally, depending on the working mode and the application scenario, the light source may be a laser, an LED, a point light source with two-dimensional scanning capability, a sheet light source with scanning capability, or the like. Optionally, there may be one or more light sources.
According to the fourth aspect, in a possible implementation manner, the angular range scanned by the MEMS scanning mirror of the camera module and the angular resolution of each scan are controlled according to the instruction. By controlling the scanning angle of the MEMS scanning mirror, the resolution of the acquired distance map or hyperspectral image can be adjusted.
According to the fourth aspect, in a possible implementation manner, the instruction is preset according to user requirements, or is derived from the obtained distance from the image sensor to the target object or from the obtained hyperspectral information of the target object. Deriving the instruction from previously obtained distance or hyperspectral information can improve the accuracy of that distance or hyperspectral information.
In a fifth aspect, an embodiment of the present application provides a method for controlling an image sensor, including: receiving an instruction indicating an operating mode of pixel circuitry of the image sensor; controlling the pixel circuit to work in a first mode or a second mode according to the instruction; when the pixel circuit works in the first mode, controlling the pixel circuit to receive a first optical signal reflected by a target object and converting the first optical signal into a first electric signal for acquiring the distance from the pixel circuit to the target object; and when the pixel circuit works in the second mode, controlling the pixel circuit to receive a second optical signal reflected by a target object and converting the second optical signal into a second electric signal for acquiring a hyperspectral image of the target object. By controlling the pixel circuit of the image sensor to work in the first mode and the second mode, the size of the image sensor is reduced and the synchronism of the two modes is improved while two functions are realized on one image sensor.
According to the fifth aspect, in a possible implementation manner, the pixel circuit includes a first pixel branch and a second pixel branch with the same topology. When the pixel circuit works in the first mode, both the first pixel branch and the second pixel branch are controlled to work; when the pixel circuit works in the second mode, the pixel controller controls either the first pixel branch or the second pixel branch to work. Through the control of the pixel controller, the pixel circuit achieves the two functions of ranging and hyperspectral image shooting.
According to the fifth aspect, in one possible implementation, the pixel circuit includes a TOF branch and a hyperspectral branch. When the pixel circuit works in the first mode, the TOF branch is controlled to work and the hyperspectral branch does not work; when the pixel circuit works in the second mode, the hyperspectral branch is controlled to work and the TOF branch does not work. By controlling the different branches of the pixel circuit, switching between the two modes can be realized within one pixel circuit.
According to the fifth aspect, in one possible implementation, the pixel circuit includes a switching circuit. When the pixel circuit works in the first mode, a first switch tube of the switching circuit is turned on, so that the TOF branch works; when the pixel circuit works in the second mode, the first switch tube of the switching circuit is turned off, so that the hyperspectral branch works. Controlling the TOF branch and the hyperspectral branch through the switch tube of the switching circuit improves the reliability of control and reduces the error rate of switching between the two modes.
According to the fifth aspect, in one possible implementation, the switching circuit includes a second switch tube. When the pixel circuit works in the first mode, the first switch tube is turned on by turning on the second switch tube; when the pixel circuit works in the second mode, the first switch tube is turned off by turning off the second switch tube, which further improves the control accuracy.
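The cascaded switch-tube logic above can be sketched as a toy model. This is not the patent's circuit — the function and its name are illustrative only — but it captures the described dependency: the second switch tube drives the first, which in turn selects the TOF branch (first mode) or the hyperspectral branch (second mode).

```python
# Illustrative sketch of the described switch logic (not an actual circuit model):
# second tube on  -> first tube conducts -> TOF branch active (first mode)
# second tube off -> first tube is off   -> hyperspectral branch active (second mode)

def active_branch(second_tube_on: bool) -> str:
    """Return which pixel branch is active for a given control state."""
    first_tube_on = second_tube_on  # the second switch tube drives the first
    return "TOF" if first_tube_on else "hyperspectral"

assert active_branch(True) == "TOF"
assert active_branch(False) == "hyperspectral"
```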
According to the fifth aspect, in a possible implementation manner, the image sensor further includes a processing circuit, and the processing circuit controls the pixel circuit to operate in the first mode or the second mode. Optionally, the processing circuit may comprise a row decoder or driver. Optionally, when the pixel circuits are arranged in multiple columns or rows, the processing circuit is configured to control the pixel circuits row by row, column by column, or by a specific row or column according to an instruction of the pixel controller, so as to improve the flexibility of control. Optionally, the processing circuit and the pixel controller are separate entities or the same entity.
In a sixth aspect, an embodiment of the present application provides a method for controlling an electronic device, including: acquiring working mode information of a camera module of the electronic equipment; controlling the camera module to work in a first mode or a second mode according to the working mode information of the camera module; when the camera module works in the first mode, the camera module is controlled to receive a first optical signal reflected by a target object, and the first optical signal is converted into a first electric signal for acquiring the distance from the camera module to the target object; and when the camera module works in the second mode, the camera module is controlled to receive a second optical signal reflected by a target object, and the second optical signal is converted into a second electric signal for acquiring a hyperspectral image of the target object. On an electronic device, the camera module is controlled to work in two modes according to the acquired working mode information of the camera module, so that the integration level of the camera module of the electronic device is improved, the size of the electronic device is reduced, the user experience is improved, and the cost is reduced.
According to the sixth aspect, in a possible implementation manner, when the camera module operates in the first mode, a light source of the electronic device is controlled to emit light waves with a wavelength for ranging; and when the camera module works in the second mode, the light source of the electronic device is controlled to emit light waves with a wavelength for shooting a hyperspectral image. By controlling the wavelength of the light emitted by the light source so that it matches the working mode of the camera module, the integration level of the electronic device is improved and its volume is reduced.
According to the sixth aspect, in one possible implementation manner, the image sensor and the optical filter of the camera module are controlled to operate in the first mode or the second mode by a module controller of the camera module; when the image sensor and the optical filter work in the first mode, controlling the image sensor to receive the first optical signal filtered by the optical filter and convert the first optical signal into the first electric signal for acquiring the distance from the image sensor to the target object; when the image sensor and the optical filter work in the second mode, the image sensor is controlled to receive the second optical signal filtered by the optical filter, and the second optical signal is converted into the second electric signal used for obtaining the hyperspectral image of the target object. The switching between the two modes is realized by controlling the image sensor and the optical filter of the camera module.
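The coordination described above — the module controller putting the optical filter and the image sensor into the same mode together — can be sketched as follows. The function name and the settings dictionary are our own illustrative assumptions; only the wavelength ranges and branch names come from the text.

```python
# Hedged sketch: a module controller switches the filter passband and the
# sensor branch together so both stay in the same working mode.

RANGING, HYPERSPECTRAL = "first", "second"

def configure(mode: str) -> dict:
    """Return consistent filter and sensor settings for one working mode."""
    if mode == RANGING:
        # ranging: 810-940 nm passband, TOF branch converts the first signal
        return {"filter_passband_nm": (810, 940), "sensor_branch": "TOF"}
    if mode == HYPERSPECTRAL:
        # hyperspectral: sweep visible to infrared, hyperspectral branch active
        return {"filter_passband_nm": ("visible", "infrared"),
                "sensor_branch": "hyperspectral"}
    raise ValueError(f"unknown mode: {mode}")

assert configure("first")["sensor_branch"] == "TOF"
assert configure("second")["filter_passband_nm"] == ("visible", "infrared")
```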
According to the sixth aspect, in one possible implementation, the scanning angular range of the MEMS scanning mirror of the camera module and the angular resolution of each scan are controlled by the module controller. By controlling the scanning angle of the MEMS scanning mirror, the resolution of the collected distance image or hyperspectral image can be adjusted.
According to the sixth aspect, in one possible implementation manner, the operating mode information of the camera module of the electronic device is acquired through at least one of a photosensitive signal, a sound-sensitive signal, a pressure-sensitive signal, and a temperature-sensitive signal. Through these signals, the accuracy and timeliness with which the electronic device acquires the operating mode information of the camera module can be improved.
In a seventh aspect, an embodiment of the present application provides an image display method, which may be applied to an electronic device with a touch screen and a camera module. The method may include: the electronic device detects an operation of a user to open an application, where the operation indicates a camera module working mode, and the camera module working mode includes a ranging mode and a hyperspectral image shooting mode. In response to the operation, a shooting interface is displayed on the touch screen, where the shooting interface includes a view frame in which an image is displayed. When the working mode is the ranging mode, the image presents distance information of a target object; when the working mode is the hyperspectral image shooting mode, the image presents hyperspectral information of the target object. The camera module is the camera module of the above aspects. The image display method can realize two shooting modes by calling a single camera module, increasing the amount of information obtained about the target object.
In an eighth aspect, an embodiment of the present application provides a pixel circuit, where the pixel circuit includes a hyperspectral branch and a photoelectric converter, and the hyperspectral branch includes a switch tube and an analog-to-digital converter. One end of the switch tube of the hyperspectral branch is connected with the photoelectric converter, and the other end of the switch tube of the hyperspectral branch is connected with the analog-to-digital converter. Optionally, the charge transfer module of the hyperspectral branch comprises an amplifier, and the amplifier is located between the photoelectric converter and the analog-to-digital converter or located in the analog-to-digital converter. Optionally, the pixel circuit includes a switching circuit and a TOF branch, the switching circuit further includes a first switch tube, the first switch tube is respectively connected to the TOF branch and the switch tube of the hyperspectral branch, and the pixel controller is configured to control the TOF branch or control the hyperspectral branch through the first switch tube. Optionally, the switching circuit further includes a second switching tube, configured to control the first switching tube to be turned on and off, and when the pixel circuit operates in the second mode, the second switching tube is configured to control the first switching tube to be turned off; when the pixel circuit works in the first mode, the second switch tube is used for controlling the first switch tube to be conducted.
In a ninth aspect, an embodiment of the present application provides a camera module, which includes a fixed part, a MEMS scanning mirror, an optical lens, a filter, and a photoelectric converter, wherein: the fixed part comprises a light inlet, and the MEMS scanning mirror, the optical lens, the filter, and the photoelectric converter are all arranged in the fixed part; the position of the MEMS scanning mirror corresponds to the position of the light inlet, the optical lens is located between the MEMS scanning mirror and the photoelectric converter, and the filter is located between the optical lens and the photoelectric converter; the MEMS scanning mirror, the filter, and the photoelectric converter are all located on the principal optical axis of the optical lens.
In a tenth aspect, an embodiment of the present application provides a camera module, which includes a prism, a MEMS scanning mirror, an optical lens, a filter, a light source, and an image sensor, wherein: the MEMS scanning mirror is used for projecting light emitted by the light source onto a target object, and the filter is located between the optical lens and the image sensor; the MEMS scanning mirror, the filter, and the image sensor are all located on the principal optical axis of the optical lens.
Drawings
Fig. 1 (a) is a schematic diagram of a simplified photographing system according to an embodiment of the present disclosure;
fig. 1 (b) is a schematic structural diagram of an image sensor according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a TOF camera in a second camera module according to an embodiment of the present disclosure being an indirect time-of-flight (i-TOF) camera;
FIG. 3 is a schematic structural diagram of a pixel circuit in a second image sensor according to an embodiment of the present disclosure;
FIG. 4 is a timing diagram of signals of the first switch tube, the second switch tube, the incident light and the reflected light when the second image sensor of the embodiment of the present application measures distance;
FIG. 5 is a schematic timing diagram of signals in an image sensor of a second camera module according to an embodiment of the present disclosure during ranging;
fig. 6 is a schematic timing diagram of signals when hyperspectral images are shot in an image sensor in a second camera module according to the embodiment of the application;
fig. 7 (a) is a schematic flowchart of a control method based on an i-TOF image sensor according to an embodiment of the present application;
fig. 7 (b) is a schematic flowchart of a control method based on an i-TOF camera module according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of the TOF camera in a third camera module according to an embodiment of the present application being a direct time-of-flight (d-TOF) camera;
FIG. 9 is a schematic diagram of the TOF camera in a fourth camera module according to an embodiment of the present disclosure being a direct time-of-flight (d-TOF) camera;
fig. 10 (a) - (d) are schematic structural diagrams of pixel circuits in an image sensor in the third or fourth camera module according to an embodiment of the present application;
fig. 11 is a schematic timing diagram of signals during ranging of an image sensor in the third or fourth camera module according to an embodiment of the present disclosure;
fig. 12 is a schematic timing diagram of signals when a hyperspectral image is captured in an image sensor in the third or fourth camera module according to an embodiment of the present application.
Fig. 13 (a) is a schematic flowchart of a control method based on a d-TOF image sensor according to an embodiment of the present application;
fig. 13 (b) is a schematic flowchart of a control method based on a d-TOF camera module according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 15 (a) is a schematic flowchart of a control method of an electronic device according to an embodiment of the present application;
fig. 15 (b) is a schematic flowchart of a method for displaying an image according to an embodiment of the present application;
FIGS. 16 (a) - (e) are schematic diagrams of a set of display interfaces of an electronic device provided by an embodiment of the present application;
figs. 17 (a) - (c) are schematic diagrams of another set of display interfaces of an electronic device provided by an embodiment of the present application;
fig. 18 (a) - (h) are schematic diagrams of a further set of display interfaces of an electronic device provided in an embodiment of the present application.
The elements in the figures are numbered as follows:
camera module 100, 200, 800, 900, 1403; light source 101, 201, 801, 901; module controller 102, 202, 802, 902; optical lens 103, 203, 803, 903; optical filter 104, 204, 804, 904; image sensor 105, 205, 805, 905; target object 106; pixel controller 107; processing circuit 108; pixel circuit 109, 300, 1000; photoelectric converter 301, 1005; pixel branch 320, 321; charge switch tube TG0 302, TG1 303; capacitor FD0 306, FD1 307; analog-to-digital converter ADC0 312, ADC1 313, ADC2 1004, ADC3 1011; voltage-current conversion switch tube S0 308, S1 309; analog-to-digital conversion switch tube Sel0 310, Sel1 311; prism 812; MEMS scanning mirror 811, 911; fixed part 914; light inlet 913; TOF branch; hyperspectral branch 1002; switching circuit 1003; switch tube R 1008 of the switching circuit; branch switch tube S 1012; TOF switch tube T 1010; hyperspectral switching circuit 1007; switch tube TG 1010 of the hyperspectral switching circuit; amplifier 1006; processing device 1400.
Detailed Description
As shown in fig. 1 (a), a simplified layout of a photographing system 110 according to an embodiment of the present disclosure is provided. The camera module 100 in the system is mainly applied to electronic devices, and by combining the TOF camera with the optical filter 104, the shooting function of 3D images and hyperspectral images can be simultaneously realized. Specifically, the camera module 100 includes a light source 101, an optical lens 103, an optical filter 104, an image sensor 105, a module controller 102, and the like. The optical filter 104 is located between the optical lens 103 and the image sensor 105.
The electronic device to which the camera module 100 is applied may be portable or non-portable. Examples of portable electronic devices include popular consumer electronics such as mobile devices, cellular phones, smart phones, user equipment (UE), tablets, digital cameras, laptop or desktop computers, car navigation units, machine-to-machine (M2M) communication units, virtual reality (VR) devices or modules, robots, and so forth. Examples of non-portable electronic devices include gaming machines in electronic game rooms, interactive video terminals, automobiles with autonomous navigation capabilities, machine vision systems, industrial robots, VR devices, and the like. Because it integrates the ranging function and the hyperspectral shooting function, the camera module 100 disclosed in the embodiment of the application can be used in many applications, such as face recognition, food detection, security monitoring, and medical health.
The module controller 102 is used for receiving a mode-switching instruction, switching to the first mode (ranging mode) or the second mode (hyperspectral image shooting mode) according to the instruction, and adjusting the light source, the optical filter, and the image sensor according to the mode. In the ranging mode, the distance between the camera and the target object is measured. In the hyperspectral image shooting mode, a spectral-information image of the target object in a certain waveband or range of wavebands is captured; because different components of the target object absorb different spectra, the hyperspectral image can fully reflect differences in the internal physical structure and chemical composition of the target object.
The light source 101 provides light waves with the desired wavelength according to the operating mode and the structure of the camera module 100. In the ranging mode, the wavelength of the light source 101 ranges from 810nm to 940nm. In one embodiment, the wavelength of the light source 101 can be set at 940nm: since sunlight at 940nm is largely absorbed by moisture in the atmosphere, ambient 940nm light causes little interference to the camera module, making a 940nm light source well suited for outdoor use. In another embodiment, the wavelength of the light source 101 may be set at 810nm or 850nm, because the quantum efficiency of a silicon-based image sensor is higher at 810nm or 850nm than at 940nm. Thus, in the ranging mode, the appropriate wavelength of the light source 101 can be selected according to whether the particular application scenario is indoors or outdoors. When shooting a hyperspectral image, a wide wavelength range is used, from visible light to infrared light, depending on the application and the wavelength characteristics of the target object. According to the working mode and the application scenario, the light source 101 may be a laser, an LED, a point light source with two-dimensional scanning capability, a sheet light source with scanning capability, or the like. The light source 101 may be located inside or outside the camera module.
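The wavelength-selection reasoning above can be condensed into a small sketch. The function and its indoor/outdoor parameter are illustrative assumptions, not part of the patent; the values (940nm outdoors for low solar interference, 850nm indoors for higher silicon quantum efficiency) follow the text.

```python
# Hedged sketch of ranging-mode wavelength selection (names are ours).

def select_wavelength_nm(mode: str, outdoors: bool = True) -> int:
    """Pick a light-source wavelength in nm for the given scenario."""
    if mode == "ranging":
        # outdoors: 940 nm solar light is absorbed by atmospheric moisture;
        # indoors: 850 nm gives higher silicon quantum efficiency
        return 940 if outdoors else 850
    if mode == "hyperspectral":
        raise ValueError("hyperspectral mode sweeps visible to infrared; "
                         "no single wavelength applies")
    raise ValueError(f"unknown mode: {mode}")

assert select_wavelength_nm("ranging", outdoors=True) == 940
assert select_wavelength_nm("ranging", outdoors=False) == 850
```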
The optical lens 103 is mainly used for condensing light, and light reflected by the target object 106 is condensed onto the optical filter 104 through the optical lens to be filtered.
The optical filter 104 is a device that allows light of a specific wavelength to pass through while attenuating light of other wavelengths. For example, the optical filter 104 may be a micro-electro-mechanical-system Fabry-Perot interferometer (MEMS-FPI), a piezoelectric FPI, a grating, or a liquid crystal filter. This embodiment takes the MEMS-FPI as an example: used as the optical filter 104, it has a wide filtering range and is widely applied. The driver that moves the two mirror surfaces of the FPI can be a micro-actuating structure in a micro-electro-mechanical system. Mixed light is reflected multiple times between the two mirror surfaces, light of the specified wavelength is retained while light of the remaining wavelengths is attenuated, and the retained light is further transmitted to the image sensor. In this embodiment, in both the ranging mode and the hyperspectral image shooting mode, the light source outputs light of a specific wavelength to the target object, and the MEMS-FPI is adjusted so that the light of that wavelength reflected by the target object is transmitted through the MEMS-FPI to the image sensor.
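The tuning principle a MEMS-FPI relies on can be illustrated with textbook Fabry-Perot physics: at normal incidence with an air gap, transmission peaks occur at wavelengths λ = 2d/m, where d is the mirror gap and m the interference order. The numbers below are standard optics, not parameters disclosed in this patent.

```python
# Hedged illustration of Fabry-Perot passband tuning (air gap, normal incidence).

def fpi_peak_wavelengths_nm(gap_nm: float, max_order: int = 3) -> list:
    """Transmission-peak wavelengths (nm) for a given mirror gap."""
    return [2.0 * gap_nm / m for m in range(1, max_order + 1)]

def fpi_gap_for_wavelength_nm(wavelength_nm: float, order: int = 1) -> float:
    """Mirror gap (nm) that places a peak of the given order at the wavelength."""
    return order * wavelength_nm / 2.0

# To pass 940 nm (ranging mode) in first order, the mirrors sit ~470 nm apart.
assert fpi_gap_for_wavelength_nm(940.0) == 470.0
assert fpi_peak_wavelengths_nm(470.0)[0] == 940.0
```

Sweeping the gap therefore sweeps the passband, which is how one filter can serve both the fixed-wavelength ranging mode and the visible-to-infrared hyperspectral scan.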
Fig. 1 (b) is a schematic structural diagram of an image sensor according to the first embodiment of the present disclosure. As shown in fig. 1 (b), the image sensor 105 includes a pixel controller 107, a processing circuit 108, and at least one pixel circuit 109. As shown in fig. 3 below, each pixel circuit includes a photoelectric converter, a first pixel branch, and a second pixel branch, and the first pixel branch and the second pixel branch are each electrically connected to the photoelectric converter. The photoelectric converter is a device that converts an optical signal into an electrical signal, for example a photodiode (PD) or an avalanche photodiode (APD). The first pixel branch and the second pixel branch are each connected to the photoelectric converter and are used for processing electrical signals and converting them into digital signals. The first pixel branch may include a digital circuit and an analog circuit, and so may the second pixel branch. The pixel controller 107 is used for selectively controlling the pixel circuit to convert the optical signal into an electrical signal according to whether the camera module is in the ranging mode or the hyperspectral image shooting mode. Optionally, the image sensor further includes a processing circuit 108, and the pixel controller 107 controls the pixel circuit to operate in the ranging mode or the hyperspectral image shooting mode through the processing circuit 108. Specifically, as shown in fig. 1 (b), the processing circuit controls the switch tubes of the pixel circuit according to an instruction sent by the controller, sending signals that turn the switch tubes on and off. The processing circuit 108 may include row decoders/drivers.
When the pixel circuits are arranged in multiple columns or rows, the processing circuit can control the switch tubes of the pixel circuits by sending switch-tube control signals according to instructions of the pixel controller. Specifically, the processing circuit may send switch-tube control signals row by row, column by column, or to a specific row or column, so as to convert the optical signal into an electrical signal through the pixel circuit. Alternatively, the processing circuit 108 may be a separate module or may be integrated with the pixel controller 107, in hardware or software form, such as a chip or an executable program. In this embodiment, as shown in fig. 3, there may be one or more photoelectric converters. When the TOF camera is an i-TOF camera, the image sensor comprises a plurality of photoelectric converters, each connected to a respective first pixel branch and a respective second pixel branch, for processing the optical signals passing through the optical filter and converting them into electrical signals. When the TOF camera is a d-TOF camera, one or more photoelectric converters may be used, with the same photoelectric converter connected to both the first pixel branch and the second pixel branch. In the case of one photoelectric converter, it is located on the principal optical axis of the optical lens 103; in the case of a plurality of photoelectric converters, one of them is located on the principal optical axis of the optical lens 103. The structures and functions of the first pixel branch and the second pixel branch for i-TOF and d-TOF will be described in detail in the following embodiments.
FIG. 2 is a schematic diagram of the TOF camera in a second camera module according to an embodiment of the present disclosure being an indirect time-of-flight (i-TOF) camera. The camera module in this embodiment includes a light source 201, an optical lens 203, an optical filter 204, an image sensor 205, and a module controller 202. The optical filter 204 is located between the optical lens 203 and the image sensor 205, and may be a MEMS-FPI, a grating, a liquid crystal filter, or a similar structure. The image sensor 205 includes a pixel controller and a plurality of pixel circuits. Each pixel circuit includes a photoelectric converter and pixel branches, with the photoelectric converter electrically connected to the corresponding pixel branches. The specific functions of the components and circuits are described with reference to fig. 1 (a) and the other embodiments.
Fig. 3 is a schematic diagram of a pixel circuit in the image sensor in the camera module according to an embodiment of the present disclosure. The pixel circuit 300 includes a first pixel branch 320 and a second pixel branch 321. The first pixel branch 320 and the second pixel branch 321 are each connected to the photoelectric converter 301; they process the electrical signal from the photoelectric converter 301 and convert it into a digital signal. Each of the first pixel branch 320 and the second pixel branch 321 may include a charge transfer module, a charge collection module, and a charge output module. The charge transfer module transmits the electrical signal converted by the photoelectric converter to the charge collection module; the charge collection module collects the electrical signal over a period of time; and the charge output module converts the electrical signal collected by the charge collection module into the digital signal. The charge transfer module of the first pixel branch 320 comprises the charge switch tube TG0 302; the charge transfer module of the second pixel branch 321 comprises the charge switch tube TG1 303; the charge collection module of the first pixel branch 320 comprises the capacitor FD0 306; the charge collection module of the second pixel branch 321 comprises the capacitor FD1 307; the charge output module of the first pixel branch 320 comprises the analog-to-digital converter ADC0 312; and the charge output module of the second pixel branch 321 comprises the analog-to-digital converter ADC1 313.
It will be appreciated that in some embodiments, the switch tubes of fig. 3 may be formed of N-channel or P-channel metal-oxide-semiconductor field-effect transistors (NMOSFET/NMOS or PMOSFET/PMOS). In addition, the separation of the circuit components described above into their respective modules is for purposes of illustration and discussion only; in particular embodiments, a module may include more or fewer circuit components than listed here, or different circuit components. In one embodiment, a first terminal of the photoelectric converter 301 is connected to a first terminal of the capacitor FD0 306 through the charge switch tube TG0 302; the first terminal of the capacitor FD0 306 is connected to a power supply via the reset switch tube Reset0 304; and the second terminal of the capacitor FD0 306 is connected to the analog-to-digital converter ADC0 312 through the voltage-current conversion switch tube S0 308 and the analog-to-digital conversion switch tube Sel0 310. The first terminal of the photoelectric converter 301 is also connected to a first terminal of the capacitor FD1 307 through the charge switch tube TG1 303; the first terminal of the capacitor FD1 307 is connected to a power supply via the reset switch tube Reset1 305; and the second terminal of the capacitor FD1 307 is connected to the analog-to-digital converter ADC1 313 through the voltage-current conversion switch tube S1 309 and the analog-to-digital conversion switch tube Sel1 311. In the ranging mode, both the first pixel branch 320 and the second pixel branch 321 are enabled. In the hyperspectral image shooting mode, either the first pixel branch 320 or the second pixel branch 321 may be enabled.
When the camera module is in the ranging mode, the charge switch tubes TG0 302 and TG1 303 are turned on and off by two pulses with a phase difference of 180 degrees, so that the charges generated by the photoelectric converter 301 are stored in the capacitors FD0 306 and FD1 307, respectively. According to the capacitances of FD0 306 and FD1 307, the charges are converted into the input voltages of the analog-to-digital converters ADC0 312 and ADC1 313, and the distance between the camera module and the photographed object is obtained from the proportional relationship between the voltages of the first pixel branch 320 and the second pixel branch 321. Specifically, the charge switch tube TG0 302 is turned on simultaneously with the pulse of the emission light source, and remains on for the same duration as the pulse period T0 of the emission light source. At the moment TG0 302 turns off, TG1 303 turns on, likewise for a duration equal to T0. During this window, a pulse of reflected light is received. Fig. 4 is a schematic timing diagram of the signals of the first switch tube, the second switch tube, the incident light, and the reflected light when the pixel circuit in the image sensor in the second camera module according to the embodiment of the present application measures distance. The pulse duration of the reflected light is the sampling time of the image sensor.
During this period, according to the time segments in which the windows of the charge switch tubes TG0 302 and TG1 303 are open, the received optical signal is converted into an electrical signal by the photoelectric converter, and the resulting charges Q1 and Q2 are stored on the capacitors FD0 306 and FD1 307, respectively. Through the voltage-current conversion switch tubes S0 308 and S1 309 and the analog-to-digital conversion switch tubes Sel0 310 and Sel1 311, the charges on the capacitors FD0 306 and FD1 307 are converted into the voltages Vout0 and Vout1 (V = Q/C) at the inputs of the analog-to-digital converters ADC0 312 and ADC1 313, and the distance from the target object to the camera module is calculated using the proportional relationship between the two voltage signals. If the signal strength is relatively weak, the above process is repeated, and the signals of multiple ranging cycles can be integrated.
Vout0 = Q1/Cfd1 = N × Iph × (T0 − Td)/Cfd1 ……… (1)
Vout1 = Q2/Cfd2 = N × Iph × Td/Cfd2 ……… (2)
In formulas (1) and (2), Cfd1 and Cfd2 are the capacitances of FD0 306 and FD1 307; N is the number of integrations; Iph is the photocurrent, i.e. the current through the ADC0 and ADC1 branches; T0 is the pulse period of the emission light source; Td is the delay time of the received optical signal. In the case Cfd1 = Cfd2, the delay time Td can be calculated from formulas (1) and (2) as follows.
Td = {Vout1/(Vout0 + Vout1)} × T0 ……… (3)
The distance L between the target object and the camera can then be calculated from formula (3):
L = 1/2 × C × Td = 1/2 × C × T0 × Q2/(Q1 + Q2) = 1/2 × C × T0 × Vout1/(Vout0 + Vout1)
where C is the speed of light, 3 × 10^8 m/s. It can be understood that the values of Vout1 and Vout0 are proportional to the digital signals converted from the photocurrents by ADC0 and ADC1.
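Formulas (1)-(3) can be sketched numerically. The following illustration is not part of the patent; the names (itof_distance, SPEED_OF_LIGHT) are ours, and Cfd1 = Cfd2 is assumed, as in the derivation of formula (3).

```python
# Illustrative sketch of formulas (1)-(3); names are ours, Cfd1 = Cfd2 assumed.

SPEED_OF_LIGHT = 3e8  # m/s

def itof_distance(vout0: float, vout1: float, t0: float) -> float:
    """Distance from the two tap voltages of one pixel.

    vout0 -- voltage from the FD0 branch (charge Q1, early window)
    vout1 -- voltage from the FD1 branch (charge Q2, late window)
    t0    -- pulse period of the emission light source, in seconds
    """
    td = vout1 / (vout0 + vout1) * t0   # formula (3): delay time
    return 0.5 * SPEED_OF_LIGHT * td    # L = 1/2 * C * Td

# Equal charge on both taps means Td = T0/2; with T0 = 20 ns, L = 1.5 m.
d = itof_distance(1.0, 1.0, 20e-9)  # -> 1.5
```

Repeating the integration N times scales Vout0 and Vout1 equally, so the ratio in formula (3), and hence the distance, is unchanged while the signal-to-noise ratio improves.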
Fig. 5 is a schematic signal timing diagram of a pixel circuit in an image sensor in a camera module according to an embodiment of the present disclosure during ranging. One frame is an image of the perpendicular distance between each point on the target object and the camera. As described above, Vout0 and Vout1 are the voltage signals into ADC0 and ADC1, where each voltage signal is the product of the current converted by the source-follower circuits S0 308 and S1 309 and the resistance in the ADC. The digital signal output from the ADC is processed by a digital signal processor (DSP) or an image signal processor (ISP) to calculate the distances and synthesize an image. The digital signal processor may be integrated in the image sensor, the module controller or the application processor (AP), or may be a separate processor controlled by the AP. Optionally, as shown in fig. 5, before the ranging mode is started, the analog circuit of the photosensor receives the reset signals reset0 and reset1 and the transfer gate signals TG0 and TG1 to reset the capacitors FD0 306 and FD1 307 and the photoelectric converter PD 301, so as to clear the charges remaining in them, and the reset noise at this time is read by sending the ADC selection gate signal to Sel0 and Sel1. Optionally, when the optical signal at the photoelectric converter is weak, the charges may be integrated by sending the signal transfer gate signal multiple times to drive TG0 and TG1 multiple times and accumulate the charges on the capacitors FD0 306 and FD1 307. After the integration is finished, the switch tubes Sel0 and Sel1 are turned on to read the charges for analog-to-digital conversion and eliminate the reset noise, and the converted output signals are obtained from the input signals Vout0 and Vout1 and used for calculating the distance.
When the camera module is in the hyperspectral image shooting mode, as shown in fig. 3, either the first pixel branch 320 or the second pixel branch 321 can be enabled; the circuit operation in the hyperspectral image shooting mode is described below taking the first pixel branch 320 as an example. Unlike the ranging mode, the hyperspectral image shooting mode does not use integration; instead, it accumulates photo-generated charges over a set exposure time. Fig. 6 is a schematic signal timing diagram of a pixel circuit in an image sensor in a camera module according to an embodiment of the present application when capturing hyperspectral images. As shown in figs. 3 and 6, the charge switch tube TG0 302 and the reset switch tube Reset0 304 are turned on, and the charges in the photoelectric converter PD 301 and the capacitor FD0 306 are cleared; when the reset switch tube Reset0 304 is turned off, the exposure starts. After the preset exposure time, the photoelectric converter PD 301 has accumulated a certain charge through photoelectric conversion; the charge switch tube TG0 302, the voltage-current conversion switch tube S0 308 and the analog-to-digital conversion switch tube Sel0 310 are turned on, the accumulated charge is converted into a voltage across the capacitor FD0 306, this voltage is converted into a current through the voltage-current conversion switch tube S0 308 and the analog-to-digital conversion switch tube Sel0 310, and the current is transmitted to the analog-to-digital converter ADC0 312 for analog-to-digital conversion; the magnitude of the obtained digital signal reflects the depth of the hyperspectral image at that pixel.
The light wave wavelength corresponding to each frame of the hyperspectral image is different, so the optical filter needs to be adjusted before each frame is shot so that light of the preset wavelength passes through the optical filter, and analog-to-digital conversion is performed after a certain exposure time. The optical filter is then adjusted again, and the operation is repeated until images at all required spectral wavelengths have been shot.
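The per-frame filter adjustment described above amounts to a simple control loop. The sketch below is illustrative only; the callbacks set_filter_wavelength and expose_and_read are hypothetical stand-ins for the real filter and pixel-branch control.

```python
# Illustrative control loop for per-wavelength hyperspectral capture.
# The two callbacks are hypothetical stand-ins for real hardware control.

def capture_hyperspectral_cube(wavelengths_nm, exposure_s,
                               set_filter_wavelength, expose_and_read):
    """Return {wavelength: frame}, tuning the filter before each frame."""
    cube = {}
    for wl in wavelengths_nm:
        set_filter_wavelength(wl)               # pass only the preset wavelength
        cube[wl] = expose_and_read(exposure_s)  # expose, then A/D convert
    return cube

# Example with stub hardware callbacks:
frames = capture_hyperspectral_cube(
    [450, 550, 650], 0.01,
    set_filter_wavelength=lambda wl: None,
    expose_and_read=lambda t: [[0]],
)
```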
As described above, the control of the hyperspectral image shooting mode is realized by the first pixel branch 320 or the second pixel branch 321 in fig. 3. First, as shown in fig. 6, the residual charges in the photoelectric converter PD 301 and the capacitor FD0 306 are cleared by turning on the charge switch tube TG0 302 and the reset switch tube Reset0 304. The exposure time is controlled by two switchings of the charge switch tube TG0 302; that is, the time between the two square waves acting on TG0 302 is the exposure time of the photoelectric converter. The first turn-off of the charge switch tube TG0 302 marks the start of the exposure time, and the second turn-on of the charge switch tube TG0 302 marks the end of the exposure time, at which point the charges undergo analog-to-digital conversion. Optionally, during the exposure, the capacitor FD0 306 is reset by turning on the reset switch tube Reset0 304; Sel0 is turned on at the moment the reset switch tube Reset0 304 turns off, and the reset noise is read using a down counter. The reset switch tube Reset0 304 and the charge switch tube TG0 302 are MOS tubes, and noise is caused by individual differences between MOS tubes introduced in manufacturing, such as differences in threshold voltage.
When the charge switch tube TG0 302 is turned on for the second time, the exposure time ends, and the charge accumulated on the photoelectric converter is transferred to the capacitor FD0 306. When the charge switch tube TG0 302 is turned off for the second time, the voltage across the capacitor FD0 306 is converted into a current, and the current is transmitted to the analog-to-digital converter ADC0 312 for analog-to-digital conversion; the digital signal is obtained by counting with the up counter, and subtracting the down counter's count from the up counter's count yields a digital signal with the reset noise removed, i.e. the magnitude of the signal detected by the photoelectric converter PD 301 at a certain wavelength. The down counter starts counting when the reset switch tube Reset0 304 is turned off, and stops counting when the ramp signal generated in ADC0 equals the input voltage signal obtained by ADC0. After the charge switch tube TG0 302 is turned on for the second time and the exposure of the photoelectric converter has finished, the up counter counts until the ramp signal generated in ADC0 again equals the input voltage signal obtained by ADC0, at which point the up counter stops counting and Sel0 is turned off. It can be understood that when each pixel circuit corresponds to one photoelectric converter, the signal intensity of each pixel point of each hyperspectral image at a certain wavelength can be obtained through the above process. The digital signals output by ADC0 are processed by the DSP, and the digital signals of all pixel points at different wavelengths are collected and synthesized into an image. The DSP may be integrated in the image sensor unit, the module controller or the AP, or may be a separate processor controlled by the AP.
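The down-counter/up-counter subtraction described above is a form of digital correlated double sampling. A minimal numerical sketch, with idealized counters and the ramp comparison abstracted away (the function name digital_cds is ours):

```python
# Minimal sketch of the up/down-counter reset-noise cancellation: the down
# counter records the reset level, the up counter records reset + signal,
# and their sum (a subtraction) cancels the reset offset. Idealized counters.

def digital_cds(reset_level: int, signal_level: int) -> int:
    """Return the reset-noise-free ADC code.

    reset_level  -- ADC code of the reset (noise) voltage, counted down
    signal_level -- ADC code after exposure (signal + same reset noise),
                    counted up
    """
    down_count = -reset_level     # down counter while reading reset noise
    up_count = signal_level       # up counter while reading the exposed pixel
    return up_count + down_count  # subtraction removes the reset offset

# Reset noise of 7 codes, true signal of 120 codes:
code = digital_cds(reset_level=7, signal_level=127)  # -> 120
```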
Fig. 7 (a) is a schematic flowchart of a control method based on an i-TOF image sensor according to an embodiment of the present application. The process can be implemented based on the image sensor in the camera module shown in fig. 1 (a) -3, and the method can be implemented by the pixel controller, including but not limited to the following steps:
step S701: instructions and parameters are received.
The pixel controller receives instructions from the module controller to control the pixel circuits and configures the corresponding parameters according to the different operating modes. Specifically, the instruction indicates the operating mode of the pixel circuits of the image sensor and the number of measurements in each operating mode; the operating modes comprise a ranging mode and a hyperspectral image shooting mode. The parameters include the start time of the image sensor, the amplification factor of the analog signal, and the like. The start time of the image sensor may be used to control the pixel circuits of the image sensor. In the i-TOF camera module, the number of photoelectric converters of the image sensor is n, and each photoelectric converter corresponds to one pixel point, so the maximum pixel count is n. For the i-TOF camera module, the number of measurements in each operating mode is the number of frames measured in that operating mode.
Step S702: the pixel circuits in the image sensor are controlled according to the indicated mode and parameters.
Step S702 (a): in the ranging mode, the image sensor is controlled to convert the optical signal into a first electrical signal carrying information for obtaining the distance from the image sensor to the target object.
In the ranging mode, the pixel controller controls the image sensor to convert the optical signal into an electrical signal, obtains a digital signal by integrating the electrical signal, and calculates the distance from the target object to the lens module according to the digital signal. The pixel controller of the image sensor may be a separate entity or integrated with the processing circuit, and may take the form of hardware, such as a chip, or software, such as an executable program. The processing circuit may comprise a row decoder/driver. The pixel controller controls the image sensor according to the received start and end times, the integration time and the amplification factor of the analog signal. Specifically, first, the pixel controller in the image sensor sends the reset signals reset0 and reset1 and the signal transfer gate signals TG0 and TG1 to reset the capacitors FD0 and FD1 and the photoelectric converter, clearing their residual charges. Then the pixel controller sends the ADC0 and ADC1 selection gate signals to the switch tubes Sel0 and Sel1 to read the reset noise. The pixel controller alternately sends the signal transfer gate signals to TG0 and TG1 to accumulate charges on the capacitors FD0 and FD1. After the integration is finished, the pixel controller sends the ADC selection gate signals to turn on the switch tubes Sel0 and Sel1, the voltage analog signals on the capacitors FD0 and FD1 are converted into digital signals by ADC0 and ADC1 respectively, and the pixel controller sends the digital signals to the DSP; the DSP determines the time-of-flight (TOF) value of the returned pulse from the obtained digital signals and determines the distance from the target object to the lens module based on the TOF value. The DSP calculates the distance corresponding to each pixel and sends the distances to the application processor.
The application processor composes the obtained distances into an image and sends the image to the display for display. Optionally, the integration time and the number of integrations may be adjusted according to the intensity of the received optical signal.
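The ranging-mode control sequence above (reset, reset-noise read, alternating TG0/TG1 integration, readout) can be sketched as follows. The pixel interface and the stub's numeric behavior are hypothetical abstractions, not the patent's actual signal interface.

```python
# Hedged sketch of the ranging-mode control sequence. StubPixel is a test
# stand-in: constant reset noise, fixed charge per TG pulse.

class StubPixel:
    """Test stub: each TG pulse adds a fixed charge; reads include noise."""
    def __init__(self):
        self.q0 = self.q1 = 0
        self.noise = 5  # reset noise, in ADC codes

    def reset(self):
        self.q0 = self.q1 = 0  # reset0/reset1 + TG0/TG1: clear FD0, FD1, PD

    def pulse(self, gate):
        if gate == "TG0":
            self.q0 += 3  # early-window charge into FD0
        else:
            self.q1 += 1  # late-window charge into FD1

    def read(self, sel):
        charge = self.q0 if sel == "sel0" else self.q1
        return self.noise + charge  # ADC code includes reset noise


def ranging_sequence(pixel, n_integrations):
    pixel.reset()                                            # clear residual charge
    noise0, noise1 = pixel.read("sel0"), pixel.read("sel1")  # read reset noise
    for _ in range(n_integrations):                          # alternate TG0 / TG1
        pixel.pulse("TG0")
        pixel.pulse("TG1")
    vout0 = pixel.read("sel0") - noise0                      # ADC0, noise removed
    vout1 = pixel.read("sel1") - noise1                      # ADC1, noise removed
    return vout0, vout1

v0, v1 = ranging_sequence(StubPixel(), 4)  # -> (12, 4)
```

The noise-corrected pair (vout0, vout1) is what the DSP would feed into the ratio of formula (3) to recover the TOF value.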
Step S702 (b): in the hyperspectral image shooting mode, the pixel controller controls the image sensor to convert the optical signal into a digital signal and obtain a spectral image based on the digital signal.
In the hyperspectral image shooting mode, the module controller may send instructions and parameters to the pixel controller of the image sensor. The pixel controller controls the image sensor to convert the optical signal into an electrical signal, obtains a digital signal within the exposure time by controlling the exposure time, and obtains a spectral image according to the digital signal.
The pixel controller of the image sensor may be a separate module or integrated with a circuit, such as a chip. The module controller transmits the start and end times, the exposure time and the amplification factor of the analog signal to the image sensor. After the pixel controller in the image sensor receives the parameters sent by the module controller, it drives each module of the control circuit. Specifically, first, the residual charges in the photoelectric converter PD 301 and the capacitor FD0 306 are cleared by turning on the charge switch tube TG0 302 and the reset switch tube Reset0 304. The exposure time is controlled by two switchings of the charge switch tube TG0 302; that is, the time between the two square waves acting on TG0 302 is the exposure time of the photoelectric converter. The first turn-off of the charge switch tube TG0 302 marks the start of the exposure time, and the second turn-on of the charge switch tube TG0 302 marks the end of the exposure time, at which point the charges undergo analog-to-digital conversion. Optionally, during the exposure, the capacitor FD0 306 is reset by turning on the reset switch tube Reset0 304; Sel0 is turned on at the moment the reset switch tube Reset0 304 turns off, and the reset noise is read using a down counter. The reset switch tube Reset0 304 and the charge switch tube TG0 302 are MOS tubes, and noise is caused by individual differences between MOS tubes introduced in manufacturing, such as differences in threshold voltage.
When the charge switch tube TG0 302 is turned on for the second time, the exposure time ends, and the charge accumulated on the photoelectric converter is transferred to the capacitor FD0 306. When the charge switch tube TG0 302 is turned off for the second time, the voltage across the capacitor FD0 306 is converted into a current, and the current is transmitted to the analog-to-digital converter ADC0 312 for analog-to-digital conversion; the digital signal is obtained by counting with the up counter, and subtracting the down counter's count from the up counter's count yields a digital signal with the reset noise removed, i.e. the magnitude of the signal detected by the photoelectric converter PD 301 at a certain wavelength. The down counter starts counting when the reset switch tube Reset0 304 is turned off, and stops counting when the ramp signal generated in ADC0 equals the input voltage signal obtained by ADC0. After the charge switch tube TG0 302 is turned on for the second time and the exposure of the photoelectric converter has finished, the up counter counts until the ramp signal generated in ADC0 again equals the input voltage signal obtained by ADC0, at which point the up counter stops counting and Sel0 is turned off. It can be understood that when each pixel corresponds to one photoelectric converter, the signal intensity of each pixel point of each hyperspectral image at a certain wavelength can be obtained through the above process. The digital signals output by ADC0 are processed by the DSP, and the digital signals of all pixels at different wavelengths are collected and synthesized into an image. The DSP may be integrated in the image sensor unit, the module controller or the AP, or may be a separate processor controlled by the AP. Optionally, the first to sixth switch tubes may be MOS tubes.
The third voltage-current conversion switch tube S0 308 and the fifth voltage-current conversion switch tube S1 309 can implement a source follower function; that is, the current through the third voltage-current conversion switch tube S0 308 and the fifth voltage-current conversion switch tube S1 309 is changed by changing their gate voltage.
As shown in figs. 2-7 (b), when the TOF camera in the camera module is an indirect time-of-flight (i-TOF) camera, the embodiment of the present application is described taking as an example a pixel circuit that includes two pixel branches. Optionally, the pixel circuit may include at least two pixel branches, such as two, four, six or eight pixel branches, so as to improve the accuracy of the measured distance in the ranging mode.
Fig. 7 (b) is a schematic flowchart of a control method based on an i-TOF camera module according to an embodiment of the present disclosure. This process can be implemented based on the camera module shown in fig. 1 (a) -3, and the method can be implemented by a module controller, including but not limited to the following steps:
step S711: instructions and parameters are received.
The module controller receives instructions from the AP to control the operation of the light source, the image sensor and the optical filter, and configures the corresponding parameters according to the different operating modes. Specifically, the instruction indicates the operating mode of the camera module and the number of measurements in each operating mode; the operating modes comprise a ranging mode and a hyperspectral image shooting mode. The parameters comprise the control parameters of each main part of the camera module, such as the wavelength of the light source, the start time of the image sensor, the amplification factor of the analog signal and the wavelength transmitted by the optical filter. The number of light sources may be one or more; when there is more than one light source, the parameters sent by the application processor may include the number of light sources, because different light sources may correspond to different wavelengths. The start time of the image sensor may be used to control the pixel circuits of the image sensor. In the i-TOF camera module, the number of photoelectric converters of the image sensor is n, and each photoelectric converter corresponds to one pixel point, so the maximum pixel count is also n. The wavelength transmitted by the optical filter may be sent to the module controller directly by the AP, or the application processor may send a wavelength-controlling parameter to the module controller according to the specific type of the optical filter; for example, a MEMS-FPI can adjust the transmitted wavelength by voltage, and the application processor can send the voltage value to the module controller directly.
Optionally, the number of measurements in each operating mode may be preset by the application processor, for example via an instruction input by a user; or the number of measurements in each operating mode can be adjusted by analyzing the received distance and hyperspectral image results. For the i-TOF camera module, the number of measurements in each operating mode is the number of frames measured in that operating mode.
Step S712: and controlling the optical filter and the light source according to the instruction and the parameters.
Step S712 (a): the wavelength of the light source is adjusted according to the mode indicated by the instruction.
Specifically, the wavelength of the light wave of the light source is adjusted according to the operating mode, so that the light wave emitted by the light source, after irradiating the target object and being reflected by it, can pass through the optical filter and reach the image sensor. When the indicated mode is the ranging mode, the wavelength of the light source ranges from 810 nm to 940 nm. In one embodiment, the wavelength of the light source can be set at 940 nm; since the 940 nm component of sunlight is easily absorbed by moisture in the atmosphere, 940 nm light waves in the atmosphere cause little interference to the camera module, so a 940 nm light source is well suited to outdoor use. In another embodiment, the wavelength of the light source can be set at 810 nm or 850 nm, because the quantum efficiency of a silicon-based image sensor at 810 nm or 850 nm is higher than at 940 nm. Therefore, in the ranging mode, the light source wavelength can be selected according to whether the specific application scenario is indoors or outdoors. When the indicated mode is the hyperspectral image shooting mode, the wavelength range used is wide, spanning visible to infrared light depending on the application and the wavelength characteristics of the target object. In the hyperspectral image shooting mode, the wavelength range and the wavelength interval of the light source irradiating the target object are adjusted according to the application scenario and the target object. Alternatively, when the wavelengths of interest are in the visible range, the hyperspectral images may be shot from the ambient light reflected off the target object without switching on the light source.
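The indoor/outdoor trade-off above can be condensed into a small selection rule. The function name and the boolean flag are ours; the 940 nm and 850 nm values come from the text (solar rejection outdoors, higher silicon quantum efficiency indoors).

```python
# Sketch of the ranging-mode wavelength choice; names are ours, values
# follow the text above.

def ranging_wavelength_nm(outdoors: bool) -> int:
    """Pick a ranging-mode light-source wavelength for the scene."""
    if outdoors:
        # 940 nm sunlight is largely absorbed by atmospheric moisture,
        # so ambient interference with the camera module is low.
        return 940
    # Indoors, silicon quantum efficiency at 810/850 nm exceeds that at 940 nm.
    return 850

wl = ranging_wavelength_nm(outdoors=True)  # -> 940
```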
Alternatively, the emitting light source may be single or multiple, the module controller may control the switching on and off of the single or multiple light sources according to the operation mode and specific requirements, different light sources may cover the same or different wavelengths, and the switching on and off of the light sources may be selected according to specific applications or operation modes. The emitting light source may be a laser, an LED, a point light source with two-dimensional scanning capability, a sheet light source with scanning capability, etc.
Step S712 (b): and controlling the wavelength of the optical filter according to the instruction and the parameter.
Specifically, when the indicated mode is the ranging mode, the wavelength of the light wave that can pass through the optical filter is adjusted according to the wavelength of the emission light source. The light wave passing through the optical filter is the reflected light of the emission light source after reflection by the target object. For example, when the optical filter is a MEMS-FPI, the distance between the two mirrors of the MEMS-FPI can be controlled by controlling the drive voltage applied to the MEMS, thereby controlling the wavelength of the reflected light that passes through the MEMS-FPI. Optionally, in the ranging mode, the light source has a wavelength in the range of 810 nm to 940 nm. In one embodiment, the wavelength of the light source can be set at 940 nm; since the 940 nm component of sunlight is easily absorbed by moisture in the atmosphere, 940 nm light waves in the atmosphere cause little interference to the camera module, so a 940 nm light source is well suited to outdoor use. In another embodiment, the wavelength of the light source can be set at 810 nm or 850 nm, because the quantum efficiency of a silicon-based image sensor at 810 nm or 850 nm is higher than at 940 nm. Therefore, in the ranging mode, the light source wavelength and the wavelength the optical filter passes can be selected according to whether the specific application scenario is indoors or outdoors. The optical filter may be a MEMS-FPI, a piezoelectric FPI, a grating or a liquid crystal.
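For background on why tuning the mirror gap selects the passed wavelength: a Fabry-Perot cavity such as a MEMS-FPI transmits at wavelengths where m × λ = 2 × n × d (mirror gap d, gap refractive index n, interference order m). This relation is standard optics rather than a statement from the patent; the function name is ours, and the voltage-to-gap mapping of a real MEMS-FPI is device-specific and omitted.

```python
# Background sketch (not from the patent): first-order Fabry-Perot
# transmission peak for an idealized cavity at normal incidence.

def fpi_peak_wavelength_nm(gap_nm: float, order: int = 1, n: float = 1.0) -> float:
    """Transmission-peak wavelength: lambda = 2 * n * d / m."""
    return 2.0 * n * gap_nm / order

# A 425 nm air gap passes ~850 nm light in first order:
peak = fpi_peak_wavelength_nm(425.0)  # -> 850.0
```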
When the indicated mode is the hyperspectral image shooting mode, the wavelength range used is wide, spanning visible to infrared light depending on the application and the wavelength characteristics of the target object. In the hyperspectral image shooting mode, the wavelength range and the wavelength interval of the optical filter are adjusted according to the application scenario and the target object. In an embodiment, hyperspectral images corresponding to different wavelengths may be captured with the camera module. For example, when the camera module captures the hyperspectral image corresponding to a certain wavelength (denoted the first wavelength), the processor may adjust the filtering range of the optical filter to one that allows reflected light of the first wavelength to pass, obtaining one frame of hyperspectral image corresponding to the first wavelength. Then, when the camera module shoots the hyperspectral image corresponding to another wavelength (denoted the second wavelength), the processor adjusts the filtering range of the optical filter to one that allows light of the second wavelength to pass, obtaining one frame of hyperspectral image corresponding to the second wavelength. In this way, the camera module can obtain spectral images corresponding to different wavelengths; according to the instruction, multiple frames of hyperspectral images corresponding to multiple wavelengths can be shot. The interval between the first wavelength and the second wavelength can be adjusted according to the user's resolution requirement or the instruction of the application processor.
Step S713: the module controller communicates the indicated mode and parameters to the pixel controller in the image sensor.
Step S713 (a): in the ranging mode, the image sensor is controlled to convert the optical signal into a digital signal to calculate the distance from the target object to the optical lens.
In the ranging mode, the module controller may send instructions and parameters to the pixel controller of the image sensor. The module controller controls the image sensor to convert the optical signal into an electrical signal, obtains a digital signal by integrating the electrical signal, and calculates the distance from the target object to the lens module according to the digital signal. The pixel controller of the image sensor may be a separate entity or integrated with the control circuit, and may take the form of hardware, such as a chip, or software, such as an executable program. The module controller sends the start and end times, the integration time and the amplification factor of the analog signal to the image sensor. After the image sensor receives the parameters sent by the module controller, each module of the control circuit operates. Specifically, first, the pixel controller in the image sensor sends the reset signals reset0 and reset1 and the signal transfer gate signals TG0 and TG1 to reset the capacitors FD0 and FD1 and the photoelectric converter, clearing their residual charges. Then the pixel controller sends the ADC0 and ADC1 selection gate signals to the switch tubes Sel0 and Sel1 to read the reset noise. The pixel controller alternately sends the signal transfer gate signals to TG0 and TG1 to accumulate charges on the capacitors FD0 and FD1. After the integration is finished, the pixel controller sends the ADC selection gate signals to turn on the switch tubes Sel0 and Sel1, the voltage analog signals on the capacitors FD0 and FD1 are converted into digital signals by ADC0 and ADC1 respectively, and the pixel controller sends the digital signals to the DSP; the DSP determines the time-of-flight (TOF) value of the returned pulse from the obtained digital signals and determines the distance from the target object to the lens module based on the TOF value.
The DSP calculates the distance corresponding to each pixel and sends the distances to the application processor. The application processor composes the obtained distances into an image and sends the image to the display for display. Optionally, the integration time and the number of integrations may be adjusted according to the intensity of the received optical signal.
Step S713 (b): in the hyperspectral image shooting mode, the pixel controller controls the image sensor to convert the optical signal into a digital signal and obtain a spectral image based on the digital signal.
In the hyperspectral image shooting mode, the module controller may send instructions and parameters to the pixel controller of the image sensor. The pixel controller controls the image sensor to convert the optical signal into an electrical signal, obtains a digital signal within the exposure time by controlling the exposure time, and obtains a spectral image according to the digital signal.
The pixel controller of the image sensor may be a separate module or integrated with a circuit, such as a chip. The module controller transmits the start and end times, the exposure time and the amplification factor of the analog signal to the image sensor. After the image sensor receives the parameters sent by the module controller, each module of the control circuit operates. Specifically, first, the residual charges in the photoelectric converter PD 301 and the capacitor FD0 306 are cleared by turning on the charge switch tube TG0 302 and the reset switch tube Reset0 304. The exposure time is controlled by two switchings of the charge switch tube TG0 302; that is, the time between the two square waves acting on TG0 302 is the exposure time of the photoelectric converter. The first turn-off of the charge switch tube TG0 302 marks the start of the exposure time, and the second turn-on of the charge switch tube TG0 302 marks the end of the exposure time, at which point the charges undergo analog-to-digital conversion. Optionally, during the exposure, the capacitor FD0 306 is reset by turning on the reset switch tube Reset0 304; Sel0 is turned on at the moment the reset switch tube Reset0 304 turns off, and the reset noise is read using a down counter. The reset switch tube Reset0 304 and the charge switch tube TG0 302 are MOS tubes, and noise is caused by individual differences between MOS tubes introduced in manufacturing, such as differences in threshold voltage.
When the charge switch tube TG0 302 is turned on for the second time, the exposure time ends, and the charge accumulated on the photoelectric converter is transferred to the capacitor FD0 306. When the charge switch tube TG0 302 is turned off for the second time, the voltage across the capacitor FD0 306 is converted into a current, and the current is transmitted to the analog-to-digital converter ADC0 312 for conversion into a digital signal; the digital signal is obtained by counting with the up counter, and subtracting the down counter's count from the up counter's count yields a first digital signal, which is a digital signal with the reset noise removed, i.e. the magnitude of the signal detected by the photoelectric converter PD 301 at a certain wavelength. The down counter starts counting when the reset switch tube Reset0 304 is turned off, and stops counting when the ramp signal generated in ADC0 equals the input voltage signal obtained by ADC0. After the charge switch tube TG0 302 is turned on for the second time and the exposure of the photoelectric converter has finished, the up counter counts until the ramp signal generated in ADC0 again equals the input voltage signal obtained by ADC0, at which point the up counter stops counting and Sel0 is turned off. It can be understood that when each pixel corresponds to one photoelectric converter, the signal intensity of each pixel point of each hyperspectral image at a certain wavelength can be obtained through the above process. The digital signals output by ADC0 are processed by the DSP, and the digital signals of all pixels at different wavelengths are collected and synthesized into an image. The DSP may be integrated in the image sensor unit, the module controller or the AP, or may be a separate processor controlled by the AP.
Optionally, the charge switch tube, the voltage-to-current switch tube, and the analog-to-digital conversion switch tube may be MOS tubes. The voltage-to-current conversion switch tubes S0 and S1 309 can implement a source-follower function, that is, a change in voltage produces a corresponding change in current.
As shown in fig. 2-7 (b), when the TOF camera in the camera module is an indirect time-of-flight (i-TOF) camera, the embodiment of the present application is described by taking as an example a pixel circuit that includes two pixel branches. Optionally, the pixel circuit may include at least two pixel branches, for example two, four, six, or eight pixel branches, so as to improve the accuracy of the measured distance in the ranging mode.
FIG. 8 is a schematic diagram of camera module three according to an embodiment of the present disclosure, in which the TOF camera is a direct time-of-flight (d-TOF) camera. The camera module includes a light source 801, an optical lens 803, an optical filter 804, an image sensor 805, a MEMS scanning mirror 811, and a module controller 802. The optical filter 804 is located between the optical lens 803 and the image sensor 805, and may be a MEMS-FPI or an optical filter based on a structure such as a grating or a liquid crystal. Unlike the camera module integrated based on the i-TOF camera in fig. 2, the d-TOF integrated camera module 800 provided by this embodiment performs photoelectric conversion using an image sensor 805 with a single pixel circuit or a plurality of pixel circuits, combined with the scanning of the MEMS scanning mirror 811, to acquire a full-range distance and a hyperspectral image. In one embodiment, light emitted from the light source 801 passes through the prism 812 to the MEMS scanning mirror 811, is reflected by the MEMS scanning mirror 811 onto the target object, is reflected by the target object, and is then filtered by the optical lens 803 and the optical filter 804 before reaching the image sensor 805 for photoelectric conversion. The image sensor of the d-TOF integrated camera module comprises at least one photoelectric converter, and each photoelectric converter is connected to one corresponding pixel circuit.
Based on the above structure, when the camera module 800 is in the distance measurement mode or the hyperspectral shooting mode, the module controller 802 first adjusts the filtering range of the MEMS-FPI according to the required wavelength and adjusts the position of the optical lens 803. The MEMS scanning mirror then starts scanning, the photoelectric converter converts the received optical signal into an electrical signal, the processed electrical signal can be stored in the memory, and a frame of spectral image is then synthesized by the image processor. After the MEMS scanning mirror 811 finishes scanning one frame, the above process is repeated until all frames have been scanned, where the completion of one rotation period by the MEMS scanning mirror 811 can be regarded as the completion of one frame of scanning.
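The per-wavelength, per-frame scan sequence above can be sketched as a control loop. All hardware operations below (`set_filter`, `read_pixel`) are hypothetical stand-ins for module-controller and pixel-circuit operations, not an actual API:

```python
# One hyperspectral frame per wavelength: tune the MEMS-FPI passband, sweep
# the MEMS scanning mirror through one rotation period, and collect one
# pixel reading per mirror angle.
def capture_cube(wavelengths_nm, mirror_angles, set_filter, read_pixel):
    cube = []
    for wl in wavelengths_nm:
        set_filter(wl)                                      # adjust MEMS-FPI
        frame = [read_pixel(wl, a) for a in mirror_angles]  # one frame
        cube.append(frame)
    return cube

# Usage with trivial stand-ins: 2 wavelengths x 4 mirror steps
cube = capture_cube([850, 940], range(4),
                    set_filter=lambda wl: None,
                    read_pixel=lambda wl, a: wl + a)
assert len(cube) == 2 and cube[1][3] == 943
```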
The camera module 800 can also obtain the distance or spectrum information of a designated point on the object to be photographed. In this case, after the MEMS scanning mirror 811 is aligned with the designated point, it does not rotate, and the module controller 802 adjusts the wavelength of the light source, the filtering range of the MEMS-FPI, and the position of the optical lens 803 according to the required wavelengths. In this way, the camera module 800 obtains the distance or spectrum information of the designated point on the target object.
The micro-actuation structure of the MEMS scanning mirror 811 may be one or a combination of several of an electrostatic micro-actuator, a piezoelectric micro-actuator, a thermal micro-actuator, an electromagnetic micro-actuator, and a shape memory alloy micro-actuator, which is not limited in this embodiment.
Fig. 9 is a schematic diagram of camera module four according to an embodiment of the present application, in which the TOF camera is a direct time-of-flight (d-TOF) camera. As shown in fig. 9, the camera module 900 may include a fixing member 914, a MEMS scanning mirror 911, an optical lens 903, an optical filter 904, an image sensor 905, and a module controller 902. The fixing member 914 may include a light inlet 913, and the MEMS scanning mirror 911, the optical lens 903, the optical filter 904, and the image sensor 905 are all mounted in the fixing member 914. The position of the MEMS scanning mirror 911 corresponds to the position of the light inlet 913, the optical lens 903 is located between the MEMS scanning mirror 911 and the image sensor 905, the optical filter 904 is located between the optical lens 903 and the image sensor 905, and the MEMS scanning mirror 911, the optical filter 904, and the image sensor 905 are all located on the principal optical axis of the optical lens 903. As described above, in fig. 8 the MEMS scanning mirror is located on the light source side, and the light wave emitted from the light source 801 is projected onto the target object after refraction by the prism 812 and reflection by the MEMS scanning mirror 811. Fig. 9 differs from fig. 8 in that the MEMS scanning mirror 911 is located between the optical lens 903 and the target object; the MEMS scanning mirror 911 adjusts its rotation angle according to the position of the light reflected by the target object so as to guide that light into the optical lens 903 and, via the optical filter 904, to the image sensor 905, which converts the light signal into an electrical signal.
Based on the above structure, when the camera module 900 is in the ranging mode or the hyperspectral shooting mode, the module controller 902 first adjusts the filtering range of the MEMS-FPI according to the required wavelength and adjusts the position of the optical lens 903. The MEMS scanning mirror then starts scanning, the photoelectric converter converts the received optical signal into an electrical signal, the processed electrical signal can be stored in the memory, and a frame of spectral image is then synthesized by the image processor. After the MEMS scanning mirror 911 finishes scanning one frame, the above process is repeated until all frames have been scanned, where the completion of one rotation period by the MEMS scanning mirror can be regarded as the completion of one frame of scanning.
The camera module can also obtain the distance or spectrum information of a designated point on the photographed object. In this case, the MEMS scanning mirror does not rotate after being aligned with the designated point, and the module controller adjusts the filtering range of the MEMS-FPI and the position of the optical lens according to each required wavelength. In this way, the camera module obtains the distance or spectrum information of the designated point on the photographed object.
The micro-actuation structure of the MEMS scanning mirror 911 may be one or a combination of several of an electrostatic micro-actuator, a piezoelectric micro-actuator, a thermal micro-actuator, an electromagnetic micro-actuator, and a shape memory alloy micro-actuator, which is not limited in this embodiment.
Fig. 10 (a) to 10 (d) are schematic structural diagrams of pixel circuits in an image sensor in camera module three or four according to an embodiment of the present disclosure. The pixel circuit 1000 includes a TOF branch 1001, a hyperspectral branch 1002, and a switching circuit 1003. The first ends of the TOF branch 1001 and the hyperspectral branch 1002 are respectively connected to the first end of the photoelectric converter 1005 and the first end of the switching circuit 1003, and one of the TOF branch 1001 and the hyperspectral branch 1002 processes the electrical signal from the photoelectric converter 1005 and converts it into a digital signal. The TOF branch 1001 is used in the ranging mode; the hyperspectral branch 1002 is used in the hyperspectral shooting mode; and the switching circuit 1003 switches conduction between the TOF branch 1001 and the hyperspectral branch 1002 according to the working mode of the camera module. The TOF branch 1001 and the hyperspectral branch 1002 may each include a charge transfer module and a charge output module; the charge transfer module transmits the corresponding electrical signal to the charge output module, and the charge output module converts the electrical signal passing through the charge transfer module into a digital signal.
The switching circuit 1003 comprises a switching circuit switch tube 1008; a first end of the switch tube 1008 is connected to the first end of the TOF branch 1001, the first end of the hyperspectral branch 1002, and the first end of the photoelectric converter 1005, and a second end of the switch tube 1008 is connected to the power supply VDD. Optionally, the switching circuit includes a switch tube 1007, a first end of which is connected to a third end of the switch tube 1008 and a second end of which is connected to ground; the switch tube 1007 is used to control the turning on and off of the switch tube 1008. Optionally, the switch tube 1008 of the switching circuit is a P-channel MOS transistor (P-MOS). When the switch tube 1008 is a P-MOS, the first end of the second photoelectric converter 1005 and the first end of the inverter 1006 are connected to the source of the P-MOS, the drain of the P-MOS is connected to the power supply, and the gate of the P-MOS is connected to the switch tube 1007.
The TOF branch 1001 and the hyperspectral branch 1002 are described separately below. The first ends of the TOF branch 1001 and the hyperspectral branch 1002 are respectively connected to the first end of the photoelectric converter 1005 and the first end of the switching circuit 1003, and the switching circuit 1003 controls, according to the working mode of the camera module, which one of the TOF branch 1001 and the hyperspectral branch 1002 processes the electrical signal from the photoelectric converter 1005 and converts it into a digital signal. In one embodiment, the charge transfer module of the TOF branch 1001 includes an inverter 1006 and a TOF branch switch tube S 1012; the charge output module of the TOF branch 1001 includes an analog-to-digital converter ADC2 1004; the charge transfer module of the hyperspectral branch 1002 includes a hyperspectral branch switch tube T 1010; and the charge output module of the hyperspectral branch 1002 includes an analog-to-digital converter ADC3 1011. A first end of the second photoelectric converter 1005 is connected to a first end of the inverter 1006, and a second end of the inverter 1006 is connected to the analog-to-digital converter ADC2 1004 through the TOF branch switch tube S 1012. The first end of the second photoelectric converter 1005 is connected to the analog-to-digital converter ADC3 1011 through the hyperspectral branch switch tube T 1010. A second end of the second photoelectric converter 1005 is grounded.
When the camera module is in the ranging mode, the TOF branch 1001 is in a working state, and the hyperspectral branch 1002 is in a rest state. Specifically, when the light source emits light waves, the switching circuit switch tube 1008 and the TOF branch switch tube 1012 are turned on, the hyperspectral branch switch tube 1010 is turned off, the first time-to-digital converter TDC in the analog-to-digital converter ADC2 1004 starts counting, and the first end of the inverter 1006 is at a high level. It will be appreciated that the module controller may transmit signals to the light source and the image sensor simultaneously to control the switching of the switch tubes and the counting of the first time-to-digital converter. When the reflected light returns to the second photoelectric converter 1005 and generates a photocurrent, the TOF branch 1001 conducts to Vbd, the first end of the inverter 1006 goes to a low level and its second end to a high level, and the high-level signal is converted into a digital signal by the ADC2 1004, triggering the first time-to-digital converter TDC in the ADC2 1004 to stop counting. The time interval between the start and stop of counting of the first time-to-digital converter TDC is the flight time from the emission of the light wave by the light source to the reception of the reflected light. From this flight time and the speed of light, the distance between the target object and the camera module can be calculated. Optionally, the ADC2 1004 further includes a second time-to-digital converter that starts and stops timing together with the first time-to-digital converter; the measurement frequency of the second time-to-digital converter is higher than that of the first, so its timing is more accurate.
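The distance computation described above (flight time from the TDC count, then distance from the speed of light) can be sketched as follows; the 50 ps TDC period in the example is an assumed value, not taken from the patent:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(tdc_counts: int, tdc_period_s: float) -> float:
    """Distance from round-trip flight time: d = c * t / 2."""
    t_flight = tdc_counts * tdc_period_s  # TDC start-to-stop interval
    return C * t_flight / 2.0

# e.g. 400 counts on an assumed 50 ps TDC -> 20 ns round trip -> ~3 m
print(round(tof_distance_m(400, 50e-12), 3))  # 2.998
```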
When the camera module is in the hyperspectral shooting mode, the hyperspectral branch 1002 is in a working state, and the TOF branch 1001 is in a rest state. Specifically, the hyperspectral branch switch tube 1010 is turned on, the switching circuit switch tube 1008 and the TOF branch switch tube 1012 are turned off, the second photoelectric converter 1005 converts the optical signal into an electrical signal, and the electrical signal is converted into a digital signal by the analog-to-digital converter ADC3 1011.
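The mode-dependent switch settings in the two paragraphs above reduce to simple control logic. The switch names follow the figure labels (R 1008, S 1012, T 1010); representing on/off as booleans is an illustrative assumption:

```python
def branch_switches(mode: str) -> dict:
    """True = switch tube on. Exactly one branch conducts per mode."""
    if mode == "ranging":        # TOF branch works, hyperspectral branch rests
        return {"R_1008": True, "S_1012": True, "T_1010": False}
    if mode == "hyperspectral":  # hyperspectral branch works, TOF branch rests
        return {"R_1008": False, "S_1012": False, "T_1010": True}
    raise ValueError(f"unknown mode: {mode}")

assert branch_switches("ranging")["T_1010"] is False
assert branch_switches("hyperspectral")["T_1010"] is True
```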
Optionally, the charge transfer module of the hyperspectral branch 1002 comprises a first amplifier 1009, the first amplifier 1009 is located between the photoelectric converter 1005 and the analog-to-digital converter ADC3 1011, and the first amplifier 1009 is configured to amplify the electrical signal converted by the photoelectric converter.
Optionally, the analog-to-digital converter ADC3 1011 in the charge output module of the hyperspectral branch 1002 includes a second amplifier, and the second amplifier is configured to amplify the third electrical signal converted by the photoelectric converter.
Optionally, the eighth, ninth, and tenth switching tubes are n-channel MOS transistors (n-MOS).
Fig. 11 is a schematic signal timing diagram of a pixel circuit in an image sensor in camera module three or four according to an embodiment of the present disclosure during ranging. When the ranging mode is started, the light source is triggered to emit a light wave signal, and an ADC input signal is sent at the same time so that the first time-to-digital converter TDC1 in the ADC2 starts timing. When the photoelectric converter 1005 receives the reflected light wave, the TOF branch 1001 conducts to Vbd, the first end of the inverter 1006 goes to a low level and its second end to a high level, and the high-level signal is converted into a digital signal by the ADC2, which stops the counting of the first time-to-digital converter TDC1. From the count of the first time-to-digital converter, the flight time from the emission of the light wave by the light source to the reception of the reflected light wave can be calculated, and from it the distance between the target object and the camera module. In another embodiment, to measure this flight time more accurately, the timing may be performed by a second time-to-digital converter. Each time the MEMS scanning mirror changes its angle by one step, a digital signal corresponding to one pixel point of the target object is obtained; each time the MEMS scanning mirror covers its full scanning angle range, one frame of image is obtained.
Fig. 12 is a signal timing diagram of a pixel circuit in an image sensor in camera module three or four according to an embodiment of the present application in the hyperspectral shooting mode. When the hyperspectral shooting mode is started, the pixel controller sends a low-level signal to the switching circuit switch tube R 1008 and the TOF branch switch tube S 1012 and a high-level signal to the hyperspectral branch switch tube T 1010. When the photoelectric converter 1005 does not receive the reflected light signal 1202, the analog-to-digital converter ADC3 1011 is connected to the power supply, and the power supply voltage is converted into a digital signal by the ADC3 1011; when the photoelectric converter 1005 receives the reflected light signal 1202, it converts the light signal into an electrical signal and transmits it to the ADC3 1011 to be converted into a digital signal. Each time the MEMS scanning mirror changes its angle by one step, a digital signal corresponding to one pixel point of the target object is obtained; each time the MEMS scanning mirror covers its full scanning angle range, one frame of image is obtained. In the hyperspectral shooting mode, after each frame of image is finished, the wavelengths of the light source and the optical filter are adjusted before shooting of the next frame begins. The obtained digital signals are sent to the DSP, the DSP sends the digital signal corresponding to each pixel circuit to the application processor, and the application processor integrates the digital signals corresponding to each pixel point of the target object in a frame into an image and sends the image to the display for displaying.
Fig. 13 (a) is a schematic flowchart of a control method based on a d-TOF image sensor according to an embodiment of the application. This process may be implemented based on the image sensor in the camera modules shown in fig. 8 to 10 (d), and the method may be executed by the pixel controller. It includes, but is not limited to, the following steps:
step S1301: instructions and parameters are received.
The pixel controller receives instructions from the module controller to control the pixel circuits and configures the corresponding parameters according to the different working modes. Specifically, the instruction indicates the working modes of the pixel circuits of the image sensor and the number of measurements in each working mode; the working modes include a ranging mode and a hyperspectral image shooting mode. The parameters include the start time of the image sensor and the amplification factor of the analog signal. The pixel controller may cause the TDC in the ADC to start counting by transmitting an enable signal. In the d-TOF camera module, there are one or more photoelectric converters. When there is only one photoelectric converter, the pixel count of the image is controlled through the scanning angle range and the angular resolution of the MEMS scanning mirror; when there are multiple photoelectric converters, the pixel count of the image is determined by the number of photoelectric converters together with the scanning angle of the MEMS scanning mirror. It can be understood that when the scanning angle of the MEMS scanning mirror is fixed, the greater the number of photoelectric converters, the higher the pixel count of the image. Optionally, the number of measurements in each working mode may be preset by the application processor according to user requirements, or may be adjusted by analyzing the received distance and hyperspectral image results. For a d-TOF camera module, the MEMS scanning mirror completes, within its angular range, the scanning of the number of frames measured for each working mode.
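The pixel-count relation stated above can be written out directly: with a fixed mirror scan range, resolution scales with the number of photoelectric converters and the number of distinct mirror angles (scan range divided by angular resolution). The formula is a plain reading of the paragraph, not a verbatim patent equation:

```python
def image_pixel_count(n_converters: int, scan_range_deg: float,
                      angular_res_deg: float) -> int:
    """Pixels per frame = converters x distinct mirror positions."""
    mirror_positions = round(scan_range_deg / angular_res_deg)
    return n_converters * mirror_positions

# Single converter: resolution is set purely by the mirror scan
assert image_pixel_count(1, 60.0, 0.5) == 120
# Same scan angle, more converters -> higher pixel count
assert image_pixel_count(4, 60.0, 0.5) == 480
```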
Step S1302: the pixel circuits in the image sensor are controlled according to the indicated mode and parameters.
Step S1302 (a): in the ranging mode, the image sensor is controlled to convert the optical signal into a first electrical signal carrying information for obtaining the distance from the image sensor to the target object.
In the ranging mode, the pixel controller of the image sensor controls the pixel circuits in the image sensor according to the received instruction and parameters. The pixel controller of the image sensor may be a separate body or may be integrated with the pixel circuit, and it may be implemented in software or hardware. The module controller sends the start and end times and the amplification factor of the analog signal to the image sensor. After the image sensor receives these parameters from the module controller, each part of the control circuit operates. When the ranging mode is started, the light source is triggered to emit a light wave signal, and at the same time the TOF branch switch tube conduction signal, the ADC input signal, and the TDC1 start-timing clock signal are sent. When the photoelectric converter receives the reflected light wave, the ADC input signal goes high (that is, the inverter input goes low and its output goes high) and the first time-to-digital converter TDC1 stops counting. From the TDC1 count, the flight time from the emission of the light wave by the light source to the reception of the reflected light wave can be calculated, and from it the distance between the target object and the camera module. In another embodiment, to measure this flight time more accurately, the timing may be performed by a second time-to-digital converter TDC2, whose measurement frequency is higher than that of the first time-to-digital converter TDC1. Optionally, the image sensor further includes a processing circuit, which controls the operation of the TOF branch and the hyperspectral branch according to the instruction and parameters sent by the pixel controller. The processing circuit and the pixel controller may be separate bodies or one body.
The processing circuit can operate in different ways according to the number of pixel circuits in the image sensor. Specifically, as shown in fig. 1 (b), the processing circuit controls the switch tubes of the pixel circuits according to the instruction sent by the controller, sending signals to turn the switch tubes off and on. The processing circuit 108 may include row decoders/drivers. When the pixel circuits form multiple columns or rows, the processing circuit can control the switch tubes of the pixel circuits row by row, column by column, or for a specific row or column according to the instruction of the pixel controller. Alternatively, the processing circuit 108 may be a separate module or may be integrated with the pixel controller 107, in hardware or software form, such as a chip or an executable program.
Step S1302 (b): in the hyperspectral shooting mode, the pixel controller controls the image sensor to convert the optical signal into a digital signal, and a spectral image is obtained based on the digital signal.
In the hyperspectral shooting mode, the pixel controller of the image sensor receives the instruction and parameters. The pixel controller may be a separate body or may be integrated with the pixel circuit, and it may be implemented in software or hardware. Specifically, the parameters received by the image sensor include the start and end times, the exposure time, and the amplification factor of the analog signal. After receiving these parameters, the image sensor controls each part of the hyperspectral branch. When the hyperspectral mode is started, the pixel controller sends a low-level signal to the switching circuit switch tube R 1008 and the TOF branch switch tube S 1012 and a high-level signal to the hyperspectral branch switch tube T 1010. When the photoelectric converter 1005 does not receive the reflected light 1202, the analog-to-digital converter ADC3 is connected to the power supply and converts the power supply voltage into a digital signal; when the photoelectric converter 1005 receives the reflected light, it converts the optical signal into an electrical signal and transmits it to the ADC3 to be converted into a digital signal. Each time the MEMS scanning mirror changes its angle by one step, a digital signal corresponding to one pixel point of the target object is obtained; each time the MEMS scanning mirror covers its full scanning angle range, one frame of image is obtained. In the hyperspectral shooting mode, after each frame of image is finished, the wavelengths of the light source and the optical filter are adjusted before shooting of the next frame begins.
The obtained digital signals are sent to a DSP, the DSP sends the digital signals corresponding to each pixel circuit to an application processor, and the application processor integrates the obtained digital signals corresponding to each pixel point of a target object in a frame into an image and sends the image to a display for displaying.
Optionally, the application processor may also send instructions to the module controller to adjust the number of measurements and the parameters of the ranging mode and the hyperspectral mode by analyzing the received distance and hyperspectral image results. The module controller sends the received instructions and parameters to the pixel controller of the image sensor. Alternatively, the application processor, the module controller, and the pixel controller may be separate entities, or two of them may be integrated together, or all three may be integrated into one entity; the control method is adapted accordingly to the degree of integration.
Fig. 13 (b) is a schematic flowchart of a control method based on a d-TOF camera module according to an embodiment of the present disclosure. The process can be implemented based on the camera module shown in fig. 8 or 9. As shown in fig. 13 (b), the method mainly includes the following steps:
s1301: instructions and parameters are received.
The module controller receives instructions from the application processor to control the light source, the MEMS scanning mirror, the image sensor, and the optical filter, and configures the corresponding parameters according to the different working modes. Specifically, the instruction indicates the working modes of the camera module and the number of measurements in each working mode; the working modes include a ranging mode and a hyperspectral image shooting mode. The parameters include the control parameters of each main part of the camera module, such as the wavelength of the light source; the angle range and angular resolution of the MEMS scanning mirror; the start time of the image sensor and the amplification factor of the analog signal; and the wavelength transmitted by the optical filter. Optionally, the module controller directly receives the scanning angle range of the MEMS scanning mirror and the pixel count from the application processor, and converts the received pixel information into the corresponding angular resolution, that is, the angle by which the MEMS scanning mirror rotates at each step. Optionally, the module controller directly receives the scanning angle range and a voltage from the application processor, and the obtained voltage parameter can directly set the MEMS driving voltage that controls the rotation angle of the scanning mirror at each step. There may be one or more light sources; when there is more than one, the parameters sent by the application processor may include the number of light sources, because different light sources may correspond to different wavelengths. The module controller may cause the TDC in the ADC to start counting by transmitting an enable signal. In the d-TOF camera module, there are one or more photoelectric converters.
When there is only one photoelectric converter, the pixel count of the image is controlled through the scanning angle range and the angular resolution of the MEMS scanning mirror; when there are multiple photoelectric converters, the pixel count of the image is determined by the number of photoelectric converters together with the scanning angle of the MEMS scanning mirror. It can be understood that when the scanning angle of the MEMS scanning mirror is fixed, the greater the number of photoelectric converters, the higher the pixel count of the image. The wavelength transmitted by the optical filter may be transmitted directly to the module controller by the application processor, or the application processor may transmit a parameter controlling the wavelength to the module controller according to the specific type of optical filter; for example, the MEMS-FPI adjusts the transmitted wavelength by voltage, so the application processor may directly transmit the voltage value to the module controller. Optionally, the number of measurements in each working mode may be set in advance by the application processor, or may be adjusted by analyzing the received distance and hyperspectral image results. For a d-TOF camera module, the MEMS scanning mirror completes, within its angular range, the scanning of the number of frames measured for each working mode.
Step S1302: and controlling the optical filter, the light source and the MEMS scanning mirror according to the instructions and the parameters.
Step S1302 (a): the wavelength of the light source is adjusted according to the mode indicated by the instruction.
Specifically, the wavelength of the light wave of the light source is adjusted according to the working mode, so that the light wave emitted by the light source, after irradiating the target object and being reflected by it, can pass through the optical filter and reach the image sensor. When the indicated mode is the ranging mode, the wavelength of the light source ranges from 810 nm to 940 nm. In one embodiment, the wavelength of the light source can be set at 940 nm: because 940 nm sunlight is easily absorbed by moisture in the atmosphere, ambient 940 nm light causes little interference to the camera module, so a 940 nm light source is suitable outdoors. In another embodiment, the wavelength of the light source can be set at 810 nm or 850 nm, because the quantum efficiency of a silicon-based image sensor at 810 nm or 850 nm is higher than at 940 nm. Therefore, in the ranging mode, the light source wavelength can be selected according to whether the specific application scene is indoors or outdoors. When the indicated mode is the hyperspectral shooting mode, the usable wavelength range is wide, spanning from visible light to infrared light depending on the application and the spectral characteristics of the target object. In the hyperspectral shooting mode, the wavelength range and wavelength interval of the light irradiating the target object are adjusted according to the application scene and the target object. Alternatively, when the wavelengths of interest are in the visible range, the hyperspectral shooting mode may use the reflection of ambient light from the target object without switching on the light source.
Alternatively, there may be a single light source or multiple light sources. The module controller may switch the light source or light sources on and off according to the working mode and specific requirements; different light sources may cover the same or different wavelengths, and which light sources are switched on or off may be selected according to the specific application or working mode. The emitting light source may be a laser, an LED, a point light source with two-dimensional scanning capability, a sheet light source with scanning capability, or the like.
Step S1302 (b): control the wavelength of the optical filter according to the instruction and the parameters.
Specifically, when the indicated mode is the ranging mode, the wavelength of the light wave that can pass through the optical filter is adjusted according to the wavelength of the emission light source; the light wave passing through the optical filter is the light of the emission light source reflected by the target object. For example, when the optical filter is a MEMS-FPI, the distance between the two mirrors of the MEMS-FPI can be set by controlling the voltage applied to the MEMS, thereby controlling the wavelength of the reflected light that passes through the MEMS-FPI. Optionally, in the ranging mode, the light source has a wavelength in the range of 810nm-940nm. In one embodiment, the wavelength of the light source may be set to 940nm: because sunlight at 940nm is largely absorbed by moisture in the atmosphere, ambient 940nm light causes little interference to the camera module, so a 940nm light source is well suited for outdoor use. In another embodiment, the wavelength of the light source may be set to 810nm or 850nm, because the quantum efficiency of a silicon-based image sensor is higher at 810nm or 850nm than at 940nm. Therefore, in the ranging mode, the light source wavelength and the passband of the optical filter can be selected according to whether the specific application scene is indoors or outdoors. Alternatively, the optical filter may be a MEMS-FPI, a piezoelectric FPI, a grating, a liquid crystal, or the like.
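The mirror-gap-to-passband relationship of a Fabry-Perot filter can be sketched from its standard resonance condition, 2·n·d·cos(θ) = m·λ; with an air gap (n = 1) at normal incidence this reduces to d = m·λ/2. The helper below is a simplified sketch under those assumptions, not the patent's actual voltage-to-gap calibration:

```python
def fpi_gap_for_wavelength(wavelength_nm: float, order: int = 1) -> float:
    """Mirror gap (nm) whose m-th-order transmission peak sits at the given wavelength."""
    # Resonance: 2 * n * d * cos(theta) = m * lambda; here n = 1, theta = 0.
    return order * wavelength_nm / 2.0

def fpi_passband_nm(gap_nm: float, order: int = 1) -> float:
    """Inverse relation: peak transmission wavelength (nm) for a given mirror gap."""
    return 2.0 * gap_nm / order
```

A ranging mode at 940nm would thus target roughly a 470nm first-order gap; the mapping from drive voltage to gap is device-specific and is not modeled here.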
When the indicated mode is the hyperspectral shooting mode, the wavelength range used is wide, spanning from visible light to infrared light depending on the application and on the spectral characteristics of the target object. In this mode, the wavelength range and the wavelength interval of the optical filter are adjusted according to the application scene and the target object. In an embodiment, the camera module may be used to capture hyperspectral images corresponding to different wavelengths. For example, when the camera module captures a hyperspectral image corresponding to a certain wavelength (denoted as the first wavelength), the processor adjusts the passband of the optical filter to a range that allows reflected light of the first wavelength to pass through; when the MEMS scanning mirror finishes scanning every angle, one frame of hyperspectral image corresponding to the first wavelength is obtained. Then, when the camera module shoots a hyperspectral image corresponding to another wavelength (denoted as the second wavelength), the processor adjusts the passband of the optical filter to a range allowing light of the second wavelength to pass through, and after the MEMS scanning mirror again scans every angle, one frame of hyperspectral image corresponding to the second wavelength is obtained. In this way, the camera module obtains spectral images corresponding to different wavelengths; according to the instruction, multiple frames of hyperspectral images corresponding to multiple wavelengths can be shot. The interval between the first wavelength and the second wavelength can be adjusted according to the user's resolution requirement or the instruction of the application processor.
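The per-wavelength frame capture just described amounts to a tune-then-scan loop. The sketch below is illustrative; `set_filter` and `scan_frame` stand in for the filter tuning and full MEMS scan of the patent and are assumed interfaces, not the real driver API:

```python
def capture_hyperspectral_cube(set_filter, scan_frame,
                               start_nm: int, stop_nm: int, step_nm: int):
    """Tune the filter to each wavelength, scan one full frame, collect (wl, frame) pairs.

    step_nm corresponds to the adjustable interval between the first and
    second wavelengths; integer nanometers avoid float-step drift.
    """
    frames = []
    wl = start_nm
    while wl <= stop_nm:
        set_filter(wl)                      # adjust the tunable filter passband
        frames.append((wl, scan_frame()))   # MEMS mirror covers all angles -> one frame
        wl += step_nm
    return frames
```

A smaller `step_nm` yields more spectral bands per cube at the cost of capture time, matching the resolution trade-off the text mentions.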
Step S1302 (c): the MEMS scanning mirror is adjusted according to the mode indicated by the instructions.
When the MEMS scanning mirror works in the mode indicated by the instruction, the emission light source starts to scan the target object through the MEMS scanning mirror. By controlling the rotation angle of the MEMS scanning mirror around its central axis, the light emitted by the light source is directed to each point of the target object and reflected toward the optical lens; the light reaches the image sensor through the optical lens and the optical filter, the image sensor converts the received optical signals into electrical signals, the processed electrical signals are stored in the DSP, and a frame of distance or spectral image is then synthesized. When the MEMS scanning mirror has completely covered the set scanning angle, the scanning of one frame is considered complete. In the hyperspectral mode, after the MEMS scanning mirror finishes scanning one frame, the module controller adjusts the wavelength of the optical filter, and the process is repeated until all frames have been scanned. It will be appreciated that the angular range of the MEMS scanning mirror and the angular resolution of each scan determine the pixel resolution of the camera module. Alternatively, the rotation angle of the MEMS scanning mirror can be controlled by controlling the voltage applied to the MEMS.
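The relationship between scan range, angular step, and frame resolution can be sketched as follows (a hypothetical helper; the endpoint-inclusive convention is an assumption, not stated in the patent):

```python
def frame_resolution(h_range_deg: float, v_range_deg: float,
                     h_step_deg: float, v_step_deg: float):
    """Pixels per frame = scan positions per axis = range / step (+1 for the endpoint)."""
    # round() guards against float error in range/step (e.g. 40/0.1 -> 399.999...).
    cols = round(h_range_deg / h_step_deg) + 1
    rows = round(v_range_deg / v_step_deg) + 1
    return cols, rows
```

For instance, a 40°×30° scan at 0.1° per step would give a 401×301-position frame; halving the step would quadruple the pixel count but also the scan time per frame.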
In step S1303, the module controller communicates the indicated mode and parameters to the pixel controller in the image sensor.
In step S1303 (a), in the ranging mode, the pixel controller in the image sensor controls the photoelectric converter to convert the optical signal into an electrical signal, performs signal processing to obtain a digital signal, and calculates the distance from the target object to the lens module according to the digital signal.
In the ranging mode, the module controller may send instructions and parameters to the pixel controller of the image sensor. The pixel controller of the image sensor may be a separate unit or integrated with the pixel circuit, and it may be implemented in software or hardware. The module controller sends the start and end times and the amplification factor of the analog signal to the image sensor. After the image sensor receives the parameters sent by the module controller, each part of the control circuit begins to work. When the ranging mode is started, the light source is triggered to emit a light-wave signal; at the same time, a conduction signal is sent to the TOF branch switch transistor, and a start-timing signal is sent to the ADC input and the first time-to-digital converter TDC1. When the photoelectric converter receives the reflected light wave, the ADC input signal goes high (i.e., the inverter output signal goes low) and the first time-to-digital converter TDC1 stops counting. From the TDC1 count, the time of flight from emission of the light wave to reception of its reflection can be calculated, and from that the distance between the target object and the camera module. In another embodiment, in order to measure the time of flight more accurately, the time may also be measured by a second time-to-digital converter TDC2, where the measuring frequency of TDC2 is higher than that of TDC1.
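The count-to-distance conversion follows the standard direct-TOF relation d = c·t/2, since the light travels to the target and back. The sketch below is a simplified model, assuming the coarse TDC1 count and an optional fine TDC2 count simply add into one time of flight; the real coarse/fine combination logic of the patent's circuit is not specified here:

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def distance_from_tdc(coarse_counts: int, coarse_period_s: float,
                      fine_counts: int = 0, fine_period_s: float = 0.0) -> float:
    """Distance (m) from TDC counts: round-trip time of flight halved times c."""
    tof_s = coarse_counts * coarse_period_s + fine_counts * fine_period_s
    return C_M_PER_S * tof_s / 2.0
```

With a 1 ns TDC1 period, one count corresponds to roughly 15 cm of range, which is why a faster TDC2 is attractive for fine resolution.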
In step S1303 (b), in the hyperspectral shooting mode, the pixel controller in the image sensor controls the photoelectric converter to convert the optical signal into an electrical signal, performs signal processing to obtain a digital signal, and obtains the spectral image according to the digital signal.
In the hyperspectral shooting mode, the module controller may send instructions and parameters to the pixel controller of the image sensor. The pixel controller of the image sensor may be a separate unit or integrated with the pixel circuit, and it may be implemented in software or hardware. The module controller transmits the start and end times, the exposure time, and the amplification factor of the analog signal to the image sensor. After receiving the parameters sent by the module controller, the image sensor controls each part of the pixel circuit to work. When the hyperspectral mode is started, the pixel controller sends a low-level signal to the switching-circuit switch transistor R1008 and the TOF branch switch transistor S1012, and a high-level signal to the hyperspectral branch switch transistor T1010. When the photoelectric converter 1005 does not receive reflected light 1202, the analog-to-digital converter ADC3 is connected to the power supply and converts the supply voltage into a digital signal; when the photoelectric converter 1005 receives the reflected light, it converts the optical signal into an electrical signal, which is transmitted to ADC3 and converted into a digital signal. Each time the MEMS scanning mirror changes by one angle, a digital signal corresponding to one pixel point of the target object is obtained; each time the MEMS scanning mirror covers the full range of scanning angles, one frame of image is obtained. In the hyperspectral shooting mode, each time one frame of image is finished, the wavelengths of the light source and the optical filter are adjusted, and shooting of the next frame then begins.
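The switch levels in this step can be summarized in a small helper. Note the hedges: only the hyperspectral-mode levels (R1008 low, S1012 low, T1010 high) are stated in this step; the ranging-mode levels in the sketch are an assumed inverse for illustration:

```python
def branch_control_signals(mode: str) -> dict:
    """Logic levels the pixel controller drives onto the branch switch transistors."""
    if mode == "hyperspectral":
        # Stated above: R1008 and S1012 low, hyperspectral branch T1010 high.
        return {"R1008": 0, "S1012": 0, "T1010": 1}
    if mode == "ranging":
        # Assumption: the TOF branch is enabled and the hyperspectral branch disabled.
        return {"R1008": 1, "S1012": 1, "T1010": 0}
    raise ValueError(f"unknown mode: {mode}")
```

Keeping the level table in one place mirrors how a pixel controller would decode the mode bit into branch-select lines.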
The obtained digital signals are sent to the DSP; the DSP sends the digital signal corresponding to each pixel circuit to the application processor, and the application processor integrates the digital signals corresponding to each pixel point of the target object in a frame into an image and sends the image to the display for displaying.
Optionally, by analyzing the received distance and hyperspectral image results, the application processor may also send instructions to the module controller to adjust the repetition count and parameters of the ranging mode and the hyperspectral mode.
As shown in fig. 14, a fifth embodiment of the present application further provides an electronic device 1400. The electronic device 1400 includes a sensor 1401, a processor 1402, and a camera module 1403. The camera module 1403 can be implemented by any one of the camera modules of the foregoing embodiments. The sensor is used for acquiring a working mode instruction of the camera module, where the working modes of the camera module include a ranging mode and a hyperspectral image shooting mode. The processor is used for sending instructions and parameters according to the acquired working mode of the camera module to control the light source, the image sensor, and the optical filter, and obtains the distance or hyperspectral information of the target object according to the digital signal converted from the optical signal by the image sensor. The sensor may be a photosensitive sensor, a sound-sensitive sensor, a pressure-sensitive sensor, a temperature-sensitive sensor, or the like, and the working mode instruction is triggered by a change in the information detected by the sensor.
The sensor 1401, the processor 1402, and the camera module 1403 in the electronic device 1400 may be integrated into one processing module, may be separate physical units, or two or more of them may be integrated into one module. The above modules may be implemented in the form of hardware or in the form of software functions. If implemented in the form of software functions, the corresponding program commands are stored in the storage medium provided by the present application.
As described above, some examples of portable versions of electronic devices may include popular consumer electronic gadgets such as mobile devices, cellular phones, smart phones, user equipment (UE), tablets, digital cameras, laptop or desktop computers, car navigation units, machine-to-machine (M2M) communication units, virtual reality (VR) devices or modules, robots, and so forth. On the other hand, some examples of non-portable versions of electronic devices may include arcade gaming machines, interactive video terminals, automobiles with autonomous navigation capabilities, machine vision systems, industrial robots, VR devices, and the like.
Fig. 15 (a) is a schematic flowchart of a control method of an electronic device according to an embodiment of the present application. The process can be implemented based on the camera module shown in fig. 1 (a) -3 and 8-10 (d), and the method can be implemented by an application processor, including but not limited to the following steps:
Step S1501: acquire the working mode information of the camera module.
The working modes of the camera module include a ranging mode and a hyperspectral image shooting mode. The electronic device can acquire the working mode information of the camera module by detecting the user's operation. Optionally, the application processor of the electronic device may also determine that the mode needs to be adjusted according to the obtained image containing the distance information between the target object and the camera module, or the image containing the spectral information of the target object; if the mode needs to be adjusted, the application processor acquires the specific working mode information of the camera module.
Step S1502: send an instruction according to the working mode information of the camera module, where the instruction is used for indicating the working mode of the camera module.
The application processor sends an instruction to the module controller according to the acquired working mode information of the camera module, the instruction indicating the working mode of the camera module. Optionally, along with the instruction, the application processor may send parameters to the module controller according to the working mode information, where the parameters are configured differently for different working modes. The specific parameters include control parameters for each main part of the camera module, such as the wavelength of the light source; the start time of the image sensor and the amplification factor of the analog signal; and the wavelength transmitted by the optical filter. There may be one or more light sources; when there is more than one, the parameters sent by the application processor may include the number of light sources, because different light sources may correspond to different wavelengths. The start time of the image sensor may be used to control the pixel circuits of the image sensor.
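The parameter set listed above can be modeled as a simple record passed from the application processor to the module controller. The field names and units below are assumptions for illustration; the patent does not define a concrete data layout:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModuleParameters:
    """Hypothetical parameter packet for one working mode of the camera module."""
    mode: str                    # "ranging" or "hyperspectral"
    source_wavelength_nm: float  # light source wavelength
    filter_passband_nm: float    # wavelength transmitted by the optical filter
    start_time_us: float         # image sensor start time
    analog_gain: float           # amplification factor of the analog signal
    exposure_time_us: Optional[float] = None  # sent in hyperspectral mode
    num_light_sources: int = 1   # more than one source may cover different bands
```

Grouping the per-mode configuration this way keeps the instruction (mode) and its parameters travelling together, as the text describes.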
Step S1503: convert the optical signal into an electrical signal by controlling the light source, the optical filter, and the image sensor of the camera module according to the instruction.
According to the received instruction for the working mode of the camera module, the module controller controls the light source, the optical filter, and the image sensor of the camera module for the specific working mode, so that the optical signal reflected by the target object passes through the optical filter and is converted by the image sensor into an electrical signal. When the working mode is the ranging mode, the camera module is controlled to convert the received optical signal into a first electrical signal, which carries information for obtaining the distance from the image sensor to the target object; when the working mode is the hyperspectral image shooting mode, the camera module is controlled to convert the received optical signal into an electrical signal used for acquiring hyperspectral information of the target object.
Optionally, the module controller may control the light source, the optical filter, and the image sensor according to the received parameters. For the control of the light source, optical filter, and image sensor of the i-TOF camera module according to the received instructions and parameters, see steps S712 and S713 above; for the control of the light source, optical filter, image sensor, and MEMS scanning mirror of the d-TOF camera module, see steps S1302 and S1303 above. It will be appreciated that the application processor and the module controller may be separate entities or may be integrated together, for example as a chip or an executable program.
The image display method according to the embodiment of the present application will be described in detail below with reference to fig. 15 (b). The image display device may be an electronic device having an image display function, and the electronic device may include a display screen and a camera. The electronic device can be a mobile terminal (e.g., a smart phone), a computer, a personal digital assistant, a wearable device, an on-board device, an unmanned aerial vehicle device, an internet of things device or other devices capable of displaying images. The method shown in FIG. 15 (b) includes steps 1511 and 1512, which are each described in detail below.
1511, detect an operation of the user for opening an application, where the operation is used for indicating the working mode of the camera module; the working modes of the camera module include a ranging mode and a hyperspectral image shooting mode.
1512, in response to the operation, display a shooting interface on the display screen, where the shooting interface includes a viewfinder in which an image is displayed. When the working mode is the ranging mode, the image presents distance information of the target object; when the working mode is the hyperspectral image shooting mode, the image presents hyperspectral information of the target object.
In one example, the shooting behavior of the user may include a first operation of the user to open an application; in response to the first operation, a shooting interface is displayed on the display screen. For example, after detecting a first operation of the user clicking an icon of a camera application (APP) on the desktop, the electronic device may start the camera application and display a shooting interface. A viewfinder frame may be included on the camera interface; it is understood that the size of the viewfinder frame may differ between the photographing mode and the video mode. For example, in the photographing mode, the viewfinder frame may occupy part of the screen, while in the video mode it may be the entire display screen. In the preview state, i.e., after the user turns on the camera but before pressing the photo/video button, a preview image can be displayed in real time in the viewfinder.
Fig. 16 (a) shows a Graphical User Interface (GUI) of a cell phone, which is the desktop 1610 of the cell phone. When the electronic device detects an operation of a user clicking an icon 1620 of a camera Application (APP) on the desktop 1610, the camera application may be started to display another GUI, which may be referred to as a shooting interface 1630, as shown in fig. 16 (b). A viewfinder frame 1640 may be included on the capture interface 1630. In the preview state, a preview image can be displayed in real time in the finder frame 1640.
For example, referring to fig. 16 (b), after the electronic device starts the camera, a preview image, which is a 2D image of the target object, may be displayed in the viewfinder frame 1640. A control 1650 for indicating a photograph mode, as well as other shooting controls, may also be included on the shooting interface.
For example, it may be detected that the user indicates the camera module working mode. The camera module is one of the camera modules in figs. 2, 8, and 9, and its working modes include a ranging mode and a hyperspectral image shooting mode. The user invokes the camera module to shoot by indicating the working mode of any camera module. Referring to fig. 16 (c), a shooting option 1660 is included on the shooting interface; after the electronic device detects that the user clicks the shooting option 1660, referring to fig. 16 (d), the electronic device displays a shooting mode interface. After the electronic device detects that the user clicks the ranging mode 1661 on the shooting mode interface, the mobile phone enters the ranging mode.
For example, an operation of the user for instructing shooting 1670 may be detected in the ranging mode.
It should be understood that the operation of the user for instructing the shooting behavior may include pressing a shooting button in the camera of the electronic device, may include the user instructing the electronic device to perform the shooting behavior by voice, or may include the user otherwise instructing the electronic device to perform the shooting behavior. The foregoing is illustrative and does not limit the present application.
In the present application, an operation of the user opening the camera and an operation of the user giving an instruction to the camera are detected, and in response to these operations, an image is displayed in the viewfinder of the display screen. As shown in fig. 16 (e), the image is obtained by processing the distance information of each pixel point of the target object acquired by the camera module. It should be understood that the processed image may be a 3D image of the target object. Alternatively, the processed image may be a depth map of the target object. Optionally, the processed image directly displays the distance information of each pixel point of the target object.
Referring to fig. 17 (a), a shooting option 1660 is included on the shooting interface; after the electronic device detects that the user clicks the shooting option 1660, referring to fig. 17 (b), the electronic device displays a shooting mode interface. After the electronic device detects that the user clicks the hyperspectral mode 1662 on the shooting mode interface, the mobile phone enters the hyperspectral image shooting mode.
For example, an operation of the user for instructing shooting 1671 may be detected in the hyperspectral image shooting mode.
It should be understood that, as described above, the operation of the user for instructing the shooting behavior may include pressing a shooting button in the camera of the electronic device, may include the user instructing the electronic device to perform the shooting behavior by voice, or may include the user otherwise instructing the electronic device to perform the shooting behavior. The foregoing is illustrative and does not limit the present application.
In the present application, an operation of the user opening the camera and an operation of the user giving an instruction to the camera are detected, and in response to these operations, an image is displayed in the viewfinder of the display screen. As shown in fig. 17 (c), the image is obtained by processing the spectral information of each pixel point of the target object acquired by the camera module. It should be understood that the processed image contains information on the chemical composition and material constitution of parts of the target object. Taking shooting a human face as an example, the processed face image can identify spots, moles, acne, and the like on the face. Based on the differences in the substances that make up different human faces, the accuracy of face recognition can be improved.
Fig. 18 (a) and 18 (b) show an internal structure of a vehicle of one embodiment of the invention. As shown in fig. 18 (a) and 18 (b), the display may be disposed in an instrument panel region, a seat region, a pillar trim region, a door region, a center console region, a roof (head lining) region, a sun visor (sun visor) region, or may be implemented in a windshield region or a window region. The above arrangement position of the display is merely an illustration, and does not limit the present application.
In the embodiment of the application, a human-computer interaction interface can be displayed on the display, for example, when the vehicle is in automatic driving, an automatic driving interface can be displayed.
It will be appreciated that a display adapter may drive the display, which is coupled to the system bus. The display may be used for visual display and voice playback of information input by the user or provided to the user, as well as for the various menus of the in-vehicle apparatus. The display may include one or more of a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, a 3D display, and an electronic ink display (e-ink display). The touch panel may overlay the display; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor to determine the type of the touch event, and the processor then provides a corresponding visual output on the display according to the type of the touch event. In addition, the touch panel and the display can be integrated to realize the input and output functions of the vehicle-mounted device.
Further, the display may be implemented by a Head Up Display (HUD). In addition, the display may be provided with a projection module so as to output information by projecting an image on a windshield or a window of a vehicle. The display may comprise a transparent display. The transparent display may be attached to a windshield or window. The transparent display can display a predetermined screen with a predetermined transparency. In order to make the transparent display have transparency, the transparent display may include one or more of a Transparent Film Electroluminescent (TFEL) display, a transparent organic Light-Emitting Diode (OLED), a transparent LCD (liquid crystal display), a transmissive transparent display, and a transparent LED (Light Emitting Diode) display. The transparency of the transparent display can be adjusted.
Fig. 18 (a) shows a Graphical User Interface (GUI) of an automobile, which is the desktop 1800 of the in-vehicle display screen. When the electronic device detects an operation 1810 of the user clicking an icon of a navigation application (APP) on the desktop, the navigation application may be started. Optionally, there may be multiple display screens in the car, placed in other positions such as to the left of the steering wheel, on a door window, or on the back of a seat; the display may also be the display screen of a wearable device or terminal device connected to the car. Another GUI, as shown in fig. 18 (b), may be referred to as the mode selection interface 1820 of the navigation application.
For example, it may be detected that the user indicates the camera module working mode. The working mode of the camera module may be the ranging mode or the hyperspectral image shooting mode. After the electronic device detects that the user clicks the ranging mode 1830 on the navigation mode interface, the camera module of the automobile of this embodiment enters the ranging mode.
It should be understood that the operation of the user for instructing the shooting behavior may include pressing a shooting button in the camera of the electronic device, may include the user instructing the electronic device to perform the shooting behavior by voice, or may include the user otherwise instructing the electronic device to perform the shooting behavior. The foregoing is illustrative and does not limit the present application.
In the present application, an operation of the user opening the navigation application and an operation of the user giving an instruction to the camera are detected; in response to these operations, an image of the real-time road condition is displayed in the viewfinder of the display screen. As shown in fig. 18 (d), the image is obtained by processing the distance information of each pixel point of the target object collected by the camera module. It is understood that the processed image may be a 3D image 1840 of the target object. Alternatively, the processed image may be a depth map of the target object. The processed image information may be used to assist the driver or for automated driving. Alternatively, when used for automated driving, the processed image information need not be displayed on the display screen.
Referring to fig. 18 (b), the in-vehicle application interface includes an option for the navigation application; after the electronic device detects that the user clicks the navigation application option 1810, referring to fig. 18 (e), the electronic device displays a navigation mode interface. After the electronic device detects that the user clicks the navigation mode interface to indicate the hyperspectral mode, the camera module enters the hyperspectral image shooting mode.
For example, an operation of the user for instructing the navigation mode may be detected in the hyperspectral shooting mode.
It should be understood that, as described above, the operation of the user for instructing the shooting behavior may include pressing a shooting button in the camera of the electronic device, may include the user instructing the electronic device to perform the shooting behavior by voice, or may include the user otherwise instructing the electronic device to perform the shooting behavior. The foregoing is illustrative and does not limit the present application.
In the present application, an operation of the user opening the navigation application and an operation of the user indicating the navigation mode are detected, and in response, an image is displayed in the viewfinder of the display screen. As shown in fig. 18 (f), the image is an image 1860 obtained by processing the spectral information of each pixel point of the target object acquired by the camera module. It should be understood that the processed image contains information on the chemical composition and material constitution of parts of the target object. Taking shooting a pedestrian as an example, the processed face image can identify facial features of the person, such as spots, moles, and acne on the face. Based on the differences in the substances that make up different human faces, the accuracy of face recognition can be improved.
Optionally, referring to fig. 18 (a), the in-vehicle application interface includes an option for the navigation application; after the electronic device detects that the user clicks the navigation application option 1810, referring to fig. 18 (g), the electronic device displays a navigation mode interface. After the electronic device detects that the user clicks the navigation mode interface to indicate the combined hyperspectral and ranging mode 1870, the camera module enters the hyperspectral image shooting mode and the ranging mode in turn. When it is detected that the user clicks the hyperspectral and ranging mode, the camera module of figs. 2, 8, and 9 may successively acquire hyperspectral information and distance information and display an image in the viewfinder of the display screen; as shown in fig. 18 (h), the image is obtained by processing the spectral information and the distance information of each pixel point of the target object acquired by the camera module. Optionally, when it is detected that the user clicks the hyperspectral and ranging mode, two camera modules of figs. 2, 8, and 9 may simultaneously acquire hyperspectral information and distance information respectively, that is, one camera acquires hyperspectral information while the other acquires distance information, and an image is displayed in the viewfinder of the display screen; as shown in fig. 18 (h), the image is obtained by processing the spectral information and the distance information of each pixel point of the target object acquired by the camera modules.
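The combined mode ends with one image built from both per-pixel distance and per-pixel spectra. A minimal sketch of that fusion step, assuming the depth map and the spectral frames share the same scan grid (data layout and function name are illustrative, not from the patent):

```python
def fuse_depth_and_spectral(depth, spectral_cube):
    """Combine a depth map with a hyperspectral cube into per-pixel records.

    depth: rows x cols nested lists of distances (m).
    spectral_cube: list of (wavelength_nm, frame) pairs, each frame rows x cols.
    """
    fused = []
    for r, depth_row in enumerate(depth):
        row = []
        for c, d in enumerate(depth_row):
            # One record per pixel: its distance plus its spectrum across wavelengths.
            spectrum = {wl: frame[r][c] for wl, frame in spectral_cube}
            row.append({"distance": d, "spectrum": spectrum})
        fused.append(row)
    return fused
```

When two camera modules capture the modalities simultaneously, a registration step would be needed before this fusion; the sketch assumes already-aligned grids.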
The image display method provided by the embodiment of the present application is described in detail above with reference to fig. 16 (a) to 18 (h), and the device embodiment of the present application is described in detail below with reference to fig. 14. It should be understood that the image display device in the embodiment of the present application may perform the various methods in the embodiments of the present application; for the specific working processes of the image display device, reference may be made to the corresponding processes in the foregoing method embodiments. The image display apparatus in this embodiment is one of the electronic apparatuses in the fifth embodiment. The image display device in this embodiment includes a sensor, a processor, a camera module, and a display screen. The processor and the camera module in the image display device are the same as those of the electronic device in the fifth embodiment of the present application, and are not described herein again. It should be noted that the sensor in this embodiment may be a pressure-sensitive sensor; specifically, the pressure-sensitive sensor is located on the touch screen. Optionally, the sensor in this embodiment may also be a photosensitive sensor, an acoustic sensor, a temperature-sensitive sensor, or the like, and the instruction for the operating mode is issued according to the change of information detected by the sensor.
In the description of the present application, it will be appreciated that the application processor, the pixel controller, and the module controller may each be any conventional processor or controller, such as a commercially available central processing unit (CPU). Alternatively, the processor or controller may be a dedicated device such as an application-specific integrated circuit (ASIC) or another hardware-based processor.
A processor may refer to one or more processors; for example, a processor may include one or more central processing units, or a central processing unit and a graphics processing unit, or an application processor and a co-processor (e.g., a micro-control unit or a neural network processor). When the processor includes a plurality of processors, the plurality of processors may be integrated on the same chip or may be independent chips. A processor may include one or more physical cores, where a physical core is the smallest processing unit.
In the description of the present application, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description; they do not indicate or imply that the referred device or element must have a particular orientation or be constructed and operated in a particular orientation, and thus should not be construed as limiting the present application.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
The above description is only one embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
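As a rough software analogue of the dual-mode pixel control that runs through this application (in the first mode only the TOF branch's switch tube conducts; in the second mode only the hyperspectral branch's), the mode-switching logic can be sketched as follows. The class and method names are illustrative assumptions, not the claimed circuit.

```python
class PixelCircuit:
    """Toy model of the dual-branch pixel circuit: exactly one of the two
    branches is active, selected by the pixel controller's mode setting."""

    def __init__(self):
        self.tof_on = False            # switch tube of the TOF branch
        self.hyperspectral_on = False  # switch tube of the hyperspectral branch

    def set_mode(self, mode):
        if mode == "ranging":           # first mode: TOF branch works
            self.tof_on, self.hyperspectral_on = True, False
        elif mode == "hyperspectral":   # second mode: hyperspectral branch works
            self.tof_on, self.hyperspectral_on = False, True
        else:
            raise ValueError("unknown mode: %s" % mode)

    def convert(self, optical_signal):
        # The active branch converts the optical signal into an electrical
        # signal: the TOF branch yields distance data, the hyperspectral
        # branch routes through its analog-to-digital converter.
        branch = "TOF" if self.tof_on else "hyperspectral ADC"
        return (branch, optical_signal)

px = PixelCircuit()
px.set_mode("ranging")
ranging_out = px.convert("first optical signal")
px.set_mode("hyperspectral")
hyper_out = px.convert("second optical signal")
```

The mutual exclusion of the two switch tubes mirrors the claims below: the first electrical signal is produced only while the TOF branch conducts, and the second only while the hyperspectral branch conducts.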

Claims (41)

1. A camera module is characterized by comprising an optical lens, an optical filter, an image sensor and a module controller; the image sensor comprises a pixel controller and a pixel circuit;
the optical lens is used for projecting the light waves reflected by the target object on the image sensor through the optical filter;
the image sensor and the optical filter are positioned on a principal optical axis of the optical lens;
the module controller is used for controlling the image sensor and the optical filter to work in a first mode or a second mode;
the pixel circuit comprises a first pixel branch and a second pixel branch; when the image sensor and the optical filter are operated in the first mode, the module controller adjusts the wavelength of light waves for passing through the optical filter, so that light waves with the wavelength for ranging pass through the optical filter to obtain a first optical signal, and the pixel controller is used for controlling the first pixel branch and the second pixel branch to operate, so that the image sensor receives the first optical signal filtered by the optical filter and converts the first optical signal into a first electrical signal for obtaining the distance from the image sensor to the target object; when the image sensor and the optical filter work in the second mode, light waves with a wavelength for shooting a hyperspectral image pass through the optical filter to obtain a second optical signal, and the pixel controller is used for controlling any one of the first pixel branch or the second pixel branch to work so that the image sensor receives the second optical signal filtered by the optical filter and converts the second optical signal into a second electrical signal for acquiring a hyperspectral image of the target object;
alternatively,
the pixel circuit comprises a photoelectric converter, a time of flight (TOF) branch and a hyperspectral branch which are connected with the photoelectric converter, and the hyperspectral branch also comprises an analog-to-digital converter, and the photoelectric converter is connected with the analog-to-digital converter of the hyperspectral branch through a switch tube of the hyperspectral branch; when the image sensor and the optical filter operate in the first mode, the module controller adjusts the wavelength of light waves for passing through the optical filter, so that light waves with a wavelength for ranging pass through the optical filter to obtain the first optical signal, the pixel controller is used for controlling the operation of the TOF branch, and the hyperspectral branch does not operate, so that the image sensor receives the first optical signal filtered by the optical filter and converts the first optical signal into a first electrical signal for obtaining the distance from the image sensor to the target object; when the image sensor and the optical filter work in the second mode, light waves with wavelengths for shooting a hyperspectral image pass through the optical filter to obtain a second optical signal, the pixel controller is used for controlling the hyperspectral branch circuit to work, the TOF branch circuit does not work, so that the image sensor receives the second optical signal filtered by the optical filter and converts the second optical signal into a second electric signal for obtaining the hyperspectral image of the target object, a switch tube of the hyperspectral branch circuit is switched on, a switch tube of the TOF branch circuit is switched off, and the second optical signal is converted into the second electric signal through the photoelectric converter and an analog-to-digital converter of the hyperspectral branch circuit.
2. The camera module of claim 1, wherein the module controller is configured to send control parameters to the image sensor to control the image sensor to operate in the first mode or the second mode, wherein the control parameters include at least one of the start and end times of operation of the image sensor, an integration time, an exposure time, and a magnification of an electrical signal.
3. The camera module of claim 1 or 2, further comprising a light source for providing light waves to the target object, the light source emitting light waves of a wavelength for ranging when operating in the first mode; when the light source works in the second mode, the light source emits light waves with the wavelength for shooting the hyperspectral image.
4. The camera module of claim 3, further comprising a micro-electromechanical system (MEMS) scanning mirror centered on a principal optic axis of the optical lens;
and the module controller is also used for controlling the scanning angle range of the MEMS scanning mirror.
5. The camera module according to claim 4, wherein the light wave provided by the light source to the target object is reflected by the target object and then projected onto the image sensor through the MEMS scanning mirror, the optical lens, and the optical filter in sequence.
6. The camera module according to claim 4, further comprising a prism, wherein the light wave provided by the light source passes through the prism to the MEMS scanning mirror, passes through the MEMS scanning mirror to the target object after being reflected by the MEMS scanning mirror, passes through the optical lens and the optical filter after being reflected by the target object, and is projected on the image sensor sequentially.
7. The camera module of claim 1 or 2, wherein the optical filter is one of a MEMS Fabry-Perot interferometer (MEMS-FPI), a grating, or a liquid crystal.
8. An image sensor, comprising a pixel controller and a pixel circuit, wherein:
the pixel controller is used for controlling the pixel circuit to work in a first mode or a second mode;
the pixel circuit comprises a first pixel branch and a second pixel branch; when the pixel circuit works in the first mode, the pixel controller is used for controlling the first pixel branch and the second pixel branch to work so as to receive a first optical signal reflected by a target object and convert the first optical signal into a first electrical signal for acquiring the distance from the pixel circuit to the target object, wherein the first optical signal is obtained by allowing a light wave with a wavelength for ranging to pass through an optical filter when the optical filter works in the first mode; when the pixel circuit works in the second mode, the pixel controller is configured to control any one of the first pixel branch and the second pixel branch to work so as to receive a second optical signal reflected by the target object and convert the second optical signal into a second electrical signal for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained by allowing a light wave with a wavelength for shooting a hyperspectral image to pass through the optical filter when the optical filter works in the second mode;
alternatively,
the pixel circuit comprises a photoelectric converter, and a time of flight (TOF) branch and a hyperspectral branch which are connected with the photoelectric converter; the hyperspectral branch further comprises an analog-to-digital converter, and the photoelectric converter is connected with the analog-to-digital converter of the hyperspectral branch through a switch tube of the hyperspectral branch; when the pixel circuit works in the first mode, the pixel controller is used for controlling the TOF branch to work and the hyperspectral branch not to work, so as to receive a first optical signal reflected by a target object and convert the first optical signal into a first electrical signal for acquiring the distance from the pixel circuit to the target object, wherein the first optical signal is obtained by allowing a light wave with a wavelength for ranging to pass through an optical filter when the optical filter works in the first mode; when the pixel circuit works in the second mode, the pixel controller is used for controlling the hyperspectral branch to work and the TOF branch not to work, so as to receive a second optical signal reflected by the target object and convert the second optical signal into a second electrical signal for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained by allowing a light wave with a wavelength for shooting a hyperspectral image to pass through the optical filter when the optical filter works in the second mode; the switch tube of the hyperspectral branch is switched on, the switch tube of the TOF branch is switched off, and the second optical signal is converted into the second electrical signal through the photoelectric converter and the analog-to-digital converter of the hyperspectral branch.
9. The image sensor of claim 8, wherein the first pixel branch and the second pixel branch are topologically the same.
10. The image sensor of claim 9, wherein the first pixel branch and the second pixel branch each comprise a switching tube;
when the pixel circuit works in the first mode, the pixel controller is used for controlling the switching tubes of the first pixel branch and the second pixel branch to be in a working state;
when the pixel circuit works in the second mode, the pixel controller is used for controlling the switching tube of the first pixel branch circuit to be in an off state, and the switching tube of the second pixel branch circuit to be in a working state; or the switching tube of the second pixel branch is controlled to be in an off state, and the switching tube of the first pixel branch is controlled to be in a working state.
11. The image sensor of claim 10, wherein the switch tube comprises a charge switch tube, a reset switch tube, a voltage-current conversion switch tube, or an analog-to-digital conversion switch tube.
12. The image sensor of claim 8, wherein the TOF branch and the hyperspectral branch each comprise a switching tube;
when the pixel circuit works in the first mode, the pixel controller is used for controlling a switching tube of the TOF branch circuit to be in a working state, and the switching tube of the hyperspectral branch circuit is in a turn-off state;
when the pixel circuit works in the second mode, the pixel controller is used for controlling the switch tube of the hyperspectral branch circuit to be in a working state, and the switch tube of the TOF branch circuit is in a turn-off state.
13. The image sensor of claim 12, wherein the hyperspectral branch comprises an amplifier, the amplifier is located between the photoelectric converter and the analog-to-digital converter of the hyperspectral branch or located within the analog-to-digital converter of the hyperspectral branch, and the amplifier is configured to amplify the electrical signal converted by the photoelectric converter.
14. The image sensor of claim 8, wherein the pixel circuit further comprises a switching circuit, the switching circuit further comprises a first switching tube, the first switching tube is respectively connected to the switching tubes of the TOF branch and the hyperspectral branch, and the pixel controller is configured to control the TOF branch to operate or control the hyperspectral branch to operate through the first switching tube.
15. The image sensor of claim 14, wherein the switching circuit further comprises a second switching transistor for controlling the first switching transistor to turn on and off, and the second switching transistor is for controlling the first switching transistor to turn off when the pixel circuit operates in the second mode; when the pixel circuit works in the first mode, the second switch tube is used for controlling the first switch tube to be conducted.
16. The image sensor of claim 15, wherein the first switch tube is a P-channel MOS transistor, and the second switch tube, the switch tube of the TOF branch, and the switch tube of the hyperspectral branch are N-channel MOS transistors.
17. The image sensor of any one of claims 8 to 16, further comprising a processing circuit, wherein the pixel controller controls the pixel circuit to operate in the first mode or the second mode through the processing circuit.
18. The image sensor of claim 17, wherein the processing circuit is a separate entity or the same entity as the pixel controller.
19. An electronic device comprising a sensor, a processor, and a camera module, wherein,
the camera module comprises an image sensor; the image sensor comprises a pixel controller and a pixel circuit;
the sensor is used for acquiring the working mode information of the camera module;
the processor is used for controlling the camera module to work in a first mode or a second mode according to the working mode information of the camera module;
the pixel circuit comprises a first pixel branch and a second pixel branch; when the camera module works in the first mode, the pixel controller is used for controlling the first pixel branch and the second pixel branch to work so as to receive a first optical signal reflected by a target object and convert the first optical signal into a first electrical signal for acquiring the distance from the camera module to the target object, wherein the first optical signal is obtained by allowing a light wave with a wavelength for ranging to pass through an optical filter when the optical filter works in the first mode; when the camera module works in the second mode, the pixel controller is used for controlling any one of the first pixel branch or the second pixel branch to work so as to receive a second optical signal reflected by the target object and convert the second optical signal into a second electrical signal for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained by allowing a light wave with a wavelength for shooting a hyperspectral image to pass through the optical filter when the optical filter works in the second mode;
alternatively,
the pixel circuit comprises a photoelectric converter, and a time of flight (TOF) branch and a hyperspectral branch which are connected with the photoelectric converter; the hyperspectral branch further comprises an analog-to-digital converter, and the photoelectric converter is connected with the analog-to-digital converter of the hyperspectral branch through a switch tube of the hyperspectral branch; when the camera module works in the first mode, the pixel controller is used for controlling the TOF branch to work and the hyperspectral branch not to work, so as to receive a first optical signal reflected by a target object and convert the first optical signal into a first electrical signal for acquiring the distance from the camera module to the target object, wherein the first optical signal is obtained by allowing a light wave with a wavelength for ranging to pass through an optical filter when the optical filter works in the first mode; when the camera module works in the second mode, the pixel controller is used for controlling the hyperspectral branch to work and the TOF branch not to work, so as to receive a second optical signal reflected by the target object and convert the second optical signal into a second electrical signal for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained by allowing a light wave with a wavelength for shooting a hyperspectral image to pass through the optical filter when the optical filter works in the second mode; the switch tube of the hyperspectral branch is switched on, the switch tube of the TOF branch is switched off, and the second optical signal is converted into the second electrical signal through the photoelectric converter and the analog-to-digital converter of the hyperspectral branch.
20. The electronic device of claim 19, further comprising a light source for providing light waves to the target object, wherein the light source emits light in a wavelength range from visible light to infrared light;
when the camera module works in the first mode, the processor is used for controlling the light source to emit light waves with the wavelength for distance measurement;
when the camera module works in the second mode, the processor is used for controlling the light source to emit light waves with wavelengths for shooting hyperspectral images.
21. The electronic device of claim 19, wherein the camera module comprises a module controller, an optical filter, and an image sensor;
the processor is configured to control the image sensor and the optical filter to operate in the first mode or the second mode through the module controller;
when the image sensor and the optical filter are operated in the first mode, the image sensor receives the first optical signal filtered by the optical filter and converts the first optical signal into the first electrical signal for acquiring the distance from the image sensor to the target object;
when the image sensor and the optical filter operate in the second mode, the image sensor receives the second optical signal filtered by the optical filter and converts the second optical signal into the second electrical signal for obtaining a hyperspectral image of the target object.
22. The electronic device of claim 21, wherein the camera module further comprises a MEMS scanning mirror;
the processor is used for controlling the angle range scanned by the MEMS scanning mirror and the resolution of the angle of each scanning through the module controller.
23. The electronic device of any of claims 19-22, wherein the sensor comprises at least one of a light-sensitive sensor, a sound-sensitive sensor, a pressure-sensitive sensor, and a temperature-sensitive sensor.
24. A control method of a camera module, wherein the camera module comprises an optical filter and an image sensor; the image sensor includes a pixel circuit;
the control method comprises the following steps:
receiving an instruction, wherein the instruction is used for indicating the working modes of the image sensor and the optical filter of the camera module, and the working modes comprise a first mode and a second mode;
controlling the image sensor and the optical filter to work in the first mode or the second mode according to the instruction;
the pixel circuit comprises a first pixel branch and a second pixel branch; when the image sensor and the optical filter work in the first mode, controlling the first pixel branch and the second pixel branch to work so as to control the image sensor to receive a first optical signal filtered by the optical filter and convert the first optical signal into a first electrical signal for acquiring a distance from the image sensor to a target object, wherein the first optical signal is obtained by allowing a light wave with a wavelength for ranging to pass through the optical filter when the optical filter works in the first mode; when the image sensor and the optical filter work in the second mode, controlling any one of the first pixel branch and the second pixel branch to work so as to control the image sensor to receive a second optical signal filtered by the optical filter and convert the second optical signal into a second electrical signal for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained by allowing light waves with a wavelength for shooting a hyperspectral image to pass through the optical filter when the optical filter works in the second mode;
alternatively,
the pixel circuit comprises a photoelectric converter, and a time of flight (TOF) branch and a hyperspectral branch which are connected with the photoelectric converter; the hyperspectral branch further comprises an analog-to-digital converter, and the photoelectric converter is connected with the analog-to-digital converter of the hyperspectral branch through a switch tube of the hyperspectral branch; when the image sensor and the optical filter work in the first mode, controlling the TOF branch to work and the hyperspectral branch not to work, so as to control the image sensor to receive a first optical signal filtered by the optical filter and convert the first optical signal into a first electrical signal for acquiring the distance from the image sensor to a target object, wherein the first optical signal is obtained by allowing a light wave with a wavelength for ranging to pass through the optical filter when the optical filter works in the first mode; when the image sensor and the optical filter work in the second mode, controlling the hyperspectral branch to work and the TOF branch not to work, so as to control the image sensor to receive a second optical signal filtered by the optical filter and convert the second optical signal into a second electrical signal for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained by allowing a light wave with a wavelength for shooting a hyperspectral image to pass through the optical filter when the optical filter works in the second mode; the switch tube of the hyperspectral branch is switched on, the switch tube of the TOF branch is switched off, and the second optical signal is converted into the second electrical signal through the photoelectric converter and the analog-to-digital converter of the hyperspectral branch.
25. The control method according to claim 24, characterized in that the method further comprises:
controlling the wavelength of light waves that can pass through the optical filter according to the instruction, and controlling the light waves with the wavelength for distance measurement to pass through the optical filter when the optical filter works in the first mode; and when the optical filter works in the second mode, controlling light waves with the wavelength for shooting the hyperspectral image to pass through the optical filter.
26. The control method of claim 24, wherein the method further comprises:
controlling the wavelength of light waves emitted by a light source of the camera module according to the instruction; when the camera module works in the first mode, adjusting the light source to emit light waves with the wavelength for distance measurement; and when the camera module works in the second mode, adjusting the light source to emit light waves with the wavelength for shooting the hyperspectral image.
27. The control method according to any one of claims 24 to 26, characterized in that the method further comprises:
and controlling the angle range of the scanning of the MEMS scanning mirror of the camera module and the resolution of the angle of each scanning according to the instruction.
28. The control method according to any one of claims 24 to 26, wherein the instruction is obtained in advance according to a user requirement, or according to the obtained distance from the image sensor to the target object or the obtained spectral information of the target object.
29. A control method of an image sensor, characterized in that the image sensor includes a pixel circuit;
the control method comprises the following steps:
receiving an instruction indicating an operating mode of the pixel circuit of the image sensor, the operating mode including a first mode and a second mode;
controlling the pixel circuit to work in the first mode or the second mode according to the instruction;
the pixel circuit comprises a first pixel branch and a second pixel branch; when the pixel circuit works in the first mode, controlling the first pixel branch and the second pixel branch to work so as to control the pixel circuit to receive a first optical signal reflected by a target object and convert the first optical signal into a first electrical signal for acquiring the distance from the pixel circuit to the target object, wherein the first optical signal is obtained by allowing a light wave with a wavelength for ranging to pass through an optical filter when the optical filter works in the first mode; when the pixel circuit works in the second mode, controlling any one of the first pixel branch and the second pixel branch to work so as to control the pixel circuit to receive a second optical signal reflected by the target object and convert the second optical signal into a second electrical signal for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained by allowing a light wave with a wavelength for shooting a hyperspectral image to pass through the optical filter when the optical filter works in the second mode;
alternatively,
the pixel circuit comprises a photoelectric converter, and a time of flight (TOF) branch and a hyperspectral branch which are connected with the photoelectric converter; the hyperspectral branch further comprises an analog-to-digital converter, and the photoelectric converter is connected with the analog-to-digital converter of the hyperspectral branch through a switch tube of the hyperspectral branch; when the pixel circuit works in the first mode, controlling the TOF branch to work and the hyperspectral branch not to work, so as to control the pixel circuit to receive a first optical signal reflected by a target object and convert the first optical signal into a first electrical signal for acquiring the distance from the pixel circuit to the target object, wherein the first optical signal is obtained by allowing a light wave with a wavelength for ranging to pass through the optical filter when the optical filter works in the first mode; when the pixel circuit works in the second mode, controlling the hyperspectral branch to work and the TOF branch not to work, so as to control the pixel circuit to receive a second optical signal reflected by the target object and convert the second optical signal into a second electrical signal for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained by allowing a light wave with a wavelength for shooting a hyperspectral image to pass through the optical filter when the optical filter works in the second mode; the switch tube of the hyperspectral branch is switched on, the switch tube of the TOF branch is switched off, and the second optical signal is converted into the second electrical signal through the photoelectric converter and the analog-to-digital converter of the hyperspectral branch.
30. The control method according to claim 29, wherein the first pixel branch and the second pixel branch have the same topology.
31. The control method according to claim 30, wherein the first pixel branch and the second pixel branch respectively comprise a switching tube;
when the pixel circuit works in the first mode, controlling the switching tubes of the first pixel branch and the second pixel branch to be in a working state;
when the pixel circuit works in the second mode, the switching tube of the first pixel branch circuit is controlled to be in an off state, and the switching tube of the second pixel branch circuit is controlled to be in a working state; or the switching tube of the second pixel branch is controlled to be in an off state, and the switching tube of the first pixel branch is controlled to be in a working state.
32. The control method according to claim 29, wherein the TOF branch and the hyperspectral branch each comprise a switching tube;
when the pixel circuit works in the first mode, controlling a switching tube of the TOF branch circuit to be in a working state, and controlling a switching tube of the hyperspectral branch circuit to be in a turn-off state;
and when the pixel circuit works in the second mode, controlling the switch tube of the hyperspectral branch circuit to be in a working state, and controlling the switch tube of the TOF branch circuit to be in an off state.
33. The control method according to claim 32, wherein when the pixel circuit operates in the second mode, the switching tube of the hyperspectral branch is turned on, the switching tube of the TOF branch is turned off, and the photoelectric converter of the pixel circuit and the analog-to-digital converter of the hyperspectral branch are controlled to convert the second optical signal into the second electrical signal.
34. The control method according to claim 29, wherein the pixel circuit comprises a switching circuit;
when the pixel circuit operates in the first mode, a first switching tube of the switching circuit is turned on so that the TOF branch operates;
when the pixel circuit operates in the second mode, the first switching tube of the switching circuit is turned off so that the hyperspectral branch operates.
35. The control method according to claim 34, wherein the switching circuit comprises a second switching tube;
when the pixel circuit operates in the first mode, the first switching tube is controlled to turn on by turning on the second switching tube;
when the pixel circuit operates in the second mode, the first switching tube is controlled to turn off by turning off the second switching tube.
36. The control method of any one of claims 29-35, wherein the image sensor further comprises a processing circuit, and wherein the processing circuit controls the pixel circuit to operate in the first mode or the second mode.
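For illustration only (this sketch is not part of the claims), the branch selection described in claims 29 to 36 can be modeled as a two-state controller: in the first mode the TOF branch's switching tube conducts and the hyperspectral branch's tube is off, and in the second mode the roles are reversed so the photoelectric converter feeds the hyperspectral branch's analog-to-digital converter. The class and attribute names below are hypothetical, not taken from the patent:

```python
class PixelCircuit:
    """Illustrative model of the claimed dual-branch pixel circuit.

    first mode  -> TOF branch operates (ranging)
    second mode -> hyperspectral branch operates (hyperspectral imaging)
    """

    MODE_TOF = 1            # first mode: distance measurement
    MODE_HYPERSPECTRAL = 2  # second mode: hyperspectral image capture

    def __init__(self):
        # Both switching tubes start in the off (non-conducting) state.
        self.tof_switch_on = False
        self.hyperspectral_switch_on = False

    def set_mode(self, mode):
        if mode == self.MODE_TOF:
            # Claim 32: TOF switching tube operating, hyperspectral tube off.
            self.tof_switch_on = True
            self.hyperspectral_switch_on = False
        elif mode == self.MODE_HYPERSPECTRAL:
            # Claim 33: hyperspectral tube on, TOF tube off; the photoelectric
            # converter then routes into the hyperspectral branch ADC.
            self.tof_switch_on = False
            self.hyperspectral_switch_on = True
        else:
            raise ValueError("unknown operating mode")
```

The point of the model is that exactly one branch conducts at a time, which is why a single photoelectric converter can be time-shared between ranging and hyperspectral readout.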
37. A control method for an electronic device, wherein the electronic device comprises a camera module, the camera module comprises an image sensor, and the image sensor comprises a pixel circuit;
the method comprises the following steps:
acquiring operating mode information of the camera module of the electronic device, wherein the operating mode comprises a first mode and a second mode;
controlling the camera module to operate in the first mode or the second mode according to the operating mode information of the camera module;
the pixel circuit comprises a first pixel branch and a second pixel branch; when the camera module operates in the first mode, the first pixel branch and the second pixel branch are controlled to operate, so that the camera module receives a first optical signal reflected by a target object and converts the first optical signal into a first electrical signal used for acquiring the distance from the camera module to the target object, wherein the first optical signal is obtained when, with an optical filter operating in the first mode, a light wave with a wavelength for ranging passes through the optical filter; when the camera module operates in the second mode, either one of the first pixel branch and the second pixel branch is controlled to operate, so that the camera module receives a second optical signal reflected by the target object and converts the second optical signal into a second electrical signal used for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained when, with the optical filter operating in the second mode, a light wave with a wavelength for shooting the hyperspectral image passes through the optical filter;
alternatively,
the pixel circuit comprises a photoelectric converter, and a time-of-flight (TOF) branch and a hyperspectral branch which are connected to the photoelectric converter; the hyperspectral branch further comprises an analog-to-digital converter, and the photoelectric converter is connected to the analog-to-digital converter of the hyperspectral branch through a switching tube of the hyperspectral branch; when the camera module operates in the first mode, the TOF branch is controlled to operate and the hyperspectral branch is controlled not to operate, so that the camera module receives a first optical signal reflected by a target object and converts the first optical signal into a first electrical signal used for acquiring the distance from the camera module to the target object, wherein the first optical signal is obtained when, with the optical filter operating in the first mode, a light wave with a wavelength for ranging passes through the optical filter; when the camera module operates in the second mode, the hyperspectral branch is controlled to operate and the TOF branch is controlled not to operate, so that the camera module receives a second optical signal reflected by the target object and converts the second optical signal into a second electrical signal used for acquiring a hyperspectral image of the target object, wherein the second optical signal is obtained when, with the optical filter operating in the second mode, a light wave with a wavelength for shooting the hyperspectral image passes through the optical filter; the switching tube of the hyperspectral branch is turned on, the switching tube of the TOF branch is turned off, and the second optical signal passes through the photoelectric converter and the analog-to-digital converter of the hyperspectral branch and is converted into the second electrical signal.
38. The control method of claim 37, wherein the method further comprises:
when the camera module operates in the first mode, controlling a light source of the electronic device to emit a light wave with the wavelength for ranging;
when the camera module operates in the second mode, controlling the light source of the electronic device to emit a light wave with the wavelength for shooting the hyperspectral image.
39. The control method according to claim 37, characterized in that the method comprises:
controlling, by a module controller of the camera module, the image sensor and an optical filter of the camera module to operate in the first mode or the second mode;
when the image sensor and the optical filter operate in the first mode, controlling the image sensor to receive the first optical signal filtered by the optical filter and convert the first optical signal into the first electrical signal used for acquiring the distance from the image sensor to the target object;
when the image sensor and the optical filter operate in the second mode, controlling the image sensor to receive the second optical signal filtered by the optical filter and convert the second optical signal into the second electrical signal used for acquiring the hyperspectral image of the target object.
40. The control method of claim 39, further comprising:
controlling, by the module controller, the scanning angle range of a MEMS scanning mirror of the camera module and the resolution at each scanning angle.
41. The control method according to any one of claims 37 to 40, wherein the operating mode information of the camera module of the electronic device is obtained through at least one of a light-sensitive signal, a sound-sensitive signal, a pressure-sensitive signal, and a temperature-sensitive signal.
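As a non-claim illustration of the device-level flow in claims 37, 38, and 41, the sketch below has a controller apply the operating-mode information and configure the illumination accordingly, and derive a mode from a sensed signal. The wavelength values, threshold, and signal-to-mode mapping are invented for illustration; the patent does not specify them:

```python
# Hypothetical illumination parameters (not from the patent).
RANGING_WAVELENGTH_NM = 940          # illustrative TOF ranging wavelength
HYPERSPECTRAL_BAND_NM = (400, 1000)  # illustrative hyperspectral band


class CameraModule:
    """Illustrative device-level controller for the two operating modes."""

    FIRST_MODE = "ranging"
    SECOND_MODE = "hyperspectral"

    def __init__(self):
        self.mode = None
        self.light_source_nm = None

    def set_mode(self, mode_info):
        # Claims 37/38: apply the operating-mode information and drive the
        # light source to emit the wavelength matching that mode.
        if mode_info == self.FIRST_MODE:
            self.light_source_nm = RANGING_WAVELENGTH_NM
        elif mode_info == self.SECOND_MODE:
            self.light_source_nm = HYPERSPECTRAL_BAND_NM
        else:
            raise ValueError("unknown operating mode")
        self.mode = mode_info


def mode_from_signals(light=None, sound=None, pressure=None, temperature=None):
    """Claim 41: derive the operating mode from at least one sensed signal.

    The pressure threshold and the mapping below are purely hypothetical,
    chosen only to make the dispatch concrete.
    """
    if pressure is not None and pressure > 0.5:
        return CameraModule.SECOND_MODE
    return CameraModule.FIRST_MODE
```

The design point mirrors claim 38: switching modes is not only a sensor-side change; the illumination source must be re-tuned so that the optical filter, pixel branches, and emitted wavelength stay consistent with one another.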
CN201911256196.XA 2019-12-09 2019-12-09 Image sensor, camera module and control method Active CN113037989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911256196.XA CN113037989B (en) 2019-12-09 2019-12-09 Image sensor, camera module and control method


Publications (2)

Publication Number Publication Date
CN113037989A CN113037989A (en) 2021-06-25
CN113037989B true CN113037989B (en) 2022-11-18

Family

ID=76451216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911256196.XA Active CN113037989B (en) 2019-12-09 2019-12-09 Image sensor, camera module and control method

Country Status (1)

Country Link
CN (1) CN113037989B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115034247B (en) * 2022-08-11 2022-11-08 无锡盈达聚力科技有限公司 Optical information collector and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008283058A (en) * 2007-05-11 2008-11-20 Konica Minolta Holdings Inc Solid-state image pickup element and imaging apparatus using the same
CN104121990A (en) * 2014-07-22 2014-10-29 中国科学院上海光学精密机械研究所 Random grating based compressed sensing broadband hyperspectral imaging system
CN208028993U (en) * 2018-01-31 2018-10-30 深圳市光微科技有限公司 Pixel unit, image sensor chip and imaging system
CN108737753A (en) * 2017-04-21 2018-11-02 迈来芯科技有限公司 Active pixel circuit and its operating method for time-of-flight system
CN110024374A (en) * 2019-02-27 2019-07-16 深圳市汇顶科技股份有限公司 The pixel array and imaging sensor of imaging system and imaging system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6114712A (en) * 1996-10-09 2000-09-05 Symbol Technologies, Inc. One piece optical assembly for low cost optical scanner
KR101448152B1 (en) * 2008-03-26 2014-10-07 삼성전자주식회사 Distance measuring sensor having vertical photogate and three dimensional color image sensor having the same
KR20100018449A (en) * 2008-08-06 2010-02-17 삼성전자주식회사 Pixel array of three dimensional image sensor
KR101484111B1 (en) * 2008-09-25 2015-01-19 삼성전자주식회사 Three dimensional image sensor
US9247109B2 (en) * 2013-03-15 2016-01-26 Samsung Electronics Co., Ltd. Performing spatial and temporal image contrast detection in pixel array
US20140346361A1 (en) * 2013-05-23 2014-11-27 Yibing M. WANG Time-of-flight pixels also sensing proximity and/or detecting motion in imaging devices & methods
US9741755B2 (en) * 2014-12-22 2017-08-22 Google Inc. Physical layout and structure of RGBZ pixel cell unit for RGBZ image sensor
US10412280B2 (en) * 2016-02-10 2019-09-10 Microsoft Technology Licensing, Llc Camera with light valve over sensor array
US10557925B2 (en) * 2016-08-26 2020-02-11 Samsung Electronics Co., Ltd. Time-of-flight (TOF) image sensor using amplitude modulation for range measurement
KR102609464B1 (en) * 2016-10-18 2023-12-05 삼성전자주식회사 The Electronic Device Shooting Image
US10419741B2 (en) * 2017-02-24 2019-09-17 Analog Devices Global Unlimited Company Systems and methods for compression of three dimensional depth sensing
US10520589B2 (en) * 2017-10-16 2019-12-31 Sensors Unlimited, Inc. Multimode ROIC pixel with laser range finding (LRF) capability




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant