CN211352285U - Image forming apparatus with a plurality of image forming units - Google Patents


Info

Publication number
CN211352285U
Authority
CN
China
Prior art keywords
module
detection module
target scene
light
lighting module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201922478342.5U
Other languages
Chinese (zh)
Inventor
马志洁
臧凯
张超
Current Assignee
Shenzhen Adaps Photonics Technology Co ltd
Original Assignee
Shenzhen Adaps Photonics Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Adaps Photonics Technology Co ltd filed Critical Shenzhen Adaps Photonics Technology Co ltd
Priority to CN201922478342.5U
Application granted
Publication of CN211352285U
Legal status: Active
Anticipated expiration

Abstract

The utility model relates to an imaging apparatus comprising a lighting module, a detection module, a driving device and a control module. The lighting module is provided with at least one through hole. The detection module receives light that is emitted from a target scene and passes through the through hole of the lighting module, and also records the light intensity of the target scene. The driving device is electrically connected to the lighting module and the detection module respectively and changes their relative position in a first direction and/or a second direction. The control module is electrically connected to the detection module and the driving device respectively and constructs an image of the target scene according to the relative positions of the lighting module and the detection module in the first direction and/or the second direction and the light intensity recorded by the detection module at each relative position. The image of the target scene can thus be reconstructed from light intensity information and light direction information, with simple data processing and clear imaging.

Description

Image forming apparatus with a plurality of image forming units
Technical Field
The utility model relates to the field of imaging technology, and in particular to an imaging apparatus.
Background
A traditional camera records only light intensity information on a two-dimensional image sensor plane and does not record light direction information, so it can capture only the two-dimensional plane information of a scene: depth information is lost, and the three-dimensional real world cannot be faithfully reproduced. Three-dimensional sensing technology developed to address this. Optical three-dimensional sensing refers to acquiring the three-dimensional information of a scene by optical means, and generally includes active and passive modes. Active schemes include the time-of-flight method and the structured-light method; passive schemes include the binocular vision method and the light field method. Active schemes need an additional infrared light source, which increases the space occupation and power consumption of the device, and are easily affected by ambient light, limiting their use in strong outdoor light. Binocular vision requires scenes with distinct features, needs calibration, and involves a large amount of computation. Existing light field schemes must add a micro-lens array or pinhole array in front of the sensor to acquire the direction of incident light and thereby record light field information.
Summary of the Utility Model
Based on this, it is necessary to provide an imaging apparatus that acquires clear image information of a scene.
An image forming apparatus comprising:
the lighting module is provided with at least one through hole;
the detection module is used for receiving light rays which are emitted from a target scene and pass through the through hole of the lighting module;
the driving device is electrically connected to the lighting module and the detection module respectively and is used for changing the relative position of the lighting module and the detection module in a first direction and/or a second direction;
the detection module is also used for recording the light intensity of the target scene; and
the control module is electrically connected to the detection module and the driving device respectively and is used for constructing an image of the target scene according to the relative positions of the lighting module and the detection module in the first direction and/or the second direction and the light intensity recorded by the detection module at each relative position.
In one embodiment, the control module is further configured to determine whether the number of changes of the relative positions of the lighting module and the detection module in the first direction and/or the second direction reaches a preset number;
If the number of changes of the relative position of the lighting module and the detection module in the first direction and/or the second direction reaches the preset number, the control module further constructs an image of the target scene according to the preset number of relative positions of the lighting module and the detection module in the first direction and/or the second direction and the light intensity recorded by the detection module at each relative position.
In one embodiment, the driving device is further configured to change the relative position of the lighting module and the detection module in a third direction, the third direction being perpendicular to both the first direction and the second direction.
In one embodiment, the imaging apparatus further comprises:
a plurality of optical filters, wherein the optical filters correspond to the through holes one to one, and each optical filter is arranged on its corresponding through hole.
In one embodiment, the imaging apparatus further comprises:
a plurality of optical filters;
the detection module comprises a plurality of pixel units;
the plurality of optical filters correspond to the plurality of pixel units one by one, and each optical filter is arranged on one corresponding pixel unit.
In one embodiment, the imaging apparatus further comprises:
a light source for increasing the brightness of the target scene.
In one embodiment, the detection module comprises a single photon avalanche diode.
In one embodiment, the detection module detects the light intensity by recording the number of times it is triggered by light within a preset time.
In one embodiment, the imaging apparatus further comprises a detection unit, wherein the detection unit is used for receiving light emitted from the environment in which the target scene is located and passing through the through hole of the lighting module, and for recording the ambient light intensity of that environment.
In one embodiment, the detection unit comprises a photodiode, an ambient light sensor, or a single photon avalanche diode.
In one embodiment, the drive device comprises a micro-electromechanical system.
In one embodiment, the light source is a pulsed laser light source.
In one embodiment, the light source is a continuous light source.
An electronic device comprising the above imaging apparatus is also provided.
In the above imaging apparatus and electronic device, the lighting module is provided with a through hole, and the detection module receives light that is emitted from a target scene and passes through the through hole of the lighting module. The driving device changes the relative position of the lighting module and the detection module in the first direction and/or the second direction, and the detection module records the light intensity at each relative position. The control module constructs an image of the target scene according to the relative positions of the lighting module and the detection module in the first direction and/or the second direction and the light intensity recorded by the detection module at each relative position; that is, it reconstructs the image of the target scene from light intensity information and light direction information. The imaging apparatus has a simple structure, simple data processing, and clear imaging.
Drawings
FIG. 1 is a flow diagram of an imaging method in one embodiment;
FIG. 2 is a flow chart of an imaging method in another embodiment;
FIG. 3 is a sub-flow diagram of an imaging method in one embodiment;
FIG. 4 is a schematic view of an imaging device in one embodiment;
FIG. 5 is a diagram illustrating a variation of an imaging view angle of a through hole of a lighting module according to an embodiment;
FIG. 6 is a top view of a detection module in one embodiment;
FIG. 7 is a schematic diagram of a pixel cell in an embodiment;
FIG. 8 is a diagram illustrating a variation in the viewing angle of the detection module due to a variation in the position of the lighting module in an embodiment;
FIG. 9 is a diagram illustrating coordinates of light detected by the detection module according to an embodiment;
FIG. 10 is a schematic diagram illustrating coordinates of light detected by the detecting module in another embodiment;
FIG. 11 is a schematic diagram illustrating coordinates of light detected by the detecting module in another embodiment;
FIG. 12 is a schematic diagram of coordinates of the results recorded by the detection modules of FIGS. 9-11;
FIG. 13 is a diagram illustrating an imaging of a target scene by a via in an embodiment;
FIG. 14 is a diagram illustrating reconstruction of an image of a target scene, in one embodiment.
Detailed Description
To facilitate an understanding of the present application, the present application will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present application are given in the accompanying drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
The light field is a complete description of the light in a scene. Light field information includes both the intensity information and the direction information of the light, and the three-dimensional picture of the scene can be recovered from the light field information.
Referring to fig. 1, an imaging method according to an embodiment of the present application includes the following steps.
Step S1, the detecting module receives the light emitted from the target scene and passing through the through hole of the lighting module, wherein the lighting module is provided with at least one through hole.
In step S2, the detection module records the light intensity of the target scene.
Step S3, determining whether an image of the target scene can be constructed according to the relative position of the lighting module and the detecting module in the first direction and/or the second direction and the light intensity recorded by the detecting module at each relative position; if the image of the target scene cannot yet be constructed, changing the relative position of the lighting module and the detecting module in the first direction and/or the second direction and executing step S1 again.
Referring to fig. 2, in an embodiment, the method between step S2 and step S3 includes:
step S21, the relative position of the lighting module and the detecting module in the first direction and/or the second direction is changed.
Step S22, it is determined whether the number of times of changing the relative position between the lighting module and the detecting module in the first direction and/or the second direction reaches a preset number of times.
If the number of changes of the relative position of the lighting module and the detecting module in the first direction and/or the second direction reaches the preset number, step S3 further includes:
constructing an image of the target scene according to the preset number of relative positions of the lighting module and the detection module in the first direction and/or the second direction and the light intensity recorded by the detection module at each relative position.
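The acquisition loop of steps S1, S2, S21 and S22 can be sketched as follows. This is an illustrative sketch only: the names `capture_light_field`, `move_to` and `read_intensities` are hypothetical interfaces standing in for the driving device and the detection module, not part of the patent.

```python
def capture_light_field(positions, read_intensities, move_to):
    """Record light intensity at a preset number of relative positions.

    positions        -- list of (dx, dy) offsets in the first/second directions
    read_intensities -- callable returning per-pixel intensities at the current offset
    move_to          -- callable that sets the lighting/detection module offset
    """
    frames = {}
    for pos in positions:                 # step S21: change the relative position
        move_to(pos)
        frames[pos] = read_intensities()  # steps S1/S2: receive light, record intensity
    # step S22: the preset number of positions has been reached; the control
    # module can now construct the image of the target scene from `frames`.
    return frames
```

The control module would then reconstruct the image from the returned position-to-intensity mapping.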
If the rough outline of the target scene can be predicted, the number of relative-position changes needed in the first direction and/or the second direction can be estimated in advance, the relative position of the lighting module and the detection module changed accordingly, and the image of the target scene reconstructed.
In an embodiment, the imaging method further includes: changing the relative position of the lighting module and the detection module in a third direction, the third direction being perpendicular to both the first direction and the second direction. Changing the relative position in the third direction changes the focal length of the detection module and hence its field angle, realizing zoom when imaging the target scene.
In one embodiment, the detection module comprises a single photon avalanche diode, and the detection module detects the light intensity by recording the number of times it is triggered within a preset time.
The above-described imaging method further includes, before step S1: detecting the ambient light intensity of the environment in which the target scene is located. The detection module is triggered when it receives photons. In a strong-light environment it is triggered many times within a given period and easily saturates; once saturated, it is no longer triggered by photons and can no longer count them. The time per frame can therefore be reduced when the ambient light intensity is strong and increased when it is weak.
It should be noted that, before the detection module reaches saturation, received photons are absorbed with a certain probability and trigger the single photon avalanche diode. The trigger probability increases with the light intensity, so the brighter the target scene, the more often the detection module is triggered.
Referring to fig. 3, the step of detecting the ambient light intensity of the environment where the target scene is located specifically includes:
step S101, a detection unit receives light rays which are emitted from the environment where the target scene is located and pass through a through hole of a lighting module;
step S102, the detection unit records the ambient light intensity of the environment where the target scene is located.
The detection unit may comprise a photodiode or an ambient light sensor or a single photon avalanche diode.
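The rule of shortening the frame in strong ambient light and lengthening it in weak light can be illustrated with a small sketch. This is not from the patent: the saturation count `max_counts` and the target fill factor `fill` are assumed values, and a simple linear counting model (expected triggers = rate × time) is assumed.

```python
def frame_time(ambient_rate_hz, max_counts=65535, fill=0.5):
    """Pick a frame duration (seconds) so that the expected SPAD trigger
    count at the measured ambient photon rate is fill * max_counts,
    keeping the counter below saturation."""
    if ambient_rate_hz <= 0:
        raise ValueError("ambient rate must be positive")
    return fill * max_counts / ambient_rate_hz
```

With this rule, doubling the measured ambient photon rate halves the chosen frame time.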
In one embodiment, the through holes are circular holes and have a diameter of 1-300 μm.
In the above imaging method, in step S1 the detection module receives light emitted from the target scene and passing through the through hole of the lighting module, the lighting module being provided with at least one through hole; in step S2 the detection module records the light intensity of the target scene; and in step S3 it is determined whether an image of the target scene can be constructed according to the relative position of the lighting module and the detection module in the first direction and/or the second direction and the light intensity recorded at each relative position. If the image cannot yet be constructed, the relative position is changed and step S1 is executed again. The image of the target scene can thus be reconstructed from light intensity information and light direction information, with simple data processing and clear imaging.
Referring to fig. 4, an imaging apparatus for acquiring light field information of a target scene 100 is also provided in an embodiment of the present application. The imaging device comprises a lighting module 10, a detection module 20, a driving device 30 and a control module 40.
The lighting module 10 is provided with at least one through hole. The detecting module 20 receives the light emitted from the target scene 100 and passing through the through hole of the lighting module 10, and also records the light intensity of the target scene. The driving device 30 is electrically connected to the lighting module 10 and the detecting module 20 respectively and changes their relative position in the first direction x and/or the second direction y. The control module 40 is electrically connected to the detecting module 20 and the driving device 30 respectively and constructs an image of the target scene 100 according to the relative positions of the lighting module 10 and the detecting module 20 in the first direction x and/or the second direction y and the light intensity recorded by the detecting module 20 at each relative position. It is understood that the control module 40 obtains the relative positions from the driving device 30 and the recorded light intensity from the detecting module 20, and keeps the detecting module 20 and the driving device 30 synchronized: each time the driving device 30 changes the relative position of the lighting module 10 and the detecting module 20 in the first direction x and/or the second direction y, the detecting module 20 records the light intensity at that relative position.
In an embodiment, the control module 40 is further configured to determine whether the number of changes of the relative position of the lighting module 10 and the detecting module 20 in the first direction x and/or the second direction y has reached a preset number. If it has, the control module 40 constructs an image of the target scene 100 according to the preset number of relative positions of the lighting module 10 and the detecting module 20 in the first direction x and/or the second direction y and the light intensity recorded by the detecting module 20 at each relative position.
The through hole of the lighting module 10 images the target scene 100: light emitted from the target scene 100 is incident on the through hole of the lighting module 10 and passes through it to reach the detecting module 20, where it is received. Each through hole is circular with a diameter of 1-300 microns. Pinhole imaging has no chromatic dispersion, no aberrations (such as spherical aberration, coma, astigmatism and distortion), and infinite depth of field, and zoom can be realized by adjusting the distance between the pinhole and the sensor. As shown in fig. 5, when the distance between the lighting module 10 and the detecting module 20 is short, the field angle of the image of the target scene 100 formed on the detecting module 20 through the through hole of the lighting module 10 is large; when the distance is long, that field angle becomes small. Imaging the target scene 100 through the through hole of the lighting module 10 requires no complex multi-lens group, giving a simple structure and low cost.
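The zoom relation of fig. 5 is plain pinhole geometry: the half field-of-view angle seen by a centered detector of a given width shrinks as the pinhole-to-detector distance grows. A minimal sketch, assuming a pinhole on the detector's central axis:

```python
import math

def half_angle_of_view(sensor_width, pinhole_to_sensor_distance):
    """Half field-of-view angle (radians) of a centered pinhole camera:
    the edge of the sensor subtends atan((w/2) / d) at the pinhole."""
    return math.atan((sensor_width / 2) / pinhole_to_sensor_distance)
```

Moving the pinhole plane away from the sensor (larger distance) narrows the angle of view, i.e. zooms in, exactly as described for the third-direction movement.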
The lighting module 10 may be a light-shielding plate with a through hole, so that light from the target scene 100 cannot reach the detecting module 20 except through the through hole. The through holes of the lighting module 10 may be arranged regularly, for example as a MURA (modified uniformly redundant array), or irregularly. In practical applications, the arrangement of the through holes on the lighting module 10 can be designed for different target scenes 100, so that the lighting module 10 suits each of them. The movement track is determined at the factory: the lighting module 10 moves along the determined track, the control module 40 transmits the track to the driving device 30, and the driving device 30 controls the movement of the lighting module 10 or the detecting module 20 accordingly.
It should be noted that the relative position between the lighting module 10 and the detecting module 20 in the first direction x and/or the second direction y can be changed by moving the lighting module 10 or the detecting module 20. The plane formed by the first direction x and the second direction y is a plane parallel to the plane where the lighting module 10 is located or the plane where the detecting module 20 is located.
In one embodiment, the detecting module 20 comprises a single photon avalanche diode, and the detecting module 20 detects the light intensity by recording the number of times it is triggered within a preset time. Because the through hole is used to image the target scene 100, its diameter must be small enough to obtain a clear image, which means little light enters it: the detecting module 20 receives few photons, responds weakly, and has a low signal-to-noise ratio. Increasing the collection time to gather more light would reduce the frame rate of the imaging apparatus, while enlarging the through hole would enlarge the light spot from each target point of the target scene 100 and blur the image. The single photon avalanche diode enables single-photon-level light detection, can collect the weak light incident through a small-diameter through hole, and provides high-gain, high-sensitivity photoelectric detection, so a clear image can be obtained without increasing the collection time or degrading the image quality. In other embodiments, the detecting module 20 may include a quantum image sensor (QIS), a charge-coupled device (CCD) photosensor, or a solid-state photosensor such as a CMOS sensor or a photosensor fabricated from III-V or II-VI materials.
If the detecting module 20 includes a single photon avalanche diode, it obtains one frame of image by recording the number of times it is triggered within a preset time, the duration of a frame being that preset time. Before the detecting module 20 reaches saturation, received photons are absorbed with a certain probability and trigger the single photon avalanche diode; the trigger probability increases with the light intensity, so the brighter the target scene 100, the more often the detecting module 20 is triggered. In a strong-light environment the module is triggered many times within a given period and easily saturates, after which it is no longer triggered by photons and can no longer count them. Therefore, before the detecting module 20 collects and images the target scene 100, the ambient light intensity of the environment in which the target scene 100 is located can be detected and the time of each frame determined accordingly, so that the detecting module 20 records each frame of image accurately.
The imaging apparatus further comprises a detection unit for receiving light emitted from the environment in which the target scene is located and passing through the through hole of the lighting module, and for recording the ambient light intensity of that environment. The detection unit may include a photosensor such as a PD (photodiode), an ALS (ambient light sensor) or an SPAD (single photon avalanche diode). When the light in the environment of the target scene 100 is weak, the collection time of the detecting module 20 may be increased so that it receives more photons, or the light source 60 may be used to brighten the target scene 100, increasing the light entering the through hole and hence the number of times the detecting module 20 is triggered. The light source 60 may be a pulsed laser light source or a continuous light source.
Referring to fig. 6 and 7, in one embodiment the driving device 30 includes a micro-electro-mechanical system (MEMS). A MEMS can control mechanical motion at the micrometer scale, at high speed, in multiple dimensions. The through holes of the lighting module 10 image the target scene 100, and little light enters them. The detecting module 20 comprises pixel units 21 arranged in an array, each pixel unit 21 including a photosensitive region 211. Because the pixel units 21 are large and the area of the detecting module 20 is limited, the number of pixel units 21 on the detecting module 20 is small and its imaging resolution is low. To obtain a clearer image, the lighting module 10 or the detecting module 20 is moved in the first direction x and/or the second direction y, changing their relative position so that the photosensitive regions 211 receive, through the through holes, light incident from the complete target scene 100.
As shown in fig. 8, when the lighting module 10 is at a first position, the photosensitive region 211 occupies only part of the area of the pixel unit 21, so only part of the field angle corresponding to the pixel unit 21 is collected by the photosensitive region 211 (the visible portion A in the figure), while the rest cannot be collected (the invisible portion B in the figure). The lighting module 10 or the detecting module 20 is then moved in the first direction x and/or the second direction y; when the lighting module 10 reaches a second position, the portion B that was invisible at the first position becomes visible. Thus, when the relative position of the lighting module 10 and the detecting module 20 in the first direction x and/or the second direction y is changed N times so that the field of view of the photosensitive region 211 covers the whole target scene 100, the resolution of the image of the target scene 100 is increased N-fold. Because the through hole has a small diameter, the MEMS can realize the slight changes of the relative position of the through hole and the photosensitive region 211 in the first direction x and/or the second direction y.
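The N-fold resolution gain described above can be sketched as interleaving N x N frames, each captured at a different sub-pixel shift, into one image N times larger in each dimension. The frame-indexing convention below is an assumption for illustration, not the patent's own reconstruction procedure.

```python
def interleave(frames, n):
    """frames[(i, j)] is the low-resolution frame captured at sub-pixel
    shift (i, j), 0 <= i, j < n, each a list of rows; returns an image
    n times larger in each dimension, with each captured sample placed
    at its shifted position on the fine grid."""
    any_frame = next(iter(frames.values()))
    rows, cols = len(any_frame), len(any_frame[0])
    out = [[0] * (cols * n) for _ in range(rows * n)]
    for (i, j), frame in frames.items():
        for r in range(rows):
            for c in range(cols):
                out[r * n + i][c * n + j] = frame[r][c]
    return out
```

With n = 2, four shifted 1x1 frames combine into a 2x2 image, i.e. a 4x increase in pixel count from 2 position changes per axis.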
The light field information records the intensity of the light rays in different directions. The following describes a manner of collecting the light field information by moving the relative positions of the lighting module 10 and the detecting module 20. For convenience of illustration, the description will be made with reference to a two-dimensional plan view.
Referring to fig. 9 to 12, in an embodiment the driving device 30 changes the relative position of the lighting module 10 and the detecting module 20 in the first direction x by moving the lighting module 10 in the first direction x; that is, the position of the detecting module 20 is fixed and the position of the lighting module 10 in the first direction x is changed. When a through hole of the lighting module 10 is at position u, the detecting module 20 receives photons emitted from the target scene 100 and passing through the through hole, and the different pixel units 21 record corresponding light intensities I(u, x), where u is the position of the through hole and x the coordinate of the pixel unit 21. As shown in fig. 9, when the through hole is at u1, the detecting module 20 records I(u1, x); with n pixel units 21, the readings are I(u1, x1), I(u1, x2), I(u1, x3), I(u1, x4), ..., I(u1, xn). As shown in fig. 10, when the through hole is at u2, the readings are I(u2, x1), I(u2, x2), ..., I(u2, xn); and as shown in fig. 11, when the through hole is at u3, the readings are I(u3, x1), I(u3, x2), ..., I(u3, xn). Only three through-hole positions are illustrated here; in actual use more positions are collected to obtain more comprehensive light field information. Fig. 12 shows the collected light field information: the abscissa is the coordinate on the detecting module 20, the ordinate is the position of the through hole, the two coordinates together uniquely determine a ray, and the size of each point represents the intensity of that ray.
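The bookkeeping of figs. 9-12 can be sketched as a table of rays: each pair (u, x) of through-hole position and pixel coordinate fixes a unique ray, and the recorded reading is its intensity. The dictionary layout below is an illustrative assumption.

```python
def build_light_field(readings):
    """readings: dict mapping through-hole position u to the list of
    per-pixel intensities I(u, x) recorded at that position.
    Returns a flat list of (u, x, intensity) ray records, one per
    point in the fig. 12 diagram."""
    rays = []
    for u, row in readings.items():
        for x, intensity in enumerate(row):
            rays.append((u, x, intensity))
    return rays
```

Collecting more through-hole positions simply adds more rows to this table, giving more comprehensive light field information.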
In an actual three-dimensional scene, the driving device 30 changes the relative positions of the lighting module 10 and the detecting module 20 in the first direction x and/or the second direction y, and the detecting module 20 records the light intensity at each relative position, so as to obtain the light field information of the scene.
The control module 40 may be a computer.
Specifically, the lighting module 10 and the detection module 20 move relative to each other in the first direction x and/or the second direction y, changing their relative position, and at each relative position the detection module 20 records the light intensity at each of its pixel units 21. The control module 40 may include a signal acquisition unit and a signal processing unit. The signal acquisition unit acquires the relative positions of the lighting module 10 and the detection module 20 in the first direction x and/or the second direction y and the light intensity recorded by the detection module 20 at each relative position; it also controls the driving device 30 to move the lighting module 10 or the detection module 20 so as to change their relative position in the first direction x and/or the second direction y. The signal processing unit constructs an image of the target scene 100 from these relative positions and the light intensities recorded at them.
The signal processing unit traces rays back from the light field information recorded in the light field plane, finds the positions of the different target points in the target scene 100, and thereby reconstructs the target scene 100. As shown in fig. 13, the same through hole of the lighting module 10 images the target scene 100 at two different positions. As shown in fig. 14, the signal processing unit can reconstruct the target scene 100 from the number of times the detection module 20 is triggered within a preset time at the two relative positions and from the relative position information of the lighting module 10 and the detection module 20 at those positions: the directions of the light rays emitted by the target scene 100 are recorded at the two positions, and the positions of the different target points are then found from the intersection points of the corresponding rays.
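The back-tracing step can be sketched numerically. In this hypothetical 2-D geometry (all coordinates below are invented for illustration), each reading defines a ray through a pixel and a through hole, and intersecting two rays that originate from the same target point recovers that point's position:

```python
def ray(pixel_x, pinhole_u, d=1.0):
    """A ray through pixel (pixel_x, 0) and pinhole (pinhole_u, d),
    returned as (origin point, direction vector)."""
    return (pixel_x, 0.0), (pinhole_u - pixel_x, d)

def intersect(r1, r2):
    """Intersection (x, z) of two 2-D rays, or None if they are parallel."""
    (p1, v1), (p2, v2) = r1, r2
    det = v1[0] * v2[1] - v1[1] * v2[0]
    if abs(det) < 1e-12:
        return None
    # solve p1 + t*v1 == p2 + s*v2 for t (2x2 linear system)
    t = ((p2[0] - p1[0]) * v2[1] - (p2[1] - p1[1]) * v2[0]) / det
    return (p1[0] + t * v1[0], p1[1] + t * v1[1])
```

For example, a target point at (0.5, 4.0) seen through pinholes at u = 0 and u = 0.2 (with d = 1) lands on pixels -1/6 and 0.1 respectively, and `intersect` recovers (0.5, 4.0) from those two rays.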
When the lighting module 10 has a plurality of through holes, the efficiency with which the imaging device collects light field information is improved. The control module 40 can generate a preset driving trajectory according to the arrangement of the through holes on the lighting module 10 and transmit it to the driving device 30 to control the movement trajectory of the lighting module 10 or the detection module 20, so that the detection module 20 can distinguish the intensities of light rays coming from different target points of the target scene 100.
The driving device 30 is further configured to change the relative position of the lighting module 10 and the detection module 20 in a third direction z, which is perpendicular to the plane defined by the first direction x and the second direction y. By changing this separation, the driving device 30 changes the focal length of the detection module 20 and hence its field angle, thereby zooming the view of the target scene 100.
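The zooming effect of varying the separation z can be quantified with the standard pinhole-camera relation — an assumption about the geometry rather than a formula stated in the text: a sensor of width w placed a distance z behind the hole subtends a full field angle of 2·atan(w / 2z), which shrinks as z grows:

```python
import math

def field_angle_deg(sensor_width, z):
    """Full field of view (degrees) of a sensor of the given width
    placed a distance z behind a pinhole (standard pinhole relation)."""
    return math.degrees(2.0 * math.atan(sensor_width / (2.0 * z)))
```

Moving the modules apart (larger z) therefore narrows the field angle — a zoom-in — while moving them together widens it.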
In an embodiment, the imaging device further includes at least one optical filter; the optical filters correspond to the through holes one to one, and each optical filter is disposed over its corresponding through hole. With an optical filter over each through hole, the target scene 100 is imaged on the detection module 20 as a color image. In other embodiments, the imaging device may include a plurality of optical filters corresponding one to one to the pixel units 21, each disposed over its corresponding pixel unit 21; the control module 40 then derives the color information of the image of the target scene 100 from the filter arrangement on the detection module 20 by a suitable algorithm.
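One possible form of the per-pixel-filter algorithm mentioned above is nearest-neighbor demosaicing. The RGGB pattern and the block-based fill-in below are hypothetical, since the text does not specify either:

```python
# Hypothetical 2x2 RGGB filter mosaic repeated over the pixel array.
PATTERN = [["R", "G"], ["G", "B"]]

def demosaic(raw):
    """Return an RGB image from a single-channel mosaic image:
    each missing channel is taken from the nearest pixel in the
    same 2x2 block that carries that filter."""
    h, w = len(raw), len(raw[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            by, bx = y - y % 2, x - x % 2  # top-left of this 2x2 block
            rgb = {}
            for dy in range(2):
                for dx in range(2):
                    ch = PATTERN[dy][dx]
                    # keep the first sample seen for each channel
                    rgb.setdefault(ch, raw[by + dy][bx + dx])
            out[y][x] = (rgb["R"], rgb["G"], rgb["B"])
    return out
```

For a 2x2 mosaic `[[10, 20], [30, 40]]`, every output pixel in that block becomes (10, 20, 40); real demosaicing algorithms interpolate more carefully, but the principle — recovering three channels from one filtered sample per pixel — is the same.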
In the imaging device of the present application, the detection module 20 receives, through the through hole of the lighting module 10, the light emitted from the target scene 100; the driving device 30 changes the relative position of the lighting module 10 and the detection module 20 in the first direction x and/or the second direction y; the detection module 20 records the light intensity at each relative position; and the control module 40 constructs an image of the target scene 100 according to the relative positions of the lighting module 10 and the detection module 20 in the first direction x and/or the second direction y and the light intensity recorded by the detection module 20 at each relative position, that is, it reconstructs the image of the target scene 100 from the light intensity information and the light direction information. The imaging device of the application is simple in structure, requires only simple data processing, and images clearly.
An embodiment of the present application further provides an electronic device, which includes the imaging device described above.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present utility model; their description is specific and detailed, but it is not to be construed as limiting the scope of the utility model. Those skilled in the art can make various variations and improvements without departing from the concept of the utility model, and these all fall within its scope of protection. The scope of protection of the utility model shall therefore be subject to the appended claims.

Claims (11)

1. An imaging apparatus, comprising:
a lighting module provided with at least one through hole;
a detection module configured to receive light emitted from a target scene and passing through the through hole of the lighting module;
a driving device electrically connected to the lighting module and the detection module respectively and configured to change the relative position of the lighting module and the detection module in a first direction and/or a second direction;
wherein the detection module is further configured to record the light intensity of the target scene; and
a control module electrically connected to the detection module and the driving device respectively and configured to construct an image of the target scene according to the relative positions of the lighting module and the detection module in the first direction and/or the second direction and the light intensity recorded by the detection module at each relative position.
2. The imaging apparatus of claim 1, wherein the driving device is further configured to change the relative position of the lighting module and the detection module in a third direction, the third direction being perpendicular to the first direction and the second direction, respectively.
3. The imaging apparatus of claim 2, further comprising:
at least one optical filter, wherein the optical filters correspond to the through holes one to one, and each optical filter is disposed on its corresponding through hole.
4. The imaging apparatus of claim 2, further comprising:
a plurality of optical filters;
the detection module comprises a plurality of pixel units;
the plurality of optical filters correspond to the plurality of pixel units one by one, and each optical filter is arranged on one corresponding pixel unit.
5. The imaging apparatus of claim 1, further comprising:
a light source configured to increase the brightness of the target scene.
6. The imaging apparatus of claim 3, wherein the detection module detects the light intensity by recording the number of times it is triggered within a preset time.
7. The imaging apparatus of claim 1, further comprising a detection unit configured to receive light emitted from the environment in which the target scene is located and passing through the through hole of the lighting module, and to record the ambient light intensity of that environment.
8. The imaging apparatus of claim 7, wherein the detection unit comprises a photodiode, an ambient light sensor, or a single-photon avalanche diode.
9. The imaging apparatus of claim 1, wherein the driving means comprises a micro-electromechanical system.
10. The imaging apparatus of claim 5, wherein the light source is a pulsed laser light source.
11. The imaging apparatus of claim 5, wherein the light source is a continuous light source.
CN201922478342.5U 2019-12-31 2019-12-31 Image forming apparatus with a plurality of image forming units Active CN211352285U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201922478342.5U CN211352285U (en) 2019-12-31 2019-12-31 Image forming apparatus with a plurality of image forming units

Publications (1)

Publication Number Publication Date
CN211352285U true CN211352285U (en) 2020-08-25

Family

ID=72135509

Legal Events

Date Code Title Description
GR01 Patent grant