WO2019076150A1 - A camera and an image generating method - Google Patents
A camera and an image generating method
- Publication number
- WO2019076150A1 (PCT/CN2018/103788)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- light
- image
- beams
- image sensors
- separated
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Definitions
- the present application relates to the field of camera technologies, and in particular, to a camera and an image generating method.
- an existing camera includes a lens 1, an image sensor 2, and an image processor 3, with the image sensor 2 electrically connected to the image processor 3.
- a single image sensor cannot image both very bright and very dark objects well. If the overly bright objects in the initial image are rendered well, the overly dark objects are rendered poorly; conversely, if the overly dark objects are rendered well, the overly bright objects are rendered poorly. Therefore, when the scene captured by the camera contains both strong and weak light, the image may be distorted. For example, when capturing an image of a vehicle running a red light, the signal light is so bright that, in order to preserve the image quality of the other, less bright objects, the color of the signal light in the initial image obtained by the image sensor is distorted.
- An object of the embodiments of the present application is to provide a camera and an image generating method to solve the problem of image color distortion.
- the specific technical solutions are as follows:
- a camera comprising: a lens, a light splitting device, an image processor and a plurality of image sensors, wherein the plurality of image sensors are respectively electrically connected to the image processor, at least one image sensor is configured to process light having a light intensity greater than a first light intensity threshold, and at least one image sensor is configured to process light having a light intensity less than a second light intensity threshold;
- the light splitting device is configured to separate the light emitted from the lens into a plurality of light beams, wherein the number of separated light beams is the same as the number of the image sensors;
- Each of the image sensors is configured to receive a beam of light separated by the beam splitting device and convert the received light into an initial image, wherein each of the image sensors receives a different beam of light;
- the image processor is configured to fuse each initial image to generate a fused image.
- the light splitting device comprises N beam splitting prisms;
- the N beam splitting prisms are configured to separate the light emitted from the lens into N+1 beams, where N is a positive integer.
- the N beam splitting prisms comprise a first beam splitting prism and a second beam splitting prism;
- the first beam splitting prism is configured to separate the light emitted from the lens into two light beams, wherein the light intensity ratio of the two separated beams equals the transflective ratio of the first beam splitting prism;
- the second beam splitting prism is configured to separate the weaker of the two light beams separated by the first beam splitting prism into two further light beams, wherein the light intensity ratio of these two beams equals the transflective ratio of the second beam splitting prism;
- each of the image sensors is configured to receive either the beam separated by the first beam splitting prism that is not passed to the second beam splitting prism, or one of the two beams separated by the second beam splitting prism, and to convert the received light into an initial image, wherein each of the image sensors receives a different beam of light.
- the camera further includes a synchronization clock, and the plurality of image sensors are respectively electrically connected to the synchronization clock;
- the synchronous clock is configured to separately send a clock synchronization signal to each image sensor.
- the exposure mode of each of the image sensors is a global exposure.
- the camera further includes a plurality of infrared filters, the number of the infrared filters being the same as the number of the image sensors;
- Each of the infrared filters is configured to receive a beam of light separated by the beam splitting device and filter out infrared rays in the received light, wherein each of the infrared filters receives a different beam of light ;
- Each of the image sensors is configured to receive a beam of light emitted from one of the infrared filters and convert the received light into an initial image, wherein each of the image sensors receives a different beam of light.
- an image generating method is applied to a camera, the camera comprising: a lens, a light splitting device, an image processor and a plurality of image sensors, wherein the plurality of image sensors are respectively electrically connected to the image processor, at least one image sensor is configured to process light having a light intensity greater than a first light intensity threshold, and at least one image sensor is configured to process light having a light intensity less than a second light intensity threshold, the method comprising:
- the light splitting device separates the light emitted from the lens into a plurality of light beams, wherein the number of separated light beams is the same as the number of the image sensors;
- Each of the image sensors receives a beam of light separated by the beam splitting device and converts the received light into an initial image, wherein each of the image sensors receives a different beam of light;
- the image processor fuses the respective initial images to generate a fused image.
- the light splitting device comprises N beam splitting prisms;
- the step of separating the light emitted from the lens into a plurality of beams of light comprises:
- the N beam splitting prisms separate the light emitted from the lens into N+1 beams, where N is a positive integer.
- the N beam splitting prisms comprise a first beam splitting prism and a second beam splitting prism;
- the step of separating, by the N beam splitting prisms, the light emitted from the lens into N+1 beams comprises:
- the first beam splitting prism separates the light emitted from the lens into two light beams, wherein the light intensity ratio of the two separated beams equals the transflective ratio of the first beam splitting prism;
- the second beam splitting prism separates the weaker of the two light beams separated by the first beam splitting prism into two further light beams, wherein the light intensity ratio of these two beams equals the transflective ratio of the second beam splitting prism;
- the step of each of the image sensors receiving a beam of light separated by the beam splitting device and converting the received light into an initial image includes:
- each of the image sensors receives either the beam separated by the first beam splitting prism that is not passed to the second beam splitting prism, or one of the two beams separated by the second beam splitting prism, and converts the received light into an initial image, wherein each of the image sensors receives a different beam of light.
- the camera further includes a synchronization clock, and the plurality of the image sensors are respectively electrically connected to the synchronization clock, and the method further includes:
- the synchronization clock respectively transmits clock synchronization signals to the respective image sensors.
- the exposure mode of each of the image sensors is a global exposure.
- the camera further includes a plurality of infrared filters, the number of the infrared filters being the same as the number of the image sensors; each of the infrared filters receives a beam of light separated by the light splitting device and filters out the infrared rays in the received light, wherein each of the infrared filters receives a different beam of light;
- the step of each of the image sensors receiving a beam of light separated by the beam splitting device and converting the received light into an initial image includes:
- each of the image sensors receives a beam of light emitted from one of the infrared filters and converts the received light into an initial image, wherein each of the image sensors receives a different beam of light.
- the step of the image processor combining the initial images to generate a fused image includes:
- for the pixels located at the same position in the respective initial images, the pixel values are fused according to the weight corresponding to each pixel and its pixel value, to generate the fused image.
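As a hedged illustration of such a per-pixel weighted fusion: the description does not fix a particular weighting scheme, so the weights below are hypothetical, simply favoring well-exposed (mid-gray) pixels over nearly black or saturated ones.

```python
import numpy as np

def fuse_images(initial_images):
    """Fuse same-size initial images by a per-pixel weighted average.

    The weighting scheme is an assumption for illustration: pixels near
    mid-gray (well exposed) receive higher weight than pixels that are
    nearly black or saturated.
    """
    stack = np.stack([img.astype(np.float64) for img in initial_images])
    # Weight peaks at mid-gray (127.5) and falls off toward 0 and 255.
    weights = 1.0 - np.abs(stack - 127.5) / 127.5
    weights = np.clip(weights, 1e-3, None)  # avoid division by zero
    fused = (weights * stack).sum(axis=0) / weights.sum(axis=0)
    return fused.astype(np.uint8)

# Example: one sensor saturates on a bright region, another exposes it well.
bright = np.full((2, 2), 255, dtype=np.uint8)   # over-exposed sensor
dark = np.full((2, 2), 100, dtype=np.uint8)     # well-exposed sensor
print(fuse_images([bright, dark]))              # dominated by the 100 values
```

The weighted average lets the well-exposed sensor dominate wherever another sensor's pixels are clipped, which is the effect the fusion step relies on.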
- an embodiment of the present application further provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program, when executed by a processor, implements any of the foregoing image generating methods.
- an embodiment of the present application further discloses executable program code which, when executed, performs any of the image generating methods described above.
- the light emitted from the lens is separated into a plurality of light beams by the light splitting device, so that each image sensor can image one of the separated beams to generate an initial image, and the initial images are then fused to generate a fused image. Since at least one image sensor is used to process light having a light intensity greater than the first light intensity threshold (that is, light of high intensity) and at least one image sensor is used to process light having a light intensity less than the second light intensity threshold (that is, light of weak intensity), high-intensity and weak-intensity light correspond to different image sensors. No single sensor has to image high-intensity and weak-intensity light simultaneously; different image sensors image them separately, so that both overly bright and overly dark objects in the fused image can be displayed normally. Compared with imaging with a single image sensor, the dynamic range of imaging is expanded and the problem of image color distortion is solved.
- FIG. 1 is a schematic structural view of a camera in the prior art
- FIG. 2 is a schematic diagram of a first structure of a camera according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of a second structure of a camera according to an embodiment of the present disclosure.
- FIG. 4 is a schematic diagram of a third structure of a camera according to an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of a fourth type of a camera according to an embodiment of the present disclosure.
- FIG. 6 is a schematic structural diagram of a fifth type of a camera according to an embodiment of the present disclosure.
- FIG. 7 is a schematic structural diagram of a sixth type of a camera according to an embodiment of the present disclosure.
- FIG. 8 is a flowchart of an image generating method according to an embodiment of the present application.
- the embodiment of the present application provides a camera and an image generating method.
- the camera provided in the embodiment of the present application is first described in detail below.
- FIG. 2 is a schematic diagram of a first structure of a camera according to an embodiment of the present disclosure.
- the camera includes: a lens 10, a light splitting device 20, an image processor 30, and a plurality of image sensors 40.
- the plurality of image sensors 40 are respectively electrically connected to the image processor 30.
- at least one image sensor is configured to process light having a light intensity greater than the first light intensity threshold, and at least one image sensor is configured to process light having a light intensity less than the second light intensity threshold, wherein the first light intensity threshold and the second light intensity threshold may be the same or different.
- the light splitting device 20 is configured to separate the light emitted from the lens 10 into a plurality of light beams, wherein the number of the separated light beams is the same as the number of the image sensors 40;
- Each image sensor 40 is configured to receive a beam of light separated by the beam splitting device 20 and convert the received light into an initial image, wherein each image sensor 40 receives a different beam of light;
- the image processor 30 is configured to fuse the respective initial images to generate a fused image.
- each image sensor 40 converts the received light into an initial image and sends it to the image processor 30, which fuses the initial images to produce the fused image. Since at least one of the image sensors 40 processes light of high intensity and at least one processes light of weak intensity, objects that are too bright and too dark in the fused image can both be displayed normally.
- the light emitted from the lens 10 is separated into a plurality of light beams by the light splitting device 20, so that each image sensor 40 can image one of the separated beams to generate an initial image, and the initial images are then fused to generate a fused image. Since at least one image sensor is used to process light having a light intensity greater than the first light intensity threshold (that is, light of high intensity) and at least one image sensor is used to process light having a light intensity less than the second light intensity threshold (that is, light of weak intensity), high-intensity and weak-intensity light correspond to different image sensors. No single sensor has to image both simultaneously; different image sensors image them separately, so that both overly bright and overly dark objects in the fused image can be displayed normally. Compared with imaging with a single image sensor, the dynamic range of imaging is expanded and the problem of image color distortion is solved.
- the optical path in FIG. 2 is only a schematic representation of the ray path, not an accurate ray path.
- the light splitting device 20 may include N beam splitting prisms 201, the N beam splitting prisms 201 being configured to separate the light emitted from the lens 10 into N+1 beams, where N is a positive integer.
- the beam splitting prism 201 is a cube-shaped optical element formed by coating a multi-layer interference film on the hypotenuse of a right-angle prism and cementing it to a matching prism. Light incident on the multi-layer interference film is partly transmitted and partly reflected; the prism does not change the wavelength distribution of the light, only its intensity or polarization direction.
- the transflective ratio of the beam splitting prism 201 is the ratio of the intensities of the transmitted light and the reflected light into which the incident light is separated.
- the beam splitting prism thus separates the light emitted from the lens 10 into beams of different light intensities.
- in one embodiment, the light splitting device 20 includes one beam splitting prism 201. As shown by the arrows in FIG. 3, the light emitted from the lens 10 is incident on the beam splitting prism 201, which separates it into two beams. The two beams are respectively incident on two image sensors 40; each image sensor 40 converts the received light into an initial image and sends it to the image processor 30, which fuses the two initial images to produce the fused image.
- the light emitted from the lens 10 is separated into two beams by the single beam splitting prism 201. One of the two image sensors 40 processes the light whose intensity is greater than the first light intensity threshold (that is, light of high intensity), and the other processes the light whose intensity is less than the second light intensity threshold (that is, light of weak intensity). High-intensity and weak-intensity light thus correspond to different image sensors, and no single sensor has to image both simultaneously; different image sensors image the high-intensity and weak-intensity light separately, so that objects that are too bright and too dark in the fused image can be displayed normally.
- the dynamic range of imaging is expanded, and the problem of image color distortion is solved.
- in another embodiment, the light splitting device includes two beam splitting prisms 201, a first beam splitting prism 201 and a second beam splitting prism 201. The first beam splitting prism 201 is configured to separate the light emitted from the lens 10 into two beams, wherein the light intensity ratio of the two separated beams equals the transflective ratio of the first beam splitting prism 201;
- the second beam splitting prism 201 is configured to separate the weaker of the two beams separated by the first beam splitting prism 201 into two further beams, wherein the light intensity ratio of these two beams equals the transflective ratio of the second beam splitting prism 201;
- each image sensor 40 is configured to receive either the beam separated by the first beam splitting prism 201 that is not passed to the second beam splitting prism 201, or one of the two beams separated by the second beam splitting prism 201, and to convert the received light into an initial image, wherein each image sensor 40 receives a different beam of light.
- the light emitted from the lens 10 is incident on the first beam splitting prism 201, which separates it into two beams; the weaker beam is incident on the second beam splitting prism 201, which separates it into two further beams.
- the beam separated by the first beam splitting prism 201 and not passed to the second beam splitting prism 201, together with the two beams separated by the second beam splitting prism 201, are respectively incident on three image sensors 40. Each image sensor 40 converts the received light into an initial image and sends it to the image processor 30, which fuses the three initial images to produce the fused image.
- the light emitted from the lens 10 is thus separated into three beams by the two beam splitting prisms 201. Among the three image sensors 40, at least one is used to process light having a light intensity greater than the first light intensity threshold (that is, light of high intensity) and at least one is used to process light having a light intensity less than the second light intensity threshold (that is, light of weak intensity). High-intensity and weak-intensity light correspond to different image sensors, and no single sensor has to image both simultaneously; different image sensors image the high-intensity and weak-intensity light separately, so that objects that are too bright and too dark in the fused image can be displayed normally.
- the dynamic range of imaging is expanded, and the problem of image color distortion is solved.
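The two-prism cascade described above can be sketched numerically. The transflective ratios below (4:1 for the first prism, 1:1 for the second) are hypothetical example values, chosen so that the cascade reproduces the 8:1:1 beam ratio used in the exposure example later in this description.

```python
def split(intensity, transflective_ratio):
    """Split a beam's intensity by a prism's transmitted:reflected ratio."""
    t, r = transflective_ratio
    total = t + r
    return intensity * t / total, intensity * r / total

# Incident light normalized to intensity 1.0.
strong, weak = split(1.0, (4, 1))      # first prism, 4:1 -> 0.8 and 0.2
weak_a, weak_b = split(weak, (1, 1))   # second prism splits the weaker beam
print(strong, weak_a, weak_b)          # 0.8 0.1 0.1, i.e. a ratio of 8:1:1
```

Each prism only rescales intensities by its transflective ratio, so the three output fractions always sum to the incident intensity.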
- the beam splitting prism 201 may be a depolarizing beam splitting prism, whose spectral characteristics for the P-polarized and S-polarized components of the incident light are similar. That is, after splitting by the depolarizing beam splitting prism, the original proportion between horizontal and vertical polarization in the incident light is preserved as far as possible, the separation between the optical characteristics of the P-polarized and S-polarized components being less than 5%.
- regarding the P-polarized and S-polarized components: when light strikes the surface of an optical element at a non-perpendicular angle, both the reflection and transmission characteristics depend on polarization. The coordinate system used is defined by the plane containing the incident and reflected beams; if the polarization vector of the ray lies in this plane it is called the P-polarized component, and if the polarization vector is perpendicular to this plane it is called the S-polarized component.
- alternatively, the light splitting device 20 may include N flat beam splitting mirrors, the N flat beam splitting mirrors being configured to separate the light emitted from the lens 10 into N+1 beams, where N is a positive integer.
- a flat beam splitting mirror is an optical element formed by coating a thin film on the surface of a transparent optical flat glass; light incident on the film is partly transmitted and partly reflected, without changing the wavelength distribution of the light, only its intensity or polarization direction.
- the transflective ratio of the flat beam splitting mirror is the ratio of the intensities of the transmitted light and the reflected light into which the incident light is separated.
- since the splitting principle of the flat beam splitting mirror is similar to that of the beam splitting prism, for an embodiment in which N is 1 the beam splitting prism 201 in FIG. 3 can be replaced by a flat beam splitting mirror, and for an embodiment in which N is 2 the beam splitting prisms 201 in FIG. 4 can be replaced by flat beam splitting mirrors; details are not described herein again.
- the camera provided by the embodiment of the present application may further include a synchronization clock 50; the image sensors 40 are respectively electrically connected to the synchronization clock 50, and the synchronization clock 50 is configured to send clock synchronization signals to the respective image sensors 40.
- since the clocks of the respective image sensors 40 are synchronized by the clock synchronization signals, the accuracy of the clocks of the image sensors 40 is ensured, which in turn ensures the accuracy of the generated images.
- the dynamic range of a typical camera is 0-70 dB.
- the dynamic range of a camera is the range between the brightness of the brightest and the darkest objects that can be displayed normally in a picture captured by the camera.
- in high-contrast scenes the camera needs a wide dynamic range, where a wide dynamic range is 100-120 dB.
- the maximum dynamic range of the camera is the ratio of the exposure for the highest-intensity light to the exposure for the lowest-intensity light. Since a plurality of image sensors 40 are included in the embodiment of the present application, the maximum dynamic range of the camera is the ratio of the exposures of the strongest and the weakest light on the image sensors after splitting, where exposure is the product of light intensity and exposure time.
- to extend the 70 dB range by the missing 30 dB, the decibel conversion formula shows that a factor of at least 32 is required; that is, the ratio of the exposures of the strongest and the weakest light on the image sensors after splitting should be about 32:1.
- in the embodiment, the light intensity is first reduced by the light splitting device, so that each image sensor 40 does not overflow when photoelectrically converting regions of strong light, which increases the dynamic range of the camera; reducing the exposure time in addition shortens the photoelectric conversion time of the image sensor 40, further reducing overflow under strong light, that is, further increasing the dynamic range of the camera.
- the exposure amount corresponding to the light received by each image sensor 40 may be determined according to the exposure time of each image sensor 40 and the light intensity of the received light.
- the exposure time ratio of the respective image sensors 40, and thus their exposure ratio, can be determined according to the intensity ratio of the light received by each image sensor 40.
- for example, suppose the light splitting device separates the light into 3 beams with an intensity ratio of 8:1:1.
- if the exposure time ratio of the image sensors 40 corresponding to the three beams is set to 4:4:1, the exposure ratio of the light received by the image sensors after splitting is 32:4:1.
- the ratio of the exposures of the strongest and the weakest light on the image sensors is therefore 32:1, that is, the dynamic range of the camera is increased by a factor of 32.
- an increase of the dynamic range by a factor of 32 corresponds to 30 dB.
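The arithmetic above can be checked with a short sketch; the 8:1:1 intensity ratio and 4:4:1 exposure-time ratio are the example values from this description.

```python
import math

intensity = [8, 1, 1]        # relative beam intensities after splitting
exposure_time = [4, 4, 1]    # relative exposure times of the three sensors

# Exposure is the product of light intensity and exposure time.
exposure = [i * t for i, t in zip(intensity, exposure_time)]
ratio = max(exposure) / min(exposure)
gain_db = 20 * math.log10(ratio)     # dynamic range gain in decibels

print(exposure)              # [32, 4, 1]
print(ratio)                 # 32.0
print(round(gain_db, 1))     # 30.1
```

The strongest-to-weakest exposure ratio of 32:1 is what extends a typical 70 dB sensor by roughly the 30 dB needed for a wide dynamic range.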
- the problem of image color distortion is solved by increasing the dynamic range of the camera.
- in the embodiment of the present application it is not necessary to detect the position of the color-distorted region in the image, for example the position of the signal light, which improves practicality.
- the image sensor 40 is generally exposed by a rolling exposure method, in which the image sensor 40 is exposed progressively: at the beginning of the exposure, the sensor is exposed line by line until all pixels have been exposed. All of this is completed in a very short time.
- in the embodiment of the present application, each image sensor 40 can instead employ a global exposure mode.
- in the global exposure mode, all pixels of the image sensor 40 are exposed simultaneously: at the beginning of the exposure, all pixels start collecting light, and at the end of the exposure, all pixels stop collecting light. Since all pixels are exposed at the same time, the start and end exposure times of the different pixel rows are identical, that is, there is no row-by-row exposure order, which avoids deformation of moving objects.
- the camera provided by the embodiment of the present application may further include a plurality of infrared filters 60, the number of the infrared filters 60 being the same as the number of the image sensors 40;
- each infrared filter 60 is configured to receive one beam of light separated by the light splitting device 20 and to filter out the infrared rays in the received light, wherein each infrared filter 60 receives a different beam of light.
- light from outside is incident through the lens 10 onto the light splitting device 20, which separates the light emitted from the lens 10 into a plurality of beams. The emitted beams are respectively incident on the plurality of infrared filters 60; each infrared filter 60 filters out the infrared rays in the received light and emits the rest. The emitted beams are respectively incident on the plurality of image sensors 40, each of which converts the received light into an initial image and sends it to the image processor 30, which fuses the initial images to generate the fused image. By filtering out the infrared light after the light splitting device separates the light and before the subsequent image is generated, reddening of the generated image is avoided.
- the optical path in FIG. 6 is only a schematic representation of the ray path, not an accurate ray path.
- the camera provided by the embodiment of the present application may alternatively include a single infrared filter 60, configured to receive the light emitted from the lens 10 and to filter out the infrared rays in the received light; the light splitting device 20 then receives the light emitted from the infrared filter 60 and separates it into a plurality of beams.
- light from outside is incident through the lens 10 onto the infrared filter 60, which filters out the infrared rays in the received light and emits the rest. The emitted light is incident on the light splitting device 20, which separates it into a plurality of beams. The emitted beams are respectively incident on the plurality of image sensors 40, each of which converts the received light into an initial image and sends it to the image processor 30, which fuses the initial images to generate the fused image.
- here the infrared light is filtered out before the light splitting device separates the light; since only one infrared filter 60 is needed, the number of infrared filters is reduced, which reduces the cost of the camera.
- the optical path in FIG. 7 is only a schematic representation of the ray path, not an accurate ray path.
- an image generating method provided by the embodiment of the present application is applied to a camera, and the camera includes: a lens, a light splitting device, an image processor, and a plurality of image sensors, wherein the plurality of image sensors are respectively electrically connected to the image processor, at least one image sensor is configured to process light having a light intensity greater than the first light intensity threshold, and at least one image sensor is configured to process light having a light intensity less than the second light intensity threshold. The method includes:
- the light splitting device separates the light emitted from the lens into a plurality of light beams, wherein the number of the separated light beams is the same as the number of the image sensors;
- Each image sensor receives a beam of light separated by the beam splitting device, and converts the received light into an initial image, wherein each image sensor receives a different beam of light;
- the image processor combines the respective initial images to generate a fused image.
- each image sensor 40 converts the received light into an initial image and sends it to the image processor 30, which fuses the initial images to produce the fused image. Since at least one of the image sensors 40 processes light of high intensity and at least one processes light of weak intensity, objects that are too bright and too dark in the fused image can both be displayed normally.
- It should be noted that the optical path in FIG. 2 is only a schematic representation of the ray path and does not represent the exact ray path.
- In the embodiments of the present application, the light emitted from the lens is divided into multiple beams by the beam-splitting device, so that each image sensor can image one of the separated beams to generate an initial image, and the initial images are then fused to generate a fused image. Since at least one image sensor is configured to process light whose intensity is greater than the first light-intensity threshold (that is, light of high intensity) and at least one image sensor is configured to process light whose intensity is less than the second light-intensity threshold (that is, light of weak intensity), high-intensity light and weak-intensity light correspond to different image sensors. A single sensor therefore does not have to image high-intensity and weak-intensity light at the same time; instead, different image sensors image the high-intensity and weak-intensity light respectively, so that objects that are too bright and objects that are too dark can both be displayed normally in the fused image. Compared with imaging with a single image sensor, this expands the dynamic range of imaging and solves the problem of image color distortion.
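The split-then-fuse pipeline described above can be sketched in a few lines of Python. This is only an illustration, not the patent's implementation: the split ratio, the sensor saturation level, the gain values, and the fusion rule (prefer the high-exposure sensor unless it clipped) are all assumed values chosen for the example.

```python
# Minimal sketch of the split -> per-sensor capture -> fuse pipeline.
# All numbers (split ratio, saturation level, gains) are illustrative assumptions.

SATURATION = 255  # assumed full-well/ADC limit of one sensor

def capture(scene, fraction, gain):
    """One sensor images its share of the light; values clip at saturation."""
    return [min(SATURATION, px * fraction * gain) for px in scene]

def fuse(bright_img, dark_img, gain_ratio):
    """Prefer the high-exposure image except where it clipped."""
    fused = []
    for hi, lo in zip(bright_img, dark_img):
        # Where the high-exposure sensor clipped, rescale the low-exposure value.
        fused.append(lo * gain_ratio if hi >= SATURATION else hi)
    return fused

# Scene radiance: dark detail plus one very bright object (a signal light).
scene = [10, 40, 2000]

# The splitter sends 90% of the light to sensor A and 10% to sensor B.
img_a = capture(scene, 0.9, gain=1.0)   # clips on the bright object
img_b = capture(scene, 0.1, gain=1.0)   # keeps the bright object in range

hdr = fuse(img_a, img_b, gain_ratio=9.0)
print(img_a)  # [9.0, 36.0, 255] -> the bright object clips to 255 here
print(hdr)    # [9.0, 36.0, 1800.0] -> fused image keeps all objects in scale
```

A single sensor would have to choose one of the two exposures; the fused result preserves both the dark detail and the relative brightness of the over-bright object.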
- In one implementation, the beam-splitting device may include N beam-splitting prisms, and the step of the beam-splitting device separating the light emitted from the lens into multiple beams may include:
- the N beam-splitting prisms separate the light emitted from the lens into N+1 beams, where N is a positive integer.
- That is, when the beam-splitting device includes N beam-splitting prisms, the N beam-splitting prisms can separate the light emitted from the lens into N+1 beams.
- A beam-splitting prism is a cubic optical element formed by coating the hypotenuse face of a right-angle prism with a multilayer interference film and then cementing it together. After light passes through the multilayer interference film, the element transmits part of the light and reflects the rest; it does not change the wavelength distribution of the light, but only the light intensity or polarization direction.
- The transmission-reflection ratio of a beam-splitting prism is the intensity ratio of the transmitted light to the reflected light into which the prism separates the incident light. In this way, the beam-splitting prism separates the light emitted from the lens 10 into multiple beams of different intensities.
- As shown in FIG. 3, in an implementation in which N is 1, the beam-splitting device 20 includes one beam-splitting prism 201. As shown by the arrows in FIG. 3, the light emitted from the lens 10 enters the beam-splitting prism 201, which separates it into two beams. The two beams respectively enter the two image sensors 40, each image sensor 40 converts the received light into an initial image and sends it to the image processor 30, and the image processor 30 fuses the two initial images to produce a fused image.
- Thus, the light emitted from the lens 10 is separated into two beams by one beam-splitting prism 201. One of the two image sensors 40 processes light whose intensity is greater than the first light-intensity threshold (that is, light of high intensity), and the other processes light whose intensity is less than the second light-intensity threshold (that is, light of weak intensity). High-intensity light and weak-intensity light correspond to different image sensors, so a single sensor does not have to image both at the same time; different image sensors image the high-intensity and weak-intensity light respectively, so that objects that are too bright and objects that are too dark can both be displayed normally in the fused image. Compared with imaging with a single image sensor, this expands the dynamic range of imaging and solves the problem of image color distortion.
- As shown in FIG. 4, in an implementation in which N is 2, the beam-splitting device includes two beam-splitting prisms 201, namely a first beam-splitting prism 201 and a second beam-splitting prism 201.
- The step of the N beam-splitting prisms separating the light emitted from the lens into N+1 beams may include:
- the first beam-splitting prism separates the light emitted from the lens into two beams, wherein the intensity ratio of the two separated beams is the same as the transmission-reflection ratio of the first beam-splitting prism;
- the second beam-splitting prism separates the weaker of the two beams separated by the first beam-splitting prism into two beams, wherein the intensity ratio of the two separated beams is the same as the transmission-reflection ratio of the second beam-splitting prism.
- The step of each image sensor receiving one beam separated by the beam-splitting device and converting the received light into an initial image may include:
- each image sensor receives either the beam separated by the first beam-splitting prism that is not further separated by the second beam-splitting prism, or one of the two beams separated by the second beam-splitting prism, and converts the received light into an initial image, wherein each image sensor receives a different beam.
- As shown by the arrows in FIG. 4, the light emitted from the lens 10 enters the first beam-splitting prism 201, which separates it into two beams; the weaker of the two beams enters the second beam-splitting prism 201, which separates it into two further beams.
- The beam separated by the first beam-splitting prism 201 that is not further separated by the second beam-splitting prism 201 and the two beams separated by the second beam-splitting prism 201 respectively enter the three image sensors 40. Each image sensor 40 converts the received light into an initial image and sends it to the image processor 30, which fuses the three initial images to produce a fused image.
- Thus, the light emitted from the lens 10 is separated into three beams by the two beam-splitting prisms 201. At least one of the three image sensors 40 processes light whose intensity is greater than the first light-intensity threshold (that is, light of high intensity), and at least one processes light whose intensity is less than the second light-intensity threshold (that is, light of weak intensity). High-intensity light and weak-intensity light correspond to different image sensors, so a single sensor does not have to image both at the same time; different image sensors image the high-intensity and weak-intensity light respectively, so that objects that are too bright and objects that are too dark can both be displayed normally in the fused image. Compared with imaging with a single image sensor, this expands the dynamic range of imaging and solves the problem of image color distortion.
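The cascade of beam-splitting prisms is easy to model: each prism splits its input beam according to its transmission-reflection ratio, and the next prism always takes the weaker output. The sketch below is an illustration under assumed ratios (chosen so that the final split matches the 8:1:1 example used elsewhere in this document); the helper name `split_fractions` is ours, not the patent's.

```python
# Model N cascaded beam-splitting prisms producing N+1 beams.
# ratios[i] = (transmitted, reflected) intensity ratio of prism i (assumed values).

def split_fractions(ratios):
    """Return the fraction of the lens output carried by each final beam."""
    beams = []
    remaining = 1.0  # fraction of the light entering the next prism
    for t, r in ratios:
        strong = remaining * max(t, r) / (t + r)
        weak = remaining * min(t, r) / (t + r)
        beams.append(strong)      # the stronger beam goes straight to a sensor
        remaining = weak          # the weaker beam feeds the next prism
    beams.append(remaining)       # the last weak beam also reaches a sensor
    return beams

# First prism 8:2, second prism 1:1 -> final beams in ratio 8:1:1.
fractions = split_fractions([(8, 2), (1, 1)])
print(fractions)  # [0.8, 0.1, 0.1]
```

Note that each successive prism only ever subdivides the weakest beam, so the strongest beam passes through as few surfaces as possible.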
- In one implementation, the beam-splitting prism 201 may be a depolarizing beam-splitting prism, which gives the P-polarized and S-polarized components of the incident light similar splitting characteristics. That is, after splitting by the depolarizing beam-splitting prism, the original proportion of horizontal to vertical polarization in the incident light is preserved as far as possible, the separation between the optical characteristics of the P-polarized and S-polarized components being less than 5%.
- The P-polarized and S-polarized components are introduced as follows: when light passes through the surface of an optical element at a non-perpendicular angle, both the reflection and transmission characteristics depend on polarization. In this case, the coordinate system used is defined by the plane containing the incident and reflected beams. If the polarization vector of the light lies in this plane, it is called the P-polarized component; if the polarization vector is perpendicular to this plane, it is called the S-polarized component.
- In another implementation, the beam-splitting device 20 may include N plate beam splitters, the N plate beam splitters being configured to separate the light emitted from the lens 10 into N+1 beams, where N is a positive integer.
- A plate beam splitter is an optical element made by coating a thin film on the surface of a transparent optical flat glass plate; after light passes through the film, the element transmits part of the light and reflects the rest. It does not change the wavelength distribution of the light, but only the light intensity or polarization direction.
- The transmission-reflection ratio of a plate beam splitter is the intensity ratio of the transmitted light to the reflected light into which the splitter separates the incident light.
- Since the splitting principle of a plate beam splitter is similar to that of a beam-splitting prism, for the implementation in which N is 1, referring to FIG. 3, the beam-splitting prism 201 in FIG. 3 can simply be replaced by a plate beam splitter; for the implementation in which N is 2, referring to FIG. 4, the beam-splitting prism 201 in FIG. 4 can simply be replaced by a plate beam splitter. Details are not repeated here.
- Since the embodiments of the present application include multiple image sensors 40, in order to synchronize the image sensors 40, that is, to make them start and end exposure at the same time, as shown in FIG. 5, the camera provided by the embodiment of the present application may further include a synchronization clock 50. The multiple image sensors 40 are each electrically connected to the synchronization clock 50, and the image generation method provided by the embodiment of the present application may further include:
- the synchronization clock sends a clock synchronization signal to each image sensor.
- The dynamic range of a typical camera is 0-70 dB. The dynamic range of a camera is the range of brightness values, from the brightest to the darkest object whose details can be displayed normally, within one picture captured by the camera. Dynamic range has two units, a multiple and dB, related by the conversion formula 20·lg(multiple) = dB; for example, a multiple of 2 ≈ 6 dB.
- The larger the camera's dynamic range, the better objects that are too bright or too dark can both be displayed normally in the same picture. Therefore, to better solve the problem of image color distortion, the camera needs a wide dynamic range, where a wide dynamic range is 100-120 dB.
- For a camera containing one image sensor, the maximum dynamic range is the ratio of the exposure at the high-intensity light to the exposure at the weak-intensity light. Since the embodiments of the present application include multiple image sensors 40, the maximum dynamic range of the camera is the ratio of the exposures, on the image sensors, of the strongest and weakest beams after splitting, where exposure is the product of light intensity and exposure time.
- Since the maximum dynamic range of an ordinary camera is 70 dB while the dynamic range of real wide-dynamic scenes exceeds 100-120 dB, the camera's dynamic range needs to be increased by at least 30 dB for the camera to capture images of such scenes. By the conversion formula above, that is an increase of at least 32 times; that is, the exposure ratio on the image sensors between the strongest and weakest beams after splitting should be about 32:1.
- In this solution, the beam-splitting device first reduces (weakens) the light intensity, so that no overflow occurs when each image sensor 40 photoelectrically converts the parts where the light is strong, which increases the camera's dynamic range; further, reducing the exposure time shortens the photoelectric conversion time of the image sensor 40, further reducing overflow during photoelectric conversion of strong light, that is, further increasing the camera's dynamic range.
- In detail, the exposure corresponding to the light received by each image sensor 40 may be determined from the exposure time of that image sensor 40 and the intensity of the received light. When the intensity ratio of the light received by the image sensors 40 is fixed, the exposure ratio, that is, the exposure-time ratio, corresponding to the image sensors 40 can be determined from that intensity ratio.
- For example, the beam-splitting device separates the light into 3 beams with an intensity ratio of 8:1:1. To make the exposure ratio on the image sensors between the strongest and weakest beams after splitting reach 32:1, the exposure-time ratio of the image sensors 40 corresponding to the three beams can be set to 4:4:1. The exposure ratio of the image sensors 40 after splitting is then 32:4:1, so the exposure ratio between the strongest and weakest beams on the image sensors is 32:1; that is, the camera's dynamic range is increased by a ratio of 32:1. The camera's dynamic range is thus increased by 32 times, i.e., 30 dB.
- Thus, the problem of image color distortion is solved by increasing the camera's dynamic range. Compared with solving it through image processing, the embodiments of the present application do not need to detect the position of the distorted-color region in the image, for example the position of a signal light, which improves practicality.
- At present, an image sensor is generally exposed by rolling-shutter exposure, which is implemented by exposing the image sensor line by line: at the start of exposure, the image sensor scans and exposes line by line until all pixels have been exposed. Of course, all of this is completed in a very short time.
- Because rolling-shutter exposure proceeds line by line, different rows of pixels start exposing at different times, i.e., there is an order of exposure, so rolling-shutter exposure images moving objects poorly and causes moving-object deformation. To avoid this, in one implementation, each image sensor may use a global-shutter exposure mode.
- Global-shutter exposure is implemented by exposing all pixels of the image sensor at the same time: at the start of exposure, all pixels of the image sensor start collecting light, and at the end of exposure, all pixels stop collecting light. Since all pixels are exposed simultaneously, the start and end exposure times of different rows of pixels are the same, i.e., there is no exposure order, and deformation of moving objects is therefore avoided.
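The rolling-shutter skew can be illustrated with a toy model of a sensor imaging an object that moves horizontally while the rows are sampled. The row count, object speed, and per-row delay below are assumed values for illustration only.

```python
# Toy model: in which column does a moving object appear in each sensor row?
# Global shutter samples every row at t = 0; rolling shutter samples
# row k at t = k * row_delay, so the object's column drifts row by row.

def object_column(t, speed=2.0):
    """Assumed horizontal position of the moving object at time t."""
    return 10.0 + speed * t

def capture_rows(rows, row_delay):
    """row_delay = 0 models global shutter; > 0 models rolling shutter."""
    return [object_column(k * row_delay) for k in range(rows)]

global_img = capture_rows(4, row_delay=0.0)   # all rows agree: no skew
rolling_img = capture_rows(4, row_delay=1.0)  # columns drift: skewed object
print(global_img)   # [10.0, 10.0, 10.0, 10.0]
print(rolling_img)  # [10.0, 12.0, 14.0, 16.0]
```

In the rolling-shutter case a vertical edge of the object is recorded as a slanted line, which is exactly the moving-object deformation the global-shutter mode avoids.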
- To prevent the generated image from appearing reddish, in one implementation, as shown in FIG. 6, the camera provided by the embodiment of the present application may further include multiple infrared filters, the number of infrared filters being the same as the number of image sensors. Each infrared filter receives one beam separated by the beam-splitting device and filters out the infrared light in the received beam, wherein each infrared filter receives a different beam. The step of each image sensor receiving one beam separated by the beam-splitting device and converting the received light into an initial image may then include:
- each image sensor receives one beam emitted from one infrared filter and converts the received light into an initial image, wherein each image sensor receives a different beam.
- As shown by the arrows in FIG. 6, light from outside enters the beam-splitting device 20 through the lens 10. The beam-splitting device 20 separates the light emitted from the lens 10 into multiple beams and emits them, and the emitted beams respectively enter the multiple infrared filters 60. Each infrared filter 60 filters out the infrared light in the received beam and emits it, and the emitted beams respectively enter the multiple image sensors 40. Each image sensor 40 converts the received light into an initial image and sends it to the image processor 30, which fuses the initial images to generate a fused image. In this way, the infrared light is filtered out after the beam-splitting device separates the light and subsequent image generation then proceeds, which prevents the generated image from appearing reddish.
- It should be noted that the optical path in FIG. 6 is only a schematic representation of the ray path and does not represent the exact ray path.
- To prevent the generated image from appearing reddish, in another implementation, as shown in FIG. 7, the camera provided by the embodiment of the present application may further include an infrared filter that receives the light emitted from the lens and filters out the infrared light in the received light. The step of the beam-splitting device separating the light emitted from the lens into multiple beams may then include:
- the beam-splitting device receives the light emitted from the lens through the infrared filter and separates the received light into multiple beams.
- As shown by the arrows in FIG. 7, light from outside enters the infrared filter 60 through the lens 10; the infrared filter 60 receives the light emitted from the lens 10, filters out the infrared light in it, and emits it. The emitted light enters the beam-splitting device 20, which separates the light emitted from the infrared filter 60 into multiple beams and emits them. The emitted beams respectively enter the multiple image sensors 40, each image sensor 40 converts the received light into an initial image and sends it to the image processor 30, and the image processor 30 fuses the initial images to generate a fused image.
- In this way, the infrared light is filtered out before the beam-splitting device separates the light, and subsequent image generation then proceeds. Compared with filtering out the infrared light after the beam-splitting device separates the light, this reduces the number of infrared filters 60 and thus the cost of the camera.
- It should be noted that the optical path in FIG. 7 is only a schematic representation of the ray path and does not represent the exact ray path.
- In one implementation, the step of the image processor fusing the initial images to generate a fused image may include: for each initial image, determining a weight corresponding to each pixel in that initial image according to the pixel value of each pixel in that initial image; and then fusing, according to the weight and pixel value corresponding to each pixel, the pixel values of the pixels located at the same position in each initial image to generate the fused image.
- Since the camera of the present application includes multiple image sensors, each image sensor generates an initial image. To fuse the multiple initial images into one image, image registration of the initial images is performed first, correcting the slight pixel offsets between the image sensors caused by manufacturing or installation. After the correction, the initial images are fused by a fusion algorithm to generate the fused image.
- In detail, the manner of determining the weight corresponding to each pixel in an initial image according to its pixel value may be: for each pixel in the initial image, determining the target difference range within which the difference between that pixel's value and a preset pixel value falls, determining the weight corresponding to the target difference range according to a preset correspondence between difference ranges and weights, and taking the weight corresponding to the target difference range as the weight corresponding to that pixel.
- The manner of fusing the pixel values of the pixels located at the same position in each initial image to generate the fused image may be: weighting and fusing, according to the corresponding weights, the pixel values of the pixels located at the same position in each initial image to generate the fused image.
- For example, the pixels at the same position H in three initial images are pixels I, J, and K, respectively; the pixel value of pixel I is 100 with a corresponding weight of 0.1, the pixel value of pixel J is 10 with a corresponding weight of 0.3, and the pixel value of pixel K is 200 with a corresponding weight of 0.5. In this way, the fused image is generated by fusing the initial images.
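The weighted fusion described above can be sketched as follows. The patent does not spell out the exact combination rule, so this sketch assumes a normalized weighted average (weighted sum divided by the sum of weights) of the co-located pixels; the numbers are the I, J, K example from the text, and the function name is ours.

```python
def fuse_pixels(values, weights):
    """Fuse co-located pixel values from several initial images.

    Assumes a normalized weighted average; the text only specifies
    'weighting and fusing according to the corresponding weights'.
    """
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_weight

# Pixels I, J, K at the same position H in three initial images.
fused = fuse_pixels([100, 10, 200], [0.1, 0.3, 0.5])
print(round(fused, 2))  # 125.56
```

Repeating this per position over registered initial images yields the fused image; the weights derived from the difference ranges let well-exposed pixels dominate over clipped or underexposed ones.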
- To achieve the above objectives, an embodiment of the present application further provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, any of the image generation methods described above is implemented.
- To achieve the above objectives, an embodiment of the present application also discloses executable program code to be run to execute any of the image generation methods described above.
Abstract
An embodiment of the present application provides a camera and an image generation method. The camera includes a lens, a beam-splitting device, an image processor, and multiple image sensors. The multiple image sensors are each electrically connected to the image processor; at least one image sensor is configured to process light whose intensity is greater than a first light-intensity threshold, and at least one image sensor is configured to process light whose intensity is less than a second light-intensity threshold. The beam-splitting device is configured to separate the light emitted from the lens into multiple beams, wherein each image sensor is configured to receive one beam separated by the beam-splitting device and convert the received light into an initial image, each image sensor receiving a different beam; the image processor is configured to fuse the initial images to generate a fused image. In the present application, different image sensors image the multiple beams separated by the beam-splitting device, which expands the dynamic range of imaging and solves the problem of image color distortion.
Description
The present application claims priority to the Chinese patent application No. 201710970575.X, filed with the Chinese Patent Office on October 18, 2017 and entitled "Camera and image generation method", the entire contents of which are incorporated herein by reference.
The present application relates to the field of camera technology, and in particular to a camera and an image generation method.
As shown in FIG. 1, a current camera includes a lens 1, an image sensor 2, and an image processor 3, the image sensor 2 being electrically connected to the image processor 3.
When capturing an image with the above camera, external light enters the image sensor 2 through the lens 1; the image sensor 2 photoelectrically converts the external light to obtain an initial image and sends the initial image to the image processor 3, which processes the received initial image to generate a processed image.
If the light is unevenly distributed, with some objects too bright and some too dark, the imaging quality of the image sensor is poor. For example, if an overly bright object is imaged well in the initial image, an overly dark object is imaged poorly; conversely, if an overly dark object is imaged well, an overly bright object is imaged poorly. Therefore, when the light in the scene captured by the camera is too strong or too dark, the image is distorted. For example, when capturing an image of a vehicle running a red light, the brightness of the signal light is high, that is, too bright; to ensure the imaging quality of the other objects that are not too bright, the color of the signal light in the initial image obtained by the image sensor is distorted.
Summary
The purpose of the embodiments of the present application is to provide a camera and an image generation method to solve the problem of image color distortion. The specific technical solutions are as follows:
A camera includes a lens, a beam-splitting device, an image processor, and multiple image sensors, the multiple image sensors being each electrically connected to the image processor, wherein at least one image sensor is configured to process light whose intensity is greater than a first light-intensity threshold and at least one image sensor is configured to process light whose intensity is less than a second light-intensity threshold;
the beam-splitting device is configured to separate the light emitted from the lens into multiple beams, wherein the number of separated beams is the same as the number of image sensors;
each image sensor is configured to receive one beam separated by the beam-splitting device and convert the received light into an initial image, wherein each image sensor receives a different beam;
the image processor is configured to fuse the initial images to generate a fused image.
Optionally, the beam-splitting device includes N beam-splitting prisms;
the N beam-splitting prisms are configured to separate the light emitted from the lens into N+1 beams, where N is a positive integer.
Optionally, the N beam-splitting prisms include a first beam-splitting prism and a second beam-splitting prism;
the first beam-splitting prism is configured to separate the light emitted from the lens into two beams, wherein the intensity ratio of the two separated beams is the same as the transmission-reflection ratio of the first beam-splitting prism;
the second beam-splitting prism is configured to separate the weaker of the two beams separated by the first beam-splitting prism into two beams, wherein the intensity ratio of the two separated beams is the same as the transmission-reflection ratio of the second beam-splitting prism;
each image sensor is configured to receive either the beam separated by the first beam-splitting prism that is not further separated by the second beam-splitting prism, or one of the two beams separated by the second beam-splitting prism, and convert the received light into an initial image, wherein each image sensor receives a different beam.
Optionally, the camera further includes a synchronization clock, and the multiple image sensors are each electrically connected to the synchronization clock;
the synchronization clock is configured to send a clock synchronization signal to each image sensor.
Optionally, the exposure mode of each image sensor is global-shutter exposure.
Optionally, the camera further includes multiple infrared filters, the number of infrared filters being the same as the number of image sensors;
each infrared filter is configured to receive one beam separated by the beam-splitting device and filter out the infrared light in the received beam, wherein each infrared filter receives a different beam;
each image sensor is configured to receive one beam emitted from one infrared filter and convert the received light into an initial image, wherein each image sensor receives a different beam.
An image generation method is applied to a camera. The camera includes a lens, a beam-splitting device, an image processor, and multiple image sensors, the multiple image sensors being each electrically connected to the image processor, at least one image sensor being configured to process light whose intensity is greater than a first light-intensity threshold and at least one image sensor being configured to process light whose intensity is less than a second light-intensity threshold. The method includes:
the beam-splitting device separates the light emitted from the lens into multiple beams, wherein the number of separated beams is the same as the number of image sensors;
each image sensor receives one beam separated by the beam-splitting device and converts the received light into an initial image, wherein each image sensor receives a different beam;
the image processor fuses the initial images to generate a fused image.
Optionally, the beam-splitting device includes N beam-splitting prisms, and the step of the beam-splitting device separating the light emitted from the lens into multiple beams includes:
the N beam-splitting prisms separate the light emitted from the lens into N+1 beams, where N is a positive integer.
Optionally, the N beam-splitting prisms include a first beam-splitting prism and a second beam-splitting prism, and the step of the N beam-splitting prisms separating the light emitted from the lens into N+1 beams includes:
the first beam-splitting prism separates the light emitted from the lens into two beams, wherein the intensity ratio of the two separated beams is the same as the transmission-reflection ratio of the first beam-splitting prism;
the second beam-splitting prism separates the weaker of the two beams separated by the first beam-splitting prism into two beams, wherein the intensity ratio of the two separated beams is the same as the transmission-reflection ratio of the second beam-splitting prism;
the step of each image sensor receiving one beam separated by the beam-splitting device and converting the received light into an initial image includes:
each image sensor receives either the beam separated by the first beam-splitting prism that is not further separated by the second beam-splitting prism, or one of the two beams separated by the second beam-splitting prism, and converts the received light into an initial image, wherein each image sensor receives a different beam.
Optionally, the camera further includes a synchronization clock, the multiple image sensors are each electrically connected to the synchronization clock, and the method further includes:
the synchronization clock sends a clock synchronization signal to each image sensor.
Optionally, the exposure mode of each image sensor is global-shutter exposure.
Optionally, the camera further includes multiple infrared filters, the number of infrared filters being the same as the number of image sensors; each infrared filter receives one beam separated by the beam-splitting device and filters out the infrared light in the received beam, wherein each infrared filter receives a different beam;
the step of each image sensor receiving one beam separated by the beam-splitting device and converting the received light into an initial image includes:
each image sensor receives one beam emitted from the beam-splitting device through one infrared filter and converts the received light into an initial image, wherein each image sensor receives a different beam.
Optionally, the step of the image processor fusing the initial images to generate a fused image includes:
for each initial image, determining a weight corresponding to each pixel in that initial image according to the pixel value of each pixel in that initial image;
fusing, according to the weight and pixel value corresponding to each pixel, the pixel values of the pixels located at the same position in each initial image to generate the fused image.
To achieve the above objectives, an embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, any of the image generation methods described above is implemented.
To achieve the above objectives, an embodiment of the present application also discloses executable program code to be run to execute any of the image generation methods described above.
In order to more clearly illustrate the technical solutions of the embodiments of the present application and of the prior art, the drawings needed for the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a camera in the prior art;
FIG. 2 is a first schematic structural diagram of a camera provided by an embodiment of the present application;
FIG. 3 is a second schematic structural diagram of a camera provided by an embodiment of the present application;
FIG. 4 is a third schematic structural diagram of a camera provided by an embodiment of the present application;
FIG. 5 is a fourth schematic structural diagram of a camera provided by an embodiment of the present application;
FIG. 6 is a fifth schematic structural diagram of a camera provided by an embodiment of the present application;
FIG. 7 is a sixth schematic structural diagram of a camera provided by an embodiment of the present application;
FIG. 8 is a flowchart of an image generation method provided by an embodiment of the present application.
下面对本申请实施例提供的图像生成方法进行详细说明,如图8所示,本申请实施例提供的一种图像生成方法,应用于摄像机,摄像机包括:镜头、分光装置、图像处理器和多个图像传感器,多个图像传感器分别与图像处理器电连接,且至少一个图像传感器用于处理光强大于第一光强阈值的光线,至少一个图像传感器用于处理光强小于第二光强阈值的光线,该方法包括:
S101:分光装置将从镜头射出的光线分离为多束光线,其中,分离得到的光线束的数量与图像传感器的数量相同;
S102:每个图像传感器接收分光装置分离出的一束光线,并将接收到的光线转换为初始图像,其中,每个图像传感器接收不同的光线束;
S103:图像处理器将各个初始图像进行融合,生成融合图像。
如图2中的箭头所示,光线从外部通过镜头10射入分光装置20,分光装置20将从镜头10射出的光线分离为多束光线并射出,射出的多束光线分别射入多个图像传感器40,每个图像传感器40将接收到的光线转换为初始图像并发送至图像处理器40,图像处理器30将各个初始图像进行融合,生成融合图像。由于多个图像传感器40中,至少有一个用于处理高光强的光线,至少有一个处理弱光强的光线,因此,融合图像中过亮和过暗的物体均可正常显示。
需要说明的是,图2中的光路走向仅表示光线路径的示意图,并不代表精确的光线路径。
本申请实施例中,通过分光装置将从镜头射出的光线分为了多束光线,由于通过分光装置分离出了多束光线,使得每个图像传感器可以对分光装置分离出的一束光线进行成像处理生成初始图像,然后再将各个初始图像进行融合,生成融合图像。由于至少一个图像传感器用于处理光强大于第一光强阈值的光线(也就是高光强的光线),至少一个图像传感器用于处理光强小于第二光强阈值的光线(也就是弱光强的光线),由此,高光强的光线与弱光强的光线对应不同的图像传感器,对于一个传感器来说,不需要同时对高光强和弱光强的光线进行成像处理,而是采用不同的图像传感器分别对高光强和弱光强的光线进行成像处理,从而使得在融合图像中过亮和过暗的物体均可正常显示,相对于一个图像传感器成像的情况,扩大了成像的动态范围,解决了图像颜色失真的问题。
在一种实施方式中,分光装置可以包括N个分光棱镜,分光装置将从镜头射出的光线分离为多束光线,可以包括:
N个分光棱镜将从镜头射出的光线分离为N+1束光线,其中N为正整数。
在分光装置包括N个分光棱镜时,N个分光棱镜可以将从镜头射出的光线分离为N+1束光线。
分光棱镜是通过在直角棱镜的斜面进行镀制多层干涉膜,然后胶合成的一个立方体结构,在光线通过多层干涉膜后,可以对光线进行透射和反射的一个光学元件,不改变光线的波长分布,仅仅改变光强或偏振方向。
分光棱镜的透射反射比为光线经分光棱镜分离出的透射光和反射光的光强比。这样,分光棱镜便将镜头10射出的光线分离为光强不同的多束光线。
如图3所示,作为N为1时的一种实施方式,即分光装置20包括1个分光棱镜201,如图3中的箭头所示,从镜头10射出的光线射入分光棱镜201,分光棱镜201将从镜头10射出的光线分离为两束光线,两束光线分别射入两个图像传感器40,每个图像传感器40将接收到的光线转换为初始图像并发送至图像处理器30,图像处理器30将两个初始图像进行融合,生成融合图像。
由此,通过1个分光棱镜201将从镜头10射出的光线分离为2束光线,两个图像传感器40中的一个用于处理光强大于第一光强阈值的光线(也就是高光强的光线),另一个用于处理光强小于第二光强阈值的光线(也就是弱光 强的光线),高光强的光线与弱光强的光线对应不同的图像传感器,对于一个传感器来说,不需要同时对高光强和弱光强的光线进行成像处理,而是采用不同的图像传感器分别对高光强和弱光强的光线进行成像处理,从而使得在融合图像中过亮和过暗的物体均可正常显示,相对于一个图像传感器成像的情况,扩大了成像的动态范围,解决了图像颜色失真的问题。
如图4所示,作为N为2时的一种实施方式,即分光装置包括2个分光棱镜201,分别为第一分光棱镜201和第二分光棱镜201。
上述N个分光棱镜将从镜头射出的光线分离为N+1束光线,可以包括:
第一分光棱镜将从镜头射出的光线分离为两束光线,其中,分离出的两束光线的光强比与第一分光棱镜对应的透射反射比相同;
第二分光棱镜将第一分光棱镜分离出的两束光线中的最弱光强的光线分离为两束光线,其中,分离出的两束光线的光强比与第二分光棱镜对应的透射反射比相同;
每个图像传感器接收分光装置分离出的一束光线,并将接收到的光线转换为初始图像,可以包括:
每个图像传感器接收第一分光棱镜分离出的光线中未被第二分光棱镜分离的光线、以及第二分光棱镜分离出的两束光线中的一束光线,并将接收到的光线转换为初始图像,其中,每个图像传感器接收不同的光线束。
如图4中的箭头所示,从镜头10射出的光线射入第一分光棱镜201,第一分光棱镜201将从镜头10射出的光线分离为两束光线,两束光线中的最弱光强的光线射入第二分光棱镜201,第二分光棱镜201将接收到的最弱光强的光线分离为两束光线。
第一分光棱镜201分离出的光线中未被第二分光棱镜201分离的光线、以及第二分光棱镜201分离出的两束光线分别射入三个图像传感器40,每个图像传感器40将接收到的光线转换为初始图像并发送至图像处理器30,图像处理器30将三个初始图像进行融合,生成融合图像。
由此,通过2个分光棱镜201将从镜头10射出的光线分离为3束光线,三个图像传感器40中的至少一个用于处理光强大于第一光强阈值的光线(也就是高光强的光线),至少一个用于处理光强小于第二光强阈值的光线(也就 是弱光强的光线),高光强的光线与弱光强的光线对应不同的图像传感器,对于一个传感器来说,不需要同时对高光强和弱光强的光线进行成像处理,而是采用不同的图像传感器分别对高光强和弱光强的光线进行成像处理,从而使得在融合图像中过亮和过暗的物体均可正常显示,相对于一个图像传感器成像的情况,扩大了成像的动态范围,解决了图像颜色失真的问题。
In an implementation, the beam-splitting prism 201 may be a depolarizing beam-splitting prism, which gives the P polarization component and the S polarization component of the incident light similar splitting characteristics. That is, after splitting, the depolarizing beam-splitting prism preserves, as far as possible, the original proportion of horizontal and vertical polarization in the incident light, with the optical separation between the P and S polarization components being less than 5%.
The P and S polarization components are introduced as follows. When light passes through the surface of an optical element at a non-normal angle, both the reflection and transmission characteristics depend on polarization. In this case, the coordinate system used is defined by the plane containing the incident and reflected beams. If the polarization vector of the light lies in this plane, it is called the P polarization component; if the polarization vector is perpendicular to this plane, it is called the S polarization component.
In another implementation, the light-splitting device 20 may include N plate beam splitters, which are configured to separate the light emitted from the lens 10 into N+1 beams, where N is a positive integer.
A plate beam splitter is an optical element made by coating a thin film onto the surface of a transparent optical flat glass plate. After light passes through the film, the element both transmits and reflects the light; it does not change the wavelength distribution of the light, only its intensity or polarization direction.
The transmission-to-reflection ratio of a plate beam splitter is the intensity ratio of the transmitted light to the reflected light separated by the splitter.
Since the splitting principle of a plate beam splitter is similar to that of a beam-splitting prism, for an implementation where N is 1, refer to FIG. 3 and simply replace the beam-splitting prism 201 with a plate beam splitter; for an implementation where N is 2, refer to FIG. 4 and likewise replace the beam-splitting prisms 201 with plate beam splitters. Details are not repeated here.
Since the embodiments of the present application include multiple image sensors 40, the image sensors 40 need to be synchronized, that is, to start exposure at the same time and end exposure at the same time. As shown in FIG. 5, the camera provided by the embodiments of the present application may further include a synchronization clock 50, with the multiple image sensors 40 each electrically connected to the synchronization clock 50, and the image generation method provided by the embodiments of the present application may further include:
the synchronization clock sending a clock synchronization signal to each image sensor.
Thus, by sending clock synchronization signals, the clocks of the image sensors are synchronized, which guarantees the accuracy of each image sensor's clock and, in turn, the accuracy of the generated image.
The dynamic range of an ordinary camera is 0-70 dB, where the dynamic range of a camera is the interval spanned by the brightness values of the brightest and darkest objects whose details can be displayed normally within a single frame captured by the camera. Dynamic range has two units, a multiple and dB, related by the conversion formula 20·lg(multiple) = dB; for example, 2× ≈ 6 dB.
The larger the dynamic range of the camera, the better overly bright and overly dark objects can both be displayed normally in the same frame. Therefore, to better solve the problem of image color distortion, the camera needs a wide dynamic range, where a wide dynamic range is 100-120 dB.
For a camera containing one image sensor, the maximum dynamic range is the ratio of the exposure at the high-intensity light to the exposure at the low-intensity light. Since the embodiments of the present application contain multiple image sensors 40, the maximum dynamic range of the camera is the ratio of the exposures, on the image sensors, of the strongest and weakest beams after splitting, where exposure is the product of light intensity and exposure time.
Since the maximum dynamic range of an ordinary camera is 70 dB while the dynamic range of real wide-dynamic scenes exceeds 100-120 dB, for the camera to capture images of wide-dynamic-range scenes its dynamic range must be increased by at least 30 dB. By the conversion formula above, this is an increase of at least 32 times, that is, the ratio of the exposures on the image sensors of the strongest and weakest beams after splitting should be around 32:1.
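The multiple-to-dB conversion used above, 20·lg(multiple) = dB, can be sketched as follows; the function names are illustrative only:

```python
import math

def ratio_to_db(ratio):
    # dynamic range in dB from a brightness ratio: dB = 20 * log10(ratio)
    return 20 * math.log10(ratio)

def db_to_ratio(db):
    # inverse conversion: ratio = 10 ** (dB / 20)
    return 10 ** (db / 20)

print(round(ratio_to_db(32), 1))  # 30.1 -- a 32x ratio adds roughly 30 dB
print(round(db_to_ratio(70)))     # 3162 -- a 70 dB camera spans about a 3162x ratio
```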
In the present solution, the light-splitting device first reduces (weakens) the light intensity, so that no overflow occurs when each image sensor 40 photoelectrically converts the stronger portion of the light, which increases the dynamic range of the camera. Shortening the exposure time further shortens the photoelectric conversion time of the image sensor 40, further reducing overflow during photoelectric conversion of the stronger light and thus further increasing the dynamic range of the camera.
In detail, the exposure corresponding to the light received by each image sensor 40 can be determined from that sensor's exposure time and the intensity of the received light. When the intensity ratio of the light received by the image sensors 40 is fixed, the exposure ratio, that is, the exposure time ratio, of the image sensors 40 can be determined from the intensity ratio of the light they receive.
For example, suppose the light-splitting device separates out three beams with an intensity ratio of 8:1:1. To make the exposure ratio on the image sensors between the strongest and weakest separated beams reach 32:1, the exposure time ratio of the image sensors 40 corresponding to the three beams can be set to 4:4:1. The exposure ratio across the image sensors 40 after splitting is then 32:4:1, so the exposure ratio between the strongest and weakest separated beams on the image sensors is 32:1. In other words, the camera's dynamic range is increased by a factor of 32, that is, by 30 dB.
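The worked example above can be checked with a small sketch; the helper name and list-based representation are assumptions of this illustration, not the source's:

```python
def added_dynamic_range(intensities, exposure_times):
    """Exposure = light intensity x exposure time; the added dynamic range is
    the ratio of the strongest to the weakest exposure across the sensors."""
    exposures = [i * t for i, t in zip(intensities, exposure_times)]
    return max(exposures) / min(exposures)

# Intensity ratio 8:1:1 with exposure-time ratio 4:4:1 gives exposures 32:4:1,
# i.e. a 32:1 strongest-to-weakest ratio (about 30 dB of added dynamic range).
print(added_dynamic_range([8, 1, 1], [4, 4, 1]))  # 32.0
```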
Thus, the problem of image color distortion is solved by increasing the dynamic range of the camera. Compared with solving image color distortion through image processing, the embodiments of the present application do not need to detect the positions of color-distorted regions in the image (for example, detecting the position of a signal light), which improves practicality.
At present, image sensors generally use rolling-shutter exposure, in which the image sensor is exposed row by row: when exposure starts, the sensor scans and exposes row after row until all pixels have been exposed. All of this, of course, happens within an extremely short time.
Because rolling-shutter exposure proceeds row by row, different rows of pixels start exposing at different times, that is, there is an exposure order. Rolling-shutter exposure therefore images moving objects poorly and can deform them. To avoid this, in an implementation, each image sensor may use global-shutter exposure.
In global-shutter exposure, all pixels of the image sensor are exposed at the same time: when exposure starts, all pixels begin collecting light, and when exposure ends, all pixels stop collecting light. Since all pixels are exposed simultaneously, all rows start and end exposure at the same times and there is no exposure order, which avoids deformation of moving objects.
To prevent the generated image from appearing reddish, in an implementation, the camera provided by the embodiments of the present application may further include multiple infrared filters, the number of infrared filters being the same as the number of image sensors. Each infrared filter receives one beam separated by the light-splitting device and filters out the infrared light in the received beam, where each infrared filter receives a different beam. The step of each image sensor receiving one beam separated by the light-splitting device and converting the received light into an initial image may include:
each image sensor receiving one beam emitted from the light-splitting device through one infrared filter and converting the received light into an initial image, where each image sensor receives a different beam.
As shown by the arrows in FIG. 6, light from outside enters the light-splitting device 20 through the lens 10. The light-splitting device 20 separates the light emitted from the lens 10 into multiple beams, which respectively enter multiple infrared filters 60. Each infrared filter 60 filters out the infrared light in the beam it receives and emits the rest; the emitted beams respectively enter multiple image sensors 40, each image sensor 40 converts the light it receives into an initial image and sends it to the image processor 30, and the image processor 30 fuses the initial images to generate a fused image. Thus, infrared light is filtered out after the light-splitting device separates the light, followed by subsequent image generation, which prevents the generated image from appearing reddish.
It should be noted that the light paths in FIG. 6 are schematic only and do not represent exact light paths.
To prevent the generated image from appearing reddish, in another implementation, as shown in FIG. 7, the camera provided by the embodiments of the present application may further include an infrared filter that receives the light emitted from the lens and filters out the infrared light in it. The step of the light-splitting device separating the light emitted from the lens into multiple beams may include:
the light-splitting device receiving the light emitted from the lens through the infrared filter and separating the received light into multiple beams.
As shown by the arrows in FIG. 7, light from outside enters the infrared filter 60 through the lens 10. The infrared filter 60 receives the light emitted from the lens 10, filters out the infrared light, and emits the rest. The emitted light enters the light-splitting device 20, which separates the light emitted from the infrared filter 60 into multiple beams. The separated beams respectively enter multiple image sensors 40, each image sensor 40 converts the light it receives into an initial image and sends it to the image processor 30, and the image processor 30 fuses the initial images to generate a fused image.
Thus, infrared light is filtered out before the light-splitting device separates the light, followed by subsequent image generation. Compared with filtering out infrared light after the light-splitting device separates it, this reduces the number of infrared filters 60 and thus the cost of the camera.
It should be noted that the light paths in FIG. 7 are schematic only and do not represent exact light paths.
In an implementation, the step of the image processor fusing the initial images to generate a fused image may include:
for each initial image, determining, from the pixel value of each pixel in that initial image, the weight corresponding to each pixel in that initial image; and
fusing, according to each pixel's weight and pixel value, the pixel values of the co-located pixels in the initial images to generate the fused image.
Since the embodiments of the present application contain multiple image sensors and each image sensor generates one initial image, fusing the multiple initial images into one image first requires image registration of the initial images to correct slight pixel offsets between the image sensors caused by manufacturing or installation. After correction, the initial images are fused by a fusion algorithm to generate the fused image.
In detail, fusing the initial images by the fusion algorithm may be: for each initial image, determining, from the pixel value of each pixel in that initial image, the weight corresponding to each pixel; and fusing, according to each pixel's weight and pixel value, the pixel values of the co-located pixels in the initial images to generate the fused image.
The weight corresponding to each pixel in an initial image may be determined from its pixel value as follows: for each pixel in the initial image, determine the target difference range into which the difference between the pixel's value and a preset pixel value falls; determine, from the correspondence between preset difference ranges and weights, the weight corresponding to the target difference range; and take that weight as the pixel's weight.
Fusing the pixel values of co-located pixels according to each pixel's weight and pixel value may be: performing weighted fusion of the pixel values of the co-located pixels in the initial images according to their corresponding weights to generate the fused image.
For example, suppose the initial images are A, B, and C, and the pixels at the same position H in the three initial images are pixels I, J, and K. Pixel I has pixel value 100 and weight 0.1, pixel J has pixel value 10 and weight 0.3, and pixel K has pixel value 200 and weight 0.5.
The pixel value at position H in the fused image is then: (100×0.1 + 10×0.3 + 200×0.5) / (0.1 + 0.3 + 0.5) ≈ 126.
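The weighted fusion of co-located pixels can be sketched as follows; the function is an illustrative reading of the formula above, not the patent's implementation:

```python
def fuse_pixels(pixel_values, weights):
    """Weighted average of the pixel values at one position across the
    initial images, normalized by the sum of the weights."""
    total = sum(p * w for p, w in zip(pixel_values, weights))
    return round(total / sum(weights))

# Pixels I, J, K at position H: values 100, 10, 200 with weights 0.1, 0.3, 0.5
print(fuse_pixels([100, 10, 200], [0.1, 0.3, 0.5]))  # 126
```

Applied per position over registered initial images, this produces the fused image described in the text.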
Thus, the fused image is generated by fusing the initial images.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements any one of the image generation methods described above.
The embodiments of the present application further disclose executable program code to be run to perform any one of the image generation methods described above.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a related manner; for identical or similar parts between embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the computer-readable storage medium embodiment and the executable program code embodiment are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The above are merely preferred embodiments of the present application and are not intended to limit it. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within its scope of protection.
Claims (15)
- A camera, comprising: a lens, a light-splitting device, an image processor, and multiple image sensors, the multiple image sensors each electrically connected to the image processor, wherein at least one image sensor is configured to process light whose intensity is greater than a first intensity threshold, and at least one image sensor is configured to process light whose intensity is less than a second intensity threshold; the light-splitting device is configured to separate the light emitted from the lens into multiple beams, wherein the number of separated beams is the same as the number of image sensors; each image sensor is configured to receive one beam separated by the light-splitting device and convert the received light into an initial image, wherein each image sensor receives a different beam; and the image processor is configured to fuse the initial images to generate a fused image.
- The camera of claim 1, wherein the light-splitting device comprises N beam-splitting prisms configured to separate the light emitted from the lens into N+1 beams, where N is a positive integer.
- The camera of claim 2, wherein the N beam-splitting prisms comprise a first beam-splitting prism and a second beam-splitting prism; the first beam-splitting prism is configured to separate the light emitted from the lens into two beams, wherein the intensity ratio of the two separated beams equals the transmission-to-reflection ratio of the first beam-splitting prism; the second beam-splitting prism is configured to separate the weaker of the two beams separated by the first beam-splitting prism into two beams, wherein the intensity ratio of the two separated beams equals the transmission-to-reflection ratio of the second beam-splitting prism; and each image sensor is configured to receive either the beam separated by the first beam-splitting prism that is not further separated by the second beam-splitting prism, or one of the two beams separated by the second beam-splitting prism, and convert the received light into an initial image, wherein each image sensor receives a different beam.
- The camera of claim 1, further comprising a synchronization clock to which the multiple image sensors are each electrically connected, the synchronization clock being configured to send a clock synchronization signal to each image sensor.
- The camera of any one of claims 1-4, wherein each image sensor uses global-shutter exposure.
- The camera of claim 1, further comprising multiple infrared filters, the number of infrared filters being the same as the number of image sensors; each infrared filter is configured to receive one beam separated by the light-splitting device and filter out the infrared light in the received beam, wherein each infrared filter receives a different beam; and each image sensor is configured to receive one beam emitted from one infrared filter and convert the received light into an initial image, wherein each image sensor receives a different beam.
- An image generation method, applied to a camera comprising a lens, a light-splitting device, an image processor, and multiple image sensors, the multiple image sensors each electrically connected to the image processor, wherein at least one image sensor is configured to process light whose intensity is greater than a first intensity threshold, and at least one image sensor is configured to process light whose intensity is less than a second intensity threshold, the method comprising: the light-splitting device separating the light emitted from the lens into multiple beams, wherein the number of separated beams is the same as the number of image sensors; each image sensor receiving one beam separated by the light-splitting device and converting the received light into an initial image, wherein each image sensor receives a different beam; and the image processor fusing the initial images to generate a fused image.
- The method of claim 7, wherein the light-splitting device comprises N beam-splitting prisms, and the step of the light-splitting device separating the light emitted from the lens into multiple beams comprises: the N beam-splitting prisms separating the light emitted from the lens into N+1 beams, where N is a positive integer.
- The method of claim 8, wherein the N beam-splitting prisms comprise a first beam-splitting prism and a second beam-splitting prism, and the step of the N beam-splitting prisms separating the light emitted from the lens into N+1 beams comprises: the first beam-splitting prism separating the light emitted from the lens into two beams, wherein the intensity ratio of the two separated beams equals the transmission-to-reflection ratio of the first beam-splitting prism; and the second beam-splitting prism separating the weaker of the two beams separated by the first beam-splitting prism into two beams, wherein the intensity ratio of the two separated beams equals the transmission-to-reflection ratio of the second beam-splitting prism; and the step of each image sensor receiving one beam separated by the light-splitting device and converting the received light into an initial image comprises: each image sensor receiving either the beam separated by the first beam-splitting prism that is not further separated by the second beam-splitting prism, or one of the two beams separated by the second beam-splitting prism, and converting the received light into an initial image, wherein each image sensor receives a different beam.
- The method of claim 7, wherein the camera further comprises a synchronization clock to which the multiple image sensors are each electrically connected, and the method further comprises: the synchronization clock sending a clock synchronization signal to each image sensor.
- The method of any one of claims 7-10, wherein each image sensor uses global-shutter exposure.
- The method of claim 7, wherein the camera further comprises multiple infrared filters, the number of infrared filters being the same as the number of image sensors, each infrared filter receiving one beam separated by the light-splitting device and filtering out the infrared light in the received beam, wherein each infrared filter receives a different beam; and the step of each image sensor receiving one beam separated by the light-splitting device and converting the received light into an initial image comprises: each image sensor receiving one beam emitted from the light-splitting device through one infrared filter and converting the received light into an initial image, wherein each image sensor receives a different beam.
- The method of claim 7, wherein the step of the image processor fusing the initial images to generate a fused image comprises: for each initial image, determining, from the pixel value of each pixel in that initial image, the weight corresponding to each pixel in that initial image; and fusing, according to each pixel's weight and pixel value, the pixel values of the co-located pixels in the initial images to generate the fused image.
- A computer-readable storage medium storing a computer program which, when executed by a processor, implements the image generation method of any one of claims 7-13.
- Executable program code to be run to perform the image generation method of any one of claims 7-13.
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710970575.XA CN109688317A (zh) | 2017-10-18 | 2017-10-18 | Camera and image generation method
CN201710970575.X | 2017-10-18 | |
Publications (1)

Publication Number | Publication Date
---|---
WO2019076150A1 (zh) | 2019-04-25
Family

ID=66173985

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2018/103788 WO2019076150A1 (zh) | Camera and image generation method | 2017-10-18 | 2018-09-03
Country Status (2)

Country | Link
---|---
CN | CN109688317A (zh)
WO | WO2019076150A1 (zh)
Cited By (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN118169131A (zh) * | 2024-05-11 | 2024-06-11 | 合肥埃科光电科技股份有限公司 | Multi-frequency-multiplication camera design method, system and medium
Families Citing this family (7)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN110661960B (zh) * | 2019-10-30 | 2022-01-25 | Oppo广东移动通信有限公司 | Camera module and electronic device
CN110913101A (zh) * | 2019-11-14 | 2020-03-24 | 维沃移动通信有限公司 | Photographing apparatus and electronic device
CN111045218B (zh) * | 2019-12-31 | 2022-02-22 | 上海禾赛科技有限公司 | Photosensitive element
EP4131916A4 (en) | 2020-04-29 | 2023-05-17 | Huawei Technologies Co., Ltd. | Camera and image acquisition method
CN111726493A (zh) * | 2020-06-17 | 2020-09-29 | Oppo广东移动通信有限公司 | Camera module and terminal device
CN112702537B (zh) * | 2020-12-25 | 2022-06-28 | 上海科技大学 | High-dynamic-range ambient light acquisition system based on albedo difference
CN113132597B (zh) * | 2021-04-01 | 2023-04-07 | Oppo广东移动通信有限公司 | Image acquisition system and terminal
Citations (5)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US5801773A (en) * | 1993-10-29 | 1998-09-01 | Canon Kabushiki Kaisha | Image data processing apparatus for processing combined image signals in order to extend dynamic range
CN1874499A (zh) * | 2006-05-12 | 2006-12-06 | 北京理工大学 | High-dynamic-range, super-resolution image reconstruction apparatus
CN102857681A (zh) * | 2012-09-13 | 2013-01-02 | 侯大威 | Method for capturing images and improving image quality via a half mirror
CN103888689A (zh) * | 2014-03-13 | 2014-06-25 | 北京智谷睿拓技术服务有限公司 | Image acquisition method and image acquisition apparatus
US20170289424A1 (en) * | 2016-04-04 | 2017-10-05 | Illinois Tool Works Inc. | Dynamic range enhancement systems and methods for use in welding applications
Family Cites Families (5)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN101631202A (zh) * | 2008-07-16 | 2010-01-20 | 肖长诗 | Method for acquiring ultra-wide dynamic range images
KR101441589B1 (ko) * | 2008-10-07 | 2014-09-29 | 삼성전자 주식회사 | Apparatus for optically fusing a visible-light image and a far-infrared image
EP2630802B1 (en) * | 2010-10-22 | 2019-12-04 | University of New Brunswick | Camera imaging systems and methods
CN103533236B (zh) * | 2013-09-27 | 2016-10-26 | 中国工程物理研究院流体物理研究所 | Ultra-high-speed digital photography apparatus and beam splitter for multi-frame schlieren photography
CN107197168A (zh) * | 2017-06-01 | 2017-09-22 | 松下电器(中国)有限公司苏州系统网络研究开发分公司 | Image acquisition method and image acquisition system applying the same
Also Published As

Publication number | Publication date
---|---
CN109688317A (zh) | 2019-04-26
Similar Documents

Publication | Title
---|---
WO2019076150A1 (zh) | Camera and image generation method
US9832436B1 (en) | Image projection system and image projection method
CN111198445B (zh) | Device and method for spectral polarization imaging
US20200045211A1 (en) | Camera lens and camera
US8823863B2 (en) | Image capturing apparatus and control method therefor
JP2006005608A (ja) | Imaging apparatus
CN107995396B (zh) | Dual-camera module and terminal
WO2019047620A1 (zh) | Imaging device and imaging method
US20190058837A1 (en) | System for capturing scene and nir relighting effects in movie postproduction transmission
WO2011095026A1 (zh) | Image capturing method and system
JP2018023077A (ja) | Video camera imaging apparatus
JP6336337B2 (ja) | Imaging apparatus, control method therefor, program, and storage medium
US20140354875A1 (en) | Image capturing apparatus and control method therefor
WO2018176534A1 (zh) | Photometric stereo 3D reconstruction method and beam-splitting photometric stereo camera
JP2005229317A (ja) | Image display system and imaging apparatus
EP3651444A2 (en) | Image processing apparatus and image processing method that reduce image noise, and storage medium
RU2447511C1 (ru) | "Day-night" security television system
JPS5937777A (ja) | Electronic imaging apparatus
WO2018235709A1 (ja) | Distance-measuring camera and distance-measuring method
JP2018182470A (ja) | Stereoscopic imaging apparatus adopting wavelength-selective polarization separation
JP2013102362A (ja) | Optical apparatus, image processing method, and program
JP5380825B2 (ja) | Projection optical apparatus with imaging function
JP2010098586A (ja) | Camera system with image processing function
JP2024075194A (ja) | Imaging processing apparatus
JP2017126954A (ja) | Image processing apparatus, image processing method, and imaging apparatus
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18868559; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18868559; Country of ref document: EP; Kind code of ref document: A1
Ref document number: 18868559 Country of ref document: EP Kind code of ref document: A1 |