WO2021093712A1 - Image processing method and related products - Google Patents
Image processing method and related products
- Publication number: WO2021093712A1 (application PCT/CN2020/127608)
- Authority: WIPO (PCT)
- Prior art keywords: light image, image, visible light, infrared, infrared light
- Prior art date
Classifications
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/70: Denoising; Smoothing
- G06T5/73: Deblurring; Sharpening
- H04N23/60: Control of cameras or camera modules
- H04N23/70: Circuitry for compensating brightness variation in the scene
- H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
- H04N23/951: Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
- H04N5/265: Mixing
- G06T2207/10024: Color image
- G06T2207/10048: Infrared image
- G06T2207/20212: Image combination
- G06T2207/20221: Image fusion; Image merging
Definitions
- the embodiments of the present application relate to the field of image processing technology, and in particular, to an image processing method and related products.
- in the related art, the light passing through the lens is split by the optical imaging system according to wavelength band and proportion, and the resulting spectral components are imaged separately to obtain a visible light image and an infrared light image.
- the visible light image is a color image, and the infrared light image is a grayscale image. Then, through a preset fusion algorithm, the visible light image and the infrared light image are fused to obtain the target image.
- since the infrared light image is a grayscale image, the color components of the target image come from the visible light image.
- under low illumination, however, the clarity of the visible light image is severely degraded, resulting in poor color in the fused target image.
- the embodiments of the present application provide an image processing method and related products, which can improve the photosensitive ability of an imaging device under low illumination and improve image quality.
- an image processing method including:
- the above-mentioned image processing method can be applied to a device with imaging function, and the exposure time corresponding to the visible light image and the exposure time corresponding to the infrared light image can be respectively set before or after the device leaves the factory.
- the visible light image and the infrared light image are fused so that the fused image has better color and the quality of the captured image is improved.
- a frequency-dividing prism and two image sensors are arranged behind the lens of the camera device; after passing through the lens, the ambient light is divided by the frequency-dividing prism into visible light and near-infrared (NIR) light. The visible light is received by one image sensor to generate the visible light image, and the infrared light is received by the other image sensor to generate the infrared light image.
- the size of the target surface of the image sensor corresponding to the visible light image is larger than the size of the target surface of the image sensor corresponding to the infrared light image.
- the size of the target surface characterizes the size of the photosensitive area of the image sensor; the larger the target surface, the greater the light flux of the corresponding image sensor. The target surface of the image sensor corresponding to the visible light image can be set larger than that of the image sensor corresponding to the infrared light image before the imaging device leaves the factory, so that the image sensor corresponding to the visible light image has stronger light sensitivity.
- the acquiring a visible light image of the current scene through a camera includes:
- the visible light image of the current scene is acquired by adopting the binning mode.
- the binning mode (pixel combining mode) can combine two or more adjacent pixels of the same color into one pixel. Since the color effect of visible light images taken in low-light environments will be affected, the image sensor uses binning mode to enhance the colors in the visible light image.
- the binning mode includes: 2x2 binning, 3x3 binning, and 4x4 binning.
- different binning modes can be selected according to the environmental conditions frequently used by the imaging device. In theory, the greater the number of combined pixel units, the stronger the adaptability to low illumination.
- the visible light image includes three RGB channel images, and the binning modes used for the three RGB channels of the visible light image differ.
- that is, the binning modes of the RGB channels corresponding to the visible light image can be different.
- the number of pixel units combined in the binning mode adopted by the B-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image, and/or
- the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the G-channel image of the visible light image.
- the binning mode of the corresponding color channel can be set according to the sensitivity of the color to the light intensity, thereby improving the color effect of the visible light image.
- the fusion processing performed on the visible light image and the infrared light image to obtain a fused image includes:
- the visible light image can be deblurred by a deblurring algorithm to obtain a deblurred visible light image, and finally the deblurred visible light image and the infrared image are fused to obtain a fused image.
- Deblurring the visible light image can effectively improve the quality of the fused image.
- This embodiment does not limit the specific algorithm for deblurring. For example, a binarization denoising method can be used to deblur the visible light image, or professional deblurring software can be used for deblurring to eliminate smear caused by long exposure.
- the fusion processing performed on the visible light image and the infrared light image to obtain a fused image includes:
- the visible light image acquired by the long exposure may sacrifice the image resolution, so that the resolution of the visible light image is lower than that of the infrared light image. Therefore, before image fusion, the visible light image needs to be up-sampled to obtain a visible light image with the same resolution as the infrared light image.
- the embodiment of the present application does not limit the specific algorithm of upsampling.
- the up-sampling process performed on the visible light image to obtain a visible light image with the same resolution as the infrared light image includes:
- interpolation processing is performed on the visible light image to obtain a visible light image with the same resolution as the infrared light image.
- an interpolation method may be used to make the resolution of the visible light image consistent with the resolution of the infrared light image.
- methods such as neighbor interpolation, linear interpolation, mean interpolation, median interpolation, etc. may be used, and this embodiment does not limit the specific interpolation algorithm.
- performing fusion processing on the visible light image and the infrared image to obtain a fused image includes:
- the frame rate of the visible light image is lower than the frame rate of the infrared light image.
- the number of visible light images and infrared light images collected in the same time period differs. Therefore, it is necessary to perform frame interpolation on at least two of the visible light images to obtain visible light images with the same frame rate as the infrared light images, ensuring that the number of fused images equals the number of infrared light images.
- performing fusion processing on the visible light image and the infrared image to obtain a fused image includes:
- the high-frequency information of the infrared light image includes texture information
- the low-frequency information of the visible light image and the high-frequency information of the infrared light image are fused to obtain the fused image.
- the visible light image and the infrared light image are respectively subjected to low-pass filtering processing to eliminate high frequency components and noise, and low frequency information corresponding to the visible light image and low frequency information corresponding to the infrared light image are obtained. Then, according to the low frequency information corresponding to the infrared light image and the infrared light image, the texture information of the infrared light image is obtained. Since the texture information of the infrared light image is richer than that of the visible light image, during fusion, the texture information of the filtered infrared light image is fused to the filtered visible light image to obtain the fused image. Thereby, the quality of the fused image can be improved, and it has richer texture information.
- an image processing method including:
- the above-mentioned image processing method can be applied to a device with imaging function, and the resolution corresponding to the visible light image and the resolution corresponding to the infrared light image can be respectively set before or after the device leaves the factory.
- the visible light image and the infrared light image are fused so that the fused image has better color and the quality of the captured image is improved.
- a frequency-dividing prism and two image sensors are arranged behind the lens of the camera device; after passing through the lens, the ambient light is divided by the frequency-dividing prism into visible light and near-infrared (NIR) light. The visible light is received by one image sensor to generate the visible light image, and the infrared light is received by the other image sensor to generate the infrared light image.
- the size of the target surface of the image sensor corresponding to the visible light image is larger than the size of the target surface of the image sensor corresponding to the infrared light image.
- the size of the target surface characterizes the size of the photosensitive area of the image sensor; the larger the target surface, the greater the light flux of the corresponding image sensor. The target surface of the image sensor corresponding to the visible light image can be set larger than that of the image sensor corresponding to the infrared light image before the imaging device leaves the factory, so that the image sensor corresponding to the visible light image has stronger light sensitivity.
- the acquiring a visible light image of the current scene through a camera includes:
- the visible light image of the current scene is acquired by adopting the binning mode.
- the binning mode can combine two or more adjacent pixels of the same color into one pixel. Since the color effect of visible light images taken in low-light environments will be affected, the image sensor uses binning mode to enhance the colors in the visible light image.
- the binning mode includes: 2x2 binning, 3x3 binning, and 4x4 binning.
- different binning modes can be selected according to the environmental conditions frequently used by the imaging device. In theory, the greater the number of combined pixel units, the stronger the adaptability to low illumination.
- the visible light image includes three RGB channel images, and the binning modes used for the three RGB channels of the visible light image differ.
- that is, the binning modes of the RGB channels corresponding to the visible light image can be different.
- the number of pixel units combined in the binning mode adopted by the B-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image, and/or
- the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the G-channel image of the visible light image.
- the binning mode of the corresponding color channel can be set according to the sensitivity of the color to the light intensity, thereby improving the color effect of the visible light image.
- the fusion processing performed on the visible light image and the infrared light image to obtain a fused image includes:
- a smear is introduced in the visible light image, which makes the visible light image blurry. Therefore, the visible light image can be deblurred by a deblurring algorithm to obtain a deblurred visible light image, and finally the deblurred visible light image and the infrared image are fused to obtain a fused image. Deblurring the visible light image can effectively improve the quality of the fused image.
- This embodiment does not limit the specific algorithm for deblurring. For example, a binarization denoising method can be used to deblur the visible light image, or professional deblurring software can be used for deblurring to eliminate smear caused by long exposure.
- the fusion processing performed on the visible light image and the infrared light image to obtain a fused image includes:
- the resolution of the visible light image is lower than the resolution of the infrared light image. Therefore, before image fusion, the visible light image needs to be up-sampled to obtain a visible light image with the same resolution as the infrared light image.
- the embodiment of this application does not limit the specific algorithm of upsampling.
- the up-sampling process performed on the visible light image to obtain a visible light image with the same resolution as the infrared light image includes:
- interpolation processing is performed on the visible light image to obtain a visible light image with the same resolution as the infrared light image.
- an interpolation method may be used to make the resolution of the visible light image consistent with the resolution of the infrared light image.
- methods such as neighbor interpolation, linear interpolation, mean interpolation, median interpolation, etc. may be used, and this embodiment does not limit the specific interpolation algorithm.
- performing fusion processing on the visible light image and the infrared image to obtain a fused image includes:
- the high-frequency information of the infrared light image includes texture information
- the low-frequency information of the visible light image and the high-frequency information of the infrared light image are fused to obtain the fused image.
- the visible light image and the infrared light image are respectively subjected to low-pass filtering processing to eliminate high frequency components and noise, and low frequency information corresponding to the visible light image and low frequency information corresponding to the infrared light image are obtained. Then, according to the low frequency information corresponding to the infrared light image and the infrared light image, the texture information of the infrared light image is obtained. Since the texture information of the infrared light image is richer than that of the visible light image, during fusion, the texture information of the filtered infrared light image is fused to the filtered visible light image to obtain the fused image. Thereby, the quality of the fused image can be improved, so that it has richer texture information.
- an image processing device including:
- the camera module is used to obtain the visible light image and the infrared light image of the current scene; wherein the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image;
- the processing module is used to perform fusion processing on the visible light image and the infrared image to obtain a fused image.
- the size of the target surface of the image sensor corresponding to the visible light image is larger than the size of the target surface of the image sensor corresponding to the infrared light image.
- the camera module is specifically used for:
- the visible light image of the current scene is acquired by adopting the binning mode.
- the binning mode includes: 2x2 binning, 3x3 binning, and 4x4 binning.
- the visible light image includes three RGB channel images, and the binning modes used for the three RGB channels of the visible light image differ.
- the number of pixel units combined in the binning mode adopted by the B-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image, and/or
- the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the G-channel image of the visible light image.
- the processing module is specifically configured to:
- the processing module is specifically configured to:
- the processing module is specifically configured to:
- interpolation processing is performed on the visible light image to obtain a visible light image with the same resolution as the infrared light image.
- the processing module is specifically configured to:
- the processing module is specifically configured to:
- the high-frequency information of the infrared light image includes texture information
- the low-frequency information of the visible light image and the high-frequency information of the infrared light image are fused to obtain the fused image.
- an image processing device including:
- a camera module for acquiring visible light images and infrared light images of the current scene; wherein the resolution of the visible light image is lower than the resolution of the infrared light image;
- the processing module is used to perform fusion processing on the visible light image and the infrared image to obtain a fused image.
- the size of the target surface of the image sensor corresponding to the visible light image is larger than the size of the target surface of the image sensor corresponding to the infrared light image.
- the camera module is specifically used for:
- the visible light image of the current scene is acquired by adopting the binning mode.
- the binning mode includes: 2x2 binning, 3x3 binning, and 4x4 binning.
- the visible light image includes three RGB channel images, and the binning modes used for the three RGB channels of the visible light image differ.
- the number of pixel units combined in the binning mode adopted by the B-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image, and/or
- the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the G-channel image of the visible light image.
- the processing module is specifically configured to:
- the processing module is specifically configured to:
- the processing module is specifically configured to:
- interpolation processing is performed on the visible light image to obtain a visible light image with the same resolution as the infrared light image.
- the processing module is specifically configured to:
- the high-frequency information of the infrared light image includes texture information
- the low-frequency information of the visible light image and the high-frequency information of the infrared light image are fused to obtain the fused image.
- an embodiment of the present application provides an imaging device, including a camera, a processor, and a memory; the camera is used to collect visible light images and infrared light images, the memory is used to store program instructions, and the processor is used to call the program instructions in the memory to execute the image processing method described in the first aspect or any possible implementation of the first aspect.
- an embodiment of the present application provides an imaging device, including a camera, a processor, and a memory; the camera is used to collect visible light images and infrared light images, the memory is used to store program instructions, and the processor is used to call the program instructions in the memory to execute the image processing method described in the second aspect or any possible implementation of the second aspect.
- an embodiment of the present application provides a readable storage medium on which a computer program is stored; when the computer program is executed, the image processing method described in the first aspect of the embodiments of the present application is implemented.
- an embodiment of the present application provides a readable storage medium on which a computer program is stored; when the computer program is executed, the image processing method described in the second aspect of the embodiments of the present application is implemented.
- an embodiment of the present application provides a program product including a computer program stored in a readable storage medium; at least one processor of an image processing apparatus can read the computer program from the readable storage medium, and execution of the computer program by the at least one processor enables the image processing apparatus to implement the image processing method according to the first aspect or any possible implementation of the first aspect.
- the storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- an embodiment of the present application provides a program product including a computer program stored in a readable storage medium; at least one processor of an image processing apparatus can read the computer program from the readable storage medium, and execution of the computer program by the at least one processor causes the image processing apparatus to implement the image processing method according to the second aspect or any possible implementation of the second aspect.
- the storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- the visible light image and the infrared light image of the current scene are acquired through the camera with different exposure times, and the visible light image and the infrared light image are fused to obtain the fused image. By setting different exposure durations, the sensitivity of the imaging device under low illumination can be increased, so that the fused image has better color and the quality of the captured image is improved.
- FIG. 1 is a schematic structural diagram of an imaging device provided by an embodiment of the application.
- FIG. 2 is a first flowchart of an image processing method provided by an embodiment of this application.
- FIG. 3 is a schematic diagram of the principle of pixel binning in a 2x2 binning array.
- FIG. 4 is a second flowchart of an image processing method provided by an embodiment of the application.
- FIG. 5 is a third flowchart of an image processing method provided by an embodiment of the application.
- FIG. 6 is a fourth flowchart of an image processing method provided by an embodiment of the application.
- FIG. 7 is a schematic diagram of the principle of frame interpolation for visible light images.
- FIG. 8 is a fifth flowchart of an image processing method provided by an embodiment of this application.
- FIG. 9 is a sixth flowchart of an image processing method provided by an embodiment of this application.
- FIG. 10 is a seventh flowchart of an image processing method provided by an embodiment of this application.
- FIG. 11 is a first structural diagram of an image processing device provided by an embodiment of the application.
- FIG. 12 is a second structural diagram of an image processing device provided by an embodiment of this application.
- FIG. 13 is a schematic structural diagram of a monitoring device provided by an embodiment of the application.
- FIG. 14 is a block diagram of a part of the structure of an imaging device provided by an embodiment of the present application.
- FIG. 1 is a schematic structural diagram of an imaging device provided by an embodiment of the application.
- the imaging device 100 includes an infrared lens 110, a frequency-dividing prism 120, an RGB sensor 130, an NIR sensor 140, a processor 150, and a display screen 160.
- the ambient light passes through the frequency dividing prism 120 and is divided into visible light and infrared light; among them, the visible light is received by the RGB sensor 130 and generates a visible light image; the infrared light is received by the NIR sensor 140 and generates an infrared light image.
- the processor 150 performs fusion processing on the generated visible light image and infrared light image, and displays the fused image on the display screen 160.
- the exposure time length of the visible light image is different from the exposure time length of the infrared light image.
- the exposure time of the infrared light image can be set to a standard exposure time, and the exposure time of the visible light image can be prolonged, so that the visible light image can acquire more color information to be suitable for a low-light environment.
- the processor 150 performs restoration processing on the visible light image to obtain a restored visible light image, and then fuses the restored visible light image with the infrared light image to obtain a fused image. This method can increase the sensitivity of the imaging device under low illumination, so that the fused image has better color and the quality of the captured image is improved.
- Infrared lens refers to a lens that can simultaneously receive visible light and infrared light in ambient light.
- the existing monitoring equipment is generally equipped with an infrared lens.
- Frequency-dividing prism: a prism that can separate infrared light (wavelength > 700 nm) from visible light (wavelength 400 nm to 700 nm).
- RGB sensor: also known as a color sensor; a sensor that can distinguish the red (R), green (G), and blue (B) components and convert visible light into an image close to what the human eye perceives.
- NIR sensor: a near-infrared (NIR) sensor, used to convert infrared light into a grayscale image.
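- as a non-limiting illustration of the FIG. 1 dataflow, the following Python sketch shows one capture-and-fuse iteration; the four callables (read_rgb_sensor, read_nir_sensor, fuse, display) are hypothetical placeholders for the RGB sensor 130, the NIR sensor 140, the fusion step in the processor 150, and the display screen 160, none of which the publication specifies as code.

```python
def capture_iteration(read_rgb_sensor, read_nir_sensor, fuse, display) -> None:
    """One pass through the FIG. 1 pipeline (sketch; all callables are stand-ins)."""
    visible = read_rgb_sensor()      # color image: long exposure and/or binning (RGB sensor 130)
    infrared = read_nir_sensor()     # grayscale image: standard exposure (NIR sensor 140)
    fused = fuse(visible, infrared)  # e.g., low-frequency/high-frequency fusion (processor 150)
    display(fused)                   # render the fused image (display screen 160)
```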
- FIG. 2 is a first flowchart of an image processing method provided by an embodiment of the application. Referring to FIG. 2, the method in this embodiment includes:
- Step S101: Obtain the visible light image and the infrared light image of the current scene through the camera, where the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image.
- Step S102: Perform fusion processing on the visible light image and the infrared light image to obtain a fused image.
- the photosensitive ability under low illumination is increased.
- the method in this embodiment is suitable for dark scenes, such as night environments, rainy days, and dim indoor environments, and so on.
- the frequency-dividing prism camera includes an infrared lens, a frequency-dividing prism, an RGB sensor, and an NIR sensor. After entering the infrared lens, the ambient light passes through the frequency-dividing prism and is divided into visible light and infrared light. The visible light is received by the RGB sensor to generate a visible light image, and the infrared light is received by the NIR sensor to generate an infrared light image. The exposure time corresponding to the visible light image and the exposure time corresponding to the infrared light image are set before or after the frequency-dividing prism camera leaves the factory.
- the exposure time length of the visible light image can be set to 80 ms; the exposure time length of the infrared light image is 10 ms.
- the exposure of both the visible light image and the infrared light image is 10 ms.
- the exposure time of the visible light image can be increased to 80 ms, so that the quality of the fused image can be significantly improved, and it is suitable for low-light scenes.
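- to see why the extended exposure also lowers the visible-light acquisition frame rate (as discussed with FIG. 6 below), note that the frame period cannot be shorter than the exposure time, so an 80 ms exposure caps acquisition at 12.5 fps while a 10 ms exposure still allows 25 fps or more. A quick arithmetic sketch:

```python
def max_frame_rate_fps(exposure_ms: float) -> float:
    """Upper bound on the acquisition frame rate: at most one exposure per frame period."""
    return 1000.0 / exposure_ms

print(max_frame_rate_fps(10.0))  # infrared light image: up to 100 fps, so 25 fps is easily met
print(max_frame_rate_fps(80.0))  # visible light image: at most 12.5 fps
```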
- this embodiment does not limit the specific exposure time of the visible light image and the infrared light image.
- the exposure time of the infrared light image is set with reference to the standard of not causing motion blur.
- the exposure time of the visible light image can be flexibly adjusted according to the actual ambient illuminance.
- the size of the target surface of the image sensor corresponding to the visible light image is larger than the size of the target surface of the image sensor corresponding to the infrared light image.
- the size of the target surface characterizes the size of the photosensitive area of the image sensor; the larger the target surface, the greater the light flux of the corresponding image sensor. The target surface of the image sensor corresponding to the visible light image can be set larger than that of the image sensor corresponding to the infrared light image before the imaging device leaves the factory, so that the image sensor corresponding to the visible light image has stronger light sensitivity.
- the resolution of the image sensor corresponding to the visible light image may be set to be smaller than the resolution of the image sensor corresponding to the infrared light image.
- the resolution of the RGB sensor can be set to be lower than that of the NIR sensor before the camera leaves the factory, so that the visible light image collected by the RGB sensor can be applied to an environment with lower illuminance.
- the binning mode (also referred to as the pixel binning mode) may be adopted to obtain the visible light image of the current scene.
- the infrared image is not collected in binning mode.
- the binning mode refers to combining multiple pixels collected by the image sensor into one pixel. Since the color effect of visible light images taken in low-light environments will be affected, the image sensor uses binning mode to enhance the colors in the visible light image.
- the binning mode includes: 2x2 binning, 3x3 binning, 4x4 binning, and so on.
- a 2x2 binning array refers to combining four pixels of the same color into one pixel
- a 4x4 binning array refers to combining 16 pixels of the same color into one pixel.
- different binning modes can be selected according to the environmental conditions frequently used by the imaging device. In theory, the greater the number of combined pixel units, the stronger the adaptability to low illumination.
- FIG. 3 is a schematic diagram of the principle of pixel binning in a 2x2 binning array. As shown in FIG. 3, pixel binning is performed in units of 2x2 pixels, and the combined pixel value is the sum of the four pixel values.
- the visible light image includes an RGB three-channel image, and the RGB three-channel image of the visible light image adopts a different binning mode.
- the binning modes of the RGB channels corresponding to the visible light image can be different.
- the number of pixel units combined in the binning mode adopted by the B-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image, and/or
- the number of pixel units combined in the binning mode adopted by the R channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the G channel image of the visible light image.
- different binning modes of the corresponding color channels can be set according to the sensitivity of the color to the light intensity, thereby improving the color effect of the visible light image.
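- a minimal numpy sketch of the binning described above: each k x k block of same-color pixels is summed into one combined pixel (the 2x2 case of FIG. 3), and a hypothetical per-channel choice of binning factors follows the B >= R >= G rule. The sensor resolution and the factor assignment below are illustrative assumptions, not values from the publication; different factors also yield channel images of different sizes, which would then be brought to a common resolution.

```python
import numpy as np

def bin_channel(channel: np.ndarray, k: int) -> np.ndarray:
    """k x k pixel binning: sum each k x k block of same-color pixels into one
    combined pixel, as in FIG. 3 for k = 2 (combined pixel = sum of four pixels)."""
    h, w = channel.shape
    h, w = h - h % k, w - w % k                            # crop to a multiple of k
    blocks = channel[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.sum(axis=(1, 3))                         # accumulate each block

# Hypothetical per-channel binning factors reflecting the B >= R >= G rule above.
r, g, b = (np.random.randint(0, 256, (1440, 2560), dtype=np.uint16) for _ in range(3))
binned = {"B": bin_channel(b, 4), "R": bin_channel(r, 3), "G": bin_channel(g, 2)}
```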
- in step S102, the texture of the infrared light image is clearer than that of the visible light image in a low-illumination scene, so the texture information of the infrared light image can be fused into the visible light image to obtain a fused image.
- the exposure time of the visible light image is extended to increase the sensitivity of the imaging device under low illumination, so that the fused image has better color and the quality of the captured image is improved.
- the visible light image can be low-pass filtered to obtain the low-frequency information of the visible light image;
- the infrared light image can be low-pass filtered to obtain the low-frequency information of the infrared light image, and the high-frequency information of the infrared light image is obtained from the infrared light image and its low-frequency information;
- the high-frequency information of the infrared light image includes texture information; the low-frequency information of the visible light image and the high-frequency information of the infrared light image are fused to obtain the fused image.
- the visible light image and the infrared light image are respectively subjected to low-pass filtering to eliminate high frequency components and noise, and low frequency information corresponding to the visible light image and low frequency information corresponding to the infrared light image are obtained. Then, according to the low frequency information corresponding to the infrared light image and the infrared light image, the texture information of the infrared light image is obtained. Since the texture information of the infrared light image is richer than that of the visible light image, during fusion, the texture information of the filtered infrared light image is fused to the filtered visible light image to obtain a fused image. Thereby, the quality of the fused image can be improved, so that it has richer texture information.
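- a minimal sketch of this fusion step, assuming a Gaussian low-pass filter and taking the high-frequency residual by subtraction (the embodiment does not fix the particular filter or filter size). Both inputs are single-channel float32 images of the same size, e.g. the luminance channel of the visible light image and the grayscale infrared light image; in a color pipeline the chroma channels would be kept from the visible light image, since the color components come from it.

```python
import cv2
import numpy as np

def fuse_low_high(visible_y: np.ndarray, infrared: np.ndarray, ksize: int = 21) -> np.ndarray:
    """Fuse the low-frequency part of the visible image with the high-frequency
    (texture) part of the infrared image. ksize is an assumed filter size."""
    vis_low = cv2.GaussianBlur(visible_y, (ksize, ksize), 0)  # low-pass: drops high frequencies and noise
    ir_low = cv2.GaussianBlur(infrared, (ksize, ksize), 0)
    ir_high = infrared - ir_low                               # texture information of the infrared image
    return np.clip(vis_low + ir_high, 0.0, 255.0)
```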
- the visible light image and the infrared light image of the current scene are acquired through the camera; and the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image; then the visible light image and the infrared image are fused to obtain the fused image.
- the photosensitive ability of the imaging device under low illumination can be improved, and the image quality can be improved.
- FIG. 4 is a second flowchart of an image processing method provided by an embodiment of the application. Referring to FIG. 4, the method of this embodiment includes:
- Step S201: Obtain the visible light image and the infrared light image of the current scene through the camera, where the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image.
- the implementation principle and process of step S201 in this embodiment are similar to those of the method shown in FIG. 2 and will not be repeated here.
- Step S202: Deblur the visible light image to obtain a deblurred visible light image.
- Step S203: Perform fusion processing on the deblurred visible light image and the infrared light image to obtain a fused image.
- in steps S202 and S203 of this embodiment, since the exposure time of the visible light image is extended, smear is introduced into the visible light image, which makes it blurry. Therefore, the visible light image can be deblurred by a deblurring algorithm to obtain a deblurred visible light image, and the deblurred visible light image and the infrared light image are then fused to obtain the fused image. Deblurring the visible light image can effectively improve the quality of the fused image.
- This embodiment does not limit the specific algorithm for deblurring. For example, a binarization denoising method can be used to deblur the visible light image, or professional deblurring software can be used for deblurring to eliminate smear caused by long exposure.
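- since the embodiment leaves the deblurring algorithm open, one possible sketch uses Richardson-Lucy deconvolution from scikit-image (version >= 0.19, where the keyword is num_iter) with an assumed horizontal motion-blur point spread function; the kernel length and direction would have to be estimated in practice.

```python
import numpy as np
from skimage import restoration

def deblur_long_exposure(image: np.ndarray, blur_len: int = 9) -> np.ndarray:
    """Attenuate long-exposure smear by deconvolution. `image` is a single-channel
    float image scaled to [0, 1]; the PSF below assumes purely horizontal motion."""
    psf = np.zeros((blur_len, blur_len), dtype=np.float64)
    psf[blur_len // 2, :] = 1.0 / blur_len       # normalized horizontal motion-blur kernel
    return restoration.richardson_lucy(image, psf, num_iter=30)
```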
- the visible light image and infrared light image of the current scene are acquired through the camera, with the exposure time corresponding to the visible light image longer than that corresponding to the infrared light image; the visible light image is then deblurred, and the deblurred visible light image and the infrared light image are fused to obtain a fused image.
- the smear caused by visible light images under long exposure can be eliminated, and the quality of the fused image can be improved.
- FIG. 5 is a third flowchart of an image processing method provided by an embodiment of the application. Referring to FIG. 5, the method of this embodiment includes:
- Step S301: Obtain the visible light image and the infrared light image of the current scene through the camera, where the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image.
- the implementation principle and process of step S301 in this embodiment are similar to those of the method shown in FIG. 2 and will not be repeated here.
- Step S302: Up-sample the visible light image to obtain a visible light image with the same resolution as the infrared light image.
- Step S303: Perform fusion processing on the up-sampled visible light image and the infrared light image to obtain a fused image.
- the visible light image acquired by the long exposure may sacrifice the image resolution, so that the resolution of the visible light image is lower than that of the infrared light image. Therefore, before image fusion, the visible light image needs to be up-sampled to obtain a visible light image with the same resolution as the infrared light image.
- the embodiment of the present application does not limit the specific algorithm of upsampling.
- the infrared light image adopts full resolution, while the visible light image adopts a lower resolution.
- in this way, the visible light image can support a longer exposure time, and sufficient color information can be obtained under low illumination.
- the resolution of the visible light image is 1280x720 and the resolution of the infrared light image is 2560x1440
- the visible light image can be upsampled from 1280x720 to 2560x1440 through the bicubic algorithm to obtain a visible light image with the same resolution as the infrared light image.
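- using the example resolutions above, a short OpenCV sketch of the bicubic up-sampling (the file names are hypothetical placeholders):

```python
import cv2

visible = cv2.imread("visible_1280x720.png")                           # hypothetical input file
infrared = cv2.imread("infrared_2560x1440.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file

# Bicubic up-sampling of the visible light image to the infrared resolution.
h, w = infrared.shape[:2]
visible_up = cv2.resize(visible, (w, h), interpolation=cv2.INTER_CUBIC)
assert visible_up.shape[:2] == infrared.shape[:2]                      # now 2560x1440
```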
- the visible light image may be interpolated according to the resolution of the infrared light image to obtain a visible light image with the same resolution as the infrared light image.
- an interpolation method can be used to make the resolution of the visible light image consistent with the resolution of the infrared light image.
- methods such as neighbor interpolation, linear interpolation, mean interpolation, median interpolation, etc. may be used, and this embodiment does not limit the specific interpolation algorithm.
- the visible light image and infrared light image of the current scene are acquired through the camera, with the exposure time corresponding to the visible light image longer than that corresponding to the infrared light image; the visible light image is then up-sampled to obtain a visible light image with the same resolution as the infrared light image, and finally this visible light image and the infrared light image are fused to obtain a fused image.
- the visible light image can be more adapted to lower environmental illuminance, so that the color effect of the fused image is better.
- FIG. 6 is a fourth flowchart of an image processing method provided by an embodiment of the application. Referring to FIG. 6, the method of this embodiment includes:
- Step S401: Obtain the visible light image and the infrared light image of the current scene through the camera, where the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image.
- the implementation principle and process of step S401 in this embodiment are similar to those of the method shown in FIG. 2 and will not be repeated here.
- Step S402: Perform frame interpolation on at least two visible light images to obtain visible light images with the same frame rate as the infrared light images.
- Step S403: Perform fusion processing on the visible light images with the same frame rate as the infrared light images and the infrared light images to obtain fused images.
- the collection frame rate of the visible light image is less than the collection frame rate of the infrared light image.
- the frame rate of the infrared light image acquisition may be an existing standard image acquisition frame rate.
- if the acquisition frame rate of the visible light image is less than the frame rate corresponding to a 40 ms frame period (25 fps), the visible light image is processed by frame interpolation.
- likewise, if the acquisition frame rate of the visible light image is less than the frame rate corresponding to a 33.3 ms frame period (30 fps), the visible light image is subjected to frame interpolation.
- this embodiment does not limit the specific acquisition frame rate of the visible light image.
- the frame interpolation can be used to make the acquisition frame rate of the visible light image the same as the acquisition frame rate of the infrared light image.
- in steps S402 and S403, since the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image, the frame rate of the visible light image is lower than the frame rate of the infrared light image.
- the number of visible light images and infrared light images collected in the same time period differs, so it is necessary to interpolate at least two visible light images to obtain the same number of visible light images as infrared light images, ensuring the same number of fused images as infrared light images.
- the exposure time is extended, thus reducing the frame rate of the visible light image acquisition.
- the missing visible light frames need to be supplemented by frame interpolation.
- the visible light images of the intermediate frame can be predicted by analyzing the two visible light images before and after, so that the number of visible light images is the same as the number of infrared light images.
- FIG. 7 is a schematic diagram of the frame insertion principle of a visible light image.
- the collection frame rate of the infrared sensor is twice the collection frame rate of the visible light sensor.
- the infrared sensor has collected frames 1 to 10 of infrared light images, while the visible light sensor has collected five visible light images: frames 1, 3, 5, 7, and 9.
- from the 1st and 3rd visible light frames, the 2nd visible light frame is predicted; from the 3rd and 5th frames, the 4th is predicted; from the 5th and 7th frames, the 6th is predicted; from the 7th and 9th frames, the 8th is predicted; and so on, until the number of visible light images equals the number collected by the infrared sensor.
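- the embodiment does not fix the prediction method; as a minimal sketch, a linear blend of the two neighboring frames can stand in for the predicted intermediate frame (the frame size below is an illustrative assumption):

```python
import numpy as np

def interpolate_frames(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Double the visible-light frame rate as in FIG. 7: each missing frame is
    predicted from its two neighbors (here: a simple average as a stand-in)."""
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        mid = ((prev.astype(np.float32) + nxt.astype(np.float32)) / 2.0).astype(prev.dtype)
        out.append(mid)                      # predicted intermediate frame
    out.append(frames[-1])
    return out

# Captured visible frames 1, 3, 5, 7, 9 become frames 1 through 9.
frames = [np.full((720, 1280), v, dtype=np.uint8) for v in (1, 3, 5, 7, 9)]
assert len(interpolate_frames(frames)) == 9
```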
- the collection frame rate of the visible light image can be set to 1/2 of the collection frame rate of the infrared light; thus, it is convenient for the subsequent frame interpolation operation when the visible light image is restored.
- the first frame rate may be set to 25 fps
- the second frame rate may be set to 12.5 fps.
- this embodiment does not limit the specific collection frame rate of the visible light image and the infrared light image, and the resolution of the visible light image and the infrared light image. In practical applications, it can be flexibly set according to the performance of the imaging device itself.
- the visible light image and infrared light image of the current scene are acquired through the camera, with the exposure time corresponding to the visible light image longer than that corresponding to the infrared light image; at least two visible light images are then interpolated according to the acquisition frame rate of the infrared light images to obtain visible light images with the same frame rate, and these are fused with the infrared light images to obtain fused images.
- FIG. 8 is the fifth flowchart of the image processing method provided by the embodiment of the application. Referring to FIG. 8, the method in this embodiment includes:
- Step S501: Obtain the visible light image and the infrared light image of the current scene through the camera, where the resolution of the visible light image is lower than the resolution of the infrared light image.
- Step S502: Perform fusion processing on the visible light image and the infrared light image to obtain a fused image.
- the method in this embodiment is suitable for dark scenes, such as night environments, rainy days, and dim indoor environments, and so on.
- the frequency-dividing prism camera includes an infrared lens, a frequency-dividing prism, an RGB sensor, and an NIR sensor. After entering the infrared lens, the ambient light passes through the frequency-dividing prism and is divided into visible light and infrared light. The visible light is received by the RGB sensor to generate a visible light image, and the infrared light is received by the NIR sensor to generate an infrared light image. Before or after the frequency-dividing prism camera leaves the factory, the resolution of the visible light image is set lower than the resolution of the infrared light image. In this way, the sensitivity of the imaging device under low illumination is increased; the visible light image and the infrared light image are then fused so that the fused image has better color and the quality of the captured image is improved.
- the size of the target surface of the image sensor corresponding to the visible light image is larger than the size of the target surface of the image sensor corresponding to the infrared light image.
- the size of the target surface characterizes the size of the photosensitive area of the image sensor; the larger the target surface, the greater the light flux of the corresponding image sensor. The target surface of the image sensor corresponding to the visible light image can be set larger than that of the image sensor corresponding to the infrared light image before the imaging device leaves the factory, so that the image sensor corresponding to the visible light image has stronger light sensitivity.
- the binning mode may be used to obtain the visible light image of the current scene.
- the infrared image is not collected in binning mode.
- the binning mode refers to combining multiple pixels collected by the image sensor into one pixel. Since the color effect of visible light images taken in low-light environments will be affected, the image sensor uses binning mode to enhance the colors in the visible light image.
- the binning mode includes: 2x2 binning, 3x3 binning, 4x4 binning, and so on.
- a 2x2 binning array refers to combining four pixels of the same color into one pixel
- a 4x4 binning array refers to combining 16 pixels of the same color into one pixel.
- different binning modes can be selected according to the environmental conditions frequently used by the imaging device. In theory, the greater the number of combined pixel units, the stronger the adaptability to low illumination.
- the visible light image includes an RGB three-channel image, and the RGB three-channel image of the visible light image adopts a different binning mode.
- the binning modes of the RGB channels corresponding to the visible light image can be different.
- the number of pixel units combined in the binning mode adopted by the B-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image, and/or
- the number of pixel units combined in the binning mode adopted by the R channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the G channel image of the visible light image.
- different binning modes of the corresponding color channels can be set according to the sensitivity of the color to the light intensity, thereby improving the color effect of the visible light image.
- in step S502, the texture of the infrared light image is clearer than that of the visible light image in a low-illumination scene, so the texture information of the infrared light image can be fused into the visible light image to obtain a fused image.
- the exposure time of the visible light image is extended to increase the sensitivity of the imaging device under low illumination, so that the fused image has better color and the quality of the captured image is improved.
- the visible light image can be low-pass filtered to obtain the low-frequency information of the visible light image;
- the infrared light image can be low-pass filtered to obtain the low-frequency information of the infrared light image, and the high-frequency information of the infrared light image is obtained from the infrared light image and its low-frequency information;
- the high-frequency information of the infrared light image includes texture information; the low-frequency information of the visible light image and the high-frequency information of the infrared light image are fused to obtain the fused image.
- the visible light image and the infrared light image are respectively subjected to low-pass filtering to eliminate high frequency components and noise, and low frequency information corresponding to the visible light image and low frequency information corresponding to the infrared light image are obtained. Then, according to the low frequency information corresponding to the infrared light image and the infrared light image, the texture information of the infrared light image is obtained. Since the texture information of the infrared light image is richer than that of the visible light image, during fusion, the texture information of the filtered infrared light image is fused to the filtered visible light image to obtain a fused image. Thereby, the quality of the fused image can be improved, so that it has richer texture information.
- the visible light image and the infrared light image of the current scene are acquired through the camera; and the resolution of the visible light image is lower than that of the infrared light image; then the visible light image and the infrared image are fused to obtain the fused image.
- the photosensitive ability of the imaging device under low illumination can be improved, and the image quality can be improved.
- FIG. 9 is a sixth flowchart of an image processing method provided by an embodiment of the application. Referring to FIG. 9, the method in this embodiment includes:
- Step S601 Obtain the visible light image and the infrared light image of the current scene through the camera; wherein the resolution of the visible light image is lower than the resolution of the infrared light image.
- the specific implementation principle and process of step S601 in this embodiment are similar to those of the method shown in FIG. 8 and will not be repeated here.
- Step S602 Deblurring the visible light image to obtain a deblurred visible light image.
- Step S603 Perform fusion processing on the deblurred visible light image and the infrared image to obtain a fused image.
- the visible light image can be deblurred by a deblurring algorithm to obtain a deblurred visible light image, and the deblurred visible light image and the infrared image are then fused to obtain a fused image.
- Deblurring the visible light image can effectively improve the quality of the fused image.
- This embodiment does not limit the specific algorithm for deblurring.
- a binarization denoising method can be used to deblur the visible light image, or professional deblurring software can be used for deblurring to eliminate smear caused by long exposure.
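- since the embodiment does not limit the deblurring algorithm, the following sketch shows one standard possibility, Richardson-Lucy deconvolution under the assumption of a known horizontal motion-blur kernel modeling long-exposure smear (Python; all names are illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_psf(length: int = 9) -> np.ndarray:
    """Horizontal motion-blur kernel modeling smear from a long exposure."""
    psf = np.zeros((length, length), dtype=np.float32)
    psf[length // 2, :] = 1.0 / length
    return psf

def richardson_lucy(blurred: np.ndarray, psf: np.ndarray, iters: int = 30) -> np.ndarray:
    """Plain Richardson-Lucy deconvolution for a single-channel image."""
    img = blurred.astype(np.float32) + 1e-6
    estimate = np.full_like(img, img.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iters):
        reblurred = fftconvolve(estimate, psf, mode="same") + 1e-6
        estimate *= fftconvolve(img / reblurred, psf_mirror, mode="same")
    return np.clip(estimate, 0, 255).astype(np.uint8)

# usage sketch: deblurred = richardson_lucy(gray_visible, motion_psf(9))
```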
- in this embodiment, the visible light image and the infrared light image of the current scene are acquired through the camera, with the resolution of the visible light image lower than that of the infrared light image; the visible light image is then deblurred, and the deblurred visible light image and the infrared image are fused to obtain a fused image.
- in this way, the smear introduced into the visible light image by the long exposure can be eliminated, and the quality of the fused image can be improved.
- FIG. 10 is a seventh flowchart of the image processing method provided by an embodiment of the application. Referring to FIG. 10, the method in this embodiment includes:
- Step S701 Obtain the visible light image and the infrared light image of the current scene through the camera; wherein the resolution of the visible light image is lower than the resolution of the infrared light image.
- the specific implementation principle and process of step S701 in this embodiment are similar to those of the method shown in FIG. 6 and will not be repeated here.
- Step S702 Perform up-sampling processing on the visible light image to obtain a visible light image with the same resolution as the infrared light image.
- Step S703 Perform fusion processing on the visible light image with the same resolution as the infrared light image and the infrared image to obtain a fused image.
- in step S702 and step S703, since the resolution of the visible light image is lower than that of the infrared light image, the visible light image needs to be up-sampled before image fusion to obtain a visible light image with the same resolution as the infrared light image.
- the embodiment of the present application does not limit the specific algorithm of upsampling.
- the infrared light image adopts full resolution, while the visible light image adopts a lower resolution, so that sufficient color information can be obtained under low illumination.
- the resolution of the visible light image is 1280x720 and the resolution of the infrared light image is 2560x1440, and then the visible light image can be upsampled from 1280x720 to 2560x1440 through the bicubic algorithm to obtain a visible light image with the same resolution as the infrared light image.
- the visible light image may be interpolated according to the resolution of the infrared light image to obtain a visible light image with the same resolution as the infrared light image.
- an interpolation method can be used to make the resolution of the visible light image consistent with the resolution of the infrared light image.
- methods such as nearest-neighbor interpolation, linear interpolation, mean interpolation, or median interpolation may be used, and this embodiment does not limit the specific interpolation algorithm.
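- as an illustrative sketch, the bicubic up-sampling from the 1280x720 to 2560x1440 example above can be written with OpenCV as follows (the file names are hypothetical):

```python
import cv2

visible = cv2.imread("visible_1280x720.png")                           # hypothetical input
infrared = cv2.imread("infrared_2560x1440.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

h, w = infrared.shape[:2]
# cv2.resize takes (width, height); INTER_CUBIC selects bicubic interpolation,
# while INTER_NEAREST / INTER_LINEAR would select the other methods mentioned above.
visible_up = cv2.resize(visible, (w, h), interpolation=cv2.INTER_CUBIC)
assert visible_up.shape[:2] == infrared.shape[:2]
```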
- in this embodiment, the visible light image and the infrared light image of the current scene are acquired through the camera, with the resolution of the visible light image lower than that of the infrared light image; the visible light image is then up-sampled to obtain a visible light image with the same resolution as the infrared light image, and finally the visible light image with the same resolution as the infrared light image and the infrared image are fused to obtain the fused image.
- in this way, the visible light image can better adapt to lower ambient illuminance, so that the color effect of the fused image is better.
- FIG. 11 is a first structural diagram of an image processing device provided by an embodiment of the application. Referring to FIG. 11, the device in this embodiment includes:
- the camera module 810 is used to obtain the visible light image and the infrared light image of the current scene through the camera; wherein the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image;
- the processing module 820 is configured to perform fusion processing on the visible light image and the infrared image to obtain a fused image.
- modules may be software modules, hardware units or circuit units.
- the image processing device is suitable for low-light scenes, such as nighttime environments, cloudy and rainy days, and low-light indoor environments, and so on.
- the camera module 810 includes a visible light image sensor and an infrared light image sensor. After entering the lens of the camera module 810, the ambient light is divided into visible light and infrared light. The visible light is received by the visible light image sensor to generate a visible light image, and the infrared light is received by the infrared light image sensor to generate an infrared light image. Before or after the image processing device leaves the factory, the exposure time corresponding to the visible light image and the exposure time corresponding to the infrared light image in the camera module 810 can be set.
- by extending the exposure time corresponding to the visible light image, the image processing device's light sensitivity under low illumination is increased; the visible light image and the infrared light image are then fused through the processing module 820, so that the fused image has a better color effect and the quality of the captured image is improved.
- processing module 820 may be pre-loaded with an image processing program, and when the program is called, the fusion processing of the visible light image and the infrared light image is executed to obtain the fused image.
- the camera module 810 captures visible light images and infrared light images in a night environment, where the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image.
- the camera module 810 sends the collected visible light image and infrared light image to the processing module 820, so that the processing module 820 performs fusion processing on the visible light image and the infrared light image.
- for example, the processing module 820 performs low-pass filtering on the visible light image to obtain a denoised visible light image and extracts the texture information of the infrared light image; finally, the texture information of the infrared light image is fused into the denoised visible light image to obtain the fused image.
- this embodiment does not limit the image processing algorithm loaded by the processing module 820.
- the size of the target surface of the image sensor corresponding to the visible light image is larger than the size of the target surface of the image sensor corresponding to the infrared light image.
- the camera module 810 is specifically used for:
- the binning mode is used to obtain the visible light image of the current scene.
- the binning modes include: 2x2 binning, 3x3 binning, and 4x4 binning.
- the visible light image includes an RGB three-channel image, and the R, G, and B channel images of the visible light image may adopt different binning modes.
- the number of pixel units combined in the binning mode adopted by the B-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image, and/or
- the number of pixel units combined in the binning mode adopted by the R channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the G channel image of the visible light image.
- the processing module 820 is specifically used for: deblurring the visible light image to obtain a deblurred visible light image, and fusing the deblurred visible light image and the infrared image to obtain the fused image.
- the processing module 820 is specifically used for: up-sampling the visible light image to obtain a visible light image with the same resolution as the infrared light image, and fusing the visible light image with the same resolution as the infrared light image and the infrared image to obtain the fused image.
- the processing module 820 is specifically used for: interpolating the visible light image according to the resolution of the infrared light image to obtain a visible light image with the same resolution as the infrared light image.
- the processing module 820 is specifically used for: performing frame-insertion processing on at least two visible light images to obtain visible light images with the same frame rate as the infrared light images, and fusing the visible light images with the same frame rate as the infrared light images and the infrared images to obtain the fused image. A minimal sketch of such frame insertion follows.
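- as an illustrative, non-limiting sketch of the frame insertion above (Python; temporal averaging is only a stand-in for whatever predictor is actually used to estimate the middle frame from its two neighbors):

```python
import numpy as np

def insert_middle_frame(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """Predict the visible-light frame between two captured uint8 frames by
    temporal averaging, doubling the visible frame rate to match the infrared one."""
    mean = (prev_frame.astype(np.uint16) + next_frame.astype(np.uint16)) // 2
    return mean.astype(np.uint8)
```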
- the processing module 820 is specifically used for: performing low-pass filtering on the visible light image to obtain the low-frequency information of the visible light image; performing low-pass filtering on the infrared light image to obtain the low-frequency information of the infrared light image, and obtaining the high-frequency information of the infrared light image based on the low-frequency information of the infrared light image and the infrared light image itself, where the high-frequency information of the infrared light image includes texture information; and fusing the low-frequency information of the visible light image and the high-frequency information of the infrared light image to obtain the fused image.
- the visible light image and the infrared light image of the current scene are acquired through the camera; and the exposure time corresponding to the visible light image is longer than the exposure time corresponding to the infrared light image; the visible light image and the infrared image are fused to obtain the fused image.
- the photosensitive ability of the imaging device under low illumination can be improved, and the image quality can be improved.
- FIG. 12 is a second structural diagram of an image processing apparatus provided by an embodiment of this application. Referring to FIG. 12, the apparatus of this embodiment includes:
- the camera module 910 is used to obtain the visible light image and the infrared light image of the current scene through the camera; wherein the resolution of the visible light image is lower than the resolution of the infrared light image;
- the processing module 920 is configured to perform fusion processing on the visible light image and the infrared image to obtain a fused image.
- modules may be software modules, hardware units or circuit units.
- the image processing device is suitable for low-light scenes, such as nighttime environments, cloudy and rainy days, and low-light indoor environments, and so on.
- the camera module 910 includes a visible light image sensor and an infrared light image sensor. After entering the lens of the camera module 910, the ambient light is divided into visible light and infrared light. The visible light is received by the visible light image sensor to generate a visible light image, and the infrared light is received by the infrared light image sensor to generate an infrared light image. Before or after the image processing device leaves the factory, the resolution of the visible light image in the camera module 910 can be set to be lower than the resolution of the infrared light image.
- by lowering the resolution of the visible light image, the image processing device's light sensitivity under low illuminance is increased; the visible light image and the infrared light image are then fused through the processing module 920, so that the fused image has better color effects and the quality of the captured image is improved.
- processing module 920 may be pre-loaded with an image processing program, and when the program is called, the fusion processing of the visible light image and the infrared light image is executed to obtain the fused image.
- the camera module 910 captures visible light images and infrared light images in a night environment, where the resolution of the visible light image is lower than that of the infrared light image.
- the camera module 910 sends the collected visible light image and infrared light image to the processing module 920, so that the processing module 920 performs fusion processing on the visible light image and the infrared light image.
- for example, the processing module 920 up-samples the visible light image to obtain a visible light image with the same resolution as the infrared light image, performs low-pass filtering on the up-sampled visible light image to obtain a denoised visible light image, and extracts the texture information of the infrared light image; finally, the texture information of the infrared light image is fused into the denoised visible light image to obtain the fused image. It should be noted that this embodiment does not limit the image processing algorithm loaded by the processing module 920; a compact sketch of this pipeline is given below.
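- as an illustrative, non-limiting sketch of the 920 pipeline just described (Python with OpenCV; the Gaussian kernel size and all function names are assumptions, not part of the embodiment):

```python
import cv2
import numpy as np

def module_920_pipeline(visible: np.ndarray, infrared: np.ndarray, ksize: int = 15) -> np.ndarray:
    """Up-sample the low-resolution visible image to the infrared resolution,
    low-pass filter it for denoising, and fuse in the infrared texture residual."""
    h, w = infrared.shape[:2]
    vis = cv2.resize(visible, (w, h), interpolation=cv2.INTER_CUBIC).astype(np.float32)
    ir = infrared.astype(np.float32)
    vis_low = cv2.GaussianBlur(vis, (ksize, ksize), 0)          # denoised visible base
    ir_texture = ir - cv2.GaussianBlur(ir, (ksize, ksize), 0)   # infrared texture information
    if vis_low.ndim == 3:                                        # broadcast texture over RGB
        ir_texture = ir_texture[..., None]
    return np.clip(vis_low + ir_texture, 0, 255).astype(np.uint8)
```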
- the size of the target surface of the image sensor corresponding to the visible light image is larger than the size of the target surface of the image sensor corresponding to the infrared light image.
- the camera module 910 is specifically used for:
- the binning mode is used to obtain the visible light image of the current scene.
- the binning modes include: 2x2 binning, 3x3 binning, and 4x4 binning.
- the visible light image includes an RGB three-channel image, and the R, G, and B channel images of the visible light image may adopt different binning modes.
- the number of pixel units combined in the binning mode adopted by the B-channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the R-channel image of the visible light image, and/or
- the number of pixel units combined in the binning mode adopted by the R channel image of the visible light image is greater than the number of pixel units combined in the binning mode adopted by the G channel image of the visible light image.
- the processing module 920 is specifically used for: deblurring the visible light image to obtain a deblurred visible light image, and fusing the deblurred visible light image and the infrared image to obtain the fused image.
- the processing module 920 is specifically used for: up-sampling the visible light image to obtain a visible light image with the same resolution as the infrared light image, and fusing the visible light image with the same resolution as the infrared light image and the infrared image to obtain the fused image.
- the processing module 920 is specifically used for: interpolating the visible light image according to the resolution of the infrared light image to obtain a visible light image with the same resolution as the infrared light image.
- the processing module 920 is specifically used for: performing low-pass filtering on the visible light image to obtain the low-frequency information of the visible light image; performing low-pass filtering on the infrared light image to obtain the low-frequency information of the infrared light image, and obtaining the high-frequency information of the infrared light image based on the low-frequency information of the infrared light image and the infrared light image itself, where the high-frequency information of the infrared light image includes texture information; and fusing the low-frequency information of the visible light image and the high-frequency information of the infrared light image to obtain the fused image.
- in this embodiment, the visible light image and the infrared light image of the current scene are acquired through the camera, with the resolution of the visible light image lower than that of the infrared light image; the visible light image and the infrared image are fused to obtain the fused image.
- the photosensitive ability of the imaging device under low illumination can be improved, and the image quality can be improved.
- FIG. 13 is a schematic structural diagram of a monitoring device provided by an embodiment of the application. As shown in FIG. 13, the monitoring device includes components such as a processor 1010, a memory 1020, a lens 1030, a power supply 1040, and a data transmission interface 1050. Those skilled in the art can understand that the structure of the monitoring device shown in FIG. 13 does not constitute a limitation on the monitoring device; it may include more or fewer components than shown in the figure, combine certain components, or use a different component arrangement.
- the monitoring device in this embodiment can execute the image processing method in any one of the embodiments shown in FIG. 2, FIG. 4, FIG. 5, FIG. 6, and FIG. 8 to FIG. 10. For the specific implementation process and principle, refer to the related descriptions in the embodiments shown in FIG. 2 to FIG. 10, which will not be repeated here.
- the monitoring device in this embodiment may include each module of the image processing apparatus shown in FIG. 11 and FIG. 12, and may execute, through these modules, the image processing method in any one of the embodiments shown in FIG. 2, FIG. 4, FIG. 5, FIG. 6, and FIG. 8 to FIG. 10.
- the memory 1020 may be used to store software programs and modules.
- the processor 1010 executes various functional applications and data processing of the imaging device by running the software programs and modules stored in the memory 1020.
- the memory 1020 may mainly include a program storage area and a data storage area.
- the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function); the data storage area may store data created according to the use of the camera device (such as audio data).
- the memory 1020 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
- the lens 1030 in the monitoring device can acquire optical images, including infrared light images and/or visible light images.
- the monitoring device may include one lens 1030 or at least two lenses (not shown in the figure), which can be adjusted according to actual design requirements.
- the processor 1010 is the control center of the camera device. It uses various data transmission interfaces 1050 and lines to connect the various parts of the entire monitoring device, and performs the various functions of the monitoring device and processes data by running or executing the software programs and/or modules stored in the memory 1020 and calling the data stored in the memory 1020.
- the processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 1010.
- the monitoring device also includes a power supply 1040 (such as a battery) for supplying power to various components.
- optionally, the power supply 1040 can be logically connected to the processor 1010 through a power management system, so that charging, discharging, and power consumption are managed through the power management system.
- the camera device may also include a Bluetooth module, etc., which will not be repeated here.
- in this embodiment, the visible light image and the infrared image of the current scene are acquired, and the restored visible light image and the infrared image are fused to obtain a fused image.
- by extending the exposure time of the visible light image and/or lowering its resolution, the light sensitivity of the imaging device under low illumination is increased, so that the fused image has better color effects and the quality of the captured image is improved.
- FIG. 14 is a block diagram of a part of the structure of a camera device provided by an embodiment of the present application.
- the camera device includes components such as a radio frequency (RF) circuit 2010, a memory 2020, an input unit 2030, a display unit 2040, a sensor 2050, an audio circuit 2060, a lens 2070, a processor 2080, and a power supply 2090.
- those skilled in the art can understand that the structure of the camera device shown in FIG. 14 does not constitute a limitation on the camera device; it may include more or fewer components than shown in the figure, combine certain components, or use a different component arrangement.
- the RF circuit 2010 can be used for receiving and sending signals while sending and receiving information or during a call. In particular, downlink information from the base station is received and handed to the processor 2080 for processing, and uplink data is sent to the base station.
- the RF circuit 2010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
- the RF circuit 2010 can also communicate with the network and other devices through wireless communication.
- the above-mentioned wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile Communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
- the memory 2020 may be used to store software programs and modules.
- the processor 2080 executes various functional applications and data processing of the camera device by running the software programs and modules stored in the memory 2020.
- the memory 2020 may mainly include a storage program area and a storage data area.
- the storage program area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function); the storage data area may store data created according to the use of the camera device (such as audio data or a phone book).
- the memory 2020 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
- the input unit 2030 may be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the camera device.
- the input unit 2030 may include a touch panel 2031 and other input devices 2032.
- the touch panel 2031, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 2031 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
- the touch panel 2031 may include two parts: a touch detection device and a touch controller.
- the touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 2080, and can receive and execute the commands sent by the processor 2080.
- the touch panel 2031 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
- the input unit 2030 may also include other input devices 2032.
- the other input device 2032 may include, but is not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick.
- the display unit 2040 may be used to display information input by the user or information provided to the user and various menus of the camera device.
- the display unit 2040 may include a display panel 2041.
- the display panel 2041 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), etc.
- the touch panel 2031 can cover the display panel 2041. When the touch panel 2031 detects a touch operation on or near it, it transmits the operation to the processor 2080 to determine the type of the touch event, and the processor 2080 then provides a corresponding visual output on the display panel 2041 according to the type of the touch event.
- although the touch panel 2031 and the display panel 2041 are used as two independent components to implement the input and output functions of the camera device, in some embodiments the touch panel 2031 and the display panel 2041 can be integrated to implement the input and output functions of the camera device.
- the camera device may also include at least one sensor 2050, such as a light sensor, a motion sensor, and other sensors.
- the light sensor may include an ambient light sensor and a proximity sensor.
- the ambient light sensor can adjust the brightness of the display panel 2041 according to the brightness of the ambient light.
- the proximity sensor can turn off the display panel 2041 and/or the backlight when the camera device is moved to the ear.
- the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three-axis), and can detect the magnitude and direction of gravity when it is stationary.
- the accelerometer sensor can be used for applications that recognize the posture of the camera device (such as switching between horizontal and vertical screens, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection); the camera device can also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which will not be repeated here.
- the audio circuit 2060, the speaker 2061, and the microphone 2062 can provide an audio interface between the user and the camera device.
- the audio circuit 2060 can transmit the electrical signal converted from the received audio data to the speaker 2061, and the speaker 2061 converts it into a sound signal for output; on the other hand, the microphone 2062 converts a collected sound signal into an electrical signal, which is received by the audio circuit 2060 and converted into audio data; the audio data is processed by the processor 2080 and then sent via the RF circuit 2010 to, for example, another camera device, or output to the memory 2020 for further processing.
- the lens 2070 in the camera device can acquire optical images, including infrared light images and/or visible light images.
- the camera device may include one lens 2070 or at least two lenses (not shown in the figure), which can be adjusted according to actual design requirements.
- the processor 2080 is the control center of the camera device. It uses various interfaces and lines to connect the various parts of the entire camera device, and performs the various functions of the camera device and processes data by running or executing the software programs and/or modules stored in the memory 2020 and calling the data stored in the memory 2020, thereby monitoring the camera device as a whole.
- the processor 2080 may include one or more processing units; preferably, the processor 2080 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 2080.
- the camera device also includes a power source 2090 (such as a battery) for supplying power to various components.
- the power source can be logically connected to the processor 2080 through a power management system, so that functions such as charging, discharging, and power consumption management can be managed through the power management system.
- the camera device may also include a camera, a Bluetooth module, etc., which will not be repeated here.
- the camera device in this embodiment can execute the image processing method in any one of the embodiments shown in FIG. 2, FIG. 4, FIG. 5, FIG. 6, and FIG. 8 to FIG. 10. For the specific implementation process and principle, refer to the related descriptions in the embodiments shown in FIG. 2 to FIG. 10, which will not be repeated here.
- the camera device in this embodiment may include each module of the image processing apparatus shown in FIG. 11 and FIG. 12, and may execute, through these modules, the image processing method in any one of the embodiments shown in FIG. 2, FIG. 4, FIG. 5, FIG. 6, and FIG. 8 to FIG. 10.
- an embodiment of the present application provides a computer-readable storage medium storing instructions which, when executed, cause a computer to perform the method performed by the terminal device in the above-mentioned embodiments of the present application.
- an embodiment of the present application provides a computer-readable storage medium storing instructions which, when executed, cause a computer to perform the method performed by the network device in the foregoing embodiments of the present application.
- the disclosed device and method can be implemented in other ways.
- the device embodiments described above are merely illustrative. For example, the division of units is only a logical function division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
- the division of modules in the embodiments of the present application is illustrative and is only a logical function division; there may be other division methods in actual implementation.
- the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
- the above-mentioned integrated modules can be implemented in the form of hardware or software function modules.
- if the integrated module is implemented in the form of a software function module and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods of the various embodiments of the present application.
- the aforementioned storage media include various media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
- all or part of the above-mentioned embodiments may be implemented by software, hardware, firmware, or any combination thereof.
- when implemented by software, they may be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions.
- when the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
- the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- Computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
- the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (such as via coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (such as via infrared, radio, or microwave).
- the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
- the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
Abstract
一种图像处理方法和相关产品,以提高成像设备在低照度下的感光能力,提升图像质量,属于图像处理技术领域。所述方法包括:通过摄像头获取当前场景的可见光图像和红外光图像;其中,所述可见光图像对应的曝光时长大于所述红外光图像对应的曝光时长(S101);对所述可见光图像和所述红外图像进行融合处理,得到融合图像(S102)。
Description
本申请要求于2019年11月11日提交中国专利局、申请号为201911096690.4、申请名称为“图像处理方法和相关产品”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
本申请实施例涉及图像处理技术领域,尤其涉及一种图像处理方法和相关产品。
随着摄像技术的发展,在高照度下,摄像设备能够拍摄到清晰的图像,而在低照度下,拍摄的图像的清晰度会受到影响,导致低照度下拍摄的图像模糊不清。
现有技术中,通过光学成像系统将经过镜头的光线按照波段以及比例进行分离,并通过分离得到的各个频率分量分别进行成像,得到可见光图像与红外光图像,其中,可见光图像为彩色图像,红外光图像为灰度图像。然后,通过预置的融合算法对可见光图像与红外光图像进行图像融合,得到目标图像。
然而现有技术中,由于红外图像是灰度图像,因此目标图像的色彩分量来自于可见光图像。而在低照度下,可见光图像的清晰度会受到严重影响,从而导致融合的目标图像的色彩效果较差。
发明内容
本申请实施例提供一种图像处理方法和相关产品,提高成像设备在低照度下的感光能力,提升图像质量。
第一方面,本申请实施例提供一种图像处理方法,包括:
通过摄像头获取当前场景的可见光图像和红外光图像;其中,所述可见光图像对应的曝光时长大于所述红外光图像对应的曝光时长不同;
对所述可见光还原图像和所述红外图像进行融合处理,得到融合图像。
在本申请实施例中,上述图像处理方法可以应用在具备成像功能的设备中,可以在该设备出厂之前或者之后,分别设置可见光图像对应的曝光时长和红外光图像对应的曝光时长。通过延长可见光图像的对应的曝光时长,来增加呈现设备在低照度下的感光能力,然后再对可见光图像和红外光图像进行融合处理,使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
具体地,以摄像设备为例,在摄像设备的镜头后设置一个分频谱棱镜和两个图像传感器;环境光经过镜头之后通过分频棱镜,分为可见光和红外光,可见光被RGB传感器(色彩传感器)接收,得到可见光图像,红外光被近红外光谱技术(Near Infrared,NIR)传感器接收,得到红外光图像。由于可将光图像对应的曝光时长大于红外图像的对应的曝光时长,因此可见光图像更够适应更低的环境照度,使得融合图像的色彩效果更佳。
结合第一方面,在第一种可能的实施方式中,所述可见光图像对应的图像传感器的靶面尺寸大于所述红外光图像对应的图像传感器的靶面尺寸。
在本申请实施例中,靶面尺寸用于表征图像传感器中感光部分的大小,靶面尺寸越大,则对应的图像传感器有更大的通光量。可以在成相设备出厂之前,设置可见光图像对应的图像传感器的靶面尺寸大于红外光图像对应的图像传感器的靶面尺寸,从而使得可见光图像对应的图像传感器具备更强大的感光能力。
结合第一方面,或者,结合第一方面的第一种可能的实施方式,在第二种可能的实施方式中,所述通过摄像头获取当前场景的可见光图像,包括:
采用binning模式获取当前场景的所述可见光图像。
在本申请实施例中,binning模式(像素合并模式)可以将相邻两个或以上相同颜色的像素合并为一个像素。由于低照度环境下拍摄的可见光图像的色彩效果会受到影响,因此图像传感器通过binning模式来强化可见光图像中的色彩。
结合第一方面的第二种可能的实施方式,在第三种可能的实施方式中,所述binning模式包括:2x2binning、3x3binning、4x4binning。
在本申请实施例中,可以根据成像设备经常使用的环境条件,选择不同的binning模式,理论上合并的像素单元数量越多,则对低照度的适应性越强。
结合第一方面的第二种可能的实施方式,或者结合第一方面的第三种可能的实施方式,在第四种可能的实施方式中,所述可见光图像包括RGB三通道图像,所述可见光图像的RGB三通道图像所采用的binning模式不同。
在本申请实施例中,由于低照度环境下,不同颜色对光强度的敏感度不同,因此可以将可见光图像对应的RGB通道的binning模式不同。
结合第一方面的第四种可能的实施方式,在第五种可能的实施方式中,所述可见光图像的B通道所采用的binning模式中合并的像素单元的数量大于所述可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量,和/或
所述可见光图像的R通道所采用的binning模式中合并的像素单元的数量大于所述可见光图像的G通道图像所采用的binning模式中合并的像素单元的数量。
在本申请实施例中,可以根据颜色对光强度的敏感度设置对应颜色通道的binning模式不同,从而提高可见光图像的色彩效果。
结合第一方面的第一种至第五种中任一可能的实施方式,在第六种可能的实施方式中,所述对所述可见光图像和所述红外图像进行融合处理,得到融合图像,包括:
对所述可见光图像进行去模糊处理,得到去模糊后的可见光图像;
对所述去模糊后的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
在本申请实施例中,由于延长了可见光图像的曝光时长,因此可见光图像中会引入拖影,使得可见光图像变得模糊。因此,可以通过去模糊算法对可见光图像进行去模糊处理,得到去模糊后的可见光图像,最后对去模糊后的可见光图像和所述红外图像进行融合处理,得到融合图像。对可见光图像进行去模糊处理,可以有效提高融合图像的质量。本实施例不限定去模糊的具体算法,例如可以采用二值化去噪方式对可见光图像进行去模糊处理,或者利用专业的去模糊软件进行去模糊处理,用以消除长曝光引入的拖影。
结合第一方面的第一种至第六种中任一可能的实施方式,在第七种可能的实施方式中,所述对所述可见光图像和所述红外图像进行融合处理,得到融合图像,包括:
对所述可见光图像进行上采样处理,得到与所述红外光图像的分辨率相同的可见光图像;
对所述与所述红外光图像的分辨率相同的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
在本申请实施例中,由于长曝光获取到的可见光图像可能会牺牲图像分辨率,使得可见光图像的分辨率低于红外光图像的分辨率。因此在进行图像融合之前,需要对可见光图像进行上采样处理,得到与红外光图像的分辨率相同的可见光图像。本申请实施例不限定上采样的具体算法。
结合第一方面的第七种可能的实施方式,在第八种可能的实施方式中,所述对所述可见光图像进行上采样处理,得到与所述红外光图像的分辨率相同的可见光图像,包括:
根据所述红外光图像的分辨率,对所述可见光图像进行插值处理,得到与所述红外光图像的分辨率相同的可见光图像。
在本申请实施例中,可以采用插值方式使得可见光图像的分辨率与红外光图像的分辨率一致。可选地,可以采用邻近插值、线性插值、均值插值、中值插值等等方法,本实施例不限定具体的插值算法。
结合第一方面的第一种至第八种中任一可能的实施方式,在第九种可能的实施方式中,对所述可见光图像和所述红外图像进行融合处理,得到融合图像,包括:
对至少两张所述可见光图像进行插帧处理,得到与所述红外光图帧率相同的可见光图像;
对所述与所述红外光图像帧率相同的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
在本申请实施例中,由于可见光图像对应的曝光时长大于红外光图像对应的曝光时长,因此可见光图像的采集帧率低于红外光图像的采集帧率。当应用在监控摄像头拍摄的视频图像时,在相同时间段内采集的可见光图像和红外光图像的数量不同,因此需要对至少两张所述可见光图像进行插帧处理,得到与所述红外光图帧率相同的可见光图像,以确保得到与红外光图像数量相同的融合图像。
结合第一方面的第一种至第九种中任一可能的实施方式,在第十种可能的实施方式中,对所述可见光图像和所述红外图像进行融合处理,得到融合图像,包括:
对所述可见光图像进行低通滤波处理,得到所述可见光图像的低频信息;
对所述红外光图像进行低通滤波处理,得到所述红外光图像的低频信息,根据所述红外光图像的低频信息和所述红外光图像,得到所述红外光图像的高频信息;所述红外光图像的高频信息包括纹理信息;
将所述可见光图像的低频信息和所述红外光图像的高频信息进行融合,得到所述融合图像。
在本申请实施例中,通过分别对可见光图像和红外光图像进行低通滤波处理,以消除高频分量和噪声,得到可见光图像对应的低频信息,以及红外光图像对应的低频信息。然后根据红外光图像对应的低频信息和红外光图像,得到红外光图像的纹理信息。由于红外光图像的纹理信息比可见光图像的纹理信息要丰富,因此,在融合时,将滤波后的红外光图像的纹理信息融合到所述滤波后的可见光图像上,得到所述融合图像。从而可以提升融合图像的质量,使其具备更加丰富的纹理信息。
第二方面,本申请实施例提供一种图像处理方法,包括:
通过摄像头获取当前场景的可见光图像和红外光图像;其中,所述可见光图像的分辨率低于所述红外光图像的分辨率;
对所述可见光图像和所述红外图像进行融合处理,得到融合图像。
在本申请实施例中,上述图像处理方法可以应用在具备成像功能的设备中,可以在该设备出厂之前或者之后,分别设置可见光图像对应的分辨率和红外光图像对应的分辨率。通过降低可见光图像的对应的分辨率,来增加呈现设备在低照度下的感光能力,然后再对可见光图像和红外光图像进行融合处理,使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
具体地,以摄像设备为例,在摄像设备的镜头后设置一个分频谱棱镜和两个图像传感器;环境光经过镜头之后通过分频棱镜,分为可见光和红外光,可见光被RGB传感器(色彩传感器)接收,得到可见光图像,红外光被近红外光谱技术(Near Infrared,NIR)传感器接收,得到红外光图像。由于可将光图像对应的分辨率低于红外图像的对应的分辨率,因此可见光图像更够适应更低的环境照度,使得融合图像的色彩效果更佳。
结合第二方面,在第一种可能的实施方式中,所述可见光图像对应的图像传感器的靶面尺寸大于所述红外光图像对应的图像传感器的靶面尺寸。
在本申请实施例中,靶面尺寸用于表征图像传感器中感光部分的大小,靶面尺寸越大,则对应的图像传感器有更大的通光量。可以在成相设备出厂之前,设置可见光图像对应的图像传感器的靶面尺寸大于红外光图像对应的图像传感器的靶面尺寸,从而使得可见光图像对应的图像传感器具备更强大的感光能力。
结合第二方面,或者结合第二方面的第一种可能的实施方式,在第二种可能的实施方式中,所述通过摄像头获取当前场景的可见光图像,包括:
采用binning模式获取当前场景的所述可见光图像。
在本申请实施例中,binning模式可以将相邻两个或以上相同颜色的像素合并为一个像素。由于低照度环境下拍摄的可见光图像的色彩效果会受到影响,因此图像传感器通过binning模式来强化可见光图像中的色彩。
结合第二方面的第二种可能的实施方式,在第三种可能的实施方式中,所述binning模式包括:2x2binning、3x3binning、4x4binning。
在本申请实施例中,可以根据成像设备经常使用的环境条件,选择不同的binning模式,理论上合并的像素单元数量越多,则对低照度的适应性越强。
结合第二方面的第二种可能的实施方式,或者结合第二方面的第三种可能的实施方式,在第四种可能的实施方式中,所述可见光图像包括RGB三通道图像,所述可见光图像的RGB三通道图像所采用的binning模式不同。
在本申请实施例中,由于低照度环境下,不同颜色对光强度的敏感度不同,因此可以将可见光图像对应的RGB通道的binning模式不同。
结合第二方面的第四种可能的实施方式,在第五种可能的实施方式中,所述可见光图像的B通道图像所采用的binning模式中合并的像素单元的数量大于所述可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量,和/或
所述可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量大于所述可见光图像的G通道图像所采用的binning模式中合并的像素单元的数量。
在本申请实施例中,可以根据颜色对光强度的敏感度设置对应颜色通道的binning模式不同,从而提高可见光图像的色彩效果。
结合第二方面的第一种至第五种中任一可能的实施方式,在第六种可能的实施方式中,所述对所述可见光图像和所述红外图像进行融合处理,得到融合图像,包括:
对所述可见光图像进行去模糊处理,得到去模糊后的可见光图像;
对所述去模糊后的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
在本申请实施例中,在低照度环境下,可见光图像中会引入拖影,使得可见光图像变得模糊。因此,可以通过去模糊算法对可见光图像进行去模糊处理,得到去模糊后的可见光图像,最后对去模糊后的可见光图像和所述红外图像进行融合处理,得到融合图像。对可见光图像进行去模糊处理,可以有效提高融合图像的质量。本实施例不限定去模糊的具体算法,例如可以采用二值化去噪方式对可见光图像进行去模糊处理,或者利用专业的去模糊软件进行去模糊处理,用以消除长曝光引入的拖影。
结合第二方面的第一种至第六种中任一可能的实施方式,在第七种可能的实施方式中,所述对所述可见光图像和所述红外图像进行融合处理,得到融合图像,包括:
对所述可见光图像进行上采样处理,得到与所述红外光图像的分辨率相同的可见光图像;
对所述与所述红外光图像的分辨率相同的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
在本申请实施例中,由于可见光图像的分辨率低于红外光图像的分辨率。因此在进行图像融合之前,需要对可见光图像进行上采样处理,得到与红外光图像的分辨率相同的可见光图像。本申请实施例不限定上采样的具体算法。
结合第二方面的第七种可能的实施方式,在第八种可能的实施方式中,所述对所述可见光图像进行上采样处理,得到与所述红外光图像的分辨率相同的可见光图像,包括:
根据所述红外光图像的分辨率,对所述可见光图像进行插值处理,得到与所述红外光图像的分辨率相同的可见光图像。
在本申请实施例中,可以采用插值方式使得可见光图像的分辨率与红外光图像的分辨率一致。可选地,可以采用邻近插值、线性插值、均值插值、中值插值等等方法,本实施例不限定具体的插值算法。
结合第二方面的第一种至第八种中任一可能的实施方式,在第九种可能的实施方式中,对所述可见光图像和所述红外图像进行融合处理,得到融合图像,包括:
对所述可见光图像进行低通滤波处理,得到所述可见光图像的低频信息;
对所述红外光图像进行低通滤波处理,得到所述红外光图像的低频信息,根据所述红外光图像的低频信息和所述红外光图像,得到所述红外光图像的高频信息;所述红外光图像的高频信息包括纹理信息;
将所述可见光图像的低频信息和所述红外光图像的高频信息进行融合,得到所述融合图像。
在本申请实施例中,通过分别对可见光图像和红外光图像进行低通滤波处理,以消除高频分量和噪声,得到可见光图像对应的低频信息,以及红外光图像对应的低频信息。然后根据红外光图像对应的低频信息和红外光图像,得到红外光图像的纹理信息。由于红外光图像的纹理信息比可见光图像的纹理信息要丰富,因此,在融合时,将滤波后的红外光图像的纹理信息融合到所述滤波后的可见光图像上,得到所述融合图像。从而可以提升融合图像的质量,使其具备更加丰富的纹理信息。
第三方面,本申请实施例提供一种图像处理装置,包括:
摄像模块,用于获取当前场景的可见光图像和红外光图像;其中,所述可见光图像对应的曝光时长大于所述红外光图像对应的曝光时长;
处理模块,用于对所述可见光图像和所述红外图像进行融合处理,得到融合图像。
结合第三方面,在第一种可能的实施方式中,所述可见光图像对应的图像传感器的靶面尺寸大于所述红外光图像对应的图像传感器的靶面尺寸。
结合第三方面,或者结合第三方面的第一种可能的实施方式中,在第二种可能的实施方式中,所述摄像模块,具体用于:
采用binning模式获取当前场景的所述可见光图像。
结合第三方面的第二种可能的实施方式,在第三种可能的实施方式中,所述binning模式包括:2x2binning、3x3binning、4x4binning。
结合第三方面的第二种可能的实施方式,或者结合第三方面的第三种可能的实施方式,在第四种可能的实施方式中,所述可见光图像包括RGB三通道图像,所述可见光图像的RGB三通道图像所采用的binning模式不同。
结合第三方面的第四种可能的实施方式,在第五种可能的实施方式中,所述可见光图像的B通道图像所采用的binning模式中合并的像素单元的数量大于所述可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量,和/或
所述可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量大于所述可见光图像的G通道图像所采用的binning模式中合并的像素单元的数量。
结合第三方面的第一种至第五种中任一可能的实施方式,在第六种可能的实施方式中,所述处理模块,具体用于:
对所述可见光图像进行去模糊处理,得到去模糊后的可见光图像;
对所述去模糊后的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
结合第三方面的第一种至第六种中任一可能的实施方式,在第七种可能的实施方式中,所述处理模块,具体用于:
对所述可见光图像进行上采样处理,得到与所述红外光图像的分辨率相同的可见光图像;
对所述与所述红外光图像的分辨率相同的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
结合第三方面的第七种可能的实施方式,在第八种可能的实施方式中,所述处理模块,具体用于:
根据所述红外光图像的分辨率,对所述可见光图像进行插值处理,得到与所述红外光图像的分辨率相同的可见光图像。
结合第三方面的第一种至第八种中任一可能的实施方式,在第九种可能的实施方式中,所述处理模块,具体用于:
对至少两张所述可见光图像进行插帧处理,得到与所述红外光图像帧率相同的可见光图像;
对所述与所述红外光图像帧率相同的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
结合第三方面的第一种至第九种中任一可能的实施方式,在第十种可能的实施方式中,所述处理模块,具体用于:
对所述可见光图像进行低通滤波处理,得到所述可见光图像的低频信息;
对所述红外光图像进行低通滤波处理,得到所述红外光图像的低频信息,根据所述红外光图像的低频信息和所述红外光图像,得到所述红外光图像的高频信息;所述红外光图像的高频信息 包括纹理信息;
将所述可见光图像的低频信息和所述红外光图像的高频信息进行融合,得到所述融合图像。
第四方面,本申请实施例提供一种图像处理装置,包括:
摄像模块,用于获取当前场景的可见光图像和红外光图像;其中,所述可见光图像的分辨率低于所述红外光图像的分辨率;
处理模块,用于对所述可见光图像和所述红外图像进行融合处理,得到融合图像。
结合第四方面,在第一种可能的实施方式中,所述可见光图像对应的图像传感器的靶面尺寸大于所述红外光图像对应的图像传感器的靶面尺寸。
结合第四方面,或者结合第四方面的第一种可能的实施方式,在第二种可能的实施方式中,所述摄像模块,具体用于:
采用binning模式获取当前场景的所述可见光图像。
结合第四方面的第二种可能的实施方式,在第三种可能的实施方式中,所述binning模式包括:2x2binning、3x3binning、4x4binning。
结合第四方面的第二种可能的实施方式,或者结合第四方面的第三种可能的实施方式,在第四种可能的实施方式中,所述可见光图像包括RGB三通道图像,所述可见光图像的RGB三通道图像所采用的binning模式不同。
结合第四方面的第四种可能的实施方式,在第五种可能的实施方式中,所述可见光图像的B通道图像所采用的binning模式中合并的像素单元的数量大于所述可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量,和/或
所述可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量大于所述可见光图像的G通道图像所采用的binning模式中合并的像素单元的数量。
结合第四方面的第一种至第五种中任一可能的实施方式,在第六种可能的实施方式中,所述处理模块,具体用于:
对所述可见光图像进行去模糊处理,得到去模糊后的可见光图像;
对所述去模糊后的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
结合第四方面的第一种至第六种中任一可能的实施方式,在第七种可能的实施方式中,所述处理模块,具体用于:
对所述可见光图像进行上采样处理,得到与所述红外光图像的分辨率相同的可见光图像;
对所述与所述红外光图像的分辨率相同的可见光图像和所述红外图像进行融合处理,得到所述融合图像。
结合第四方面的第七种可能的实施方式,在第八种可能的实施方式中,所述处理模块,具体用于:
根据所述红外光图像的分辨率,对所述可见光图像进行插值处理,得到与所述红外光图像的分辨率相同的可见光图像。
结合第四方面的第一种至第八种中任一可能的实施方式,在第九种可能的实施方式中,所述处理模块,具体用于:
对所述可见光图像进行低通滤波处理,得到所述可见光图像的低频信息;
对所述红外光图像进行低通滤波处理,得到所述红外光图像的低频信息,根据所述红外光图像的低频信息和所述红外光图像,得到所述红外光图像的高频信息;所述红外光图像的高频信息 包括纹理信息;
将所述可见光图像的低频信息和所述红外光图像的高频信息进行融合,得到所述融合图像。
第五方面,本申请实施例提供一种成相设备,包括摄像头、处理器、存储器;所述镜头用于采集可见光图像和红外图像,所述存储器用于存储程序指令,所述处理器用于调用存储器中的程序指令执行如第一方面或第一方面中任一可能的实施方式所述的图像处理方法。
第六方面,本申请实施例提供一种成相设备,包括摄像头、处理器、存储器;所述镜头用于采集可见光图像和红外图像,所述存储器用于存储程序指令,所述处理器用于调用存储器中的程序指令执行如第二方面或第二方面中任一可能的实施方式所述的图像处理方法。
第七方面,本申请实施例提供一种可读存储介质,所述可读存储介质上存储有计算机程序;所述计算机程序在被执行时,实现第一方面本申请实施例所述的图像处理方法。
第八方面,本申请实施例提供一种可读存储介质,所述可读存储介质上存储有计算机程序;所述计算机程序在被执行时,实现第二方面本申请实施例所述的图像处理方法。
第九方面,本申请实施例提供一种程序产品,所述程序产品包括计算机程序,所述计算机程序存储在可读存储介质中,图像处理装置的至少一个处理器可以从所述可读存储介质读取所述计算机程序,所述至少一个处理器执行所述计算机程序使得图像处理装置实施第一方面本申请实施例任一所述的图像处理方法。该存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
第十方面,本申请实施例提供一种程序产品,所述程序产品包括计算机程序,所述计算机程序存储在可读存储介质中,图像处理装置的至少一个处理器可以从所述可读存储介质读取所述计算机程序,所述至少一个处理器执行所述计算机程序使得图像处理装置实施第二方面本申请实施例任一所述的图像处理方法。该存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
在本申请实施例中,通过不同的曝光时长,通过摄像头获取当前场景的可见光图像和红外图像,对可见光图像和红外图像进行融合处理,得到融合图像。因此,可以通过设置不同的曝光时长,来增加呈现设备在低照度下的感光能力,从而使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
图1为本申请实施例提供的成像装置的结构示意图;
图2为本申请实施例提供的图像处理方法的流程图一;
图3为2x2binning阵列的像素合并原理示意图;
图4为本申请实施例提供的图像处理方法的流程图二;
图5为本申请实施例提供的图像处理方法的流程图三;
图6为本申请实施例提供的图像处理方法的流程图四;
图7为可见光图像的插帧原理示意图;
图8为本申请实施例提供的图像处理方法的流程图五;
图9为本申请实施例提供的图像处理方法的流程图六;
图10为本申请实施例提供的图像处理方法的流程图七;
图11为本申请实施例提供的图像处理装置的结构示意图一;
图12为本申请实施例提供的图像处理装置的结构示意图二;
图13为本申请实施例提供的监控设备的结构示意图;
图14为与本申请实施例提供的摄像装置的部分结构的框图。
图1为本申请实施例提供的成像装置的结构示意图,如图1所示,成像装置100包括:红外镜头110、分频棱镜120、RGB传感器130、NIR传感器140、处理器150、显示屏160;环境光经过红外镜头110之后通过分频棱镜120,分为可见光和红外光;其中,可见光被RGB传感器130接收,并生成可见光图像;红外光被NIR传感器140接收,并生成红外光图像。处理器150分别对生成的可见光图像和红外光图像进行融合处理,并在显示屏160上显示融合图像。
在本申请实施例中,可见光图像的曝光时长与红外光图像的曝光时长不同。例如,可以将红外光图像的曝光时长设置为标准的曝光时长,将可见光图像的曝光时长延长,使得可见光图像能够获取到更多的色彩信息,以适用于低照度环境。处理器150对可见光图像进行还原处理,得到可见光还原图像;然后对可见光还原图像和红外图像进行融合处理,得到融合图像。这种方式可以增加呈现设备在低照度下的感光能力,从而使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
以下,对本申请中的部分用语进行解释说明,以便于本领域技术人员理解:
1)红外镜头:是指能够同时接收环境光中的可见光和红外光的镜头。现有的监控设备一般都安装有红外镜头。
2)分频棱镜:是指能够将红外光(波长>700nm)和可见光(波长为400nm~700nm)分离的棱镜。
3)RGB传感器:又称为色彩传感器,能够识别红色R、绿色G、蓝色B三种颜色的传感器,能够将可见光转换为人眼视觉感知相同的图像。
4)NIR传感器:近红外光谱技术(Near Infrared,NIR)传感器,用于将红外光转换为灰度图像。
下面采用具体的实施例对本申请的通信的方法进行详细说明,需要说明的是,下面几个具体实施例可以相互结合,对于相同或相似的内容,在不同的实施例中不再进行重复说明。
图2为本申请实施例提供的图像处理方法的流程图一,参见图2,本实施例的方法包括:
步骤S101、通过摄像头获取当前场景的可见光图像和红外光图像;其中,可见光图像对应的曝光时长大于红外光图像对应的曝光时长。
步骤S102、对可见光还原图像和红外图像进行融合处理,得到融合图像。
本实施例中,可以应用在具备成像功能的设备中,例如监控设备、夜视仪等等。通过设置对可见光图像对应的曝光时长大于红外光图像的对应的曝光时长,来增加在低照度下的感光能力。本实施例中的方法适用于光线较暗的场景,例如夜间环境、阴雨天,以及光线昏暗的室内环境等等。
示例性的,以分频棱镜相机为例。分频棱镜相机包括:红外镜头、分频谱棱镜、RGB传感器和NIR传感器。环境光进入红外镜头之后通过分频谱棱镜,分为可见光和红外光,可见光被RGB传感器接收,生成可见光图像,红外光被NIR传感器接收,生成红外光图像。在分频棱镜相机出 厂之前或者之后,设置可见光图像对应的曝光时长和红外光图像对应的曝光时长。通过延长可见光图像的对应的曝光时长,来增加成像设备在低照度下的感光能力,然后再对可见光图像和红外光图像进行融合处理,使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
示例性的,可以设置可见光图像的曝光时长为80ms;红外光图像的曝光时长为10ms。具体地,以人脸场景为例,现有技术中,为了避免产生模糊问题,可见光图像和红外光图像的曝光都是10ms。应用本申请中的方法时,可以将可见光图像的曝光时间提升到80ms,从而可以显著改善融合图像的质量,适用于低照度场景。
需要说明的是,本实施例不限定可见光图像和红外光图像的具体曝光时长。红外光图像的曝光时长的设置以不引起运动模糊为参照标准。可将光图像的曝光时长则可以根据实际的环境照度进行灵活调整。示例性的,可见光图像对应的图像传感器的靶面尺寸大于红外光图像对应的图像传感器的靶面尺寸。
本实施例中,靶面尺寸用于表征图像传感器中感光部分的大小,靶面尺寸越大,则对应的图像传感器有更大的通光量。可以在成相设备出厂之前,设置可见光图像对应的图像传感器的靶面尺寸大于红外光图像对应的图像传感器的靶面尺寸,从而使得可见光图像对应的图像传感器具备更强大的感光能力。
示例性的,可以设置可见光图像对应的图像传感器的分辨率小于红外光图像对应的图像传感器的分辨率。
具体地,以分频棱镜相机为例,可以在相机出厂之前设置RGB传感器的分辨率低于NIR传感器的分辨率,从而使得RGB传感器采集的可见光图像能够适用于光照度更低的环境。
示例性的,可以采用binning模式(又称为像素合并模式)获取当前场景的可见光图像。
需要说明的是,红外光图像不使用binning模式采集。
本实施例中,binning模式是指将图像传感器采集的多个像素合并为一个像素。由于低照度环境下拍摄的可见光图像的色彩效果会受到影响,因此图像传感器通过binning模式来强化可见光图像中的色彩。
示例性的,binning模式包括:2x2binning、3x3binning、4x4binning等等。具体地,2x2binning阵列,是指将四个相同颜色的像素合并为一个像素;4x4binning阵列是指将16个相同颜色的像素合并为一个像素。
本实施例中,可以根据成像设备经常使用的环境条件,选择不同的binning模式,理论上合并的像素单元数量越多,则对低照度的适应性越强。
具体地,图3为2x2binning阵列的像素合并原理示意图,如图3所示,以2x2个像素为一个单元进行像素合并;其中,合并后的像素为四个像素的累加值。
示例性的,可见光图像包括RGB三通道图像,可见光图像的RGB三通道图像所采用的binning模式不同。
在本申请实施例中,由于低照度环境下,不同颜色对光强度的敏感度不同,因此可以将可见光图像对应的RGB通道的binning模式不同。
示例性的,可见光图像的B通道图像所采用的binning模式中合并的像素单元的数量大于可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量,和/或
可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量大于可见光图像的G通道图像所采用的binning模式中合并的像素单元的数量。
本实施例中,可以根据颜色对光强度的敏感度设置对应颜色通道的binning模式不同,从而提高可见光图像的色彩效果。
在步骤S102中,在低照度下的场景下,红外光图像的纹理较可见光图像更清晰,因此可以将红外光图像的纹理信息融合到可见光图像上,从而得到融合图像。而延长可见光图像的曝光时长来增加呈现设备在低照度下的感光能力,从而使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
示例性的,可以对可见光图像进行低通滤波处理,得到可见光图像的低频信息;对红外光图像进行低通滤波处理,得到红外光图像的低频信息,根据红外光图像的低频信息和红外光图像,得到红外光图像的高频信息;红外光图像的高频信息包括纹理信息;将可见光图像的低频信息和红外光图像的高频信息进行融合,得到融合图像。
本实施例中,通过分别对可见光图像和红外光图像进行低通滤波处理,以消除高频分量和噪声,得到可见光图像对应的低频信息,以及红外光图像对应的低频信息。然后根据红外光图像对应的低频信息和红外光图像,得到红外光图像的纹理信息。由于红外光图像的纹理信息比可见光图像的纹理信息要丰富,因此,在融合时,将滤波后的红外光图像的纹理信息融合到滤波后的可见光图像上,得到融合图像。从而可以提升融合图像的质量,使其具备更加丰富的纹理信息。
本实施例,通过摄像头获取当前场景的可见光图像和红外光图像;且可见光图像对应的曝光时长大于红外光图像对应的曝光时长;然后对可见光图像和红外图像进行融合处理,得到融合图像。从而可以提高成像设备在低照度下的感光能力,提升图像质量。
图4为本申请实施例提供的图像处理方法的流程图二,参见图4,本实施例的方法包括:
步骤S201、通过摄像头获取当前场景的可见光图像和红外光图像;其中,可见光图像对应的曝光时长大于红外光图像对应的曝光时长。
本实施例中步骤S201的现具体的实现原理和实现过程与图2所示方法的实现原理和实现过程类似,此处不再赘述。
步骤S202、对可见光图像进行去模糊处理,得到去模糊后的可见光图像。
步骤S203、对去模糊后的可见光图像和红外图像进行融合处理,得到融合图像。
本实施例的步骤S202、步骤S203中,由于延长了可见光图像的曝光时长,因此可见光图像中会引入拖影,使得可见光图像变得模糊。因此,可以通过去模糊算法对可见光图像进行去模糊处理,得到去模糊后的可见光图像,最后对去模糊后的可见光图像和红外图像进行融合处理,得到融合图像。对可见光图像进行去模糊处理,可以有效提高融合图像的质量。本实施例不限定去模糊的具体算法,例如可以采用二值化去噪方式对可见光图像进行去模糊处理,或者利用专业的去模糊软件进行去模糊处理,用以消除长曝光引入的拖影。
本实施例,通过摄像头获取当前场景的可见光图像和红外光图像;且可见光图像对应的曝光时长大于红外光图像对应的曝光时长;然后对可见光图像进去模糊处理,将去模糊处理的可见光图像和红外图像进行融合处理,得到融合图像。从而可以消除可见光图像在长曝光下引入的拖影,提高融合图像的质量。
图5为本申请实施例提供的图像处理方法的流程图三,参见图5,本实施例的方法包括:
步骤S301、通过摄像头获取当前场景的可见光图像和红外光图像;其中,可见光图像对应的曝光时长大于红外光图像对应的曝光时长。
本实施例中步骤S301的现具体的实现原理和实现过程与图2所示方法的实现原理和实现过程 类似,此处不再赘述。
步骤S302、对可见光图像进行上采样处理,得到与红外光图像的分辨率相同的可见光图像。
步骤S303、对与红外光图像的分辨率相同的可见光图像和红外图像进行融合处理,得到融合图像。
在步骤S302、步骤S303中,由于长曝光获取到的可见光图像可能会牺牲图像分辨率,使得可见光图像的分辨率低于红外光图像的分辨率。因此在进行图像融合之前,需要对可见光图像进行上采样处理,得到与红外光图像的分辨率相同的可见光图像。本申请实施例不限定上采样的具体算法。
示例性的,红外光图像采用全分辨率,可见光图像采用低分辨率,从而可以使得可见光图像能够适用于更长的曝光时长,实现在低照度下获取充分的色彩信息。例如,可见光图像的分辨率为1280x720,红外光图像的分辨率为2560x1440,然后可以通过bicubic算法将可见光图像从1280x720上采样到2560x1440,得到与红外光图像分辨率相同的可见光图像。
示例性的,可以根据红外光图像的分辨率,对可见光图像进行插值处理,得到与红外光图像的分辨率相同的可见光图像。
本实施例中,可以采用插值方式使得可见光图像的分辨率与红外光图像的分辨率一致。可选地,可以采用邻近插值、线性插值、均值插值、中值插值等等方法,本实施例不限定具体的插值算法。
本实施例,通过摄像头获取当前场景的可见光图像和红外光图像;且可见光图像对应的曝光时长大于红外光图像对应的曝光时长;然后对可见光图像进去上采样处理,得到与红外光图像的分辨率相同的可见光图像,最后将与红外光图像分辨率相同的可见光图像和红外图像进行融合处理,得到融合图像。从而可以使得可见光图像更够适应更低的环境照度,使得融合图像的色彩效果更佳。
图6为本申请实施例提供的图像处理方法的流程图四,参见图6,本实施例的方法包括:
步骤S401、通过摄像头获取当前场景的可见光图像和红外光图像;其中,可见光图像对应的曝光时长大于红外光图像对应的曝光时长。
本实施例中步骤S401的现具体的实现原理和实现过程与图2所示方法的实现原理和实现过程类似,此处不再赘述。
步骤S402、对至少两张可见光图像进行插帧处理,得到与红外光图像帧率相同的可见光图像。
步骤S403、对与红外光图像帧率相同的可见光图像和红外图像进行融合处理,得到融合图像。
在本实施例中,可见光图像的采集帧率小于红外光图像的采集帧率。示例性的,红外光图像的采集帧率可以是采用现有标准的图像采集帧率。当可见光图像的采集帧率小于40ms对应的采集帧率时,则对可见光图像进行插帧处理。或者,当可见光图像的采集帧率小于33.3ms对应的采集帧率时,则对可见光图像进行插帧处理。
需要说明的是,本实施例不限定可见光图像的具体采集帧率。当可见光图像的采集帧率小于红外光图像的采集帧率时,可以通过插帧的方式,使得可见光图像的采集帧率与红外光图像的采集帧率率相同。
在步骤S402、步骤S403中,由于可见光图像对应的曝光时长大于红外光图像对应的曝光时长,因此可见光图像的采集帧率低于红外光图像的采集帧率。当应用在监控摄像头拍摄的视频图像时,在相同时间段内采集的可见光图像和红外光图像的数量不同,因此需要对至少两张可见光 图像进行插帧处理,得到与红外光图数量相同的可见光图像,以确保得到与红外光图像数量相同的融合图像。
本实施例中,为了适应低照度环境,延长了曝光时间,因此降低了可见光图像的采集帧率。在融合之前,需要通过插帧技术对可见光图像进行补帧。具体地,可以同分析前后两帧可见光图像预测中间帧的可见光图像,从而使得可见光图像的数量与红外光图像的数量相同。
具体地,图7为可见光图像的插帧原理示意图,如图7所示,红外传感器的采集帧率是可见光传感器采集帧率的2倍。红外传感器采集到了1~10帧红外光图像;可见光传感器采集到了1、3、5、7、9共五帧可见光图像。通过第1帧可见光图像和第3帧可见光图像,预测得到第2帧可见光图像;通过第3帧可见光图像和第5帧可见光图像,预测得到第4帧可见光图像;通过第5帧可见光图像和第7帧可见光图像,预测得到第6帧可见光图像;通过第7帧可见光图像和第9帧可见光图像,预测得到第8帧可见光图像;依次类推,直到得到与红外传感器采集的相同数量的可见光图像。
示例性的,可以设置可见光图像的采集帧率为红外光的采集帧率的1/2;从而可以方便后续对可见光图像进行还原处理时的插帧运算。
示例性的,可以设置第一帧率为25fps,第二帧率为12.5fps。
需要说明的是,本实施例不限定可见光图像和红外光图像的具体采集帧率,以及可见光图像和红外光图像的分辨率。在实际应用中,可以根据成像设备自身器件的性能进行灵活设置。
本实施例,通过摄像头获取当前场景的可见光图像和红外光图像;且可见光图像对应的曝光时长大于红外光图像对应的曝光时长;然后根据红外光图像的采集帧率对至少两张可见光图像进行插帧处理,得到与红外光图数量相同的可见光图像,最后对与红外光图像数量相同的可见光图像和红外图像进行融合处理,得到融合图像。从而可以应用于监控设备拍摄的多帧视频图像的融合处理,使得视频图像质量更佳。
图8为本申请实施例提供的图像处理方法的流程图五,参见图8,本实施例的方法包括:
步骤S501、通过摄像头获取当前场景的可见光图像和红外光图像;其中,可见光图像的分辨率低于红外光图像的分辨率。
步骤S502、对可见光图像和红外图像进行融合处理,得到融合图像。
本实施例中,可以应用在具备成像功能的设备中,例如监控设备、夜视仪等等。通过降低可见光图像的分辨率,来增加在低照度下的感光能力。本实施例中的方法适用于光线较暗的场景,例如夜间环境、阴雨天,以及光线昏暗的室内环境等等。
示例性的,以分频棱镜相机为例。分频棱镜相机包括:红外镜头、分频谱棱镜、RGB传感器和NIR传感器。环境光进入红外镜头之后通过分频谱棱镜,分为可见光和红外光,可见光被RGB传感器接收,生成可见光图像,红外光被NIR传感器接收,生成红外光图像。在分频棱镜相机出厂之前或者之后,设置可见光图像的分辨率低于红外光图像的分辨率。从而增加成像设备在低照度下的感光能力,然后再对可见光图像和红外光图像进行融合处理,使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
示例性的,可见光图像对应的图像传感器的靶面尺寸大于红外光图像对应的图像传感器的靶面尺寸。
本实施例中,靶面尺寸用于表征图像传感器中感光部分的大小,靶面尺寸越大,则对应的图像传感器有更大的通光量。可以在成相设备出厂之前,设置可见光图像对应的图像传感器的靶面 尺寸大于红外光图像对应的图像传感器的靶面尺寸,从而使得可见光图像对应的图像传感器具备更强大的感光能力。
示例性的,可以采用binning模式获取当前场景的可见光图像。
需要说明的是,红外光图像不使用binning模式采集。
本实施例中,binning模式是指将图像传感器采集的多个像素合并为一个像素。由于低照度环境下拍摄的可见光图像的色彩效果会受到影响,因此图像传感器通过binning模式来强化可见光图像中的色彩。
示例性的,binning模式包括:2x2binning、3x3binning、4x4binning等等。具体地,2x2binning阵列,是指将四个相同颜色的像素合并为一个像素;4x4binning阵列是指将16个相同颜色的像素合并为一个像素。
本实施例中,可以根据成像设备经常使用的环境条件,选择不同的binning模式,理论上合并的像素单元数量越多,则对低照度的适应性越强。
示例性的,可见光图像包括RGB三通道图像,可见光图像的RGB三通道图像所采用的binning模式不同。
在本申请实施例中,由于低照度环境下,不同颜色对光强度的敏感度不同,因此可以将可见光图像对应的RGB通道的binning模式不同。
示例性的,可见光图像的B通道图像所采用的binning模式中合并的像素单元的数量大于可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量,和/或
可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量大于可见光图像的G通道图像所采用的binning模式中合并的像素单元的数量。
本实施例中,可以根据颜色对光强度的敏感度设置对应颜色通道的binning模式不同,从而提高可见光图像的色彩效果。
在步骤S502中,在低照度下的场景下,红外光图像的纹理较可见光图像更清晰,因此可以将红外光图像的纹理信息融合到可见光图像上,从而得到融合图像。而延长可见光图像的曝光时长来增加呈现设备在低照度下的感光能力,从而使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
示例性的,可以对可见光图像进行低通滤波处理,得到可见光图像的低频信息;对红外光图像进行低通滤波处理,得到红外光图像的低频信息,根据红外光图像的低频信息和红外光图像,得到红外光图像的高频信息;红外光图像的高频信息包括纹理信息;将可见光图像的低频信息和红外光图像的高频信息进行融合,得到融合图像。
本实施例中,通过分别对可见光图像和红外光图像进行低通滤波处理,以消除高频分量和噪声,得到可见光图像对应的低频信息,以及红外光图像对应的低频信息。然后根据红外光图像对应的低频信息和红外光图像,得到红外光图像的纹理信息。由于红外光图像的纹理信息比可见光图像的纹理信息要丰富,因此,在融合时,将滤波后的红外光图像的纹理信息融合到滤波后的可见光图像上,得到融合图像。从而可以提升融合图像的质量,使其具备更加丰富的纹理信息。
本实施例,通过摄像头获取当前场景的可见光图像和红外光图像;且可见光图像的分辨率低于红外光图像的分辨率;然后对可见光图像和红外图像进行融合处理,得到融合图像。从而可以提高成像设备在低照度下的感光能力,提升图像质量。
图9为本申请实施例提供的图像处理方法的流程图六,参见图9,本实施例的方法包括:
步骤S601、通过摄像头获取当前场景的可见光图像和红外光图像;其中,可见光图像的分辨率低于红外光图像的分辨率。
本实施例中步骤S601的现具体的实现原理和实现过程与图8所示方法的实现原理和实现过程类似,此处不再赘述。
步骤S602、对可见光图像进行去模糊处理,得到去模糊后的可见光图像。
步骤S603、对去模糊后的可见光图像和红外图像进行融合处理,得到融合图像。
本实施例的步骤S602、步骤S603中,可以通过去模糊算法对可见光图像进行去模糊处理,得到去模糊后的可见光图像,最后对去模糊后的可见光图像和红外图像进行融合处理,得到融合图像。对可见光图像进行去模糊处理,可以有效提高融合图像的质量。本实施例不限定去模糊的具体算法,例如可以采用二值化去噪方式对可见光图像进行去模糊处理,或者利用专业的去模糊软件进行去模糊处理,用以消除长曝光引入的拖影。
本实施例,通过摄像头获取当前场景的可见光图像和红外光图像;且可见光图像的分辨率低于红外光图像的分辨率;然后对可见光图像进去模糊处理,将去模糊处理的可见光图像和红外图像进行融合处理,得到融合图像。从而可以消除可见光图像在长曝光下引入的拖影,提高融合图像的质量。
图10为本申请实施例提供的图像处理方法的流程图七,参见图10,本实施例的方法包括:
步骤S701、通过摄像头获取当前场景的可见光图像和红外光图像;其中,可见光图像的分辨率低于红外光图像的分辨率。
本实施例中步骤S701的现具体的实现原理和实现过程与图6所示方法的实现原理和实现过程类似,此处不再赘述。
步骤S702、对可见光图像进行上采样处理,得到与红外光图像的分辨率相同的可见光图像。
步骤S703、对与红外光图像的分辨率相同的可见光图像和红外图像进行融合处理,得到融合图像。
在步骤S702、步骤S703中,由于可见光图像的分辨率低于红外光图像的分辨率,因此在进行图像融合之前,需要对可见光图像进行上采样处理,得到与红外光图像的分辨率相同的可见光图像。本申请实施例不限定上采样的具体算法。
示例性的,红外光图像采用全分辨率,可见光图像采用低分辨率,实现在低照度下获取充分的色彩信息。例如,可见光图像的分辨率为1280x720,红外光图像的分辨率为2560x1440,然后可以通过bicubic算法将可见光图像从1280x720上采样到2560x1440,得到与红外光图像分辨率相同的可见光图像。
示例性的,可以根据红外光图像的分辨率,对可见光图像进行插值处理,得到与红外光图像的分辨率相同的可见光图像。
本实施例中,可以采用插值方式使得可见光图像的分辨率与红外光图像的分辨率一致。可选地,可以采用邻近插值、线性插值、均值插值、中值插值等等方法,本实施例不限定具体的插值算法。
本实施例,通过摄像头获取当前场景的可见光图像和红外光图像;且可见光图像的分辨率低于红外光图像的分辨率;然后对可见光图像进去上采样处理,得到与红外光图像的分辨率相同的可见光图像,最后将与红外光图像分辨率相同的可见光图像和红外图像进行融合处理,得到融合 图像。从而可以使得可见光图像更够适应更低的环境照度,使得融合图像的色彩效果更佳。
图11为本申请实施例提供的图像处理装置的结构示意图一,参见图11,本实施例的装置包括:
摄像模块810,用于通过摄像头获取当前场景的可见光图像和红外光图像;其中,可见光图像对应的曝光时长大于红外光图像对应的曝光时长;
处理模块820,用于对可见光图像和红外图像进行融合处理,得到融合图像。
应理解,上述模块可以是软件模块,也可以是硬件单元或者电路单元。
本实施例中,图像处理装置适用于光线较暗的场景,例如夜间环境、阴雨天,以及光线昏暗的室内环境等等。摄像模块810包括:可见光图像传感器和红外光图像传感器。环境光进入摄像模块810的镜头之后被分为可见光和红外光,可见光被可见光图像传感器接收,生成可见光图像,红外光被红外光图像传感器接收,生成红外光图像。在图像处理装置出厂之前或者之后,可以设置摄像模块810中可见光图像对应的曝光时长和红外光图像对应的曝光时长。通过延长可见光图像的对应的曝光时长,来增加图像处理装置在低照度下的感光能力,然后再通过处理模块820对可见光图像和红外光图像进行融合处理,使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
需要说明的是,处理模块820中可以预先加载有图像处理程序,当程序被调用时,执行对可见光图像和红外光图像的融合处理,得到融合图像。
具体地,以夜间监控场景为例,摄像模块810拍摄夜间环境下的可见光图像和红外光图像,其中,可见光图像对应的曝光时长大于红外光图像对应的曝光时长。摄像模块810将采集到的可见光图像和红外光图像发送给处理模块820,以使得处理模块820对可见光图像和红外光图像进行融合处理。例如,处理模块820对可见光图像进行低通滤波处理,得到去噪后的可见光图像,并提取红外光图像的纹理信息;最后将红外光图像的纹理信息融合到去噪后的可见光图像上,得到融合图像。需要说明的是,本实施例不限定处理模块820所加载的图像处理算法。
示例性的,可见光图像对应的图像传感器的靶面尺寸大于红外光图像对应的图像传感器的靶面尺寸。
示例性的,摄像模块810,具体用于:
采用binning模式获取当前场景的可见光图像。
示例性的,binning模式包括:2x2binning、3x3binning、4x4binning。
示例性的,可见光图像包括RGB三通道图像,可见光图像的RGB三通道图像所采用的binning模式不同。
示例性的,可见光图像的B通道图像所采用的binning模式中合并的像素单元的数量大于可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量,和/或
可见光图像的R通道图像所采用的binning模式中合并的像素单元的数量大于可见光图像的G通道图像所采用的binning模式中合并的像素单元的数量。
示例性的,处理模块820,具体用于:
对可见光图像进行去模糊处理,得到去模糊后的可见光图像;
对去模糊后的可见光图像和红外图像进行融合处理,得到融合图像。
示例性的,处理模块820,具体用于:
对可见光图像进行上采样处理,得到与红外光图像的分辨率相同的可见光图像;
对与红外光图像的分辨率相同的可见光图像和红外图像进行融合处理,得到融合图像。
示例性的,处理模块820,具体用于:
根据红外光图像的分辨率,对可见光图像进行插值处理,得到与红外光图像的分辨率相同的可见光图像。
示例性的,处理模块820,具体用于:
对至少两张可见光图像进行插帧处理,得到与红外光图帧率相同的可见光图像;
对与红外光图像帧率相同的可见光图像和红外图像进行融合处理,得到融合图像。
示例性的,处理模块820,具体用于:
对可见光图像进行低通滤波处理,得到可见光图像的低频信息;
对红外光图像进行低通滤波处理,得到红外光图像的低频信息,根据红外光图像的低频信息和红外光图像,得到红外光图像的高频信息;红外光图像的高频信息包括纹理信息;
将可见光图像的低频信息和红外光图像的高频信息进行融合,得到融合图像。
本实施例,通过摄像头获取当前场景的可见光图像和红外光图像;且可见光图像对应的曝光时长大于红外光图像对应的曝光时长;对可见光图像和红外图像进行融合处理,得到融合图像。从而可以提高成像设备在低照度下的感光能力,提升图像质量。其具体实现过程和实现原理请参见图2~图7所示实施例中的相关描述,此处不再赘述。
图12为本申请实施例提供的图像处理装置的结构示意图二,参见图12,本实施例的装置包括:
摄像模块910,用于通过摄像头获取当前场景的可见光图像和红外光图像;其中,可见光图像的分辨率低于红外光图像的分辨率;
处理模块920,用于对可见光图像和红外图像进行融合处理,得到融合图像。
应理解,上述模块可以是软件模块,也可以是硬件单元或者电路单元。
本实施例中,图像处理装置适用于光线较暗的场景,例如夜间环境、阴雨天,以及光线昏暗的室内环境等等。摄像模块910包括:可见光图像传感器和红外光图像传感器。环境光进入摄像模块910的镜头之后被分为可见光和红外光,可见光被可见光图像传感器接收,生成可见光图像,红外光被红外光图像传感器接收,生成红外光图像。在图像处理装置出厂之前或者之后,可以设置摄像模块910中可见光图像的分辨率低于红外光图像的分辨率。通过降低可见光图像的分辨率,来增加图像处理装置在低照度下的感光能力,然后再通过处理模块920对可见光图像和红外光图像进行融合处理,使得融合图像具备更佳的色彩效果,提升拍摄的图像质量。
需要说明的是,处理模块920中可以预先加载有图像处理程序,当程序被调用时,执行对可见光图像和红外光图像的融合处理,得到融合图像。
具体地,以夜间监控场景为例,摄像模块910拍摄夜间环境下的可见光图像和红外光图像,其中,可见光图像的分辨率低于红外光图像的分辨率。摄像模块910将采集到的可见光图像和红外光图像发送给处理模块920,以使得处理模块820对可见光图像和红外光图像进行融合处理。例如,处理模块920对可见光图像进行上采样,得到与红外光图像相同分辨率的可见光图像,然后再对与红外光图像相同分辨率的可见光图像进行低通滤波处理,得到去噪后的可见光图像,并提取红外光图像的纹理信息;最后将红外光图像的纹理信息融合到去噪后的可见光图像上,得到融合图像。需要说明的是,本实施例不限定处理模块920所加载的图像处理算法。
Exemplarily, the target surface size of the image sensor corresponding to the visible light image is larger than the target surface size of the image sensor corresponding to the infrared light image.
Exemplarily, the camera module 910 is specifically configured to:
acquire the visible light image of the current scene in a binning mode.
Exemplarily, the binning modes include 2x2 binning, 3x3 binning, and 4x4 binning.
Exemplarily, the visible light image includes an RGB three-channel image, and the binning modes adopted for the RGB three-channel images of the visible light image differ.
Exemplarily, the number of pixel units merged in the binning mode adopted for the B channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image, and/or
the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the G channel image of the visible light image.
Exemplarily, the processing module 920 is specifically configured to:
perform deblurring processing on the visible light image to obtain a deblurred visible light image; and
perform fusion processing on the deblurred visible light image and the infrared light image to obtain the fused image.
Exemplarily, the processing module 920 is specifically configured to:
perform up-sampling processing on the visible light image to obtain a visible light image with the same resolution as the infrared light image; and
perform fusion processing on the visible light image with the same resolution as the infrared light image and the infrared light image to obtain the fused image.
Exemplarily, the processing module 920 is specifically configured to:
perform interpolation processing on the visible light image according to the resolution of the infrared light image to obtain a visible light image with the same resolution as the infrared light image.
Exemplarily, the processing module 920 is specifically configured to:
perform low-pass filtering processing on the visible light image to obtain low-frequency information of the visible light image;
perform low-pass filtering processing on the infrared light image to obtain low-frequency information of the infrared light image, and obtain high-frequency information of the infrared light image according to the low-frequency information of the infrared light image and the infrared light image, where the high-frequency information of the infrared light image includes texture information; and
fuse the low-frequency information of the visible light image and the high-frequency information of the infrared light image to obtain the fused image.
In this embodiment, a visible light image and an infrared light image of the current scene are acquired through a camera, where the resolution of the visible light image is lower than the resolution of the infrared light image; fusion processing is performed on the visible light image and the infrared light image to obtain a fused image. This improves the light sensitivity of the imaging device under low illumination and improves image quality. For the specific implementation process and principle, refer to the related descriptions of the embodiments shown in FIG. 8 to FIG. 10; details are not repeated here.
FIG. 13 is a schematic structural diagram of a monitoring device according to an embodiment of the present application. As shown in FIG. 13, the monitoring device includes components such as a processor 1010, a memory 1020, a lens 1030, a power supply 1040, and a data transmission interface 1050. Those skilled in the art can understand that the monitoring device structure shown in FIG. 13 does not constitute a limitation on the monitoring device; it may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The monitoring device in this embodiment can perform the image processing method of any of the embodiments of FIG. 2, FIG. 4, FIG. 5, FIG. 6, and FIG. 8 to FIG. 10. For the specific implementation process and principle, refer to the related descriptions of the embodiments shown in FIG. 2 to FIG. 10; details are not repeated here.
The monitoring device in this embodiment may include the modules of the image processing apparatuses shown in FIG. 11 and FIG. 12, and perform, through those modules, the image processing method of any of the embodiments of FIG. 2, FIG. 4, FIG. 5, FIG. 6, and FIG. 8 to FIG. 10. For the specific implementation process and principle, refer to the related descriptions of the embodiments shown in FIG. 2 to FIG. 10; details are not repeated here.
The memory 1020 can be used to store software programs and modules; the processor 1010 executes the various functional applications and data processing of the imaging device by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and applications required by at least one function (such as a sound playback function and an image playback function), and the data storage area may store data created through use of the camera apparatus (such as audio data). In addition, the memory 1020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The lens 1030 in the monitoring device can capture optical images, including infrared light images and/or visible light images; the monitoring device may have one lens 1030 or at least two (not shown), which can be adjusted according to actual design requirements.
The processor 1010 is the control center of the monitoring device. It connects the various parts of the entire monitoring device through the data transmission interfaces 1050 and lines, and performs the various functions of the monitoring device and processes data by running or executing the software programs and/or modules stored in the memory 1020 and invoking the data stored in the memory 1020. Optionally, the processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
The monitoring device further includes a power supply 1040 (such as a battery) that powers the various components. Optionally, the power supply 1040 may be logically connected to the processor 1010 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
Although not shown, the monitoring device may further include a Bluetooth module and the like, which are not described here.
In this embodiment, a visible light image and an infrared light image of the current scene are acquired, and fusion processing is performed on the visible light image and the infrared light image to obtain a fused image. By increasing the exposure duration of the visible light image and/or reducing the resolution of the visible light image, the light sensitivity of the imaging device under low illumination is increased, so that the fused image has a better color effect and the quality of the captured image is improved.
FIG. 14 is a block diagram of part of the structure of a camera apparatus according to an embodiment of the present application. As shown in FIG. 14, the camera apparatus includes components such as a radio frequency (RF) circuit 2010, a memory 2020, an input unit 2030, a display unit 2040, a sensor 2050, an audio circuit 2060, a lens 2070, a processor 2080, and a power supply 2090. Those skilled in the art can understand that the camera apparatus structure shown in FIG. 14 does not constitute a limitation on the camera apparatus; it may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The components of the camera apparatus are described below with reference to FIG. 14:
The RF circuit 2010 can be used to receive and send signals during the transmission and reception of information or during a call. In particular, after receiving downlink information from a base station, it passes the information to the processor 2080 for processing, and it sends uplink data to the base station. Generally, the RF circuit 2010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 2010 can also communicate with networks and other devices through wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, and Short Messaging Service (SMS).
The memory 2020 can be used to store software programs and modules; the processor 2080 executes the various functional applications and data processing of the camera apparatus by running the software programs and modules stored in the memory 2020. The memory 2020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and applications required by at least one function (such as a sound playback function and an image playback function), and the data storage area may store data created through use of the camera apparatus (such as audio data and a phone book). In addition, the memory 2020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 2030 can be used to receive input digit or character information and to generate key signal inputs related to user settings and function control of the camera apparatus. Specifically, the input unit 2030 may include a touch panel 2031 and other input devices 2032. The touch panel 2031, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed on or near the touch panel 2031 by the user with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch panel 2031 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 2080, and can receive and execute commands sent by the processor 2080. In addition, the touch panel 2031 can be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 2031, the input unit 2030 may also include other input devices 2032. Specifically, the other input devices 2032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and power keys), a trackball, a mouse, and a joystick.
The display unit 2040 can be used to display information input by the user or information provided to the user, as well as the various menus of the camera apparatus. The display unit 2040 may include a display panel 2041; optionally, the display panel 2041 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 2031 may cover the display panel 2041. When the touch panel 2031 detects a touch operation on or near it, it transmits the operation to the processor 2080 to determine the type of the touch event, and the processor 2080 then provides a corresponding visual output on the display panel 2041 according to the type of the touch event. Although in FIG. 14 the touch panel 2031 and the display panel 2041 are implemented as two independent components to realize the input and output functions of the camera apparatus, in some embodiments the touch panel 2031 and the display panel 2041 may be integrated to realize the input and output functions of the camera apparatus.
The camera apparatus may further include at least one sensor 2050, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel 2041 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 2041 and/or the backlight when the camera apparatus is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the posture of the camera apparatus (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition-related functions (such as pedometers and tapping). Other sensors with which the camera apparatus may also be configured, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 2060, a loudspeaker 2061, and a microphone 2062 can provide an audio interface between the user and the camera apparatus. The audio circuit 2060 can transmit the electrical signal converted from received audio data to the loudspeaker 2061, which converts it into a sound signal for output; on the other hand, the microphone 2062 converts collected sound signals into electrical signals, which the audio circuit 2060 receives and converts into audio data; after the audio data is output to the processor 2080 for processing, it is sent through the RF circuit 2010 to, for example, another camera apparatus, or the audio data is output to the memory 2020 for further processing.
The lens 2070 in the camera apparatus can capture optical images, including infrared light images and/or visible light images; the camera apparatus may have one lens or at least two (not shown), which can be adjusted according to actual design requirements.
The processor 2080 is the control center of the camera apparatus. It connects the various parts of the entire camera apparatus through various interfaces and lines, and performs the various functions of the camera apparatus and processes data by running or executing the software programs and/or modules stored in the memory 2020 and invoking the data stored in the memory 2020, thereby monitoring the camera apparatus as a whole. Optionally, the processor 2080 may include one or more processing units; preferably, the processor 2080 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 2080.
The camera apparatus further includes a power supply 2090 (such as a battery) that powers the various components. Preferably, the power supply may be logically connected to the processor 2080 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
Although not shown, the camera apparatus may further include a camera, a Bluetooth module, and the like, which are not described here.
The camera apparatus in this embodiment can perform the image processing method of any of the embodiments of FIG. 2, FIG. 4, FIG. 5, FIG. 6, and FIG. 8 to FIG. 10. For the specific implementation process and principle, refer to the related descriptions of the embodiments shown in FIG. 2 to FIG. 10; details are not repeated here.
The camera apparatus in this embodiment may include the modules of the image processing apparatuses shown in FIG. 11 and FIG. 12, and perform, through those modules, the image processing method of any of the embodiments of FIG. 2, FIG. 4, FIG. 5, FIG. 6, and FIG. 8 to FIG. 10. For the specific implementation process and principle, refer to the related descriptions of the embodiments shown in FIG. 2 to FIG. 10; details are not repeated here.
An embodiment of the present application provides a computer-readable storage medium that stores instructions which, when executed, cause a computer to perform the method performed by the terminal device in the foregoing embodiments of the present application.
An embodiment of the present application provides a computer-readable storage medium that stores instructions which, when executed, cause a computer to perform the method performed by the network device in the foregoing embodiments of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
It should be noted that the division of modules in the embodiments of this application is illustrative and is merely a logical functional division; in actual implementation there may be other divisions. The functional modules in the embodiments of this application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state disk (SSD)), or the like.
Claims (46)
- An image processing method, wherein the method comprises: acquiring a visible light image and an infrared light image of a current scene through a camera, wherein an exposure duration corresponding to the visible light image is longer than an exposure duration corresponding to the infrared light image; and performing fusion processing on the visible light image and the infrared light image to obtain a fused image.
- The method according to claim 1, wherein a target surface size of an image sensor corresponding to the visible light image is larger than a target surface size of an image sensor corresponding to the infrared light image.
- The method according to claim 1 or 2, wherein the acquiring a visible light image of the current scene through a camera comprises: acquiring the visible light image of the current scene in a binning mode.
- The method according to claim 3, wherein the binning mode comprises: 2x2 binning, 3x3 binning, or 4x4 binning.
- The method according to claim 3 or 4, wherein the binning modes of the RGB channels of the visible light image differ.
- The method according to claim 5, wherein the number of pixel units merged in the binning mode adopted for the B channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image, and/or the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the G channel image of the visible light image.
- The method according to any one of claims 1 to 6, wherein the performing fusion processing on the visible light image and the infrared light image to obtain a fused image comprises: performing deblurring processing on the visible light image to obtain a deblurred visible light image; and performing fusion processing on the deblurred visible light image and the infrared light image to obtain the fused image.
- The method according to any one of claims 1 to 7, wherein the performing fusion processing on the visible light image and the infrared light image to obtain a fused image comprises: performing up-sampling processing on the visible light image to obtain a visible light image with the same resolution as the infrared light image; and performing fusion processing on the visible light image with the same resolution as the infrared light image and the infrared light image to obtain the fused image.
- The method according to claim 8, wherein the performing up-sampling processing on the visible light image to obtain a visible light image with the same resolution as the infrared light image comprises: performing interpolation processing on the visible light image according to the resolution of the infrared light image to obtain the visible light image with the same resolution as the infrared light image.
- The method according to any one of claims 1 to 9, wherein the performing fusion processing on the visible light image and the infrared light image to obtain a fused image comprises: performing frame interpolation processing on at least two visible light images to obtain visible light images with the same frame rate as the infrared light image; and performing fusion processing on the visible light images with the same frame rate as the infrared light image and the infrared light image to obtain the fused image.
- The method according to any one of claims 1 to 10, wherein the performing fusion processing on the visible light image and the infrared light image to obtain a fused image comprises: performing low-pass filtering processing on the visible light image to obtain low-frequency information of the visible light image; performing low-pass filtering processing on the infrared light image to obtain low-frequency information of the infrared light image, and obtaining high-frequency information of the infrared light image according to the low-frequency information of the infrared light image and the infrared light image, wherein the high-frequency information of the infrared light image comprises texture information; and fusing the low-frequency information of the visible light image and the high-frequency information of the infrared light image to obtain the fused image.
- An image processing method, wherein the method comprises: acquiring a visible light image and an infrared light image of a current scene through a camera, wherein a resolution of the visible light image is lower than a resolution of the infrared light image; and performing fusion processing on the visible light image and the infrared light image to obtain a fused image.
- The method according to claim 12, wherein a target surface size of an image sensor corresponding to the visible light image is larger than a target surface size of an image sensor corresponding to the infrared light image.
- The method according to claim 12 or 13, wherein the acquiring a visible light image of the current scene through a camera comprises: acquiring the visible light image of the current scene in a binning mode.
- The method according to claim 14, wherein the binning mode comprises: 2x2 binning, 3x3 binning, or 4x4 binning.
- The method according to claim 14 or 15, wherein the binning modes of the RGB channels of the visible light image differ.
- The method according to claim 16, wherein the number of pixel units merged in the binning mode adopted for the B channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image, and/or the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the G channel image of the visible light image.
- The method according to any one of claims 12 to 17, wherein the performing fusion processing on the visible light image and the infrared light image to obtain a fused image comprises: performing deblurring processing on the visible light image to obtain a deblurred visible light image; and performing fusion processing on the deblurred visible light image and the infrared light image to obtain the fused image.
- The method according to any one of claims 12 to 17, wherein the performing fusion processing on the visible light image and the infrared light image to obtain a fused image comprises: performing up-sampling processing on the visible light image to obtain a visible light image with the same resolution as the infrared light image; and performing fusion processing on the visible light image with the same resolution as the infrared light image and the infrared light image to obtain the fused image.
- The method according to claim 19, wherein the performing up-sampling processing on the visible light image to obtain a visible light image with the same resolution as the infrared light image comprises: performing interpolation processing on the visible light image according to the resolution of the infrared light image to obtain the visible light image with the same resolution as the infrared light image.
- The method according to any one of claims 12 to 20, wherein the performing fusion processing on the visible light image and the infrared light image to obtain a fused image comprises: performing low-pass filtering processing on the visible light image to obtain low-frequency information of the visible light image; performing low-pass filtering processing on the infrared light image to obtain low-frequency information of the infrared light image, and obtaining high-frequency information of the infrared light image according to the low-frequency information of the infrared light image and the infrared light image, wherein the high-frequency information of the infrared light image comprises texture information; and fusing the low-frequency information of the visible light image and the high-frequency information of the infrared light image to obtain the fused image.
- An image processing apparatus, wherein the apparatus comprises: a camera module, configured to acquire a visible light image and an infrared light image of a current scene, wherein an exposure duration corresponding to the visible light image is longer than an exposure duration corresponding to the infrared light image; and a processing module, configured to perform fusion processing on the visible light image and the infrared light image to obtain a fused image.
- The apparatus according to claim 22, wherein a target surface size of an image sensor corresponding to the visible light image is larger than a target surface size of an image sensor corresponding to the infrared light image.
- The apparatus according to claim 22 or 23, wherein the camera module is specifically configured to: acquire the visible light image of the current scene in a binning mode.
- The apparatus according to claim 24, wherein the binning mode comprises: 2x2 binning, 3x3 binning, or 4x4 binning.
- The apparatus according to claim 24 or 25, wherein the binning modes of the RGB channels of the visible light image differ.
- The apparatus according to claim 26, wherein the number of pixel units merged in the binning mode adopted for the B channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image, and/or the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the G channel image of the visible light image.
- The apparatus according to any one of claims 22 to 27, wherein the processing module is specifically configured to: perform deblurring processing on the visible light image to obtain a deblurred visible light image; and perform fusion processing on the deblurred visible light image and the infrared light image to obtain the fused image.
- The apparatus according to any one of claims 22 to 28, wherein the processing module is specifically configured to: perform up-sampling processing on the visible light image to obtain a visible light image with the same resolution as the infrared light image; and perform fusion processing on the visible light image with the same resolution as the infrared light image and the infrared light image to obtain the fused image.
- The apparatus according to claim 29, wherein the processing module is specifically configured to: perform interpolation processing on the visible light image according to the resolution of the infrared light image to obtain the visible light image with the same resolution as the infrared light image.
- The apparatus according to any one of claims 22 to 30, wherein the processing module is specifically configured to: perform frame interpolation processing on at least two visible light images to obtain visible light images with the same frame rate as the infrared light image; and perform fusion processing on the visible light images with the same frame rate as the infrared light image and the infrared light image to obtain the fused image.
- The apparatus according to any one of claims 22 to 31, wherein the processing module is specifically configured to: perform low-pass filtering processing on the visible light image to obtain low-frequency information of the visible light image; perform low-pass filtering processing on the infrared light image to obtain low-frequency information of the infrared light image, and obtain high-frequency information of the infrared light image according to the low-frequency information of the infrared light image and the infrared light image, wherein the high-frequency information of the infrared light image comprises texture information; and fuse the low-frequency information of the visible light image and the high-frequency information of the infrared light image to obtain the fused image.
- An image processing apparatus, wherein the apparatus comprises: a camera module, configured to acquire a visible light image and an infrared light image of a current scene, wherein a resolution of the visible light image is lower than a resolution of the infrared light image; and a processing module, configured to perform fusion processing on the visible light image and the infrared light image to obtain a fused image.
- The apparatus according to claim 33, wherein a target surface size of an image sensor corresponding to the visible light image is larger than a target surface size of an image sensor corresponding to the infrared light image.
- The apparatus according to claim 33 or 34, wherein the camera module is specifically configured to: acquire the visible light image of the current scene in a binning mode.
- The apparatus according to claim 35, wherein the binning mode comprises: 2x2 binning, 3x3 binning, or 4x4 binning.
- The apparatus according to claim 35 or 36, wherein the binning modes of the RGB channels of the visible light image differ.
- The apparatus according to claim 37, wherein the number of pixel units merged in the binning mode adopted for the B channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image, and/or the number of pixel units merged in the binning mode adopted for the R channel image of the visible light image is greater than the number of pixel units merged in the binning mode adopted for the G channel image of the visible light image.
- The apparatus according to any one of claims 33 to 38, wherein the processing module is specifically configured to: perform deblurring processing on the visible light image to obtain a deblurred visible light image; and perform fusion processing on the deblurred visible light image and the infrared light image to obtain the fused image.
- The apparatus according to any one of claims 33 to 39, wherein the processing module is specifically configured to: perform up-sampling processing on the visible light image to obtain a visible light image with the same resolution as the infrared light image; and perform fusion processing on the visible light image with the same resolution as the infrared light image and the infrared light image to obtain the fused image.
- The apparatus according to claim 40, wherein the processing module is specifically configured to: perform interpolation processing on the visible light image according to the resolution of the infrared light image to obtain the visible light image with the same resolution as the infrared light image.
- The apparatus according to any one of claims 33 to 41, wherein the processing module is specifically configured to: perform low-pass filtering processing on the visible light image to obtain low-frequency information of the visible light image; perform low-pass filtering processing on the infrared light image to obtain low-frequency information of the infrared light image, and obtain high-frequency information of the infrared light image according to the low-frequency information of the infrared light image and the infrared light image, wherein the high-frequency information of the infrared light image comprises texture information; and fuse the low-frequency information of the visible light image and the high-frequency information of the infrared light image to obtain the fused image.
- An imaging device, comprising a camera, a processor, and a memory, wherein the camera is configured to capture a visible light image and an infrared light image, the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory to perform the image processing method according to any one of claims 1 to 11.
- An imaging device, comprising a camera, a processor, and a memory, wherein the camera is configured to capture a visible light image and an infrared light image, the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory to perform the image processing method according to any one of claims 12 to 21.
- A readable storage medium, wherein a computer program is stored on the readable storage medium; when the computer program is executed, the image processing method according to any one of claims 1 to 11 is implemented.
- A readable storage medium, wherein a computer program is stored on the readable storage medium; when the computer program is executed, the image processing method according to any one of claims 12 to 21 is implemented.
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201911096690.4 | 2019-11-11 | |
CN201911096690.4A (CN112785510B) | 2019-11-11 | 2019-11-11 | 图像处理方法和相关产品 (Image processing method and related product)

Publications (1)

Publication Number | Publication Date
---|---
WO2021093712A1 | 2021-05-20

Family

ID=75749293

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2020/127608 | 图像处理方法和相关产品 (Image processing method and related product) | 2019-11-11 | 2020-11-09

Country Status (2)

Country | Link
---|---
CN (1) | CN112785510B (zh)
WO (1) | WO2021093712A1 (zh)
Families Citing this family (3)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US11910121B2 (en) * | 2021-01-26 | 2024-02-20 | Zf Friedrichshafen Ag | Converting dual-context video data to full color video
CN113177905A (zh) * | 2021-05-21 | 2021-07-27 | 浙江大华技术股份有限公司 | Image acquisition method, apparatus, device, and medium
CN117676120A (zh) * | 2023-12-14 | 2024-03-08 | 深圳市眼科医院(深圳市眼病防治研究所) | Smart vision-assistance glasses for expanding the visual field of patients with visual field defects
Family Cites Families (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN110136183B (zh) * | 2018-02-09 | 2021-05-18 | 华为技术有限公司 | Image processing method and apparatus, and camera apparatus
CN110308153A (zh) * | 2019-08-03 | 2019-10-08 | 广西师范大学 | Metal workpiece defect detection method, system, storage medium, and apparatus based on monocular stereo vision

Application events:

- 2019-11-11: Chinese application CN201911096690.4A filed; granted as CN112785510B (status: active)
- 2020-11-09: PCT application PCT/CN2020/127608 filed as WO2021093712A1 (application filing)
Patent Citations (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US7340099B2 (en) * | 2003-01-17 | 2008-03-04 | University Of New Brunswick | System and method for image fusion
CN102783135A (zh) * | 2010-03-03 | 2012-11-14 | 伊斯曼柯达公司 | Method and apparatus for providing a high-resolution image using a low-resolution image
CN107563971A (zh) * | 2017-08-12 | 2018-01-09 | 四川精视科技有限公司 | True-color high-definition night vision imaging method
CN110248105A (zh) * | 2018-12-10 | 2019-09-17 | 浙江大华技术股份有限公司 | Image processing method, camera, and computer storage medium

Cited By (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN113691730A (zh) * | 2021-09-03 | 2021-11-23 | 浙江宇视科技有限公司 | Task switching control method and apparatus for a camera, medium, and electronic device
CN113691730B (zh) * | 2021-09-03 | 2023-05-26 | 浙江宇视科技有限公司 | Task switching control method and apparatus for a camera, medium, and electronic device
CN114140877A (zh) * | 2021-11-26 | 2022-03-04 | 北京比特易湃信息技术有限公司 | Method for predicting human motion posture and motion intention assisted by infrared thermal imaging
CN114285978A (zh) * | 2021-12-28 | 2022-04-05 | 维沃移动通信有限公司 | Video processing method, video processing apparatus, and electronic device
Also Published As

Publication number | Publication date
---|---
CN112785510A | 2021-05-11
CN112785510B | 2024-03-05
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20886681. Country of ref document: EP. Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20886681. Country of ref document: EP. Kind code of ref document: A1