CN113170048A - Image processing device and method - Google Patents


Info

Publication number
CN113170048A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980082094.2A
Other languages
Chinese (zh)
Inventor
涂娇姣
杨红明
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Publication of CN113170048A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules


Abstract

An embodiment of this application discloses an image processing apparatus and method. The apparatus comprises a first receiving interface, a second receiving interface, and a processor, and can receive two image signals of a target scene: the first image signal is a high-resolution infrared image signal that carries more brightness detail, while the second image signal is a low-resolution visible-light image signal in which a single pixel carries more chrominance information. Fusing the first image information, rich in brightness detail, with the second image information, rich in chrominance, yields a fused image whose detail rendition and brightness signal-to-noise ratio are improved while the color information of the image is well preserved.

Description

Image processing device and method
Technical Field
The present application relates to the field of image processing, and in particular, to an apparatus and method for image processing.
Background
The light-sensing capability of an image sensor is limited: under normal illumination it can produce a high-quality image, but in a low-light scene the amount of incoming visible light is severely insufficient, so the brightness and chrominance signal-to-noise ratios of the captured image drop sharply, detail is badly lost, colors are poor, and image quality deteriorates severely. An image captured in a low-light scene therefore meets the requirements of neither subjective human viewing nor machine recognition of the scene.
To improve image quality in a low-light scene, the most common approach is to optimize the image with image processing techniques such as noise reduction, enhancement, and high-dynamic-range processing. An alternative is to switch the image sensor to an infrared fill-light mode in low light: brightness and light sensitivity improve greatly, but the color information of the image is completely lost.
Disclosure of Invention
The embodiments of this application provide an image processing apparatus and method that improve the quality of images captured in a low-light scene.
A first aspect of the present application provides an apparatus for image processing, the apparatus comprising a first receiving interface, a second receiving interface, and a processor. The first receiving interface is configured to receive a first image signal of a target scene, where the first image signal includes first luminance information, the first luminance information is luminance information carrying infrared light information, and the first image signal has a first resolution. The second receiving interface is configured to receive a second image signal of the target scene, where the second image signal includes second luminance information and second chrominance information, the second luminance information is luminance information excluding infrared light information, the second image signal has a second resolution, and the first resolution is higher than the second resolution. The processor is configured to fuse the first image signal and the second image signal to obtain a fused image.
The image processing apparatus provided in this embodiment can receive two image signals of a target scene: the first image signal is a high-resolution infrared image signal that carries more brightness detail, while the second image signal is a low-resolution visible-light image signal in which a single pixel carries more chrominance information. Fusing the first image information, rich in brightness detail, with the second image information, rich in chrominance, yields a fused image whose detail rendition and brightness signal-to-noise ratio are improved while its color information is well preserved. The fused image has richer detail, a higher signal-to-noise ratio, and better color rendition.
In one possible implementation, the processor is specifically configured to: performing up-sampling processing on the second image signal to obtain a third image signal, wherein the third image signal has the first resolution; and fusing the first image signal and the third image signal to obtain a fused image.
In a possible implementation, the third image signal includes third luminance information, the fused image includes fused luminance information, and the processor is specifically configured to: acquiring high-frequency detail information of the first brightness information, wherein the high-frequency detail information of the first brightness information comprises brightness detail information of the first image signal; denoising the third brightness information to filter high-frequency noise of the third brightness information to obtain fourth brightness information; and superposing the high-frequency detail information of the first brightness information and the fourth brightness information to obtain the fused brightness information.
In one possible implementation, the processor is specifically configured to: perform bilateral filtering on the first brightness information to obtain filtered first brightness information; and subtract the filtered first brightness information from the first brightness information to obtain the high-frequency detail signal of the first brightness information.
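A sketch of this detail-extraction step in NumPy. A simple mean filter stands in for the bilateral filter, since any low-pass filter plays the same structural role of separating out the high-frequency band; the function names are illustrative, not from the patent:

```python
import numpy as np

def low_pass(luma: np.ndarray, k: int = 3) -> np.ndarray:
    """Mean filter used here as a stand-in for the bilateral filter."""
    pad = k // 2
    p = np.pad(luma.astype(np.float32), pad, mode="edge")
    out = np.zeros(luma.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + luma.shape[0], dx:dx + luma.shape[1]]
    return out / (k * k)

def high_freq_detail(first_luma: np.ndarray) -> np.ndarray:
    """High-frequency detail = original brightness minus its filtered
    (low-pass) version, i.e. exactly the subtraction described above."""
    return first_luma.astype(np.float32) - low_pass(first_luma)

flat = np.full((8, 8), 100.0)  # a featureless region carries no detail
print(np.allclose(high_freq_detail(flat), 0))  # True
```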
In one possible implementation, the processor is specifically configured to perform bilateral filtering on the third brightness information to obtain the fourth brightness information.
In a possible implementation, the third image signal further includes third chrominance information, the fused image further includes fused chrominance information, and the processor is further specifically configured to: extract the third chrominance information from the third image signal; and perform color correction on the third chrominance information to obtain the fused chrominance information.
In one possible implementation, the processor is specifically configured to compensate the third chrominance information according to the difference between the fused brightness information and the third brightness information, to obtain the fused chrominance information.
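The text does not give the exact compensation formula. One plausible reading, sketched below in NumPy, is a per-pixel gain derived from the fused-to-visible brightness ratio so that chroma keeps pace with the brightened luma; the function name and gain rule are assumptions for illustration:

```python
import numpy as np

def compensate_chroma(third_chroma, fused_luma, third_luma, eps=1e-6):
    """Hypothetical correction: scale chroma by how much the fused
    brightness exceeds the visible-light brightness at each pixel."""
    gain = (fused_luma + eps) / (third_luma + eps)
    return third_chroma * gain

chroma = np.full((2, 2), 0.4)
fused_l = np.full((2, 2), 0.8)   # fused brightness (e.g. IR-boosted)
third_l = np.full((2, 2), 0.4)   # visible-light brightness
fused_chroma = compensate_chroma(chroma, fused_l, third_l)
print(np.allclose(fused_chroma, 0.8, atol=1e-3))  # True
```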
In a possible implementation, the processor is further specifically configured to combine the fused brightness information and the fused chrominance information to obtain the fused image.
In one possible embodiment, the apparatus further comprises a first image sensor having the first resolution and a second image sensor having the second resolution; the first image sensor is configured to operate in an infrared fill-light mode to generate the first image signal, and the second image sensor is configured to operate in an infrared-cut mode to generate the second image signal.
The image processing apparatus provided in this embodiment includes two image sensors of different resolutions. In a low-light scene, the high-resolution sensor is switched to an infrared fill-light mode, so the image it generates by sensing infrared light carries more detail. The low-resolution sensor captures visible light and cuts off infrared light, preventing infrared light from corrupting the image's color information. Because each of its pixels is larger, a single pixel of the low-resolution sensor senses light more strongly and captures more photons than a pixel of an equal-area high-resolution sensor, and thus acquires better chrominance information. In other words, the high-resolution sensor contributes more image detail thanks to the infrared fill light, while the low-resolution sensor contributes more color information thanks to its larger pixel area. Fusing the two signals improves the image's detail rendition and signal-to-noise ratio while preserving its color information.
In one possible embodiment, the first image sensor and the second image sensor have the same target surface size.
The image processing apparatus provided in this embodiment includes two image sensors that share the same target-surface size but differ in resolution. In a low-light scene the high-resolution sensor, switched to an infrared fill-light mode, contributes more image detail, while the low-resolution visible-light sensor, whose larger pixels sense light more strongly and capture more photons on an equal-area target surface, contributes better chrominance information; fusing the two signals improves the image's detail rendition and signal-to-noise ratio while preserving its color information.
In one possible embodiment, the first image Sensor is a first RGB Sensor, and the second image Sensor is a second RGB Sensor.
The image processing apparatus provided in this embodiment includes two RGB Sensors that share the same target-surface size but differ in resolution. In a low-light scene, the high-resolution RGB Sensor is switched to an infrared fill-light mode, so the image it generates by sensing infrared light carries more detail, while the low-resolution RGB Sensor captures visible light and cuts off infrared light so that infrared light cannot corrupt the image's color information. Because each of its pixels is larger, a single pixel of the low-resolution RGB Sensor senses light more strongly and captures more photons than a pixel of an equal-area high-resolution RGB Sensor, and thus acquires better RGB color information. Fusing the two signals improves the image's detail rendition and signal-to-noise ratio while preserving its color information.
In one possible embodiment, the apparatus further comprises: a first filter mode switcher configured to switch the first image sensor to the infrared light supplement mode.
In one possible embodiment, the first filter mode switcher includes a full-transmission spectrum filter and an infrared cut filter; the first filter mode switcher is specifically configured to switch to the full-transmission spectrum filter so that the first image sensor senses an infrared light signal to generate the first image signal, i.e., an infrared light image signal.
In one possible embodiment, the apparatus further comprises: an infrared lamp for providing the infrared light signal in the low-light scene.
In one possible embodiment, the infrared light signal is a fill light signal provided by an infrared lamp in the low-light scene.
In one possible embodiment, the apparatus further comprises: a second filter mode switcher for switching the second image sensor to the infrared light cut mode.
In one possible embodiment, the second filter mode switcher includes the full-transmission spectrum filter and the infrared cut filter; the second filter mode switcher is specifically configured to switch to the infrared cut filter so that the second image sensor senses a visible light signal to generate the second image signal, i.e., a visible light image signal.
In one possible embodiment, the apparatus further comprises: a lens and a beam splitter; the lens is used for receiving the optical signal of the target scene; the optical splitter is configured to split an optical signal of the target scene into a first optical signal and a second optical signal, where the first optical signal is an optical signal sent to the first image sensor, and the second optical signal is an optical signal sent to the second image sensor.
In one possible embodiment, the apparatus further comprises: the first lens is used for receiving a first optical signal of the target scene, wherein the first optical signal is an optical signal sent to the first image sensor; and the second lens is used for receiving a second optical signal of the target scene, and the second optical signal is an optical signal sent to the second image sensor.
In one possible implementation, the processor is further configured to perform alignment processing on the first image signal and the second image signal.
In one possible implementation, the processor is further configured to: performing at least one of demosaicing processing, color space conversion, noise reduction processing, contrast processing, image enhancement, or dynamic range processing on the first image signal or the second image signal.
In one possible implementation, the processor is further configured to: acquire the second brightness information of the second image signal, and perform up-sampling on the second brightness information to obtain third brightness information, where the third brightness information has the first resolution.
In one possible implementation, the processor is further configured to: acquire the second chrominance information of the second image signal, and perform up-sampling on the second chrominance information to obtain third chrominance information, where the third chrominance information has the first resolution.
In one possible embodiment, the ratio of the first resolution to the second resolution is 4:1.
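With equal target surfaces, a 4:1 pixel-count ratio means each low-resolution pixel covers four times the area (twice the pitch per axis), which is why its per-pixel light sensing is stronger. The sensor resolutions below are illustrative examples, not values from the patent:

```python
# Hypothetical resolutions with a 4:1 pixel-count ratio.
first_res = (3840, 2160)    # high-resolution sensor (infrared fill-light path)
second_res = (1920, 1080)   # low-resolution sensor (visible-light path)

ratio = (first_res[0] * first_res[1]) // (second_res[0] * second_res[1])
print(ratio)  # 4: on an equal target surface, each low-res pixel has
              # 4x the area, hence stronger per-pixel light sensing.
```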
In one possible embodiment, the first image sensor operates in the infrared light cut-off mode to sense a visible light signal and generate a visible light image signal; the second image sensor is turned off.
In one possible embodiment, the first image sensor operates in the infrared light cut-off mode to generate a first visible light image signal, the second image sensor operates in the infrared light cut-off mode to generate a second visible light image signal, and the first visible light image signal and the second visible light image signal are fused to obtain a fused visible light image signal.
A second aspect of the present application provides a method of image processing, the method comprising: receiving a first image signal of a target scene, wherein the first image signal comprises first brightness information, the first brightness information is brightness information carrying infrared light information, and the first image signal has a first resolution; receiving a second image signal of the target scene, the second image signal including second luminance information and second chrominance information, the second luminance information being luminance information excluding infrared light information, the second image signal having a second resolution, the first resolution being higher than the second resolution; and fusing the first image signal and the second image signal to obtain a fused image.
The image processing method provided in this embodiment receives two image signals of a target scene: the first image signal is a high-resolution infrared image signal that carries more brightness detail, while the second image signal is a low-resolution visible-light image signal in which a single pixel carries more chrominance information. Fusing the first image information, rich in brightness detail, with the second image information, rich in chrominance, yields a fused image whose detail rendition and brightness signal-to-noise ratio are improved while its color information is well preserved. The fused image has richer detail, a higher signal-to-noise ratio, and better color rendition.
In a possible implementation manner, the fusing the first image signal and the second image signal to obtain a fused image specifically includes: performing up-sampling processing on the second image signal to obtain a third image signal, wherein the third image signal has the first resolution; and fusing the first image signal and the third image signal to obtain a fused image.
Because the first image signal and the second image signal have different resolutions, the low-resolution second image signal is up-sampled before fusion so that the processed second image signal has the same resolution as the first image signal.
In a possible implementation manner, the third image signal includes third luminance information, the fused image includes fused luminance information, and the fusing the first image signal and the third image signal to obtain the fused image specifically includes: acquiring high-frequency detail information of the first brightness information, wherein the high-frequency detail information of the first brightness information comprises brightness detail information of the first image signal; denoising the third brightness information to filter high-frequency noise of the third brightness information to obtain fourth brightness information; and superposing the high-frequency detail information of the first brightness information and the fourth brightness information to obtain the fused brightness information.
The fourth brightness information is the brightness obtained after the high-frequency noise has been filtered out of the visible-light image signal; it contains the low- and mid-frequency content and the average brightness of that signal. Because the infrared-light image signal has a very high signal-to-noise ratio and very little noise, the high-frequency detail information of the first brightness information consists mainly of the image's brightness detail. Combining the two therefore yields fused brightness information whose average brightness stays as close as possible to that of the visible-light image while also carrying the richer detail information of the infrared-light image signal.
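Putting the luminance path together, a minimal end-to-end NumPy sketch; a mean filter again stands in for both bilateral filters (one for detail extraction, one for denoising), and the function names are illustrative:

```python
import numpy as np

def low_pass(luma: np.ndarray, k: int = 3) -> np.ndarray:
    """Mean filter standing in for the bilateral filters in the text."""
    pad = k // 2
    p = np.pad(luma.astype(np.float32), pad, mode="edge")
    out = np.zeros(luma.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + luma.shape[0], dx:dx + luma.shape[1]]
    return out / (k * k)

def fuse_luma(first_luma: np.ndarray, third_luma: np.ndarray) -> np.ndarray:
    """Fused Y = high-frequency IR detail + denoised visible brightness,
    so the average brightness tracks the visible-light image."""
    detail = first_luma.astype(np.float32) - low_pass(first_luma)
    fourth = low_pass(third_luma)          # high-frequency noise removed
    return detail + fourth

# With a featureless IR frame the detail term is zero, so the fused luma
# simply equals the (denoised) visible brightness: the average is kept.
fused = fuse_luma(np.full((8, 8), 200.0), np.full((8, 8), 64.0))
print(np.allclose(fused, 64.0))  # True
```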
In a possible implementation manner, the acquiring the high-frequency detail information of the first luminance information specifically includes: filtering the first brightness information to obtain filtered first brightness information; and subtracting the filtered first brightness information from the first brightness information to obtain high-frequency detail information of the first brightness information.
In a possible implementation manner, the denoising processing is performed on the third luminance information to filter high-frequency noise of the third luminance information, so as to obtain fourth luminance information, which specifically includes: and carrying out bilateral filtering on the third brightness information to obtain the fourth brightness information.
In a possible implementation manner, the third image signal further includes third chrominance information, the fused image further includes fused chrominance information, and the fusing the first image signal and the third image signal to obtain the fused image specifically further includes: extracting the third chrominance information from the third image signal; and carrying out color correction on the third chroma information to obtain the fused chroma information.
In a possible implementation manner, the color correcting the third chromaticity information to obtain the fused chromaticity information specifically includes: and compensating the third chroma information according to the difference value of the fused brightness information and the third brightness information to obtain the fused chroma information.
In a possible implementation manner, the fusing the first image signal and the third image signal to obtain the fused image specifically further includes: and combining the fused brightness information and the fused chrominance information to obtain the fused image.
In one possible embodiment, the method further comprises: performing at least one of demosaicing processing, color space conversion, noise reduction processing, contrast processing, image enhancement, or dynamic range processing on the first image signal or the second image signal.
In one possible embodiment, the method further comprises: sensing, by a first image sensor, an infrared light signal to generate the first image signal; and sensing, by a second image sensor, a visible light signal to generate the second image signal, where the first image sensor has the first resolution and the second image sensor has the second resolution.
In one possible embodiment, the first image Sensor is a first RGB Sensor, and the second image Sensor is a second RGB Sensor.
In one possible embodiment, the method further comprises: switching the first image sensor to an infrared light supplement mode to sense the infrared light signal; and switching the second image sensor to an infrared light cut-off mode to sense the visible light signal.
In a possible implementation manner, switching the first image sensor to the infrared light supplement mode to sense the infrared light signal specifically includes: switching a first filter mode switcher to a full-transmission spectrum filter so that the first image sensor is switched to the infrared light supplement mode. Switching the second image sensor to the infrared light cut-off mode to sense the visible light signal specifically includes: switching a second filter mode switcher to an infrared cut filter so that the second image sensor is switched to the infrared light cut-off mode.
In one possible embodiment, the method further comprises: receiving an optical signal of the target scene; the optical signal of the target scene is divided into a first optical signal and a second optical signal, the first optical signal is the optical signal sent to the first image sensor, and the second optical signal is the optical signal sent to the second image sensor.
In one possible embodiment, the method further comprises: receiving a first optical signal of the target scene, wherein the first optical signal is an optical signal sent to the first image sensor; and receiving a second optical signal of the target scene, wherein the second optical signal is an optical signal sent to the second image sensor.
In a possible embodiment, before fusing the first image signal and the second image signal to obtain the fused image, the method further comprises: aligning the first image signal and the second image signal to obtain a processed first image signal and a processed second image signal. Fusing the first image signal and the second image signal to obtain the fused image then specifically includes: fusing the processed first image signal and the processed second image signal to obtain the fused image.
In a possible implementation, before the switching the first RGB Sensor to the infrared light fill-in mode, the method further includes: the infrared lamp is turned on to provide the infrared light signal.
A third aspect of the present application provides an apparatus for image processing, the apparatus comprising a first receiving interface, a second receiving interface, and a fusion module. The first receiving interface is configured to receive a first image signal of a target scene, where the first image signal includes first luminance information, the first luminance information is luminance information carrying infrared light information, and the first image signal has a first resolution. The second receiving interface is configured to receive a second image signal of the target scene, where the second image signal includes second luminance information and second chrominance information, the second luminance information is luminance information excluding infrared light information, the second image signal has a second resolution, and the first resolution is higher than the second resolution. The fusion module is configured to fuse the first image signal and the second image signal to obtain a fused image.
In one possible embodiment, the apparatus further comprises: the up-sampling module is used for carrying out up-sampling processing on the second image signal to obtain a third image signal, and the third image signal has the first resolution; the fusion module is specifically configured to fuse the first image signal and the third image signal to obtain the fusion image.
In a possible implementation, the third image signal includes third luminance information, the fused image includes fused luminance information, the apparatus further includes: the first brightness processing module is used for acquiring high-frequency detail information of the first brightness information, and the high-frequency detail information of the first brightness information comprises brightness detail information of the first image signal; the second brightness processing module is configured to perform denoising processing on the third brightness information to filter high-frequency noise of the third brightness information, so as to obtain fourth brightness information; the fusion module is specifically configured to superimpose the high-frequency detail information of the first luminance information and the fourth luminance information to obtain the fusion luminance information.
In a possible implementation manner, the first luminance processing module is specifically configured to perform bilateral filtering on the first luminance information to obtain filtered first luminance information; and subtracting the filtered first brightness information from the first brightness information to obtain a high-frequency detail signal of the first brightness information.
In a possible implementation manner, the second luminance processing module is specifically configured to perform bilateral filtering on the third luminance information to obtain the fourth luminance information.
In a possible implementation, the third image signal further includes third chrominance information, the fused image further includes fused chrominance information, and the apparatus further includes a chrominance processing module configured to: extract the third chrominance information from the third image signal; and perform color correction on the third chrominance information to obtain the fused chrominance information.
In a possible implementation, the chrominance processing module is specifically configured to: and compensating the third chroma information according to the difference value of the fused brightness information and the third brightness information to obtain the fused chroma information.
In a possible implementation, the fusion module is further specifically configured to: and combining the fused brightness information and the fused chrominance information to obtain the fused image.
In one possible embodiment, the apparatus further comprises: a first image sensor having the first resolution and a second image sensor having the second resolution; the first image sensor is used for working in an infrared supplementary lighting mode to generate a first image signal; the second image sensor is used for working in an infrared light cut-off mode to generate the second image signal.
As with the apparatus of the first aspect, this apparatus includes two image sensors of different resolutions: in a low-light scene the high-resolution sensor, switched to an infrared fill-light mode, contributes more image detail, while the low-resolution visible-light sensor, whose larger pixels sense light more strongly and capture more photons, contributes better chrominance information; fusing the two signals improves the image's detail rendition and signal-to-noise ratio while preserving its color information.
In one possible embodiment, the first image sensor and the second image sensor have the same target surface size.
In one possible embodiment, the first image Sensor is a first RGB Sensor, and the second image Sensor is a second RGB Sensor.
In one possible embodiment, the apparatus further comprises: the first light filtering mode switcher is used for switching the first image sensor to the infrared light supplementing mode.
In one possible embodiment, the first filter mode switcher includes: a full-transmission spectrum filter and an infrared cut-off filter; the first filter mode switcher is specifically configured to switch to the full-transmittance spectral filter so that the first image sensor senses an infrared light signal to generate the infrared light image signal.
In one possible embodiment, the apparatus further comprises: a second filter mode switcher for switching the second image sensor to the infrared light cut mode.
In one possible embodiment, the second filter mode switcher includes: the full-transmission spectrum filter and the infrared cut-off filter; the second filter mode switcher is specifically configured to switch to the infrared cut filter so that the second image sensor senses a visible light signal and generates the visible light image signal.
In one possible embodiment, the apparatus further comprises: a lens and a beam splitter; the lens is used for receiving the optical signal of the target scene; the optical splitter is configured to split an optical signal of the target scene into a first optical signal and a second optical signal, where the first optical signal is an optical signal sent to the first image sensor, and the second optical signal is an optical signal sent to the second image sensor.
In one possible embodiment, the apparatus further comprises: the first lens is used for receiving a first optical signal of the target scene, wherein the first optical signal is an optical signal sent to the first image sensor; and the second lens is used for receiving a second optical signal of the target scene, and the second optical signal is an optical signal sent to the second image sensor.
In one possible embodiment, the apparatus further comprises: and the alignment module is used for performing alignment processing on the first image signal and the second image signal.
A fourth aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer or processor, cause the computer or processor to perform the method as set forth in the second aspect or any one of its possible embodiments.
A fifth aspect of the present application provides a computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method as set forth in the second aspect or any one of its possible embodiments.
Drawings
FIG. 1a is a schematic diagram of an exemplary RGB Sensor provided in an embodiment of the present application;
FIG. 1b is a schematic diagram of another exemplary RGB Sensor provided in an embodiment of the present application;
fig. 2a is a schematic diagram of an exemplary application system architecture provided in an embodiment of the present application;
FIG. 2b is a schematic diagram of another exemplary application system architecture provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of another exemplary application system architecture provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a hardware architecture of an exemplary image processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic diagram of an exemplary image processing apparatus according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another exemplary image processing apparatus provided in an embodiment of the present application;
FIG. 7 is a flowchart of an exemplary image fusion method provided by an embodiment of the present application;
fig. 8 is a flowchart illustrating an exemplary method for processing an image signal according to an embodiment of the present disclosure.
Detailed Description
The terms "first," "second," and the like in the description, claims, and drawings of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. Furthermore, the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, system, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
People place ever higher demands on the image quality of imaging devices. In the field of security monitoring, for example, images acquired by monitoring equipment must both satisfy the increasingly demanding subjective perception of the human eye and support recognition by various machines; in the field of mobile phone photography, whether for portraits or landscapes, ever higher definition and color expression are expected. The quality of the image obtained by an imaging device is related to the sensitivity of its image sensor and to the lighting conditions of the scene being photographed.
Some terms referred to in the present application will be described first below.
Pixel: one of the technical indexes of an image sensor. A sensor carries many photosensitive units that convert optical signals into electrical signals, and each photosensitive unit corresponds to one pixel. For the same target-surface size, the more pixels, the higher the resolution, the more detail the sensor can perceive, and the clearer the resulting image. When two image sensors have the same target-surface size, the higher-resolution sensor has smaller individual pixels and the lower-resolution sensor has larger ones; the larger an individual pixel of a sensor, the stronger its light-sensing capability. The resolution of an image signal indicates the number of pixels of the image: for example, a resolution of 1080 × 1080 means that the image has 1080 × 1080 pixels and corresponds to a 1080 × 1080 digital matrix, with one pixel per matrix element. Target surface size: the size of the light-sensing portion of the image sensor, generally expressed in inches and usually given as the diagonal length of the sensor. A larger target surface implies a greater amount of light admitted, while a smaller target surface makes a large depth of field easier to obtain.
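The inverse relation between pixel count and single-pixel size at a fixed target-surface size can be sketched numerically. The 4000 µm target-surface width below is a made-up figure for illustration, not a value from the embodiments:

```python
def pixel_pitch_um(target_width_um: float, pixels_across: int) -> float:
    """Pitch (width) of a single pixel when a row of pixels spans a fixed target-surface width."""
    return target_width_um / pixels_across

# Same hypothetical 4000 um wide target surface at two resolutions:
high_res_pitch = pixel_pitch_um(4000, 4000)  # higher resolution -> smaller pixels
low_res_pitch = pixel_pitch_um(4000, 2000)   # half the resolution -> pixels twice as wide
print(high_res_pitch, low_res_pitch)
```

Halving the pixel count across the same target surface doubles the pitch (and quadruples the area) of each pixel, which is exactly why the lower-resolution sensor in the embodiments gathers more photons per pixel.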
Sensitivity: how strongly the sensor and its associated electronic circuitry respond to an incident light signal; the higher the sensitivity, the more responsive the light-sensing surface of the image sensor is to light.
Signal-to-noise ratio: the ratio of the effective signal to the noise signal, for example the ratio of the signal voltage to the noise voltage, expressed in dB; the larger the signal-to-noise ratio, the less noise the image contains and the cleaner it is.
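As a brief illustration of the voltage-ratio definition above, assuming the conventional 20·log₁₀ form for voltage ratios (power goes as voltage squared, hence 20 rather than 10):

```python
import math

def snr_db(signal_voltage: float, noise_voltage: float) -> float:
    """Signal-to-noise ratio in dB for a voltage ratio: 20 * log10(Vs / Vn)."""
    return 20.0 * math.log10(signal_voltage / noise_voltage)

# A signal voltage 100x the noise voltage corresponds to 40 dB.
print(snr_db(1.0, 0.01))
```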
Illuminance (Illuminance): the luminous flux received per unit area is given in Lux (Lux).
YCC color space: in the present application, a color space in which luminance and chrominance are separated; of the three YCC components, Y carries the luminance (Luma) and the two C components carry the chrominance (Chroma). Common YCC-family image formats are YUV, YCbCr, ICtCp, etc.
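A minimal sketch of how a YCC space separates luminance from chrominance, using the widely published BT.601 full-range RGB→YCbCr coefficients (the embodiments do not mandate this particular transform; it is one common choice among YUV, YCbCr, ICtCp, etc.):

```python
def rgb_to_ycbcr(r: float, g: float, b: float):
    """BT.601 full-range RGB -> YCbCr. Y is luminance; Cb/Cr are chrominance offsets around 128."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# A pure gray pixel (equal R, G, B) carries no chrominance: Cb = Cr = 128, the neutral point.
print(rgb_to_ycbcr(200, 200, 200))
```

This separation is what lets the processor fuse the luminance planes of two signals while leaving the chrominance plane untouched.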
A Red Green Blue Sensor (RGB Sensor) is one of the most widely used image sensors at present; fig. 1a and fig. 1b show two exemplary RGB Sensors of different resolutions provided for the embodiments of the present application. In a natural image, each pixel contains R, G and B color components, where R represents the red component, G represents the green component, and B represents the blue component of the three primary colors. Each cell in the figures represents a pixel; each pixel of an RGB Sensor senses only one of the R, G and B components, so the sensing capability of the RGB Sensor is limited. The two RGB Sensors shown in fig. 1a and fig. 1b have the same target-surface size: the RGB Sensor of fig. 1a has 16 pixels and the higher resolution, the RGB Sensor of fig. 1b has 4 pixels and the lower resolution, and the ratio of their resolutions is 4:1. The image sensor of fig. 1a produces an image with a resolution of 4 × 4, which when converted into a digital image signal can be represented as a 4 × 4 matrix; the image sensor of fig. 1b produces an image with a resolution of 2 × 2, representable as a 2 × 2 matrix. Up-sampling the digital image signal generated by the sensor of fig. 1b yields a digital image signal with the same resolution as that generated by the sensor of fig. 1a.
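The up-sampling step mentioned above can be sketched as follows. Nearest-neighbour replication is used here purely for simplicity; the embodiments do not fix a particular interpolation method, and bilinear or bicubic up-sampling would serve equally:

```python
def upsample_nn(img, factor: int):
    """Nearest-neighbour up-sampling: each pixel is replicated into a factor x factor block."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]  # stretch the row horizontally
        out.extend(list(wide) for _ in range(factor))   # then repeat it vertically
    return out

# The 2 x 2 signal of fig. 1b becomes a 4 x 4 signal matching the resolution of fig. 1a.
low = [[10, 20],
       [30, 40]]
print(upsample_nn(low, 2))
```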
Taking the RGB Sensor as an example, in a low-light scene the weak light sharply reduces the sensor's light-sensing capability, so the images it acquires are unsatisfactory in detail, color, signal-to-noise ratio and other respects. It should be understood that a low-light scene is a scene with weak light: for example, the illuminance threshold of a low-light scene may be 1 Lux, so that when the ambient illuminance is lower than 1 Lux the environment is a low-light scene. In an optional case, the illuminance threshold may take other values; the low-light scene mentioned in the embodiments of the present application is any scene whose ambient illuminance is lower than a preset threshold, and the value of that threshold is not limited. To improve the signal-to-noise ratio of images acquired in low-light scenes and to improve their detail representation and color fidelity, the embodiments of the present application provide an image processing method and an image processing apparatus based on heterogeneous dual RGB Sensors.
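The low-light decision described above reduces to a simple threshold comparison. The 1.0 Lux value below is the example figure from the text; the embodiments explicitly leave the preset threshold open:

```python
LOW_LIGHT_THRESHOLD_LUX = 1.0  # example value only; the embodiments do not fix the preset threshold

def is_low_light(ambient_lux: float, threshold: float = LOW_LIGHT_THRESHOLD_LUX) -> bool:
    """A scene counts as low-light when the ambient illuminance is lower than the preset threshold."""
    return ambient_lux < threshold

print(is_low_light(0.5))    # dim night scene -> low-light
print(is_low_light(300.0))  # typical indoor lighting -> not low-light
```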
Fig. 2a is a schematic diagram illustrating an exemplary application system architecture provided in an embodiment of the present application.
Wherein 201 is an external scene light signal, 202 is an image processing device, the image processing device 202 may include a lens 2021, an imaging sensor 2022 and an image processor 2023, and optionally, the image processing device 202 may further include a memory (not shown in the figure) for storing an image signal generated by the image processing device; the image processing apparatus 202 may further include an infrared lamp (not shown in the figure) integrated inside the image processing apparatus 202, and the infrared lamp provides infrared light supplementary light for the imaging sensor 2022 when the external scene light signal is weak. The scene light signal 201 is a light signal of an external scene of the image processing apparatus 202, and the light signal of the external scene is converted into an electrical signal by the image processing apparatus 202, and the electrical signal can be further converted into a digital image signal. Specifically, the scene light signal 201 reaches the imaging sensor 2022 through the lens 2021, the scene light signal 201 is sensed by the imaging sensor 2022 to form an electrical signal, the electrical signal may be further converted into a Digital image signal through an Analog-to-Digital Converter (ADC), and the Digital image signal may be stored in a memory of the image processing apparatus 202, an external memory or sent to the display apparatus 203 for displaying after being processed by the image processor 2023. In an alternative case, the image processing apparatus 202 itself includes a display screen 2024, as shown in fig. 2b, and the digital image signal is processed by the image processor 2023 and then sent to the display screen 2024 for display, or sent to an external display device for display.
Illustratively, the image processing apparatus 202 may be a security monitoring camera of a cell, an intelligent transportation electronic eye device, a video camera, a still camera, a mobile phone, and other terminal devices having imaging, photographing, or video recording functions. The imaging Sensor 2022 may be an RGB Sensor, which may be a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) Sensor, etc. The Image Processor 2023 may be a dedicated Image Signal Processor (ISP), a module having an Image Processing function in a general-purpose Processor, a Graphics Processing Unit (GPU), an integrated circuit having an Image or video Processing function, or the like. The Display screen 2024 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) Display, an Organic Light-Emitting Diode (OLED) Display screen, a Cathode Ray Tube (CRT) Display screen, or the like. It should be understood that the image processing apparatus 202 can process both image signals and video signals, and the video signals can be embodied as image signals of one frame and one frame which are continuous in time.
Illustratively, the image processing device 202 is a security monitoring camera device of a community, the scene optical signal 201 is an optical signal of a scene at a gate of the community, and the security monitoring camera images the scene at the gate of the community to form an image signal at the gate of the community. The scene light signal is a light signal of an actual scene, and the light signal is sensed by a sensor to generate an image signal. At this time, the display device 203 may be a monitor or a monitor screen in a monitoring room.
Fig. 3 is a schematic diagram of another exemplary application system architecture provided in the embodiment of the present application.
The scene light signal 301 is the same as the scene light signal 201, and is not described herein again.
The imaging device 302 comprises a lens 3021 and an imaging sensor 3022, and the imaging device 302 transmits an image signal generated by exposure to light to the image processing device 303 for processing, and transmits the image signal to the display device 304 for display after the image signal is processed by the image processing device 303. For example, the image processing apparatus 303 may be a chip or a processor with an image processing function, the image processing apparatus 303 includes a receiving interface 3031 and an image processor 3032, the receiving interface 3031 is used for receiving an image signal generated by an imaging apparatus, the image processor 3032 refers to the description of the image processor 2023, the image processor 3032 is used for processing the image signal, and the processing that the image processor 3032 can perform includes but is not limited to: the method comprises the steps of fusing two paths of image signals, performing up-sampling or down-sampling on the image signals, demosaicing, removing noise, enhancing images, processing a dynamic range, processing contrast, correcting colors and the like. It is to be understood that the image processor 3032 may also process video signals. Optionally, the image processing apparatus 303 further includes a transmission interface (not shown in the figure), and transmits the processed image signal to the display apparatus 304 through the transmission interface. The display device 304 is a device with a display function, and may be, for example, a television, a computer, a monitor, a smart phone, a mobile terminal, or the like, and optionally, the display device 304 may also be an LCD display screen, an LED display screen, an OLED display screen, a CRT display screen, or the like.
In an alternative case, the imaging device 302, the image processing device 303 and the display device 304 together form a complete mobile terminal, the mobile terminal has image acquisition, image processing and image display functions, for example, the mobile terminal may be a mobile phone, a camera, a video camera, a surveillance camera, etc., the imaging device 302 is a lens and image sensor set, the image processing device 303 is an image processing chip or processor set, and the display device 304 is a display screen.
Fig. 4 is a schematic diagram of a hardware architecture of an exemplary image processing apparatus according to an embodiment of the present disclosure. The hardware architecture of the image processing apparatus 400 is suitable for the image processor 2023 in fig. 2a and 2b, and the image processing apparatus 303 in fig. 3.
Illustratively, the image Processing apparatus 400 includes at least one Central Processing Unit (CPU), at least one memory, a GPU, a decoder, a dedicated video or graphics processor, a receiving interface, a transmitting interface, and the like. Optionally, the image processing apparatus 400 may further include a microprocessor and a Microcontroller (MCU) or the like. In an alternative case, the above parts of the image processing apparatus 400 are coupled through a connector, and it should be understood that in the embodiments of the present application, the coupling refers to interconnection through a specific manner, including direct connection or indirect connection through other devices, for example, connection through various interfaces, transmission lines or buses, which are generally electrical communication interfaces, but mechanical interfaces or other interfaces are not excluded, and the present embodiment is not limited thereto. In an alternative case, the above-mentioned parts are integrated on the same chip; in another alternative, the CPU, GPU, decoder, receive interface, and transmit interface are integrated on a chip, and portions within the chip access external memory via a bus. The dedicated video/graphics processor may be integrated on the same chip as the CPU or may exist as a separate processor chip, e.g., the dedicated video/graphics processor may be a dedicated ISP. The chips referred to in the embodiments of the present application are systems manufactured on the same semiconductor substrate in an integrated circuit process, also called semiconductor chip, which may be a collection of integrated circuits formed on the substrate (typically a semiconductor material such as silicon) by an integrated circuit process, the outer layers of which are typically encapsulated by a semiconductor encapsulation material. 
The integrated circuit may include various types of functional devices, each of which includes a logic gate circuit, a Metal-Oxide-Semiconductor (MOS) transistor, a bipolar transistor, a diode, or other transistors, and may also include a capacitor, a resistor, or an inductor, or other components. Each functional device can work independently or under the action of necessary driving software, and can realize various functions such as communication, operation, storage and the like.
Alternatively, the CPU may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor; alternatively, the CPU may be a processor group including a plurality of processors, and the plurality of processors are coupled to each other via one or more buses. In an alternative case, the processing of the image or video signal is performed partly by the GPU, partly by a dedicated video/graphics processor, and possibly by software code running on a general purpose CPU or GPU.
A memory may be used to store computer program instructions, including an Operating System (OS), various user application programs, and program code for executing aspects of the present application; the memory may also be used to store video data, image signal data, and the like. The CPU may be configured to execute the computer program code stored in the memory to implement the methods of the embodiments of the present application. Optionally, the memory may be a non-volatile memory, such as an Embedded Multimedia Card (EMMC), Universal Flash Storage (UFS) or Read-Only Memory (ROM), or another type of static storage device capable of storing static information and instructions; a volatile memory, such as a Random Access Memory (RAM) or another type of dynamic storage device capable of storing information and instructions; an Electrically Erasable Programmable Read-Only Memory (EEPROM); a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, etc.); a magnetic disk storage medium or other magnetic storage device; or any other computer-readable storage medium that can be used to carry or store program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
The receiving Interface may be an Interface for data input of the processor chip, and in an optional case, the receiving Interface may be a High Definition Multimedia Interface (HDMI).
Fig. 5 is a schematic diagram of an exemplary image processing apparatus according to an embodiment of the present disclosure.
The image processing apparatus 500 includes: a lens 501, an optical splitter 502, a first RGB Sensor 505, a second RGB Sensor 506, and a processor 507; the image processing apparatus 500 may further include a first filter mode switch 503, a second filter mode switch 504, and an infrared lamp 508. The first RGB Sensor 505 has the higher resolution and the second RGB Sensor 506 has the lower resolution; in a low-light scene the first RGB Sensor 505 operates in the infrared fill-light mode and the second RGB Sensor 506 operates in the infrared cut-off mode. It should be understood that both the first RGB Sensor 505 and the second RGB Sensor 506 are capable of operating in either the infrared fill-light mode or the infrared cut-off mode. In one possible implementation, the first RGB Sensor 505 and the second RGB Sensor 506 have the same target-surface size, the resolution of the first RGB Sensor 505 is higher than that of the second RGB Sensor 506, and a single pixel of the first RGB Sensor 505 is smaller than a single pixel of the second RGB Sensor 506.
A lens 501, configured to receive a light signal, where the light signal is a scene light signal outside the image processing apparatus. The lens 501 may be composed of a plurality of lenses, or may be composed of a single lens, and the embodiment of the present application is not limited.
The optical splitter 502 is configured to split an optical signal received by the lens 501 into a first optical signal and a second optical signal, where the first optical signal is an optical signal sent to a first RGB Sensor 505, and the second optical signal is an optical signal sent to a second RGB Sensor 506. For example, the optical splitter 502 may be a light splitting prism, such as a light splitting coated prism, which splits an optical signal into a first optical signal and a second optical signal by means of coated light splitting, and the optical splitter 502 may also be a diffraction light splitting prism, a mechanical blocking device, or other types of light splitting devices.
The first optical signal passes through the first filter mode switch 503 to reach the first RGB Sensor 505, and the second optical signal passes through the second filter mode switch 504 to reach the second RGB Sensor 506. For example, the first filter mode switch 503 and the second filter mode switch 504 each include a full-transmission spectral filter and an infrared cut filter; they may further include other types of filters, for example filters that pass only a certain frequency band or filters that block a certain frequency band, which the embodiments of the present application do not limit. The first filter mode switch 503 and the second filter mode switch 504 may further include a power system for switching between the full-transmission spectral filter and the infrared cut filter, which may be a small electromagnet, a motor, or another power source. In an optional case, the first filter mode switch 503 and the second filter mode switch 504 may be infrared filter switches (IR-Cut), dual-filter switches that toggle according to the intensity of the external light signal: when the light is strong, the IR-Cut switches to the infrared cut filter to avoid the color deviation caused by infrared light; when the light is weak, for example when the illuminance is lower than 1 Lux, the IR-Cut switches to the full-transmission spectral filter to improve the brightness of the low-illuminance scene. When the first filter mode switch 503 is switched to the full-transmission spectral filter, both infrared light and visible light can reach the first RGB Sensor 505 through it.
When the second filter mode switch 504 is switched to the ir-cut filter, the ir light cannot pass through the ir-cut filter, and the visible light signal passes through the ir-cut filter to reach the second RGB Sensor 506.
When the image processing apparatus 500 works in a low-light scene, the infrared lamp 508 is turned on to provide an infrared light signal, the first filter mode switch 503 switches the first RGB Sensor 505 to the infrared fill-light mode, and the second filter mode switch 504 switches the second RGB Sensor 506 to the infrared cut-off mode. Specifically, the first filter mode switch 503 switches to the full-transmission spectral filter so that the first RGB Sensor 505 senses the first filtered light signal to generate a first image signal. The first filtered light signal includes both an infrared light signal and a visible light signal; since in a low-light scene the visible light signal is weak while the infrared light signal provided by the infrared lamp is strong, the first image signal generated by the first RGB Sensor 505 in the infrared fill-light mode is an infrared light image signal. The first image signal includes first luminance information, and the infrared light image signal can be regarded approximately as containing only luminance information and no chrominance information. The second filter mode switch 504 switches to the infrared cut filter so that the second RGB Sensor 506 senses the second filtered light signal to generate a second image signal. The second filtered light signal includes a visible light signal and no infrared light signal, so the second image signal is mainly a visible light image signal and includes second luminance information and second chrominance information.
It should be understood that the infrared light supplement mode indicates that the sensor can receive infrared light and sense the infrared light to generate an infrared light image signal, and the infrared light image signal loses color information but has better image detail information; the infrared light cut-off mode indicates that the sensor cannot receive infrared light, and the light sensed by the sensor mainly comprises visible light signals to generate visible light image signals, wherein the visible light image signals have good color information. The filtering mode switch is an implementation manner for switching the RGB Sensor between the infrared light supplement mode and the infrared cut-off mode, and may actually include other switching manners, for example, the RGB Sensor itself has two working modes, and the RGB Sensor may be switched between the two working modes by hardware setting or software setting, which is not limited in this embodiment of the present application.
The first RGB Sensor 505 is configured to operate in the infrared fill light mode in a low-light scene, and generate a first image signal by sensing the first filtered light signal transmitted through the first filter mode switch 503.
The second RGB Sensor 506 operates in the infrared cut-off mode in a low-light scene and senses the second filtered light signal transmitted through the second filter mode switch 504 to generate a second image signal. The first image signal has a first resolution and the second image signal has a second resolution, the first resolution being greater than the second resolution; illustratively, the ratio of the first resolution to the second resolution may be 4:1, 16:1, or another ratio. Illustratively, the first image signal and the second image signal are both images in the raw image file (raw) format.
The processor 507 is configured to receive the first image signal generated by the first RGB Sensor 505 and the second image signal generated by the second RGB Sensor 506. The first image signal is a high-resolution infrared light image signal with a relatively higher luminance signal-to-noise ratio and richer image detail information; the second image signal is mainly a visible light image signal that includes luminance information and chrominance information. Because the resolution of the second RGB Sensor 506 is low, its single pixel is large and its light-sensing capability is strong, so it obtains the best chrominance information possible in a low-light scene and the color information of the image is well preserved. The processor 507 performs first image processing on the infrared light image signal, which may include demosaicing, color space conversion, noise reduction, dynamic range processing, contrast processing, image enhancement, and the like. Since the infrared light image signal loses color information, it can be regarded approximately as containing only luminance information and no chrominance information, and the first image processing improves the signal-to-noise ratio of its luminance information and its image details. Illustratively, the processor 507 demosaics the first image signal to convert it from the raw format into the RGB format, or further converts the RGB format into a YUV-format image or an image in another target format by color space conversion.
The processor 507 performs second image processing on the visible light image signal, which may include demosaicing, noise reduction, dynamic range processing, contrast processing, image enhancement, and the like; the second image processing improves the signal-to-noise ratio and color expression of the visible light image signal. Illustratively, the processor 507 demosaics the second image signal to convert it from the raw format into the RGB format, or further converts the RGB format into a YUV-format image or an image in another target format by color space conversion. Further, the processor 507 may up-sample the second image signal to obtain a third image signal having the same resolution as the first image signal; in other words, the third image signal is the up-sampled second image signal. It should be understood that this embodiment limits neither the order of the second image processing and the up-sampling, nor the order of the individual operations within the first image processing and the second image processing.
The processor 507 may be further configured to process the luminance information and the chrominance information separately. Illustratively, the processor 507 obtains chrominance information and luminance information of the visible light image signal and obtains luminance information of the infrared light image signal, respectively, and the processor 507 includes two processing paths, one of which is used to fuse the luminance information of the infrared light image signal and the luminance information of the visible light image signal to obtain luminance information of a fused image, and the other is used to process the chrominance information of the visible light image signal, for example, to perform color correction on the chrominance information of the visible light image signal, and the like. Further, the luminance information of the fused image and the chrominance information of the fused image are combined together to obtain the fused image.
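The two processing paths above — one fusing luminance, one handling chrominance — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `alpha` blend weight is a hypothetical placeholder (the detailed scheme later in the text transfers high-frequency detail instead of blending), and color correction on the chrominance path is left as an identity here.

```python
import numpy as np

def fuse_paths(vis_yuv, ir_luma, alpha=0.5):
    """Sketch of the dual-path processing described above.

    vis_yuv : (H, W, 3) visible-light image in YUV, already upsampled
              to the infrared resolution.
    ir_luma : (H, W) luminance plane of the infrared image signal.
    alpha   : hypothetical blend weight (an assumption, not from the patent).
    """
    vis_luma = vis_yuv[..., 0]
    # Path 1: fuse the infrared and visible luminance planes.
    fused_luma = alpha * ir_luma + (1.0 - alpha) * vis_luma
    # Path 2: chrominance comes only from the visible-light signal;
    # a real pipeline would apply color correction here (identity used).
    fused_u, fused_v = vis_yuv[..., 1], vis_yuv[..., 2]
    # Combine the fused luminance with the (corrected) chrominance.
    return np.dstack([fused_luma, fused_u, fused_v])
```

The final `dstack` corresponds to combining the fused luminance information and the fused chrominance information into the fused image.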
Optionally, the processor 507 may further perform third image processing on the fused image to further improve signal-to-noise ratio, detail expression, and color expression of the fused image, where the third image processing may include, for example, noise reduction processing, dynamic range processing, contrast processing, image enhancement, color correction, and the like.
In a low-light scene, the high-resolution first RGB Sensor works in the infrared fill-light mode, which improves its luminance sensitivity, so the first image signal has more luminance detail and a higher luminance signal-to-noise ratio. The low-resolution second RGB Sensor works in the infrared cut-off mode. Although its resolution is lower, human eyes are insensitive to color resolution; and because each pixel of the low-resolution sensor is larger, its per-pixel light sensitivity is improved, so the color information of the second image signal is more faithful and the signal-to-noise ratio of the color signal is higher. The color signal obtained through the low-resolution RGB Sensor therefore does not affect the perceived color resolution, while the color representation of the resulting image is improved. The first image signal provides high-resolution luminance information, the second image signal provides chrominance information with higher fidelity and signal-to-noise ratio, and fusing the two yields the final output fused image, which is richer in detail, better in color expression, and higher in signal-to-noise ratio, improving the quality of images acquired in low-light scenes.
Fig. 6 is a schematic diagram of another exemplary image processing apparatus according to an embodiment of the present disclosure.
The image processing apparatus 600 includes: a first lens 601, a second lens 602, a first RGB Sensor 605, a second RGB Sensor 606, and a processor 607; the image processing apparatus 600 may further include a first filter mode switch 603, a second filter mode switch 604, and an infrared lamp 608. The first RGB Sensor 605 has a higher resolution and the second RGB Sensor 606 has a lower resolution; in a low-light scene, the first RGB Sensor 605 operates in the infrared fill-light mode and the second RGB Sensor 606 operates in the infrared cut-off mode. It should be understood that both the first RGB Sensor 605 and the second RGB Sensor 606 are capable of operating in either the infrared fill-light mode or the infrared cut-off mode.
A first lens 601 for receiving a first scene light signal;
a second lens 602 for receiving a second scene light signal;
it should be understood that the first scene light signal and the second scene light signal are two light signals of the same scene received by the two lenses.
The first scene light signal passes through the first filter mode switch 603 to the first RGB Sensor 605, and the second scene light signal passes through the second filter mode switch 604 to the second RGB Sensor 606. Please refer to the corresponding parts of fig. 5 for the first filter mode switch 603 and the second filter mode switch 604, which are not described herein again.
When the image processing apparatus 600 operates in a low-light scene, the infrared lamp 608 is turned on to provide an infrared light signal; the first filter mode switch 603 switches the first RGB Sensor 605 to the infrared fill-light mode, and the second filter mode switch 604 switches the second RGB Sensor 606 to the infrared cut-off mode. It should be understood that the infrared fill-light mode means the sensor can receive and sense infrared light to generate an infrared light image signal, which loses color information but has better image detail information; the infrared cut-off mode means the sensor cannot receive infrared light, so the light it senses is mainly the visible light signal and it generates a visible light image signal with good color information. For how the filter mode switches change the operating mode of an RGB Sensor, refer to the description of the related embodiment of fig. 5, which is not repeated here.
The functions of the first RGB Sensor 605, the second RGB Sensor 606 and the processor 607 can refer to the descriptions of the first RGB Sensor 505, the second RGB Sensor 506 and the processor 507, and are not described herein again.
Another exemplary embodiment is given below.
The first RGB Sensor 605 is used for acquiring an infrared light image signal of a scene to be imaged, wherein the infrared light image signal comprises brightness information;
the second RGB Sensor 606 is configured to acquire a visible light image signal of the scene to be imaged, where the visible light image signal includes chrominance information and luminance information; the resolution of the infrared light image signal is higher than that of the visible light image signal, or the number of pixels of the infrared light image signal is larger than that of the visible light image signal.
A processor 607 for respectively obtaining the brightness information of the infrared light image signal and the brightness information of the visible light image signal, and fusing the brightness information of the infrared light image signal and the brightness information of the visible light image signal to obtain the brightness information of the fused image;
the processor 607 is further configured to obtain chrominance information of the visible light image signal, and process the chrominance information to obtain chrominance information of the fused image, where the processing on the chrominance information includes, but is not limited to, color correction, color enhancement, color denoising, and the like.
The processor 607 is further configured to combine the luminance information of the fused image and the chrominance information of the fused image to obtain a fused image.
It should be understood that, in an alternative case, since the resolution of the visible light image signal is different from that of the infrared light image signal, when the visible light image signal and the infrared light image signal are fused, the visible light image signal and the infrared light image signal need to be adjusted to the same resolution. For example, a low resolution visible light image signal may be up-sampled such that the resolution of the visible light image signal is equal to the resolution of the infrared light image signal. The visible light image signal can be subjected to up-sampling processing, and then the brightness information and the chromaticity information of the visible light image signal are separated; alternatively, the luminance information and the chrominance information of the visible light image signal may be separated first, and then the luminance information and the chrominance information may be up-sampled, for example, when the luminance information of the visible light image signal and the luminance information of the infrared light image signal are fused, the luminance information of the visible light image signal may be up-sampled.
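The patent does not specify which upsampling method is used; as an illustration only, a nearest-neighbour upsampler that brings the low-resolution visible light signal up to the infrared resolution might look like this (a production pipeline would typically use bilinear or bicubic interpolation):

```python
import numpy as np

def upsample_nearest(img, factor):
    # Nearest-neighbour upsampling as a stand-in for the unspecified
    # upsampling method: each source pixel is replicated factor x factor
    # times along the height and width axes.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```

For example, a 2x2 visible luminance plane upsampled by a factor of 2 matches the resolution of a 4x4 infrared luminance plane, after which the two can be fused pixel by pixel.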
In a possible embodiment, before acquiring the luminance information of the visible light image signal, the processor 607 is further configured to perform an upsampling process on the visible light image signal to obtain a second visible light image signal, so that the resolution of the second visible light image signal is equal to the resolution of the infrared light image signal, and the processor 607 is specifically configured to acquire the luminance information of the second visible light image signal and fuse the luminance information of the second visible light image signal and the luminance information of the infrared light image signal.
In one possible embodiment, when fusing the luminance information of the infrared light image and the luminance information of the visible light image, the processor 607 is further configured to upsample the luminance information of the visible light image signal so that its resolution after upsampling equals the resolution of the luminance information of the infrared light image signal; the processor 607 may likewise upsample the chrominance information of the visible light image signal. The image processing apparatus 600 shown in fig. 6 has a dual-lens, dual-Sensor structure, with each lens corresponding to one RGB Sensor. It should be understood that, because the two image signals come from two lenses, the imaged scenes are not completely consistent even though both lenses image the same scene: the relative position between the two lenses introduces a certain parallax. In an alternative, the processor 607 performs an alignment process on the first image signal and the second image signal before fusing them.
In a low-light scene, the high-resolution first RGB Sensor works in the infrared fill-light mode, which improves its luminance sensitivity, so the first image signal has more luminance detail and a higher luminance signal-to-noise ratio. The low-resolution second RGB Sensor works in the infrared cut-off mode. Although its resolution is lower, human eyes are insensitive to color resolution; and because each pixel of the low-resolution sensor is larger, its per-pixel light sensitivity is improved, so the color information of the second image signal is more faithful and the signal-to-noise ratio of the color signal is higher. The color signal obtained through the low-resolution RGB Sensor therefore does not affect the perceived color resolution, while the color representation of the resulting image is improved. Fusing the first image signal and the second image signal yields the final output fused image, which is richer in detail, better in color expression, and higher in signal-to-noise ratio, improving the quality of images acquired in low-light scenes.
With the image processing apparatus shown in fig. 5 and 6, under normal illumination it is sufficient to acquire the first image signal using only the high-resolution first RGB Sensor. Illustratively, the first RGB Sensor is switched to the infrared cut-off mode to capture a visible light image signal while the second RGB Sensor senses light normally; the processor receives only the first image signal, discards the image signal obtained by the second RGB Sensor, and no fusion of two image signals is needed. In an optional scheme, the second RGB Sensor may also be turned off, for example by cutting its power supply or putting it into a low-power state in which it cannot generate an image signal by sensing light; because the second RGB Sensor can be adaptively turned on, turned off, or put into low power according to the scene illuminance, the power consumption of the image processing apparatus is reduced as much as possible while image quality is preserved. In another optional scheme, under normal illumination both RGB Sensors may be switched to the infrared cut-off mode to obtain two visible light image signals, which are fused to improve the signal-to-noise ratio and color representation of the final color image.
Based on the same conception, the application also provides an image signal fusion method. Fig. 7 is a schematic flowchart of an exemplary method for image signal fusion according to an embodiment of the present disclosure. The method firstly carries out up-sampling processing on a visible light image signal and then separates chrominance information and luminance information of the visible light image signal, and comprises the following steps:
acquiring a visible light image signal, wherein the visible light image signal comprises brightness information and chrominance information, and the visible light image signal has a first resolution;
an infrared light image signal is acquired, the infrared light image signal including luminance information, the infrared light image signal having a second resolution, the first resolution being lower than the second resolution.
And performing up-sampling processing on the acquired visible light image signal to enable the resolution of the processed visible light image signal to be consistent with that of the infrared light image signal.
Extracting first luminance information from the processed visible light image signal, the first luminance information being luminance information excluding infrared light information; and filtering and denoising the first luminance information to obtain second luminance information, i.e., the first luminance information with its high-frequency noise removed. Illustratively, a bilateral filter or a guided filter may be used to filter the first luminance information and remove high-frequency noise; the resulting second luminance information contains the medium- and low-frequency information and the average luminance of the visible light image signal.
Extracting third brightness information from the infrared light image signal, wherein the third brightness information is brightness information carrying infrared light information, and it should be understood that the infrared light image signal only includes brightness information and does not include chrominance information, so that the obtained infrared light image signal can be directly used as the third brightness information;
and acquiring high-frequency detail information of the third brightness information, wherein the high-frequency detail information is mainly high-frequency brightness detail information of the infrared light image signal.
Specifically, the acquiring the high-frequency detail information of the third luminance information includes:
the third luminance information is filtered to obtain fourth luminance information, for example, a bilateral filter or a guided filter may be used to filter the third luminance information, and the high frequency information is removed, and the obtained fourth luminance information includes the medium and low frequency information and the average luminance information of the infrared light image signal.
Subtracting the fourth luminance information from the third luminance information yields fifth luminance information. Because the signal-to-noise ratio of the infrared light image signal is high, the high-frequency component removed by the bilateral filter is mainly high-frequency detail rather than noise, so the fifth luminance information mainly contains the high-frequency luminance detail of the infrared light image signal.
Superposing the second luminance information and the fifth luminance information yields the fused luminance information. The second luminance information mainly contains the medium- and low-frequency information and the average luminance of the visible light image signal, while the fifth luminance information mainly contains the high-frequency luminance detail of the infrared light image signal; combining the two produces fused luminance information whose average luminance stays as close as possible to that of the visible light image, while also containing more detail information from the infrared light image signal.
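The luminance-fusion steps above can be sketched end to end as follows. This is an illustrative reading, not the patent's implementation: a plain box filter stands in for the bilateral or guided filter named in the text, and the kernel size `k` is an arbitrary choice.

```python
import numpy as np

def box_lowpass(luma, k=3):
    # Box (mean) filter as a stand-in for the bilateral / guided filter:
    # it keeps the medium/low frequencies and the average luminance.
    pad = k // 2
    p = np.pad(luma, pad, mode='edge')
    h, w = luma.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fuse_luminance(vis_luma, ir_luma, k=3):
    second = box_lowpass(vis_luma, k)  # second luminance: low/mid freq + mean of visible
    fourth = box_lowpass(ir_luma, k)   # fourth luminance: low/mid freq + mean of infrared
    fifth = ir_luma - fourth           # fifth luminance: high-frequency infrared detail
    return second + fifth              # fused luminance information
```

Note how the result inherits its average brightness from the visible-light plane: over flat regions the infrared high-frequency term is zero, so the fused luminance equals the visible luminance there.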
Extracting first chrominance information from the processed visible light image signal, the first chrominance information being the chrominance information of the visible light image signal; and performing color correction on the first chrominance information to obtain second chrominance information. Illustratively, the chrominance information may be compensated uniformly based on the difference between the fused luminance and the luminance of the visible light image. Optionally, the processing of the first chrominance information may further include color enhancement, color denoising, and the like.
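The patent does not fix an exact formula for this compensation; one plausible reading, shown here purely as an assumption, scales the chrominance by the per-pixel ratio of fused luminance to visible luminance so that color saturation tracks the brightness change:

```python
import numpy as np

def compensate_chroma(chroma, vis_luma, fused_luma, eps=1e-6):
    # Hypothetical compensation (not specified by the patent): scale the
    # chrominance planes by the ratio of fused to visible luminance.
    # eps guards against division by zero in dark pixels.
    gain = fused_luma / (vis_luma + eps)
    return chroma * gain[..., None]
```

An additive offset derived from the luminance difference would be an equally valid interpretation; the key point is that all chrominance channels receive the same brightness-driven correction.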
And combining the fusion brightness information and the second chrominance information to obtain a complete fusion image. The fused image has good color expression and more detailed information.
In a possible implementation, before the up-sampling processing is performed on the visible light image signal, demosaicing, color space conversion, image enhancement, denoising processing, and the like may also be performed on the visible light image signal; correspondingly, before the filtering process is performed on the third luminance information, demosaicing, color space conversion, image enhancement, denoising process, and the like may be performed on the infrared light image signal.
In one possible embodiment, after the complete fused image is obtained, the fused image may be further subjected to image processing, which includes but is not limited to: contrast adjustment, image enhancement, color space conversion, etc. For example, if the image fusion process is performed in the RGB space and the target output image is in the image format of the YCC color space, the fused image may be converted from the RGB format to the image format of the YCC color space through color space conversion. The method separates the visible light image signal after the up-sampling processing into the brightness information and the chrominance information, respectively processes the brightness information and the chrominance information, and finally combines the processed brightness information and the chrominance information, so that the brightness and the chrominance of the obtained fusion image are improved, and the image quality obtained in a low-light scene is improved.
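The RGB-to-YCC conversion mentioned above is a standard color space transform; a common concrete choice is full-range BT.601 YCbCr, sketched here (the patent does not mandate this particular matrix):

```python
import numpy as np

def rgb_to_ycc(rgb):
    # Full-range BT.601 RGB -> YCbCr: Y carries luminance,
    # Cb/Cr carry chrominance offset around 128.
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    ycc = rgb @ m.T
    ycc[..., 1:] += 128.0
    return ycc
```

A neutral gray pixel maps to Cb = Cr = 128, which is why separating luminance and chrominance in this space lets the two be processed independently, as the fusion method requires.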
Fig. 8 is a schematic flowchart of an exemplary method for processing an image signal according to an embodiment of the present disclosure. The method comprises the following steps:
801. receiving a first image signal of a target scene, wherein the first image signal comprises first brightness information, the first brightness information is brightness information carrying infrared light information, and the first image signal has a first resolution;
802. receiving a second image signal of the target scene, the second image signal including second luminance information and second chrominance information, the second luminance information being luminance information excluding infrared light information, the second image signal having a second resolution, the first resolution being higher than the second resolution;
803. fusing the first image signal and the second image signal to obtain a fused image.
The first image signal is an infrared image signal with high resolution, contains more brightness details and has higher brightness signal-to-noise ratio; the second image signal is a visible light image signal with low resolution, and the single pixel has stronger light sensitivity and contains more chrominance information. The fused image obtained by fusing the first image signal and the second image signal has richer details, higher signal-to-noise ratio and better color expression.
In a possible implementation manner, the fusing the first image signal and the second image signal to obtain a fused image specifically includes: performing up-sampling processing on the second image signal to obtain a third image signal, wherein the third image signal has the first resolution; and fusing the first image signal and the third image signal to obtain a fused image.
Because the resolutions of the first image signal and the second image signal are different, the second image signal with low resolution is up-sampled before the fusion, so that the processed second image signal and the first image signal have the same resolution.
In a possible implementation manner, the third image signal includes third luminance information, the fused image includes fused luminance information, and the fusing the first image signal and the third image signal to obtain the fused image specifically includes: acquiring high-frequency detail information of the first brightness information, wherein the high-frequency detail information of the first brightness information comprises brightness detail information of the first image signal; denoising the third brightness information to filter high-frequency noise of the third brightness information to obtain fourth brightness information; and superposing the high-frequency detail information of the first brightness information and the fourth brightness information to obtain the fused brightness information.
This embodiment of the application includes a luminance processing channel and a chrominance processing channel: during image fusion, the luminance signal and the chrominance signal are processed separately. In the luminance channel, the luminance of the infrared light image signal and the luminance of the visible light image signal are fused. The fourth luminance information is the luminance of the visible light image signal after filtering out high-frequency noise, and contains the medium- and low-frequency information and the average luminance of the visible light image signal. Because the infrared light image signal has a very high signal-to-noise ratio and very little noise, the high-frequency detail information of the first luminance information is essentially the luminance detail of the image. Superposing the high-frequency detail information of the first luminance information and the fourth luminance information yields the fused luminance information: on one hand, its average luminance stays as close as possible to that of the visible light image; on the other hand, it also contains more detail information from the infrared light image signal.
In a possible implementation manner, the acquiring the high-frequency detail information of the first luminance information specifically includes: filtering the first brightness information to obtain filtered first brightness information; and subtracting the filtered first brightness information from the first brightness information to obtain high-frequency detail information of the first brightness information.
In a possible implementation manner, the third image signal further includes third chrominance information, the fused image further includes fused chrominance information, and the fusing the first image signal and the third image signal to obtain the fused image specifically further includes: extracting the third chrominance information from the third image signal; and carrying out color correction on the third chroma information to obtain the fused chroma information.
In a possible implementation manner, the color correcting the third chromaticity information to obtain the fused chromaticity information specifically includes: and compensating the third chroma information according to the difference value of the fused brightness information and the third brightness information to obtain the fused chroma information.
In a possible implementation manner, the fusing the first image signal and the third image signal to obtain the fused image specifically further includes: and combining the fused brightness information and the fused chrominance information to obtain the fused image.
Embodiments of the present application also provide a computer-readable storage medium, which stores instructions that, when executed on a computer or a processor, cause the computer or the processor to execute any one of the methods provided by the embodiments of the present application.
Embodiments of the present application also provide a computer program product containing instructions, which when executed on a computer or a processor, causes the computer or the processor to execute any one of the methods provided by the embodiments of the present application.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (31)

  1. An apparatus for image signal processing, the apparatus comprising: the device comprises a first receiving interface, a second receiving interface and a processor;
    the first receiving interface is configured to receive a first image signal of a target scene, where the first image signal includes first luminance information, the first luminance information is luminance information carrying infrared light information, and the first image signal has a first resolution;
    the second receiving interface is configured to receive a second image signal of the target scene, where the second image signal includes second luminance information and second chrominance information, the second luminance information is luminance information excluding infrared light information, and the second image signal has a second resolution, and the first resolution is higher than the second resolution;
    the processor is configured to fuse the first image signal and the second image signal to obtain a fused image.
  2. The apparatus of claim 1, wherein the processor is specifically configured to:
    performing upsampling processing on the second image signal to obtain a third image signal, wherein the third image signal has the first resolution;
    and fusing the first image signal and the third image signal to obtain the fused image.
  3. The apparatus according to claim 2, wherein the third image signal comprises third luminance information, wherein the fused image comprises fused luminance information, and wherein the processor is specifically configured to:
    acquiring high-frequency detail information of the first brightness information, wherein the high-frequency detail information of the first brightness information comprises brightness detail information of the first image signal;
    denoising the third brightness information to filter high-frequency noise of the third brightness information to obtain fourth brightness information;
    and superposing the high-frequency detail information of the first brightness information and the fourth brightness information to obtain the fusion brightness information.
  4. The apparatus according to claim 3, wherein the third image signal further comprises third chrominance information, wherein the fused image further comprises fused chrominance information, and wherein the processor is further configured to:
    extracting the third chrominance information from the third image signal;
    and carrying out color correction on the third chroma information to obtain the fused chroma information.
  5. The apparatus of claim 4, wherein the processor is specifically configured to:
    and compensating the third chroma information according to the difference value of the fused brightness information and the third brightness information to obtain the fused chroma information.
  6. The apparatus of claim 4 or 5, wherein the processor is further configured to:
    and combining the fused brightness information and the fused chrominance information to obtain the fused image.
  7. The apparatus of any one of claims 1 to 6, further comprising: a first image sensor having the first resolution and a second image sensor having the second resolution;
    the first image sensor is used for working in an infrared supplementary lighting mode to generate the first image signal;
    the second image sensor is used for working in an infrared light cut-off mode to generate the second image signal.
  8. The apparatus of claim 7, wherein the first image Sensor is a first RGB Sensor and the second image Sensor is a second RGB Sensor.
  9. The apparatus of claim 7 or 8, further comprising:
    and the first filtering mode switcher is used for switching the first image sensor to the infrared light supplementing mode.
  10. The apparatus of claim 9, wherein the first filter mode switcher comprises: a full-transmission spectrum filter and an infrared cut-off filter;
    the first filtering mode switcher is specifically configured to switch to the full-transmission spectral filter, so that the first image sensor senses an infrared light signal and generates the infrared light image signal.
  11. The apparatus of claim 7 or 8, further comprising:
    a second filter mode switcher for switching the second image sensor to the infrared light cut mode.
  12. The apparatus of claim 11, wherein said second filter mode switcher comprises: the full-transmission spectrum filter and the infrared cut-off filter;
    the second filter mode switcher is specifically configured to switch to the infrared cut filter so that the second image sensor senses a visible light signal and generates the visible light image signal.
  13. The apparatus of any one of claims 7 to 12, further comprising: a lens and a beam splitter;
    the lens is used for receiving the optical signal of the target scene;
    the optical splitter is configured to split an optical signal of the target scene into a first optical signal and a second optical signal, where the first optical signal is an optical signal sent to the first image sensor, and the second optical signal is an optical signal sent to the second image sensor.
  14. The apparatus of any one of claims 7 to 12, further comprising:
    a first lens configured to receive a first optical signal of the target scene, wherein the first optical signal is an optical signal sent to the first image sensor;
    and a second lens configured to receive a second optical signal of the target scene, wherein the second optical signal is an optical signal sent to the second image sensor.
  15. The apparatus of claim 14, wherein the processor is further configured to:
    performing alignment processing on the first image signal and the second image signal.
  16. A method of image signal processing, the method comprising:
    receiving a first image signal of a target scene, wherein the first image signal comprises first luminance information, the first luminance information is luminance information carrying infrared light information, and the first image signal has a first resolution;
    receiving a second image signal of the target scene, the second image signal including second luminance information and second chrominance information, the second luminance information being luminance information excluding infrared light information, the second image signal having a second resolution, the first resolution being higher than the second resolution;
    and fusing the first image signal and the second image signal to obtain a fused image.
  17. The method according to claim 16, wherein the fusing the first image signal and the second image signal to obtain a fused image comprises:
    performing upsampling processing on the second image signal to obtain a third image signal, wherein the third image signal has the first resolution;
    and fusing the first image signal and the third image signal to obtain the fused image.
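As an illustrative aside, not part of the claims, the upsampling step of claim 17 can be sketched in NumPy. The claim does not fix an interpolation method, so the bilinear filter and the toy resolutions below are assumptions:

```python
import numpy as np

def upsample_bilinear(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Bilinearly upsample an (H, W, C) image to (out_h, out_w, C)."""
    in_h, in_w = img.shape[:2]
    # Fractional sample positions in the source grid for every output pixel.
    ys = np.linspace(0.0, in_h - 1, out_h)
    xs = np.linspace(0.0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :, None]   # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy

# A low-resolution "second image signal" brought up to the first resolution.
second = np.random.rand(4, 4, 3)
third = upsample_bilinear(second, 8, 8)
```

The resulting `third` image has the first resolution, as claim 17 requires, while carrying the second signal's chrominance content.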
  18. The method according to claim 17, wherein the third image signal comprises third luminance information, the fused image comprises fused luminance information, and the fusing the first image signal and the third image signal to obtain the fused image comprises:
    acquiring high-frequency detail information of the first luminance information, wherein the high-frequency detail information of the first luminance information comprises luminance detail information of the first image signal;
    denoising the third luminance information to filter out high-frequency noise of the third luminance information to obtain fourth luminance information; and superimposing the high-frequency detail information of the first luminance information and the fourth luminance information to obtain the fused luminance information.
  19. The method according to claim 18, wherein the acquiring the high-frequency detail information of the first luminance information comprises:
    filtering the first luminance information to obtain filtered first luminance information;
    and subtracting the filtered first luminance information from the first luminance information to obtain the high-frequency detail information of the first luminance information.
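The filter-then-subtract procedure of claim 19 is a standard high-pass decomposition: a low-pass copy of the luminance is subtracted from the original, leaving only high-frequency detail. A minimal sketch follows; the box kernel and its size are illustrative choices, since the claim does not name a particular filter:

```python
import numpy as np

def box_filter(luma: np.ndarray, k: int = 3) -> np.ndarray:
    """Low-pass filter a 2-D luminance plane with a k x k box kernel."""
    pad = k // 2
    padded = np.pad(luma, pad, mode="edge")
    out = np.zeros_like(luma, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + luma.shape[0], dx:dx + luma.shape[1]]
    return out / (k * k)

def high_freq_detail(first_luma: np.ndarray) -> np.ndarray:
    """Claim 19: original luminance minus its low-pass-filtered copy."""
    return first_luma - box_filter(first_luma)

luma = np.random.rand(16, 16)
detail = high_freq_detail(luma)
```

A perfectly flat luminance plane produces zero detail, which is the sanity check one would expect of a high-pass residual.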
  20. The method according to claim 18 or 19, wherein the third image signal further includes third chrominance information, the fused image further includes fused chrominance information, and the fusing the first image signal and the third image signal to obtain the fused image further includes:
    extracting the third chrominance information from the third image signal;
    and performing color correction on the third chrominance information to obtain the fused chrominance information.
  21. The method according to claim 20, wherein the performing color correction on the third chrominance information to obtain the fused chrominance information comprises:
    compensating the third chrominance information according to the difference between the fused luminance information and the third luminance information to obtain the fused chrominance information.
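Claim 21 does not specify the compensation function, only that it depends on the difference between the fused and upsampled luminance. One plausible reading is a luminance-proportional gain, sketched below; the multiplicative model and the `gain` factor are illustrative assumptions, not anything the claim fixes:

```python
import numpy as np

def compensate_chroma(third_chroma: np.ndarray,
                      third_luma: np.ndarray,
                      fused_luma: np.ndarray,
                      gain: float = 0.5) -> np.ndarray:
    """Scale chrominance by how far the fused luminance departs from the
    upsampled luminance, so colors track the injected IR detail.
    Multiplicative form and `gain` are illustrative assumptions."""
    diff = fused_luma - third_luma            # claim 21's "difference"
    return third_chroma * (1.0 + gain * diff[..., None])

chroma = np.random.rand(4, 4, 2)
luma = np.random.rand(4, 4)
fused_chroma = compensate_chroma(chroma, luma, luma + 0.1)
```

When the fused and upsampled luminance coincide, the compensation leaves the chrominance untouched, which is the behavior a correction step should have.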
  22. The method according to claim 20 or 21, wherein the fusing the first image signal and the third image signal to obtain the fused image further comprises:
    combining the fused luminance information and the fused chrominance information to obtain the fused image.
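Claims 18 through 22 together describe a complete luminance/chrominance fusion path. The end-to-end sketch below strings those steps together in NumPy; the box low-pass reused for both detail extraction and denoising, and the YUV-style channel stacking, are assumptions not fixed by the claims, and the color correction of claim 21 is omitted for brevity:

```python
import numpy as np

def fuse(first_luma: np.ndarray, third_luma: np.ndarray,
         third_chroma: np.ndarray) -> np.ndarray:
    """Fuse IR luminance with upsampled visible luminance/chrominance."""
    def blur(x: np.ndarray, k: int = 3) -> np.ndarray:
        # Simple edge-padded box low-pass, reused for both steps below.
        pad = k // 2
        p = np.pad(x, pad, mode="edge")
        return sum(p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
                   for dy in range(k) for dx in range(k)) / (k * k)
    detail = first_luma - blur(first_luma)   # claim 19: high-freq detail
    fourth_luma = blur(third_luma)           # claim 18: denoise
    fused_luma = fourth_luma + detail        # claim 18: superimpose
    # Claim 22: merge fused luminance and chrominance into one image.
    return np.concatenate([fused_luma[..., None], third_chroma], axis=-1)

y_ir = np.random.rand(8, 8)    # first image signal's luminance
y_vis = np.random.rand(8, 8)   # upsampled visible luminance
uv = np.random.rand(8, 8, 2)   # upsampled visible chrominance
fused = fuse(y_ir, y_vis, uv)
```

The fused image keeps the visible signal's color planes while its luminance plane carries the IR detail, matching the abstract's stated goal of preserving color while improving detail and luminance SNR.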
  23. The method of any one of claims 16 to 22, further comprising:
    sensing, by a first image sensor, an infrared light signal to generate the first image signal;
    sensing, by a second image sensor, a visible light signal to generate the second image signal, wherein the first image sensor has the first resolution and the second image sensor has the second resolution.
  24. The method of claim 23, wherein the first image sensor is a first RGB sensor and the second image sensor is a second RGB sensor.
  25. The method according to claim 23 or 24, further comprising:
    switching the first image sensor to an infrared light supplement mode to sense the infrared light signal;
    and switching the second image sensor to an infrared light cut-off mode to sense the visible light signal.
  26. The method according to claim 25, wherein the switching the first image sensor to an infrared light supplement mode to sense the infrared light signal comprises:
    switching a first filtering mode switcher to a full-transmission spectrum filter so that the first image sensor is switched to the infrared light supplement mode;
    the switching the second image sensor to an infrared light cut-off mode to sense the visible light signal comprises:
    switching a second filter mode switcher to an infrared cut-off filter, so that the second image sensor is switched to the infrared light cut-off mode.
  27. The method of any one of claims 23 to 26, further comprising:
    receiving an optical signal of the target scene;
    and splitting the optical signal of the target scene into a first optical signal and a second optical signal, wherein the first optical signal is the optical signal sent to the first image sensor, and the second optical signal is the optical signal sent to the second image sensor.
  28. The method of any one of claims 23 to 26, further comprising:
    receiving a first optical signal of the target scene, wherein the first optical signal is an optical signal sent to the first image sensor;
    and receiving a second optical signal of the target scene, wherein the second optical signal is an optical signal sent to the second image sensor.
  29. The method of claim 28, wherein before the fusing the first image signal and the second image signal to obtain a fused image, the method further comprises:
    aligning the first image signal and the second image signal to obtain a processed first image signal and a processed second image signal;
    the fusing the first image signal and the second image signal to obtain a fused image comprises:
    fusing the processed first image signal and the processed second image signal to obtain the fused image.
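The alignment of claim 29 is not constrained to any particular method. As one illustrative possibility, a global translation between the two sensor outputs can be estimated by phase correlation; a real two-lens system would typically also need geometric calibration and warping, which this sketch omits:

```python
import numpy as np

def align_shift(ref: np.ndarray, mov: np.ndarray) -> np.ndarray:
    """Estimate a global integer (dy, dx) shift by phase correlation and
    roll `mov` into register with `ref`. Translation-only, as a sketch."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image to negative offsets.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return np.roll(mov, (dy, dx), axis=(0, 1))

ref = np.random.rand(16, 16)               # first image signal (luminance)
mov = np.roll(ref, (-3, 2), axis=(0, 1))   # second signal, globally shifted
aligned = align_shift(ref, mov)
```

For a purely circular shift, as constructed here, the phase-correlation peak recovers the offset exactly, so the rolled-back image matches the reference.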
  30. A computer-readable storage medium having stored therein instructions which, when run on a computer or processor, cause the computer or processor to perform the method of any one of claims 16-29.
  31. A computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method of any of claims 16-29.
CN201980082094.2A 2019-02-19 2019-02-19 Image processing device and method Pending CN113170048A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/075472 WO2020168465A1 (en) 2019-02-19 2019-02-19 Image processing device and method

Publications (1)

Publication Number Publication Date
CN113170048A true CN113170048A (en) 2021-07-23

Family

ID=72143990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980082094.2A Pending CN113170048A (en) 2019-02-19 2019-02-19 Image processing device and method

Country Status (2)

Country Link
CN (1) CN113170048A (en)
WO (1) WO2020168465A1 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI767468B (en) * 2020-09-04 2022-06-11 聚晶半導體股份有限公司 Dual sensor imaging system and imaging method thereof
CN112422784B (en) * 2020-10-12 2022-01-11 浙江大华技术股份有限公司 Imaging method, imaging apparatus, electronic apparatus, and storage medium
CN114911062A (en) * 2021-02-07 2022-08-16 浙江舜宇智能光学技术有限公司 Optical system with double imaging light paths and optical device with double imaging light paths
CN112767298B (en) * 2021-03-16 2023-06-13 杭州海康威视数字技术股份有限公司 Fusion method and device of visible light image and infrared image
CN113676628B (en) * 2021-08-09 2023-05-02 Oppo广东移动通信有限公司 Image forming apparatus and image processing method

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106488201A (en) * 2015-08-28 2017-03-08 杭州海康威视数字技术股份有限公司 A kind of processing method of picture signal and system
CN106780330A (en) * 2016-12-08 2017-05-31 中国人民解放军国防科学技术大学 A kind of super resolution ratio reconstruction method based on colored and black and white dual camera
CN107563971A (en) * 2017-08-12 2018-01-09 四川精视科技有限公司 A kind of very color high-definition night-viewing imaging method
CN107566747A (en) * 2017-09-22 2018-01-09 浙江大华技术股份有限公司 A kind of brightness of image Enhancement Method and device
CN107580163A (en) * 2017-08-12 2018-01-12 四川精视科技有限公司 A kind of twin-lens black light camera

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
KR20110056096A (en) * 2009-11-20 2011-05-26 삼성전자주식회사 Digital image processing apparatus and the method for photographing of the same
CN104661008B (en) * 2013-11-18 2017-10-31 深圳中兴力维技术有限公司 The treating method and apparatus that color image quality is lifted under low light conditions
CN104079908B (en) * 2014-07-11 2015-12-02 上海富瀚微电子股份有限公司 Infrared with visible image signal processing method and implement device thereof


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN115239610A (en) * 2022-07-28 2022-10-25 爱芯元智半导体(上海)有限公司 Image fusion method, device, system and storage medium
CN115239610B (en) * 2022-07-28 2024-01-26 爱芯元智半导体(上海)有限公司 Image fusion method, device, system and storage medium

Also Published As

Publication number Publication date
WO2020168465A1 (en) 2020-08-27

Similar Documents

Publication Publication Date Title
CN113170048A (en) Image processing device and method
US9756247B2 (en) Dynamic camera mode switching
WO2020057199A1 (en) Imaging method and device, and electronic device
US20160352999A1 (en) RAW Camera Peripheral With Handheld Mobile Unit Processing RAW Image Data
US20140340515A1 (en) Image processing method and system
JP2022071177A (en) Multiplexed high dynamic range image
CN113711584B (en) Camera device
KR20140099777A (en) Method and System for Image Fusion using Multi-spectral filter array sensor
CN108712608A (en) Terminal device image pickup method and device
US20230005240A1 (en) Image sensor and image light sensing method
CN103546730A (en) Method for enhancing light sensitivities of images on basis of multiple cameras
WO2023231583A1 (en) Image processing method and related device thereof
CN108900785A (en) Exposal control method, device and electronic equipment
CN108833803A (en) Imaging method, device and electronic equipment
US20100207958A1 (en) Color image creating apparatus
WO2023036034A1 (en) Image processing method and related device thereof
Hertel et al. A low-cost VIS-NIR true color night vision video system based on a wide dynamic range CMOS imager
CN115550575B (en) Image processing method and related device
WO2022078036A1 (en) Camera and control method therefor
CN109447925B (en) Image processing method and device, storage medium and electronic equipment
US20070046787A1 (en) Chrominance filter for white balance statistics
KR20190139788A (en) Methods and apparatus for capturing media using plurality of cameras in electronic device
WO2022044915A1 (en) Image processing device, imaging device, image processing method, and image processing program
US8854490B2 (en) Method and apparatus for compensating a black level of an image signal
CN103167183A (en) Translucent camera aperture processing method, system and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210723