CN112449095A - Image processing method and device, electronic equipment and readable storage medium - Google Patents

Image processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN112449095A
CN112449095A (application number CN202011263594.7A)
Authority
CN
China
Prior art keywords
image
camera module
lens
monochrome
optical filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011263594.7A
Other languages
Chinese (zh)
Inventor
黄毅鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011263594.7A priority Critical patent/CN112449095A/en
Publication of CN112449095A publication Critical patent/CN112449095A/en
Priority to PCT/CN2021/117721 priority patent/WO2022100256A1/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 - Constructional details
    • H04N23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/665 - Control of cameras or camera modules involving internal camera communication with the image sensor, e.g. synchronising or multiplexing SSIS control signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/75 - Circuitry for compensating brightness variation in the scene by influencing optical camera components
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Blocking Light For Cameras (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image processing method and device, an electronic device and a computer-readable storage medium, applied to an electronic device comprising at least one RGB camera module and at least one monochrome camera module, wherein the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light; and the monochrome image sensor is used for receiving the first target light obtained after the light reflected by a shooting scene is filtered by the lens and the optical filter in sequence and imaging the first target light. The method comprises the following steps: shooting the shooting scene through the monochrome camera module to obtain a first image; shooting the shooting scene through the RGB camera module to obtain a second image; and performing image fusion on the first image and the second image to obtain a target image. Defogging is thus performed while the color information of the shooting scene is restored, so that the image is clearer.

Description

Image processing method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a readable storage medium.
Background
With the rapid development of electronic technology, the shooting functions and shooting quality of cameras on electronic equipment keep improving, and people's requirements on the image quality of captured images are correspondingly increasing. At the same time, global environmental problems have become more prominent, and severe weather such as haze appears more frequently. Under the influence of haze and other severe weather, captured images often appear hazy and insufficiently clear, which greatly reduces their image quality.
In order to address the image quality of images captured in severe weather such as haze, the images need to be defogged. In the traditional image defogging method, a single camera module switches optical filters and synthesizes the resulting images to achieve defogging; however, such a single camera module is relatively bulky, and frequently switching the optical filters easily damages the mechanical structure inside the camera module.
Disclosure of Invention
The embodiments of the application provide an image processing method and device, an electronic device and a readable storage medium, which can keep the camera module compact while achieving defogging, without the need to switch optical filters frequently.
An image processing method is applied to electronic equipment comprising at least one RGB camera module and at least one monochrome camera module, wherein the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light; the monochromatic image sensor is used for receiving first target light rays reflected by a shooting scene and filtered by the lens and the optical filter in sequence, and imaging the first target light rays; the method comprises the following steps:
shooting the shooting scene through the monochrome camera module to obtain a first image;
shooting the shooting scene through the RGB camera module to obtain a second image;
and carrying out image fusion on the first image and the second image to obtain a target image.
An electronic device comprises at least one RGB camera module and at least one monochrome camera module;
the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light;
the monochrome image sensor is used for receiving first target light rays reflected by a shooting scene after the light rays sequentially pass through the lens and the optical filter, and imaging the first target light rays.
An image processing device is applied to electronic equipment comprising at least one RGB camera module and at least one monochrome camera module, wherein the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light; the monochromatic image sensor is used for receiving first target light rays reflected by a shooting scene and filtered by the lens and the optical filter in sequence, and imaging the first target light rays; the device comprises:
the first image generation module is used for shooting the shooting scene through the monochrome camera module to obtain a first image;
the second image generation module is used for shooting the shooting scene through the RGB camera module to obtain a second image;
and the image fusion module is used for carrying out image fusion on the first image and the second image to obtain a target image.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the image processing method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method as described above.
The image processing method and device, the electronic equipment and the computer readable storage medium are applied to the electronic equipment comprising at least one RGB camera module and at least one monochrome camera module, wherein the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light; the monochrome image sensor is used for receiving first target light rays reflected by a shooting scene after the light rays are filtered by the lens and the optical filter in sequence and imaging the first target light rays; the image processing method comprises the following steps: shooting a shooting scene through a monochrome camera module to obtain a first image; shooting a shooting scene through the RGB camera module to obtain a second image; and carrying out image fusion on the first image and the second image to obtain a target image.
Because the transmittance of the lens and the optical filter in the monochrome camera module to infrared rays is greater than their transmittance to visible light, visible light can be filtered out by the lens and the optical filter in the monochrome camera module while infrared rays are retained. The shooting scene is thus shot through the monochrome camera module to obtain a first image, which is an image formed from the infrared rays. Since the wavelength of infrared rays is longer than that of visible light, infrared rays diffract more strongly than visible light and can propagate around obstacles such as the suspended particles present in haze weather. Therefore, the problem of defogging images shot in severe weather such as haze can be solved by means of infrared imaging. Moreover, because the monochrome camera module and the RGB camera module image separately, no single camera module needs to be excessively large, and the optical filters do not need to be switched frequently. This solves the problems in the traditional defogging method that the camera module is bulky and its mechanical structure is easily damaged.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a diagram of an application environment of an image processing method in one embodiment;
FIG. 2 is a block diagram of a monochrome camera module in one embodiment;
FIG. 3 is a flow diagram of a method of image processing in one embodiment;
FIG. 4 is a flowchart of the step in FIG. 3 of shooting the shooting scene through the monochrome camera module to obtain a first image;
FIG. 5 is a coating curve of a lens in a monochrome camera module according to an embodiment;
FIG. 6 is a graph of transmittance of a filter in a monochrome camera module, according to an embodiment;
FIG. 7A is a schematic illustration of a monochromatic image in one embodiment;
FIG. 7B is a diagram of an example RGB image;
FIG. 8 is a schematic illustration of a target image in one embodiment;
FIG. 9 is a schematic diagram comparing an RGB image with a target image in one embodiment;
FIG. 10 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
fig. 11 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a diagram illustrating an application scenario of an image processing method according to an embodiment. As shown in fig. 1, the application environment includes an electronic device 100. The electronic device 100 includes at least one RGB camera module 120 and at least one monochrome camera module 140. As shown in fig. 2, the monochrome camera module 140 includes a lens 142, an optical filter 144 and a monochrome image sensor 146; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light; the monochrome image sensor is used for receiving the first target light obtained after the light reflected by the shooting scene is filtered by the lens and the optical filter in sequence and imaging the first target light. Using the image processing method of the application, the electronic device 100 shoots a shooting scene through the monochrome camera module to obtain a first image, shoots the shooting scene through the RGB camera module to obtain a second image, and performs image fusion on the first image and the second image to obtain a target image. The electronic device 100 may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a wearable device or a smart home device.
FIG. 3 is a flow diagram of a method of image processing in one embodiment. The image processing method in the present embodiment is described by taking the electronic device 100 in fig. 1 as an example. As shown in fig. 3, the image processing method includes steps 320 to 360. Wherein,
step 320, shooting a shooting scene through a monochrome camera module to obtain a first image;
the monochrome camera module refers to a mono (monochrome) camera module, which can also be called a black and white camera module. The monochrome camera module can capture all incident light rays, and the light incident amount of the monochrome camera module is several times of that of a common Bayer array CMOS, so that the pixel details can be effectively improved.
Because the transmittance of the lens and the optical filter in the monochrome camera module to infrared rays is greater than their transmittance to visible light, visible light can be filtered out by the lens and the optical filter in the monochrome camera module while infrared rays are retained. The shooting scene is thus shot through the monochrome camera module to obtain a first image, which is an image formed from the infrared rays. Since the wavelength of infrared rays is longer than that of visible light, infrared rays diffract more strongly than visible light and can propagate around obstacles such as the suspended particles present in haze weather. Therefore, the problem of defogging images shot in severe weather such as haze can be solved by means of infrared imaging.
When the electronic device shoots a shooting scene, the shooting scene is shot through the monochrome camera module to obtain a first image. Specifically, the monochrome camera module receives the light reflected by the shooting scene, and the first image is generated after this light is processed by the lens, the optical filter and the sensor in the monochrome camera module.
Step 340, shooting a shooting scene through the RGB camera module to obtain a second image;
the RGB camera module adopts a traditional RGBG Bayer array and is used for recording color information in a picture. When the electronic equipment shoots the shooting scene, the shooting scene is shot through the RGB camera module to obtain a second image. Similarly, the RGB camera module receives the light reflected from the shooting scene, and generates a second image after the light reflected from the shooting scene is processed by the lens, the optical filter, and the sensor in the RGB camera module. The second image has rich color information.
And step 360, carrying out image fusion on the first image and the second image to obtain a target image.
The order in which the first image and the second image are generated is not limited: they may be generated simultaneously, the first image may be generated before the second image, or the second image may be generated before the first image. This is not limited in the present application.
After the first image and the second image are obtained, image fusion is performed on them to obtain a target image. Specifically, the first image and the second image may be superimposed or stitched to generate the target image. The generated target image therefore has the pixel detail of the first image and the rich color information of the second image, and the defogging effect is achieved.
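The application does not prescribe a particular fusion algorithm. One common way to realize such a fusion, sketched below with OpenCV, is to keep the chroma of the second image and blend the defogged monochrome detail into its luminance channel; the function name and the blending weight are illustrative assumptions, and the two inputs are assumed to be already aligned and of equal size.

```python
import cv2
import numpy as np

def fuse_mono_rgb(mono, rgb, detail_weight=0.6):
    """Blend a monochrome (infrared) image into the luminance of an RGB image.

    mono: single-channel uint8 image, already aligned and resized to match rgb.
    rgb:  3-channel BGR uint8 image providing the color information.
    detail_weight: how strongly the infrared detail replaces the original luminance.
    """
    ycrcb = cv2.cvtColor(rgb, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = cv2.split(ycrcb)

    # Weighted blend of the RGB luminance with the haze-free infrared detail.
    fused_y = (1.0 - detail_weight) * y + detail_weight * mono.astype(np.float32)
    fused_y = np.clip(fused_y, 0, 255)

    fused = cv2.merge([fused_y, cr, cb]).astype(np.uint8)
    return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)
```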
In the traditional defogging method, a single camera module switches optical filters and synthesizes the resulting images to achieve defogging, but such a single camera module is bulky; for example, the CCD or CMOS sensor it uses is oversized. CCD is an abbreviation of charge-coupled device; CMOS is an abbreviation of complementary metal-oxide-semiconductor. Frequent switching of the optical filters also makes the mechanical structure in the camera module prone to damage. In the embodiment of the application, the monochrome camera module and the RGB camera module image separately, so no single camera module needs to be excessively large and the optical filters do not need to be switched frequently. This solves the problems in the traditional defogging method that the camera module is bulky and its mechanical structure is easily damaged.
In one embodiment, as shown in fig. 4, shooting the shooting scene through the monochrome camera module to obtain a first image includes:
step 322, receiving a first target light ray after the light ray reflected by the shooting scene is filtered by a lens and an optical filter in sequence, wherein the first target light ray comprises an infrared ray;
the electromagnetic wave can be divided into different bands according to the wavelength, generally, the band with the wavelength between 380-780 nm is called visible light, because the electromagnetic wave in this band can be received by human eyes in the form of light. The wavelength range over which different human individuals receive visible light may vary slightly, but is substantially near the above-mentioned wavelength range. Invisible light refers to electromagnetic waves of wavelengths not perceivable by the human eye, other than visible light, and includes radio waves, microwaves, infrared rays, ultraviolet rays, x-rays, gamma rays, and the like. Wherein, Infrared (IR) is an electromagnetic wave with a frequency between microwave and visible light, and the wavelength is 760nm (nanometer) to 1mm (millimeter). Infrared rays can be further classified into Near Infrared Rays (NIR), middle Infrared rays, and far Infrared rays, and ultraviolet rays can be also classified into Near Infrared rays, middle Infrared rays, and far ultraviolet rays in the same way.
The transmittance of the lens in the monochrome camera module to infrared rays is higher than its transmittance to visible light, and the transmittance of the optical filter in the monochrome camera module to infrared rays is higher than its transmittance to visible light. For example, the coating curve of a lens in the monochrome camera module is set as shown in fig. 5. The principle of lens coating is that coating materials are vaporized by a high-voltage electron gun and deposited evenly on the surface of the lens. When light passes between different transmitting media (for example, from air into glass), about 5% of it is reflected. Modern optical lenses are usually coated with a single-layer or multi-layer magnesium fluoride anti-reflection film; a single layer can reduce the reflection to 1.5% and multiple layers to 0.25%, so a properly coated lens can reach a light transmittance of about 95%. As can be seen from fig. 5, the transmittance of the coated lens is high, approximately 90%. Using such a lens in the monochrome camera module allows a large amount of infrared light to pass through.
For another example, the transmittance curve of the optical filter in the monochrome camera module is shown in fig. 6. As can be seen from fig. 6, the transmittance of the optical filter for near infrared rays is greater than 90%, while its transmittance for visible light is less than 10%. Using such an optical filter in the monochrome camera module therefore allows a large amount of infrared light to pass through while reflecting most of the visible light.
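As a rough numerical illustration of the behavior described above, an idealized step model of the transmittance might look as follows; the 90% and sub-10% figures are taken from the description, while real coating and filter curves vary smoothly with wavelength.

```python
import numpy as np

# Idealized (not measured) transmittance model for the monochrome optical path:
# roughly 90% transmission in the near infrared, under 10% for visible light.
wavelengths_nm = np.arange(400, 1001, 10, dtype=float)

def monochrome_path_transmittance(wl_nm, cutoff_nm=750.0):
    # Simple step model: mostly reflect visible light, mostly pass NIR.
    return np.where(wl_nm >= cutoff_nm, 0.90, 0.08)

flat_spectrum = np.ones_like(wavelengths_nm)          # flat test spectrum
reaching_sensor = flat_spectrum * monochrome_path_transmittance(wavelengths_nm)

nir_share = reaching_sensor[wavelengths_nm >= 750].sum() / reaching_sensor.sum()
print(f"share of transmitted energy that is NIR: {nir_share:.0%}")
```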
Therefore, the light reflected by the shooting scene is filtered by the lens and the optical filter in sequence to obtain the first target light, which contains a large amount of infrared light and almost no visible light. This achieves the effect of filtering out visible light while retaining infrared light, and defogging is realized through the infrared light. Moreover, the monochrome camera module can not only defog haze scenes but also improve the clarity of scenes containing dust or turbid water. In addition, since the amount of light entering the monochrome camera module is relatively large compared with the RGB camera module, a detail-enhancement effect can be achieved; for example, the texture and layering of green plants can be rendered well. Of course, the present application is not limited to enhancing only the details of green plants.
In step 324, the first target light is photoelectrically converted by the monochrome image sensor to generate a first image, and the first image is a monochrome image.
The monochrome image sensor is used for receiving the first target light obtained after the light reflected by the shooting scene passes through the lens and the optical filter in sequence, and for imaging the first target light. Specifically, the first target light is photoelectrically converted by the monochrome image sensor to generate the first image, which is a monochrome image.
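A minimal sketch of this photoelectric-conversion step is given below, assuming a hypothetical 10-bit sensor readout; the application does not specify the sensor's bit depth or readout interface.

```python
import numpy as np

def raw_to_monochrome(raw_counts, bit_depth=10):
    """Scale raw photoelectric counts from the monochrome sensor into an
    8-bit single-channel image.

    raw_counts: 2-D array of per-pixel counts produced by the sensor readout.
    bit_depth:  assumed ADC bit depth; this is an illustrative parameter only.
    """
    max_count = float((1 << bit_depth) - 1)
    normalised = np.clip(raw_counts.astype(np.float32) / max_count, 0.0, 1.0)
    return (normalised * 255.0 + 0.5).astype(np.uint8)

# Example with a synthetic 10-bit readout.
fake_readout = np.random.randint(0, 1024, size=(480, 640))
first_image = raw_to_monochrome(fake_readout)
```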
In the embodiment of the application, the first target light obtained after the light reflected by the shooting scene is filtered by the lens and the optical filter in sequence is received, and the first target light comprises infrared rays. The first target light is photoelectrically converted by the monochrome image sensor to generate the first image, which is a monochrome image. Since the wavelength of infrared rays is longer than that of visible light, infrared rays diffract more strongly than visible light and can propagate around obstacles such as the suspended particles present in haze weather. Therefore, the problem of defogging images shot in severe weather such as haze can be solved by infrared imaging based on the monochrome image sensor.
In one embodiment, capturing the captured scene with the RGB camera module to obtain the second image includes:
receiving second target light rays reflected by a shooting scene, which are filtered by a lens and an optical filter in the RGB camera module in sequence, wherein the second target light rays comprise visible light;
and performing photoelectric conversion on the second target light through an image sensor in the RGB camera module to generate a second image, wherein the second image is an RGB image.
Specifically, the lens and the optical filter in the RGB camera module of the electronic device pass the three primary colors red, green and blue. The RGB camera module therefore receives the second target light obtained after the light reflected by the shooting scene passes through the lens and the optical filter in the RGB camera module in sequence. The second target light comprises visible light, in particular the three colors red, green and blue.
Then, the second target light is photoelectrically converted by the image sensor in the RGB camera module to generate a second image. Since the second target light includes the three colors red, green and blue, the image produced by the image sensor is an RGB image. RGB images carry rich color information.
In the embodiment of the application, the second target light obtained after the light reflected by the shooting scene is filtered by the lens and the optical filter in the RGB camera module in sequence is received, and the second target light comprises visible light. The second target light is photoelectrically converted by the image sensor in the RGB camera module to generate a second image, which is an RGB image. The RGB image is then fused with the monochrome image obtained by the monochrome camera module to obtain the target image. Because the RGB image carries rich color information and the monochrome image generated from infrared light achieves the defogging effect, fusing the two into the target image restores the color information of the shooting scene while defogging it, so that the image is clearer.
In one embodiment, image fusion of the first image and the second image to obtain the target image includes:
carrying out image alignment on the first image and the second image to generate an aligned image;
and carrying out image fusion based on the aligned images to obtain a target image.
In the embodiment of the present application, by performing image registration (also referred to as alignment) on the first image and the second image, the pixel positions of the same object in different image frames can be aligned, so that content that is misaligned across frames becomes aligned. Image fusion is then performed based on the aligned images to obtain the target image. Image registration avoids the blurring of the fused target image that would otherwise be caused by misaligned content in different image frames.
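The application does not mandate a specific registration algorithm. The sketch below uses OpenCV's intensity-based ECC method with a euclidean motion model as one possible way to align the monochrome frame to the RGB frame; the helper name and the motion model are assumptions. The aligned monochrome frame can then be passed to a fusion step such as the one sketched earlier.

```python
import cv2
import numpy as np

def align_mono_to_rgb(mono, rgb):
    """Warp the monochrome frame onto the RGB frame (one possible approach:
    intensity-based ECC registration with a euclidean motion model)."""
    rgb_gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    if mono.shape[:2] != rgb_gray.shape[:2]:
        # The monochrome image may have a lower resolution (see below);
        # bring it onto the RGB frame's pixel grid first.
        mono = cv2.resize(mono, (rgb_gray.shape[1], rgb_gray.shape[0]))

    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    # findTransformECC maximises the correlation between the two frames.
    _, warp = cv2.findTransformECC(rgb_gray, mono, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria)

    h, w = rgb_gray.shape
    return cv2.warpAffine(mono, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```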
In one embodiment, the resolution of the first image is less than or equal to the resolution of the second image.
Display resolution, also called pixel resolution and referred to simply as resolution, is the number of pixels a display can show, usually expressed as the number of pixels per row multiplied by the number of rows. For example, 1024 × 768 means the display shows 768 rows of 1024 pixels each, 786432 pixels in total; 640 × 480 means 480 rows of 640 pixels each, 307200 pixels in total.
Since the second image captured by the RGB camera module mainly serves as the color reference, while the first image captured by the monochrome camera module mainly serves to achieve the defogging effect, the resolution of the first image may be set to be less than or equal to the resolution of the second image. For example, the RGB camera module may use a 12-megapixel camera while the monochrome camera module uses a 5-megapixel camera.
In the embodiment of the present application, since the second image captured by the RGB camera module mainly serves as the reference and the first image captured by the monochrome camera module mainly serves to achieve the defogging effect, the resolution of the first image may be set to be less than or equal to the resolution of the second image. This reduces production cost while still achieving defogging.
In one embodiment, with reference to fig. 1, an electronic device is provided, which includes at least one RGB camera module and at least one monochrome camera module;
the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light;
and the monochromatic image sensor is used for receiving the first target light rays after the light rays reflected by the shooting scene are filtered by the lens and the optical filter in sequence and imaging the first target light rays.
Specifically, the electronic device comprises at least one RGB camera module and at least one monochrome camera module. For example, the electronic device includes three RGB camera modules and one monochrome camera module, where the three RGB camera modules include a main camera module, a wide-angle camera module and a telephoto camera module. Of course, the present application is not limited to this.
Referring to fig. 2, the monochrome camera module comprises, from top to bottom, a lens, an optical filter and a monochrome image sensor. The transmittance of the lens to infrared rays is higher than its transmittance to visible light, and the transmittance of the optical filter to infrared rays is higher than its transmittance to visible light. The light reflected by the shooting scene is filtered by the lens and the optical filter in sequence to obtain the first target light, which contains a large amount of infrared light and almost no visible light. This achieves the effect of filtering out visible light while retaining infrared light, and defogging is realized through the infrared light.
In the embodiment of the application, the monochrome camera module and the RGB camera module image separately, so no single camera module needs to be excessively large and the optical filters do not need to be switched frequently. This solves the problems in the traditional defogging method that the camera module is bulky and its mechanical structure is easily damaged.
In one embodiment, the filter in the monochrome camera module is a high-pass filter or a band-pass filter.
In the embodiments of the present application, a high-pass filter refers to a filter that passes light whose wavelength is greater than or equal to a certain threshold, and a band-pass filter refers to a filter that passes light whose wavelength lies within a certain range.
In one embodiment, the high pass filter can pass light having a wavelength greater than or equal to a first wavelength threshold, which is greater than or equal to 700 nm.
In the present embodiment, the wavelength of infrared rays is 760 nm (nanometers) to 1 mm (millimeter). The high-pass filter is arranged to pass light having a wavelength greater than or equal to a first wavelength threshold, where the first wavelength threshold is greater than or equal to 700 nm. For example, if the first wavelength threshold is 750 nm, the high-pass filter passes light with a wavelength greater than or equal to 750 nm. The high-pass filter can thus filter out visible light and retain infrared rays.
In one embodiment, the wavelength of the light passed by the band-pass filter is between the second wavelength threshold and the third wavelength threshold; the second wavelength threshold is greater than or equal to 750nm, the third wavelength threshold is less than or equal to 1000nm, and the second wavelength threshold is less than the third wavelength threshold.
In the embodiment of the present application, the wavelength of the light that can pass through the bandpass filter is between the second wavelength threshold and the third wavelength threshold. And setting the second wavelength threshold to be greater than or equal to 750nm, the third wavelength threshold to be less than or equal to 1000nm, and the second wavelength threshold to be less than the third wavelength threshold. The wavelength of the light that can pass through the band pass filter can be set to be between 750nm and 950nm, which is not limited in this application. Thus, the band-pass filter can also filter visible light and retain infrared rays.
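A trivial sketch of these two passband conditions, using the example thresholds mentioned above, is given below; the concrete numbers are illustrative and not tied to any particular filter component.

```python
def passes_monochrome_filter(wavelength_nm: float,
                             kind: str = "high_pass",
                             first_threshold_nm: float = 750.0,
                             band_nm: tuple = (750.0, 950.0)) -> bool:
    """Check whether light of a given wavelength passes the filter, using
    example thresholds from the description above (illustrative values only)."""
    if kind == "high_pass":
        return wavelength_nm >= first_threshold_nm
    if kind == "band_pass":
        low, high = band_nm
        return low <= wavelength_nm <= high
    raise ValueError(f"unknown filter kind: {kind}")

# Visible green light (~550 nm) is blocked, near infrared (~850 nm) passes.
assert not passes_monochrome_filter(550.0)
assert passes_monochrome_filter(850.0, kind="band_pass")
```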
In a specific embodiment, an image processing method is provided, which is applied to an electronic device including at least one RGB camera module and at least one monochrome camera module, and includes:
the method comprises the following steps that firstly, electronic equipment receives first target light rays reflected by a shooting scene, wherein the first target light rays are filtered by a lens and an optical filter in sequence, and the first target light rays comprise infrared rays;
and secondly, performing photoelectric conversion on the first target light through a monochromatic image sensor to generate a first image, wherein the first image is a monochromatic image. The monochrome image is shown in fig. 7A, and as can be seen from fig. 7A, the texture and the hierarchy of the green plant are enhanced in the monochrome image, and the details of the building at a distance are also enhanced.
Step three: the electronic device receives the second target light obtained after the light reflected by the shooting scene is filtered by the lens and the optical filter in the RGB camera module in sequence, the second target light comprising visible light;
and fourthly, performing photoelectric conversion on the second target light through an image sensor in the RGB camera module to generate a second image, wherein the second image is an RGB image. Among them, the RGB image has rich color information as shown in fig. 7B.
Step five: the first image and the second image are aligned to generate an aligned image;
and step six, carrying out image fusion based on the aligned images to obtain a target image. As shown in fig. 8, it can be seen from fig. 8 that the target image not only retains rich color information, but also enhances the texture and hierarchy of green plants, and also enhances the details of buildings at a distance. The whole target image is clearer and the defogging effect is achieved compared with the RGB image shown in FIG. 7B. For a specific comparison process, reference may be made to fig. 9, where the left diagram of fig. 9 is a reduced diagram of fig. 7B, and the right diagram of fig. 9 is a reduced diagram of fig. 8.
In the embodiment of the application, the first target light obtained after the light reflected by the shooting scene is filtered by the lens and the optical filter in sequence is received, and the first target light comprises infrared rays. The first target light is photoelectrically converted by the monochrome image sensor to generate the first image, which is a monochrome image. Since the wavelength of infrared rays is longer than that of visible light, infrared rays diffract more strongly than visible light and can propagate around obstacles such as the suspended particles present in haze weather. Therefore, the problem of defogging images shot in severe weather such as haze can be solved by infrared imaging based on the monochrome image sensor. The RGB image, in turn, carries rich color information. Therefore, fusing the monochrome image with the RGB image shot by the RGB camera module restores the color information of the shooting scene while defogging it, so that the image is clearer.
In one embodiment, an image processing apparatus 1000 is provided, which is applied to an electronic device comprising at least one RGB camera module and at least one monochrome camera module, wherein the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light; and the monochrome image sensor is used for receiving the first target light obtained after the light reflected by a shooting scene is filtered by the lens and the optical filter in sequence and imaging the first target light. The image processing apparatus 1000 comprises:
a first image generation module 1020, configured to capture a capture scene through a monochrome camera module to obtain a first image;
the second image generation module 1040 is configured to capture a capture scene through the RGB camera module to obtain a second image;
the image fusion module 1060 is configured to perform image fusion on the first image and the second image to obtain a target image.
In one embodiment, the first image generating module 1020 is further configured to receive a first target light ray after the light ray reflected by the shooting scene is filtered by a lens and an optical filter in sequence, where the first target light ray includes infrared rays; the first target light is photoelectrically converted by the monochrome image sensor to generate a first image, which is a monochrome image.
In an embodiment, the second image generating module 1040 is further configured to receive a second target light ray after the light ray reflected by the shooting scene is sequentially filtered by a lens and an optical filter in the RGB camera module, where the second target light ray includes visible light; and performing photoelectric conversion on the second target light through an image sensor in the RGB camera module to generate a second image, wherein the second image is an RGB image.
In one embodiment, the image fusion module 1060 is further configured to image-align the first image with the second image, and generate an aligned image; and carrying out image fusion based on the aligned images to obtain a target image.
In one embodiment, the resolution of the first image is less than or equal to the resolution of the second image.
It should be understood that, although the steps in the flowcharts of the above figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the above figures may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not repeated here. Each module in the image processing apparatus described above may be implemented wholly or partially by software, hardware, or a combination of the two. The modules may be embedded in or independent of a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and perform the operations corresponding to the modules.
In one embodiment, an electronic device is further provided, which includes a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to perform the steps of one of the image processing methods provided in the above embodiments.
Fig. 11 is a schematic diagram of an internal structure of an electronic device in one embodiment. As shown in fig. 11, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities and supports the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided in the above embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a point-of-sale (POS) terminal, a vehicle-mounted computer or a wearable device.
Each module in the image processing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on the electronic device, and the program modules constituting it may be stored in the memory of the electronic device. When executed by a processor, the computer program implements the steps of the methods described in the embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform an image processing method.
Any reference to memory, storage, database, or other medium used by embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (12)

1. The image processing method is characterized by being applied to electronic equipment comprising at least one RGB camera module and at least one monochrome camera module, wherein the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light; the monochromatic image sensor is used for receiving first target light rays reflected by a shooting scene and filtered by the lens and the optical filter in sequence, and imaging the first target light rays; the method comprises the following steps:
shooting the shooting scene through the monochrome camera module to obtain a first image;
shooting the shooting scene through the RGB camera module to obtain a second image;
and carrying out image fusion on the first image and the second image to obtain a target image.
2. The method of claim 1, wherein capturing the captured scene with the monochrome camera module results in a first image comprising:
receiving first target light rays reflected by the shooting scene and filtered by the lens and the optical filter in sequence, wherein the first target light rays comprise infrared rays;
and performing photoelectric conversion on the first target light through the monochromatic image sensor to generate the first image, wherein the first image is a monochromatic image.
3. The method of claim 2, wherein capturing the captured scene with the RGB camera module results in a second image comprising:
receiving second target light rays reflected by the shooting scene and filtered by a lens and an optical filter in the RGB camera module in sequence, wherein the second target light rays comprise visible light;
and performing photoelectric conversion on the second target light through an image sensor in the RGB camera module to generate a second image, wherein the second image is an RGB image.
4. The method of claim 3, wherein the image fusing the first image and the second image to obtain a target image comprises:
carrying out image alignment on the first image and the second image to generate an aligned image;
and carrying out image fusion based on the aligned images to obtain a target image.
5. The method of any of claims 1 to 4, wherein the resolution of the first image is less than or equal to the resolution of the second image.
6. An electronic device is characterized by comprising at least one RGB camera module and at least one monochrome camera module;
the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light;
the monochrome image sensor is used for receiving first target light rays reflected by a shooting scene after the light rays sequentially pass through the lens and the optical filter, and imaging the first target light rays.
7. The electronic device of claim 6, wherein the filter in the monochrome camera module is a high-pass filter or a band-pass filter.
8. The electronic device of claim 7, wherein the high pass filter passes light having a wavelength greater than or equal to a first wavelength threshold, the first wavelength threshold being greater than or equal to 700 nm.
9. The electronic device of claim 7, wherein the bandpass filter is capable of passing light having a wavelength between a second wavelength threshold and a third wavelength threshold; the second wavelength threshold is greater than or equal to 750nm, the third wavelength threshold is less than or equal to 950nm, and the second wavelength threshold is less than the third wavelength threshold.
10. An image processing device is characterized in that the image processing device is applied to electronic equipment comprising at least one RGB camera module and at least one monochrome camera module, wherein the monochrome camera module comprises a lens, an optical filter and a monochrome image sensor; the transmittance of the lens to infrared rays is greater than that to visible light, and the transmittance of the optical filter to infrared rays is greater than that to visible light; the monochromatic image sensor is used for receiving first target light rays reflected by a shooting scene and filtered by the lens and the optical filter in sequence, and imaging the first target light rays; the device comprises:
the first image generation module is used for shooting the shooting scene through the monochrome camera module to obtain a first image;
the second image generation module is used for shooting the shooting scene through the RGB camera module to obtain a second image;
and the image fusion module is used for carrying out image fusion on the first image and the second image to obtain a target image.
11. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the image processing method according to any of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 5.
CN202011263594.7A 2020-11-12 2020-11-12 Image processing method and device, electronic equipment and readable storage medium Pending CN112449095A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011263594.7A CN112449095A (en) 2020-11-12 2020-11-12 Image processing method and device, electronic equipment and readable storage medium
PCT/CN2021/117721 WO2022100256A1 (en) 2020-11-12 2021-09-10 Image processing method and apparatus, electronic device, readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011263594.7A CN112449095A (en) 2020-11-12 2020-11-12 Image processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN112449095A true CN112449095A (en) 2021-03-05

Family

ID=74737030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011263594.7A Pending CN112449095A (en) 2020-11-12 2020-11-12 Image processing method and device, electronic equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN112449095A (en)
WO (1) WO2022100256A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177905A (en) * 2021-05-21 2021-07-27 浙江大华技术股份有限公司 Image acquisition method, device, equipment and medium
CN113840065A (en) * 2021-08-26 2021-12-24 深圳市锐尔觅移动通信有限公司 Camera module and mobile terminal
WO2022100256A1 (en) * 2020-11-12 2022-05-19 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510007A (en) * 2009-03-20 2009-08-19 北京科技大学 Real time shooting and self-adapting fusing device for infrared light image and visible light image
CN102334141A (en) * 2010-04-23 2012-01-25 前视红外系统股份公司 Infrared resolution and contrast enhancement with fusion
CN104995910A (en) * 2012-12-21 2015-10-21 菲力尔系统公司 Infrared imaging enhancement with fusion
US20160073043A1 (en) * 2014-06-20 2016-03-10 Rambus Inc. Systems and Methods for Enhanced Infrared Imaging
CN107563971A (en) * 2017-08-12 2018-01-09 四川精视科技有限公司 A kind of very color high-definition night-viewing imaging method
CN107580163A (en) * 2017-08-12 2018-01-12 四川精视科技有限公司 A kind of twin-lens black light camera
US9894298B1 (en) * 2014-09-26 2018-02-13 Amazon Technologies, Inc. Low light image processing
CN111327800A (en) * 2020-01-08 2020-06-23 深圳深知未来智能有限公司 All-weather vehicle-mounted vision system and method suitable for complex illumination environment
US20200275021A1 (en) * 2018-08-27 2020-08-27 SZ DJI Technology Co., Ltd. Image processing and presentation
CN111741281A (en) * 2020-06-30 2020-10-02 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160295133A1 (en) * 2015-04-06 2016-10-06 Heptagon Micro Optics Pte. Ltd. Cameras having a rgb-ir channel
CN107534733B (en) * 2015-04-23 2019-12-20 富士胶片株式会社 Image pickup apparatus, image processing method of the same, and medium
CN206595991U (en) * 2017-03-29 2017-10-27 厦门美图移动科技有限公司 A kind of double-camera mobile terminal
CN207283655U (en) * 2017-08-12 2018-04-27 四川精视科技有限公司 A kind of twin-lens black light camera
CN109005343A (en) * 2018-08-06 2018-12-14 Oppo广东移动通信有限公司 Control method, device, imaging device, electronic equipment and readable storage medium storing program for executing
CN209390111U (en) * 2018-11-05 2019-09-13 华为技术有限公司 A kind of video camera and electronic equipment
CN112449095A (en) * 2020-11-12 2021-03-05 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2022100256A1 (en) 2022-05-19

Similar Documents

Publication Publication Date Title
WO2022100256A1 (en) Image processing method and apparatus, electronic device, readable storage medium
EP2380345B1 (en) Improving the depth of field in an imaging system
CN108712608B (en) Terminal equipment shooting method and device
WO2021000592A1 (en) Image capturing device and method
US20130021504A1 (en) Multiple image processing
CN107592473A (en) Exposure parameter method of adjustment, device, electronic equipment and readable storage medium storing program for executing
WO2021022592A1 (en) Imaging compensation apparatus, imaging compensation method and application thereof
CN112866549B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN102783135A (en) Method and apparatus for providing a high resolution image using low resolution
CN103546730A (en) Method for enhancing light sensitivities of images on basis of multiple cameras
CN114928688B (en) Array camera module, electronic equipment with array camera module and image processing method
CN112991245B (en) Dual-shot blurring processing method, device, electronic equipment and readable storage medium
Wang et al. Stereoscopic dark flash for low-light photography
JP5708036B2 (en) Imaging device
US9860456B1 (en) Bayer-clear image fusion for dual camera
Cheng et al. A mutually boosting dual sensor computational camera for high quality dark videography
US20240054613A1 (en) Image processing method, imaging processing apparatus, electronic device, and storage medium
CN116582759A (en) Image processing device, method, electronic device, and medium
CN112995529A (en) Imaging method and device based on optical flow prediction
WO2022160995A1 (en) Image sensor, camera, electronic device, and control method
US10616493B2 (en) Multi camera system for zoom
CN109218604A (en) Image capture unit, image brilliance modulating method and image processor
CN112104796B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109447925B (en) Image processing method and device, storage medium and electronic equipment
CN216356939U (en) Image capturing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210305