WO2023130922A1 - Image processing method and electronic device - Google Patents


Info

Publication number
WO2023130922A1
WO2023130922A1 · PCT/CN2022/138808 · CN2022138808W
Authority
WO
WIPO (PCT)
Prior art keywords
image
frames
images
camera module
processing
Prior art date
Application number
PCT/CN2022/138808
Other languages
French (fr)
Chinese (zh)
Inventor
Xiao Bin
Qiao Xiaolei
Zhu Congchao
Wang Yu
Shao Tao
Original Assignee
Honor Device Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd.
Publication of WO2023130922A1 publication Critical patent/WO2023130922A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/32Transforming X-rays
    • H04N5/321Transforming X-rays with video transmission of fluoroscopic images
    • H04N5/325Image enhancement, e.g. by subtraction techniques using polyenergetic X-rays

Definitions

  • the present application relates to the field of image processing, and in particular, relates to an image processing method and electronic equipment.
  • Image enhancement processing is a method for enhancing the useful information in an image and improving the visual effect of the image.
  • the present application provides an image processing method and electronic equipment, which can perform image enhancement on images acquired by a camera module of a main camera to improve image quality.
  • an image processing method which is applied to an electronic device, and the electronic device includes a first camera module and a second camera module, and the second camera module is a near-infrared camera module or an infrared camera module, the image processing method includes:
  • the first interface includes a first control
  • N frames of first images and M frames of second images are acquired, where the first images are images collected by the first camera module and the second images are images collected by the second camera module; N and M are both positive integers greater than or equal to 1;
  • the target image is obtained based on the N frames of the first image and the M frames of the second image, including:
  • the image quality of the N frames of the third image is higher than the image quality of the N frames of the first image
  • the image quality of the fourth image of the M frames is higher than the image quality of the second image of the M frames
  • the N frames of third images and the M frames of fourth images are fused based on a semantically segmented image to obtain a fused image, where the semantically segmented image is obtained based on any frame image in the N frames of first images or the N frames of third images; the detail information of the fused image is better than the detail information of the N frames of first images;
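A minimal sketch of the claimed flow in Python with NumPy: the function name `fuse_frames` and the placeholder `denoise` step are illustrative assumptions, standing in for the first/second image processing and for the segmentation-guided fusion described above.

```python
import numpy as np

def fuse_frames(first_frames, second_frames, denoise=None):
    """Hypothetical sketch: N visible-light frames and M near-infrared
    frames are each processed, then combined into one target image."""
    denoise = denoise or (lambda f: f)  # stand-in for first/second image processing
    third = [denoise(f) for f in first_frames]    # N frames of "third images"
    fourth = [denoise(f) for f in second_frames]  # M frames of "fourth images"
    # Average each stack, then blend visible and NIR information equally
    # (a real pipeline would fuse per-region, guided by a semantic mask).
    vis = np.mean(third, axis=0)
    nir = np.mean(fourth, axis=0)
    return 0.5 * vis + 0.5 * nir
```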
  • the first camera module may be a visible light camera module, or the first camera module may be other camera modules capable of obtaining visible light; this application does not make any limitation on the first camera module.
  • the first camera module may include an optical lens, a first lens and an image sensor, and the spectral range that the first lens can pass is visible light (400nm-700nm).
  • the first lens may refer to a filter lens; the first lens may be used to absorb light of certain specific wavelength bands and allow light of visible light bands to pass through.
  • the second camera module may include an optical lens, a second lens and an image sensor, and the spectral range that the second lens can pass is near-infrared light (700nm-1100nm).
  • the second lens may refer to a filter lens; the second lens may be used to absorb light of certain specific wavelength bands and allow light of near-infrared wavelength bands to pass through.
  • the electronic device may include a first camera module and a second camera module, where the second camera module is a near-infrared camera module or an infrared camera module (for example, the acquired spectral range is 700nm-1100nm); the first image is collected by the first camera module, and the second image is collected by the second camera module. Because some of the image information included in the second image (for example, a near-infrared image) cannot be obtained from the first image (for example, a visible light image), and similarly some of the image information included in the third image cannot be obtained from the fourth image, performing fusion processing on the third image (for example, a visible light image) and the fourth image (for example, a near-infrared image) realizes multi-spectral fusion of near-infrared image information and visible light image information, so that the fused image includes more detail information. Therefore, the image processing method provided by the embodiments of the present application can perform image enhancement on the image acquired by the camera module of the main camera to improve image quality.
  • the image quality of the N frames of third images being higher than that of the N frames of first images may mean that the noise in the N frames of third images is less than the noise in the N frames of first images; or, an image quality evaluation algorithm evaluates the N frames of third images and the N frames of first images, and the evaluation result is that the image quality of the N frames of third images is higher; this is not limited in this application.
  • the image quality of the M frames of fourth images being higher than that of the M frames of second images may mean that the noise in the M frames of fourth images is less than the noise in the M frames of second images; or, an image quality evaluation algorithm evaluates the M frames of fourth images and the M frames of second images, and the evaluation result is that the image quality of the M frames of fourth images is higher; this is not limited in this application.
  • the detail information of the fused image being better than that of the N frames of first images may mean that the fused image contains more detail information than any one first image in the N frames of first images; or it may mean that the definition of the fused image is better than that of any first image in the N frames of first images.
  • the detail information may include edge information, texture information, etc. of the object to be photographed.
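As one concrete (assumed) way to compare detail information such as edge and texture content, the variance of a Laplacian response is a common sharpness proxy; `detail_score` below is an illustrative sketch, not a method claimed in this application.

```python
import numpy as np

def detail_score(img):
    """Variance of a 4-neighbour Laplacian over the image interior:
    higher values indicate more edge/texture detail."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())
```

Comparing two candidate frames with this score is one way to realize the "evaluation algorithm" mentioned above.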
  • the third image of N frames and the fourth image of M frames can be fused based on the semantically segmented image to obtain a fused image;
  • the fused local image information can be determined by introducing the semantically segmented image into the fusion process
  • the local image information in the third image of N frames and the fourth image of M frames can be selected through the semantic segmentation image for fusion processing, so that the local detail information of the fused image can be increased.
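A toy sketch of segmentation-guided fusion: a binary semantic mask selects local regions from the NIR-derived frame and keeps the visible-light frame elsewhere. The function name and the hard 0/1 mask are simplifying assumptions; a real pipeline would blend per-region weights.

```python
import numpy as np

def masked_fusion(visible, nir, mask):
    """Where the semantic mask marks a region (e.g. green vegetation or a
    distant area), take the NIR frame's content; elsewhere keep the
    visible-light frame."""
    mask = mask.astype(visible.dtype)
    return mask * nir + (1.0 - mask) * visible
```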
  • the multi-spectral fusion of near-infrared image information and visible light image information can be realized, so that the fused image includes more detail information; this enhances the detail information in the image.
  • the M frames of fourth images are obtained by performing the second image processing on the second images collected by the near-infrared or infrared camera module; therefore, the M frames of fourth images include the near-infrared reflection information of the photographed object. Because near-infrared light is strongly reflected by green scenery, the near-infrared or infrared camera module captures more detail of green scenery; the green-scenery image regions can be selected from the fourth image through the semantically segmented image and fused, so that the detail of green scenery in both the dark and bright areas of the image is enhanced.
  • the M frames of fourth images are obtained by performing the second image processing on the second images collected by the near-infrared or infrared camera module. Because the spectral range that the near-infrared or infrared camera module can acquire is near-infrared light, whose wavelength is longer, near-infrared light has a stronger diffraction ability; for scenes with cloud or fog, or scenes containing distant objects, the image collected by the near-infrared or infrared camera module has a stronger sense of transparency, that is, it includes more detail information of distant objects (for example, the texture information of distant mountains). A distant image region can be selected from the fourth image through the semantically segmented image and fused with a nearby image region selected from the third image, so as to enhance the detail information in the fused image.
  • performing the second image processing on the M frames of the second image to obtain the M frames of the fourth image includes:
  • the global registration processing may refer to taking the first frame of third image as a reference, and mapping each fourth image in the M frames of fourth images, as a whole, onto that reference frame.
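One simple way to realize a whole-image (global) registration of this kind is phase correlation, which estimates a single translation between a frame and the reference; the sketch below is an assumed illustration (real modules typically also handle rotation and scale, e.g. via a homography).

```python
import numpy as np

def global_shift(reference, moving):
    """Estimate a global translation of `moving` relative to `reference`
    by phase correlation on the 2-D FFT."""
    cross = np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative displacements.
    h, w = corr.shape
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)

def register(reference, moving):
    """Map the whole moving frame onto the reference by undoing the shift."""
    dy, dx = global_shift(reference, moving)
    return np.roll(moving, shift=(dy, dx), axis=(0, 1))
```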
  • black level correction
  • the black level refers to the video signal level when no light output is produced on a calibrated display device.
  • Defect pixel correction can include phase defect correction (PDC) and bad pixel correction (BPC). The bad pixels handled by BPC are randomly positioned bright or dark points; their number is relatively small, and BPC can be realized by a filtering algorithm. Phase points, in contrast, are defects at fixed positions and are relatively numerous; PDC removes phase points using a known phase point list.
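The PDC/BPC description above can be sketched as follows; the 3x3 median filter and the deviation threshold of 64 are assumptions for illustration, not values from this application.

```python
import numpy as np

def correct_defects(raw, phase_points):
    """Sketch: PDC replaces listed phase points; BPC finds random outliers
    by comparing each pixel with the median of its 3x3 neighbourhood."""
    out = raw.astype(float).copy()
    pad = np.pad(out, 1, mode='edge')
    # Median of the 3x3 neighbourhood for every pixel.
    stack = [pad[dy:dy + out.shape[0], dx:dx + out.shape[1]]
             for dy in range(3) for dx in range(3)]
    med = np.median(stack, axis=0)
    # PDC: phase points sit at known, fixed positions.
    for (y, x) in phase_points:
        out[y, x] = med[y, x]
    # BPC: random bright/dark points deviate strongly from the local median.
    bad = np.abs(out - med) > 64
    out[bad] = med[bad]
    return out
```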
  • the second image processing may also include but not limited to:
  • Lens shading correction is used to eliminate the inconsistency in color and brightness between the periphery of the image and the center of the image caused by the lens optical system.
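A toy lens shading correction: apply a radial gain that increases toward the image periphery. The quadratic gain curve here is an assumed stand-in for a per-module calibration table.

```python
import numpy as np

def lens_shading_correct(img):
    """Compensate radial brightness falloff with a gain that grows with
    distance from the optical centre (simple assumed model)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 at centre, 1 at corner
    gain = 1.0 + 0.5 * r ** 2  # hypothetical calibration curve
    return img * gain
```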
  • the second image processing may include black level correction, defect pixel correction, and other Raw domain image processing algorithms; the examples above illustrate the other Raw domain image processing algorithms using automatic white balance processing and lens shading correction, and this application does not place any limitation on the other Raw domain image processing algorithms.
  • performing the first registration processing on the M frames of fifth images based on any one third image of the N frames of third images to obtain the M frames of fourth images includes:
  • the resolution of the fourth image can be adjusted to be the same as that of the third image; thus, it is convenient to perform fusion processing on N frames of the third image and M frames of the fourth image.
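Matching the fourth image's resolution to the third image's before fusion can be sketched with a nearest-neighbour resize; a real pipeline would typically use bilinear or bicubic interpolation.

```python
import numpy as np

def match_resolution(img, target_shape):
    """Nearest-neighbour resize so one frame matches another frame's
    resolution before fusion processing."""
    h, w = img.shape
    th, tw = target_shape
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    return img[np.ix_(rows, cols)]
```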
  • performing the second image processing on the M frames of the second image to obtain the M frames of the fourth image includes:
  • performing the first registration processing on the M frames of fifth images based on any one frame of third image in the N frames of third images to obtain the M frames of first registration images includes:
  • the first registration processing is global registration processing.
  • the second registration processing is local registration processing.
  • performing first image processing on the N frames of first images to obtain N frames of third images includes:
  • black level correction
  • the black level refers to the video signal level when no light output is produced on a calibrated display device.
  • Defect pixel correction can include phase defect correction (PDC) and bad pixel correction (BPC). The bad pixels handled by BPC are randomly positioned bright or dark points; their number is relatively small, and BPC can be realized by a filtering algorithm. Phase points, in contrast, are defects at fixed positions and are relatively numerous; PDC removes phase points using a known phase point list.
  • the first image processing may also include but not limited to:
  • Lens shading correction is used to eliminate the inconsistency in color and brightness between the periphery of the image and the center of the image caused by the lens optical system.
  • the first image processing may include black level correction, defect pixel correction, and other Raw domain image processing algorithms; the examples above illustrate the other Raw domain image processing algorithms using automatic white balance processing and lens shading correction, and this application does not place any limitation on the other Raw domain image processing algorithms.
  • the electronic device further includes an infrared flash lamp
  • the image processing method further includes:
  • the dark light scene refers to a shooting scene in which the amount of light entering the electronic device is less than a preset threshold
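The dark-light-scene test above can be sketched as a mean-luminance threshold on a preview frame; the threshold value 40 is an arbitrary assumption, standing in for the preset threshold mentioned in the text.

```python
import numpy as np

def is_dark_scene(frame, threshold=40):
    """Call the scene 'dark' when the mean luminance of a preview frame
    falls below a preset threshold (value assumed for illustration)."""
    return float(frame.mean()) < threshold
```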
  • acquiring N frames of the first image and M frames of the second image includes:
  • the N frames of the first image and the M frames of the second image are acquired.
  • the first interface includes a second control; the turning on the infrared flash in a dark scene includes:
  • the infrared flashlight is turned on in response to the second operation.
  • in a dark light scene, the infrared flashlight in the electronic device can be turned on. Because the electronic device includes the first camera module and the second camera module, when the infrared flashlight is turned on, the light reflected by the photographed object increases, so the amount of light entering the second camera module increases; this in turn increases the detail information of the second image acquired through the second camera module. With the image processing method of the embodiments of this application, fusion processing can be performed on the images collected by the first camera module and the second camera module, improving the detail information in the image acquired by the camera module of the main camera. Moreover, the infrared flash is imperceptible to the user, so the detail information in the image is improved without the user's perception.
  • performing fusion processing on the N frames of the third image and the M frames of the fourth image based on the semantically segmented image to obtain a fusion image including:
  • the third image of N frames and the fourth image of M frames are fused by an image processing model to obtain the fused image, and the image processing model is a pre-trained neural network.
  • the first interface refers to a photographing interface
  • the first control refers to a control for instructing photographing
  • the first operation may refer to a click operation on a control indicating to take a photo in the photo taking interface.
  • the first interface refers to a video recording interface
  • the first control refers to a control for instructing video recording.
  • the first operation may refer to a click operation on a control indicating to record a video in the video recording interface.
  • the first interface refers to a video call interface
  • the first control refers to a control for instructing a video call.
  • the first operation may refer to a click operation on a control indicating a video call in the video call interface.
  • the first operation may also include a voice instruction operation, or other operations instructing the electronic device to take a photo or make a video call; this application does not make any restrictions on this.
  • In a second aspect, an electronic device is provided, including one or more processors, a memory, a first camera module and a second camera module; the second camera module is a near-infrared camera module or an infrared camera module; the memory is coupled with the one or more processors and is used to store computer program code, the computer program code includes computer instructions, and the one or more processors call the computer instructions to cause the electronic device to perform:
  • the first interface includes a first control
  • N frames of first images and M frames of second images are acquired, where the first images are images collected by the first camera module and the second images are images collected by the second camera module; N and M are both positive integers greater than or equal to 1;
  • the obtaining the target image based on the N frames of the first image and the M frames of the second image includes:
  • the image quality of the N frames of the third image is higher than the image quality of the N frames of the first image
  • the N frames of third images and the M frames of fourth images are fused based on a semantically segmented image to obtain a fused image, where the semantically segmented image is obtained based on any frame image in the N frames of first images or the N frames of third images; the detail information of the fused image is better than the detail information of the N frames of first images;
  • the first camera module may be a visible light camera module, or the first camera module may be other camera modules capable of obtaining visible light; this application does not make any limitation on the first camera module.
  • the first camera module may include an optical lens, a first lens and an image sensor, and the spectral range that the first lens can pass includes visible light (400nm-700nm).
  • the first lens may refer to a filter lens; the first lens may be used to absorb light of certain specific wavelength bands and allow light of visible light bands to pass through.
  • the second camera module may include an optical lens, a second lens and an image sensor, and the spectral range that the second lens can pass is near-infrared light (700nm-1100nm).
  • the second lens may refer to a filter lens; the second lens may be used to absorb light of certain specific wavelength bands and allow light of near-infrared wavelength bands to pass through.
  • the electronic device may include a first camera module and a second camera module, where the second camera module is a near-infrared camera module or an infrared camera module (for example, the acquired spectral range is 700nm-1100nm); the first image is collected by the first camera module, and the second image is collected by the second camera module. Because some of the image information included in the second image (for example, a near-infrared image) cannot be obtained from the first image (for example, a visible light image), and similarly some of the image information included in the third image cannot be obtained from the fourth image, performing fusion processing on the third image (for example, a visible light image) and the fourth image (for example, a near-infrared image) realizes multi-spectral fusion of near-infrared image information and visible light image information, so that the fused image includes more detail information. Therefore, the image processing method provided by the embodiments of the present application can perform image enhancement on the image acquired by the camera module of the main camera to improve image quality.
  • the image quality of the N frames of third images being higher than that of the N frames of first images may mean that the noise in the N frames of third images is less than the noise in the N frames of first images; or, an image quality evaluation algorithm evaluates the N frames of third images and the N frames of first images, and the evaluation result is that the image quality of the N frames of third images is higher; this is not limited in this application.
  • the image quality of the M frames of fourth images being higher than that of the M frames of second images may mean that the noise in the M frames of fourth images is less than the noise in the M frames of second images; or, an image quality evaluation algorithm evaluates the M frames of fourth images and the M frames of second images, and the evaluation result is that the image quality of the M frames of fourth images is higher; this is not limited in this application.
  • the detail information of the fused image being better than that of the N frames of first images may mean that the fused image contains more detail information than any one first image in the N frames of first images; or it may mean that the definition of the fused image is better than that of any first image in the N frames of first images.
  • the detail information may include edge information, texture information, etc. of the object to be photographed.
  • the third image of N frames and the fourth image of M frames can be fused based on the semantically segmented image to obtain a fused image;
  • the fused local image information can be determined by introducing the semantically segmented image into the fusion process
  • the local image information in the third image of N frames and the fourth image of M frames can be selected through the semantic segmentation image for fusion processing, so that the local detail information of the fused image can be increased.
  • the multi-spectral fusion of near-infrared image information and visible light image information can be realized, so that the fused image includes more detail information; this enhances the detail information in the image.
  • the M frames of fourth images are obtained by performing the second image processing on the second images collected by the near-infrared or infrared camera module; therefore, the M frames of fourth images include the near-infrared reflection information of the photographed object. Because near-infrared light is strongly reflected by green scenery, the near-infrared or infrared camera module captures more detail of green scenery; the green-scenery image regions can be selected from the fourth image through the semantically segmented image and fused, so that the detail of green scenery in both the dark and bright areas of the image is enhanced.
  • the M frames of fourth images are obtained by performing the second image processing on the second images collected by the near-infrared or infrared camera module. Because the spectral range that the near-infrared or infrared camera module can acquire is near-infrared light, whose wavelength is longer, near-infrared light has a stronger diffraction ability; for scenes with cloud or fog, or scenes containing distant objects, the image collected by the near-infrared or infrared camera module has a stronger sense of transparency, that is, it includes more detail information of distant objects (for example, the texture information of distant mountains). A distant image region can be selected from the fourth image through the semantically segmented image and fused with a nearby image region selected from the third image, so as to enhance the detail information in the fused image.
  • the one or more processors call the computer instructions so that the electronic device executes:
  • the global registration processing may refer to taking the first frame of third image as a reference, and mapping each fourth image in the M frames of fourth images, as a whole, onto that reference frame.
  • black level correction
  • the black level refers to the video signal level when no light output is produced on a calibrated display device.
  • Defect pixel correction can include phase defect correction (PDC) and bad pixel correction (BPC). The bad pixels handled by BPC are randomly positioned bright or dark points; their number is relatively small, and BPC can be realized by a filtering algorithm. Phase points, in contrast, are defects at fixed positions and are relatively numerous; PDC removes phase points using a known phase point list.
  • the second image processing may also include but not limited to:
  • Lens shading correction is used to eliminate the inconsistency in color and brightness between the periphery of the image and the center of the image caused by the lens optical system.
  • the second image processing may include black level correction, defect pixel correction, and other Raw domain image processing algorithms; the examples above illustrate the other Raw domain image processing algorithms using automatic white balance processing and lens shading correction, and this application does not place any limitation on the other Raw domain image processing algorithms.
  • the one or more processors call the computer instructions so that the electronic device executes:
  • the resolution of the fourth image can be adjusted to be the same as that of the third image; thus, it is convenient to perform fusion processing on N frames of the third image and M frames of the fourth image.
  • the one or more processors call the computer instructions so that the electronic device executes:
  • the one or more processors call the computer instructions so that the electronic device executes:
  • the first registration process is a global registration process.
  • the second registration processing is local registration processing.
  • the one or more processors call the computer instructions so that the electronic device executes:
  • black level correction
  • the black level refers to the video signal level when no light output is produced on a calibrated display device.
  • Defect pixel correction can include phase defect correction (PDC) and bad pixel correction (BPC). The bad pixels handled by BPC are randomly positioned bright or dark points; their number is relatively small, and BPC can be realized by a filtering algorithm. Phase points, in contrast, are defects at fixed positions and are relatively numerous; PDC removes phase points using a known phase point list.
  • the first image processing may also include but not limited to:
  • Lens shading correction is used to eliminate the inconsistency in color and brightness between the periphery of the image and the center of the image caused by the lens optical system.
  • the first image processing may include black level correction, defect pixel correction, and other Raw domain image processing algorithms; the examples above illustrate the other Raw domain image processing algorithms using automatic white balance processing and lens shading correction, and this application does not place any limitation on the other Raw domain image processing algorithms.
  • the electronic device includes an infrared flashlight, and the one or more processors call the computer instructions to make the electronic device execute:
  • the dark light scene refers to a shooting scene in which the amount of light entering the electronic device is less than a preset threshold
  • acquiring N frames of the first image and M frames of the second image includes:
  • the N frames of the first image and the M frames of the second image are acquired.
  • the first interface includes a second control
  • the one or more processors call the computer instructions to make the electronic device execute:
  • the infrared flashlight is turned on in response to the second operation.
  • in a dark light scene, the infrared flashlight in the electronic device can be turned on. Because the electronic device includes the first camera module and the second camera module, when the infrared flashlight is turned on, the light reflected by the photographed object increases, so the amount of light entering the second camera module increases; this in turn increases the detail information of the second image acquired through the second camera module. With the image processing method of the embodiments of this application, fusion processing can be performed on the images collected by the first camera module and the second camera module, improving the detail information in the image acquired by the camera module of the main camera. Moreover, the infrared flash is imperceptible to the user, so the detail information in the image is improved without the user's perception.
  • the one or more processors call the computer instructions so that the electronic device executes:
  • the third image of N frames and the fourth image of M frames are fused by an image processing model to obtain the fused image, and the image processing model is a pre-trained neural network.
  • the semantically segmented image is obtained by processing the third image in the first frame of the N frames of third images by using a semantic segmentation algorithm.
  • the first interface refers to a photographing interface
  • the first control refers to a control for instructing photographing
  • the first operation may refer to a click operation on a control indicating to take a photo in the photo taking interface.
  • the first interface refers to a video recording interface
  • the first control refers to a control for instructing video recording.
  • the first operation may refer to a click operation on a control indicating to record a video in the video recording interface.
  • the first interface refers to a video call interface
  • the first control refers to a control for instructing a video call.
  • the first operation may refer to a click operation on a control indicating a video call in the video call interface.
  • the first operation may also include a voice instruction operation, or other operations instructing the electronic device to take a photo or make a video call; this application does not make any restrictions on this.
  • an electronic device is provided, including a module/unit for executing any image processing method in the first aspect.
  • an electronic device is provided, including: one or more processors, a memory, a first camera module, and a second camera module; the memory is coupled to the one or more processors and is used to store computer program code, the computer program code includes computer instructions, and the one or more processors call the computer instructions to cause the electronic device to perform any image processing method in the first aspect.
  • a chip system is provided, the chip system is applied to an electronic device, and the chip system includes one or more processors, and the processor is used to call a computer instruction so that the electronic device executes the first aspect Or any image processing method in the first aspect.
  • a computer-readable storage medium stores computer program code, and when the computer program code is run by an electronic device, the electronic device executes the first aspect or the first Any image processing method in the aspect.
• a computer program product is provided, comprising computer program code; when the computer program code is run by an electronic device, the electronic device is made to execute the first aspect or any image processing method in the first aspect.
• the electronic device may include a first camera module and a second camera module, wherein the second camera module is a near-infrared camera module or an infrared camera module (for example, the acquired spectral range is 700nm~1100nm); the first image is collected by the first camera module, and the second image is collected by the second camera module; the image information included in the second image (for example, a near-infrared light image) cannot be obtained from the first image (for example, a visible light image); similarly, the image information included in the third image cannot be obtained from the fourth image; therefore, by fusing the third image (for example, a visible light image) with the fourth image (for example, a near-infrared light image), multi-spectral fusion of near-infrared light image information and visible light image information can be realized, so that the fused image includes more detailed information.
• therefore, the image processing method provided by the embodiment of the present application can perform image enhancement on the image acquired by the main camera module, improving image quality.
  • the spectral range that the second camera module can acquire is near-infrared light
  • the infrared light image collected by the second camera module is a grayscale image
• the grayscale image is used to represent the actual value of brightness
  • the spectral range that the first camera module can obtain is visible light
• the brightness values in the visible light image collected by the first camera module are discontinuous, and it is usually necessary to predict the discontinuous brightness values, whereas the infrared light image records the true value of brightness.
• Fig. 1 is a schematic diagram of a hardware system applicable to an electronic device of the present application.
• Fig. 2 is a schematic diagram of a software system applicable to the electronic device of the present application.
• Fig. 3 is a schematic diagram of an application scenario applicable to an embodiment of the present application.
• Fig. 4 is a schematic diagram of an application scenario applicable to an embodiment of the present application.
• Fig. 5 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
• Fig. 6 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
• Fig. 7 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
• Fig. 8 is a schematic diagram of the first registration process and the up-sampling process provided by an embodiment of the present application.
• Fig. 9 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
• Fig. 10 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
• Fig. 11 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
• Fig. 12 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
• Fig. 13 is a schematic diagram showing the effect of the image processing method provided by an embodiment of the present application.
• Fig. 14 is a schematic diagram of a graphical user interface applicable to an embodiment of the present application.
• Fig. 15 is a schematic diagram of an optical path of a shooting scene applicable to an embodiment of the present application.
• Fig. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
• Fig. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
• Near infrared light (NIR)
  • Near-infrared light refers to electromagnetic waves between visible light and mid-infrared light; the near-infrared light region can be divided into two regions: near-infrared short-wave (780nm-1100nm) and near-infrared long-wave (1100nm-2526nm).
  • the main camera module refers to a camera module that receives visible light in a spectral range; for example, the sensor included in the main camera module receives a spectral range of 400nm to 700nm.
  • a near-infrared camera module refers to a camera module that receives near-infrared light in a spectral range; for example, a sensor included in a near-infrared camera module receives a spectral range of 700 nm to 1100 nm.
  • the high-frequency information of an image refers to the region where the gray value changes drastically in the image; for example, the high-frequency information in the image includes edge information, texture information, etc. of an object.
  • the low-frequency information of the image refers to the area where the gray value changes slowly in the image; for an image, the part except the high-frequency information is low-frequency information; for example, the low-frequency information of the image can include the content information within the edge of the object.
  • the detail layer of the image includes high-frequency information of the image; for example, the detail layer of the image includes edge information, texture information, etc. of the object.
  • the base layer of the image includes the low-frequency information of the image; for an image, the part except the detail layer is the base layer; for example, the base layer of the image includes the content information within the edge of the object.
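The base/detail decomposition described above can be sketched as follows; a simple box blur stands in for whatever smoothing filter an actual pipeline would use (an assumption for illustration, not the patent's method):

```python
import numpy as np

def decompose(image, radius=1):
    """Split a grayscale image into a base layer (low-frequency information)
    and a detail layer (high-frequency information) using a box blur;
    base + detail reconstructs the image exactly."""
    k = 2 * radius + 1
    padded = np.pad(image.astype(float), radius, mode="edge")
    h, w = image.shape
    base = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            base += padded[dy:dy + h, dx:dx + w]
    base /= k * k            # smoothed image: content inside object edges
    detail = image - base    # residual: edges and texture
    return base, detail

img = np.zeros((6, 6))
img[:, 3:] = 100.0           # a vertical edge (drastic gray-value change)
base, detail = decompose(img)
```

In flat regions the detail layer is zero, and near the edge it is non-zero, which matches the high-/low-frequency description above.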
  • Image registration refers to the process of matching and superimposing two or more images acquired at different times, different sensors (imaging devices) or under different conditions (weather, illumination, camera position and angle, etc.).
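As an illustration of the registration concept only (the embodiments do not specify an algorithm here), a minimal translation-only alignment by exhaustive search might look like:

```python
import numpy as np

def register_translation(ref, mov, max_shift=3):
    """Estimate the integer (dy, dx) shift that best aligns `mov` onto `ref`
    by exhaustive search over small translations; a stand-in for the
    feature- or flow-based registration a real pipeline would use."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)   # matching cost
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

ref = np.zeros((16, 16))
ref[5:9, 5:9] = 1.0
mov = np.roll(np.roll(ref, 2, axis=0), -1, axis=1)  # ref shifted by (2, -1)
shift = register_translation(ref, mov)              # undoing shift: (-2, 1)
```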
  • the brightness value is used to estimate the ambient brightness, and its specific calculation formula is as follows:
• where Exposure is the exposure time; Aperture is the aperture size; Iso is the sensitivity; and Luma is the average value of Y in the XYZ space of the image.
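The full brightness-value formula is not reproduced above, so only the Luma term is made concrete here: a sketch computing the average Y over a linear-RGB image, using the Y row of the standard sRGB-to-XYZ matrix as an assumption:

```python
import numpy as np

def average_luma(rgb_linear):
    """Average Y (CIE XYZ luminance) over a linear-RGB image, shape (..., 3).
    The coefficients are the Y row of the sRGB-to-XYZ matrix."""
    y = (0.2126 * rgb_linear[..., 0]
         + 0.7152 * rgb_linear[..., 1]
         + 0.0722 * rgb_linear[..., 2])
    return float(y.mean())

gray = np.full((4, 4, 3), 0.5)   # a flat mid-gray image
luma = average_luma(gray)        # equal R=G=B gives Y == 0.5
```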
• Color correction matrix (CCM)
  • a color correction matrix is used to calibrate the accuracy of colors other than white.
• Three-dimension look up table (3D LUT)
• lookup tables can be used for image color correction, image enhancement, or image gamma correction; for example, a LUT can be loaded in the image signal processor, and the original image can be processed according to the LUT to realize pixel-value mapping of the original image frame and change the color style of the image, so as to achieve different image effects.
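A minimal sketch of pixel-value mapping through a 3D LUT, using nearest-neighbor lookup for brevity (real ISPs interpolate between lattice points, typically trilinearly):

```python
import numpy as np

def apply_3d_lut(image, lut):
    """Map each RGB pixel through a 3D LUT.
    image: float array in [0, 1], shape (..., 3)
    lut:   shape (S, S, S, 3); lut[r, g, b] is the output color."""
    size = lut.shape[0]
    idx = np.clip(np.rint(image * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# identity LUT of size 2: the corners of the RGB cube map to themselves
grid = np.linspace(0.0, 1.0, 2)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
identity = np.stack([r, g, b], axis=-1)

img = np.array([[[1.0, 0.0, 1.0]]])
out = apply_3d_lut(img, identity)   # identity LUT leaves colors unchanged
```

Replacing `identity` with a measured or designed LUT is what changes the color style of the image.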
• Global tone mapping (GTM)
  • Global tone mapping is used to solve the problem of uneven distribution of gray values in high dynamic images.
  • Gamma processing is used to adjust the brightness, contrast and dynamic range of an image by adjusting the gamma curve.
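Gamma adjustment can be illustrated with the usual power-law curve; the exponent convention below (`out = in ** (1/gamma)`) is one common choice, not necessarily the one the embodiments use:

```python
import numpy as np

def gamma_correct(image, gamma):
    """Apply a gamma curve to a normalized image in [0, 1].
    With out = in ** (1/gamma), gamma > 1 brightens mid-tones
    and gamma < 1 darkens them; end points 0 and 1 are unchanged."""
    return np.clip(image, 0.0, 1.0) ** (1.0 / gamma)

mid = np.array([0.25])
brighter = gamma_correct(mid, 2.0)   # 0.25 ** 0.5 == 0.5
```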
• a neural network refers to a network formed by connecting multiple single neural units together; that is, the output of one neural unit can be the input of another neural unit; the input of each neural unit can be connected to the local receptive field of the previous layer to extract the features of the local receptive field, where the local receptive field can be an area composed of several neural units.
• the neural network can use the error back propagation (BP) algorithm to correct the parameters in the initial neural network model during training, so that the reconstruction error loss of the neural network model becomes smaller and smaller. Specifically, the input signal is passed forward until the output generates an error loss, and the parameters in the initial neural network model are updated by backpropagating the error loss information, so that the error loss converges.
  • the backpropagation algorithm is a backpropagation movement dominated by error loss, aiming to obtain the optimal parameters of the neural network model, such as the weight matrix.
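A one-neuron example of the forward pass / error loss / backpropagated update cycle described above (illustrative only; a real model has many layers, each with its own weight matrix):

```python
# A single linear neuron y = w * x trained by backpropagation of a
# squared-error loss; the gradient dL/dw = 2 * (w*x - t) * x is the
# one-step chain rule that deeper networks apply layer by layer.
def train(x, t, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        y = w * x                  # forward pass
        grad = 2.0 * (y - t) * x   # backward pass: dL/dw for L = (y - t)**2
        w -= lr * grad             # update toward smaller error loss
    return w

w = train(x=2.0, t=6.0)            # converges to w == 3, since 3 * 2 == 6
```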
  • Fig. 1 shows a hardware system applicable to the electronic equipment of this application.
• the electronic device 100 may be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, a vehicle-mounted electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a projector, etc.
• the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure shown in FIG. 1 does not constitute a specific limitation on the electronic device 100 .
• the electronic device 100 may include more or fewer components than those shown in FIG. 1, or the electronic device 100 may include a combination of some of the components shown in FIG. 1, or the electronic device 100 may include subcomponents of some of the components shown in FIG. 1.
  • the components shown in FIG. 1 can be realized in hardware, software, or a combination of software and hardware.
  • Processor 110 may include one or more processing units.
• the processor 110 may include at least one of the following processing units: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and a neural-network processing unit (NPU).
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
• the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory; this avoids repeated access and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
• the processor 110 may be configured to execute the image processing method of the embodiment of the present application; for example: display a first interface, where the first interface includes a first control; detect a first operation on the first control; in response to the first operation, acquire N frames of the first image and M frames of the second image, where the first image is an image collected by the first camera module, the second image is an image collected by the second camera module, and both N and M are positive integers greater than or equal to 1; obtain the target image based on the N frames of the first image and the M frames of the second image; and save the target image; wherein obtaining the target image based on the N frames of the first image and the M frames of the second image includes: performing first image processing on the N frames of the first image to obtain N frames of the third image, where the image quality of the N frames of the third image is higher than that of the N frames of the first image; performing second image processing on the M frames of the second image to obtain M frames of the fourth image, where the image quality of the M frames of the fourth image is higher than that of the M frames of the second image; fusing the N frames of the third image and the M frames of the fourth image based on a semantic segmentation image to obtain a fused image; and performing third image processing on the fused image to obtain the target image.
• the connection relationship between the modules shown in FIG. 1 is only a schematic illustration, and does not constitute a limitation on the connection relationship between the modules of the electronic device 100.
  • each module of the electronic device 100 may also adopt a combination of various connection modes in the foregoing embodiments.
  • the wireless communication function of the electronic device 100 may be realized by components such as the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, and a baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the electronic device 100 can realize the display function through the GPU, the display screen 194 and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • Display 194 may be used to display images or video.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 , and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
• the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal; the photosensitive element of the camera transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can optimize the algorithm of image noise, brightness and color, and ISP can also optimize parameters such as exposure and color temperature of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
• the DSP converts digital image signals into standard RGB (red green blue), YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
• the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3 and MPEG4.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
  • the angular velocity of the electronic device 100 around three axes may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. For example, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse motion to achieve anti-shake.
  • the gyro sensor 180B can also be used in scenarios such as navigation and somatosensory games.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally x-axis, y-axis and z-axis). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor 180E can also be used to identify the posture of the electronic device 100 as an input parameter for application programs such as horizontal and vertical screen switching and pedometer.
  • the distance sensor 180F is used to measure distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 100 can use the distance sensor 180F for distance measurement to achieve fast focusing.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement functions such as unlocking, accessing the application lock, taking pictures, and answering incoming calls.
  • the touch sensor 180K is also referred to as a touch device.
• the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch panel".
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor 180K may transmit the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 and disposed at a different position from the display screen 194 .
• the hardware system of the electronic device 100 has been described in detail above, and the software system of the electronic device 100 will be introduced below.
  • Fig. 2 is a schematic diagram of the software system of the device provided by the embodiment of the present application.
  • the system architecture may include an application layer 210 , an application framework layer 220 , a hardware abstraction layer 230 , a driver layer 240 and a hardware layer 250 .
• the application layer 210 may include a camera application; optionally, the application layer 210 may also include application programs such as a gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer 220 provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer; the application framework layer may include some predefined functions.
  • the application framework layer 220 may include a camera access interface; the camera access interface may include camera management and camera equipment.
  • the camera management can be used to provide an access interface for managing the camera; the camera device can be used to provide an interface for accessing the camera.
  • the hardware abstraction layer 230 is used to abstract hardware.
  • the hardware abstraction layer can include the camera abstraction layer and other hardware device abstraction layers; the camera hardware abstraction layer can call the algorithms in the camera algorithm library.
  • a library of camera algorithms may include software algorithms for image processing.
  • the driver layer 240 is used to provide drivers for different hardware devices.
• the driver layer may include a camera device driver, a digital signal processor driver, a graphics processor driver, and a central processing unit driver.
  • the hardware layer 250 may include camera devices as well as other hardware devices.
  • the hardware layer 250 includes a camera device, a digital signal processor, a graphics processor or a central processing unit; for example, the camera device may include an image signal processor, and the image signal processor may be used for image processing.
• the spectral range obtained by the main camera module on the terminal device is visible light (400nm~700nm); due to the poor light conditions of the shooting scene and the small amount of light entering the electronic device, some image detail information is lost in the image obtained by the main camera module.
  • an embodiment of the present application provides an image processing method, which is applied to an electronic device;
• the electronic device may include a main camera module and a near-infrared camera module, wherein the spectral range that the main camera module can obtain includes visible light (400nm-700nm), and the spectral range that the near-infrared camera module can obtain is near-infrared light (700nm-1100nm); the image collected by the near-infrared camera module includes the reflection information of the subject to near-infrared light;
  • the image collected by the main camera module is fused with the image collected by the near-infrared camera module, which can realize the multi-spectral information fusion of the image information of near-infrared light and the image information of visible light, so that the fused image includes more details information; therefore, through the image processing method provided in the embodiment of the present application, the detailed information in the image can be enhanced.
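One simple way to picture this multi-spectral fusion (not the trained-model fusion the embodiments actually use) is to inject the high-frequency detail layer of the NIR frame into the visible luminance channel:

```python
import numpy as np

def box_blur(img, radius=1):
    """Box blur used only to split off the low-frequency content."""
    k = 2 * radius + 1
    p = np.pad(img, radius, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fuse(visible_y, nir, weight=0.5):
    """Add the NIR detail (high-frequency) layer to the visible luminance
    channel, carrying NIR edge/texture information into the fused image.
    The weight is a hypothetical tuning parameter."""
    nir_detail = nir - box_blur(nir)
    return visible_y + weight * nir_detail

vis = np.zeros((6, 6))                 # flat visible luminance
nir = np.zeros((6, 6)); nir[:, 3:] = 100.0   # NIR frame with an edge
fused = fuse(vis, nir)                 # edge detail appears in the fusion
```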
  • the image processing method in the embodiment of the present application can be applied to the field of photography (for example, single-view photography, dual-view photography, etc.), recording video field, video call field or other image processing fields;
• a dual-camera module is used, and the dual-camera module includes a camera module that can obtain visible light and a camera module that can obtain near-infrared light (for example, a near-infrared camera module or an infrared camera module); the collected visible light images and near-infrared light images are processed and fused to obtain images with enhanced image quality; the image processing method in the embodiments of the present application can enhance detailed information in the images and improve image quality.
• compared with the spectral range of visible light, the wavelength of the spectrum that the near-infrared camera module can obtain is longer, so its diffraction ability is stronger.
• the image shown in Figure 3 is obtained through the image processing method provided by the embodiment of the present application after images are collected by the main camera module and the near-infrared camera module; the image shown in Figure 3 is rich in detail information and can clearly display the detail information of the mountains; through the image processing method provided by the embodiment of the present application, the image acquired by the main camera module can be enhanced to improve the detailed information in the image.
  • the terminal device shown in FIG. 3 may include a first camera module, a second camera module, and an infrared flashlight; wherein, the spectral range that the first camera module can acquire is visible light (400nm-700nm); the second The spectral range that the camera module can obtain is near-infrared light (700nm-1100nm).
• the green scenery captured by the main camera module and the near-infrared camera module has more detail information, which can enhance the detail information of the green scenery in the dark area of the image.
  • the infrared flashlight in the electronic device can be turned on.
• the portrait can include the subject's face, eyes, nose, mouth, ears, eyebrows, etc.; the electronic device includes the main camera module and the near-infrared camera module.
• when the infrared flashlight is turned on, the reflected light of the subject increases, which increases the amount of light entering the near-infrared camera module; thus the detailed information of the portrait in the photos taken by the near-infrared camera module is increased.
• the images collected by the main camera module and the near-infrared camera module are fused through the image processing method of the embodiment of the present application, and the image acquired by the main camera module can be enhanced to improve the detail information in the image.
• the infrared flashlight is imperceptible to the user, so the detail information in the image is improved without the user's perception.
  • the electronic device can turn off the near-infrared camera module when detecting food or a portrait.
  • a food shooting scene may include multiple foods, and the near-infrared camera module may collect images of some of the foods in the multiple foods; for example, the multiple foods may be peaches, apples, or watermelons, etc., and the near-infrared camera module may collect Images of peaches and apples are collected, and images of watermelons are not collected.
• the electronic device can display prompt information, prompting whether to enable the near-infrared camera module; the near-infrared camera module is enabled to collect images only after the user authorizes its activation.
  • the image processing method of the present application can be applied to a folding screen terminal device;
• the folding screen terminal device can include an outer screen and an inner screen; which screen displays the preview image depends on the angle between the outer screen and the inner screen of the folding screen terminal device.
• the preview image can be displayed on the outer screen, as shown in (a) in Figure 4; when the angle between the outer screen and the inner screen of the folding screen terminal device is an acute angle, the preview image can be displayed on the outer screen, as shown in (b) in Figure 4;
• when the angle between the outer screen and the inner screen of the folding screen terminal device is an obtuse angle, one side of the inner screen can display a preview image, and the other side can display a control for instructing shooting, as shown in (c) in Figure 4; when the angle between the outer screen and the inner screen of the folding screen terminal device is 180 degrees, the preview image can be displayed on the inner screen, as shown in (d) in Figure 4; the above preview image may be obtained by processing the collected images through the image processing method of the embodiment of the present application.
  • the folding screen terminal device shown in FIG. 4 may include a first camera module, a second camera module, and an infrared flashlight; wherein, the spectral range that the first camera module can obtain is visible light (400nm-700nm); The spectral range that the second camera module can acquire is near-infrared light (700nm-1100nm).
  • FIG. 5 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • the image processing method can be executed by the electronic device shown in FIG. 1; the method 200 includes step S201 to step S205, which will be described in detail below respectively.
• the image processing method shown in FIG. 5 is applied to an electronic device, and the electronic device includes a first camera module and a second camera module; the second camera module is a near-infrared camera module or an infrared camera module (for example, the acquired spectral range is 700nm~1100nm).
  • the first camera module may be a visible light camera module (for example, the acquired spectral range is 400nm-700nm), or the first camera module may be other camera modules capable of acquiring visible light.
  • Step S201 displaying a first interface, where the first interface includes a first control.
  • the first interface may refer to the photographing interface of the electronic device
  • the first control may refer to a control in the photographing interface for instructing photographing, as shown in FIG. 3 or FIG. 4 .
  • the first interface may refer to a video recording interface of the electronic device
  • the first control may refer to a control in the video recording interface for instructing to record a video
  • the first interface may refer to a video call interface of the electronic device
  • the first control may refer to a control on the video call interface used to indicate a video call.
  • Step S202 detecting a first operation on the first control.
  • the first operation may refer to a click operation on a control indicating to take a photo in the photo taking interface.
  • the first operation may refer to a click operation on a control indicating to record a video in the video recording interface.
  • the first operation may refer to a click operation on a control indicating a video call in the video call interface.
• the first operation may also include a voice instruction operation, or other operations instructing the electronic device to take a photo or make a video call; this application does not impose any restrictions on this.
  • Step S203 in response to the first operation, acquire N frames of the first image and M frames of the second image.
  • the N frames of the first image can be images collected by the first camera module
  • the M frames of the second image are images collected by the second camera module
• the second camera module is a near-infrared camera module or an infrared camera module (for example, the acquired spectral range is 700nm-1100nm).
  • N and M are positive integers greater than 1.
  • the first image and the second image may refer to images in Raw color space.
  • the first image may refer to an RGB image in a Raw color space
  • the second image may refer to an NIR image in a Raw color space.
  • Step S204 based on the N frames of the first image and the M frames of the second image, the target image is obtained.
  • obtaining the target image may include the following steps:
• performing first image processing on the N frames of first images to obtain N frames of third images, where the image quality of the N frames of third images is higher than the image quality of the N frames of first images; performing second image processing on the M frames of second images to obtain M frames of fourth images, where the image quality of the M frames of fourth images is higher than the image quality of the M frames of second images; fusing the N frames of third images and the M frames of fourth images based on a semantic segmentation image to obtain a fused image, where the semantic segmentation image is obtained based on any frame in the N frames of first images or any frame in the M frames of second images, and the detail information of the fused image is better than the detail information of the N frames of first images; and performing third image processing on the fused image to obtain the target image.
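The steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: multi-frame averaging stands in for the first and second image processing, a mask-weighted blend stands in for the neural-network fusion, and `get_target_image`, `weight`, and the single-channel inputs are all assumptions of this sketch.

```python
import numpy as np

def get_target_image(first_images, second_images, segmentation, weight=0.5):
    """Illustrative sketch of steps S203-S204 (all names assumed).

    Multi-frame averaging stands in for the first/second image
    processing, and a mask-weighted blend stands in for the
    segmentation-guided fusion."""
    # "First image processing": average the N visible-light frames.
    third = np.mean(np.stack(first_images), axis=0)
    # "Second image processing": average the M near-infrared frames.
    fourth = np.mean(np.stack(second_images), axis=0)
    # Fusion: where the segmentation mask is set, blend in NIR detail;
    # elsewhere keep the visible-light result.
    mask = segmentation.astype(np.float32) * weight
    return third * (1.0 - mask) + fourth * mask
```

In a real pipeline the two bursts would differ in resolution and viewpoint and would be registered first; the sketch only shows how the segmentation mask decides, per pixel, how much of each branch contributes.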
• the image quality of the N frames of third images being higher than the image quality of the N frames of first images may mean that the noise in the N frames of third images is less than the noise in the N frames of first images; or, an image quality evaluation algorithm may evaluate the N frames of third images and the N frames of first images, with the evaluation result being that the image quality of the N frames of third images is higher than that of the N frames of first images; this application does not limit this.
  • Evaluation of image quality may include, for example, evaluation of aspects such as exposure, sharpness, color, texture, noise, anti-shake, flash, focus, and/or artifacts.
• the image quality of the M frames of fourth images being higher than the image quality of the M frames of second images may mean that the noise in the M frames of fourth images is less than the noise in the M frames of second images; or, an image quality evaluation algorithm may evaluate the M frames of fourth images and the M frames of second images, with the evaluation result being that the image quality of the M frames of fourth images is higher than that of the M frames of second images.
• the detail information of the fused image being better than the detail information of the N frames of first images may mean that the fused image contains more detail information than any first image in the N frames of first images; or, it may mean that the definition of the fused image is better than that of any first image in the N frames of first images; it can also be other situations, which are not limited in this application.
  • the detail information may include edge information and texture information of the subject (for example, hair edges, face details, clothes folds, edges of each tree of a large number of trees, branches and leaves of green plants, etc.).
  • the third image of N frames and the fourth image of M frames can be fused based on the semantically segmented image to obtain a fused image;
  • the fused local image information can be determined by introducing the semantically segmented image into the fusion process
  • the local image information in the third image of N frames and the fourth image of M frames can be selected through the semantic segmentation image for fusion processing, so that the local detail information of the fused image can be increased.
• the multi-spectral fusion of near-infrared light image information and visible light image information can be realized, so that the fused image includes more detail information, thereby enhancing the detail information in the image.
• the M frames of fourth images are obtained by performing the second image processing on the second images collected by the near-infrared camera module or the infrared camera module; therefore, the M frames of fourth images include the reflection information of objects for near-infrared light.
• near-infrared light has a high reflectivity for green scenery, so the details of green scenery captured by the near-infrared camera module or infrared camera module are richer; the image region of the green scenery can be selected from the fourth images through the semantic segmentation image for fusion processing, thereby enhancing the detail information of green scenery in dark-light regions of the image.
• the M frames of fourth images are obtained by performing the second image processing on the second images collected by the near-infrared camera module or the infrared camera module; compared with visible light, the near-infrared spectrum has a longer wavelength, so near-infrared light has a stronger diffraction ability; for cloud-and-fog shooting scenes or shooting scenes of distant objects, the images collected by the near-infrared camera module or infrared camera module have a stronger sense of transparency, that is, they include more detail information of distant objects (for example, the texture information of distant mountains); the distant image region can be selected from the fourth images through the semantic segmentation image, and the nearby image region can be selected from the third images through the semantic segmentation image, for fusion processing, thereby enhancing the detail information in the fused image.
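The label-driven selection just described can be illustrated directly. The label ids and function name below are hypothetical, and the real method fuses inside a neural network rather than with a hard per-pixel switch; this sketch only shows the selection logic.

```python
import numpy as np

GREEN_PLANT = 1  # hypothetical label id for green scenery
DISTANT = 2      # hypothetical label id for distant scenery

def select_and_fuse(third, fourth, labels):
    """Take pixels labelled as green plants or distant scenery from the
    NIR-derived fourth image; take all other pixels from the
    visible-light third image."""
    take_nir = np.isin(labels, (GREEN_PLANT, DISTANT))
    return np.where(take_nir, fourth, third)
```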
  • the N frames of the first image may refer to N frames of Raw images (for example, RGGB images) collected by the first camera module; the first image processing of the N frames of the first image may obtain the N frames of the third image; Wherein, the first image processing may include black level correction processing and/or phase bad point correction processing.
• phase bad pixel correction may include phase defect correction (PDC) and bad pixel correction (BPC); the bad pixels in BPC are randomly positioned bright or dark points.
• the first image processing may also include, but is not limited to, automatic white balance (AWB) processing or lens shading correction (LSC); lens shading correction is used to eliminate the inconsistency of color and brightness between the periphery and the center of the image caused by the lens optical system. It should be understood that the first image processing may include black level correction, phase bad pixel correction, and other Raw domain image processing algorithms; automatic white balance processing and lens shading correction are used above as examples of other Raw domain image processing algorithms, which are not subject to any limitation in this application.
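As a concrete illustration of the two Raw domain corrections named above, the sketch below implements black level subtraction and a median-based bad pixel replacement. The 10-bit levels and the deviation threshold are assumed values; real pipelines use sensor-calibrated parameters, and real BPC/PDC also handles phase-detection pixels.

```python
import numpy as np

def black_level_correction(raw, black_level=64, white_level=1023):
    """Subtract the sensor black level and renormalise to [0, 1]
    (the 10-bit levels here are illustrative)."""
    out = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    return np.clip(out, 0.0, 1.0)

def bad_pixel_correction(raw, threshold=0.5):
    """Replace isolated pixels that deviate strongly from the median of
    their 3x3 neighbourhood - a minimal stand-in for BPC."""
    padded = np.pad(raw, 1, mode="edge")
    h, w = raw.shape
    # Stack the 3x3 neighbourhood of every pixel and take its median.
    neigh = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    med = np.median(neigh, axis=0)
    bad = np.abs(raw - med) > threshold
    return np.where(bad, med, raw)
```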
• performing the second image processing on the M frames of second images to obtain the M frames of fourth images may include black level correction processing and/or phase bad pixel correction processing; phase bad pixel correction may include phase defect correction (PDC) and bad pixel correction (BPC), where the bad pixels in BPC are randomly positioned bright or dark points. The second image processing may also include, but is not limited to, automatic white balance processing or lens shading correction; lens shading correction is used to eliminate the inconsistency of color and brightness between the periphery and the center of the image caused by the lens optical system. It should be understood that the second image processing may include black level correction, phase bad pixel correction, and other Raw domain image processing algorithms; automatic white balance processing and lens shading correction are used above as examples, and other Raw domain image processing algorithms are not subject to any limitation in this application.
  • the first camera module is a visible light camera module
  • the second camera module is a near-infrared camera module or an infrared camera module
  • N frames of RGB Raw images can be collected through the first camera module
• M frames of NIR Raw images can be collected through the second camera module; black level correction processing and/or phase bad pixel correction processing is performed on the N frames of RGB Raw images to obtain N frames of processed RGB Raw images; black level correction processing and/or phase bad pixel correction processing is performed on the M frames of NIR Raw images to obtain M frames of processed NIR Raw images; since the two camera modules are not set at the same position, for the same shooting scene there is a certain baseline distance between the first camera module and the second camera module; therefore, the M frames of processed NIR Raw images can be globally registered with any frame of the N frames of processed RGB Raw images to obtain M frames of registered images; based on the semantic segmentation image, the N frames of processed RGB Raw images are fused with the M frames of registered images to obtain a fused image.
• for the specific steps, refer to the subsequent FIG. 9.
• the global registration processing may refer to performing global registration on the M frames of processed NIR Raw images with any frame of the N frames of processed RGB Raw images as a reference; or, it may refer to performing global registration on the N frames of processed RGB Raw images with any frame of the M frames of processed NIR Raw images as a reference.
• the image resolution of the M frames of fourth images can be adjusted to be the same as the image resolution of the N frames of third images; for example, with any third image of the N frames of third images as a reference, up-sampling or down-sampling processing is performed on the M frames of registered images to obtain the M frames of fourth images; or, with any fourth image of the M frames of fourth images as a reference, up-sampling or down-sampling processing is performed on the N frames of processed RGB Raw images to obtain the N frames of third images.
• performing the second image processing on the M frames of second images to obtain the M frames of fourth images includes: performing first registration processing to obtain M frames of first registered images; and, taking any frame of the third image as a reference, performing second registration processing on the M frames of first registered images to obtain the M frames of fourth images.
  • the first camera module can be a visible light camera module
  • the second camera module can be a near-infrared camera module or an infrared camera module
  • N frames of RGB Raw images can be collected through the first camera module
• M frames of NIR Raw images can be collected through the second camera module; black level correction processing and/or phase bad pixel correction processing is performed on the N frames of RGB Raw images to obtain N frames of processed RGB Raw images; black level correction processing and/or phase bad pixel correction processing is performed on the M frames of NIR Raw images to obtain M frames of processed NIR Raw images; since the two camera modules are not set at the same position, for the same shooting scene there is a certain baseline distance between the first camera module and the second camera module; therefore, global registration can be performed on the M frames of processed NIR Raw images with any frame of the N frames of processed RGB Raw images as a reference, to obtain the M frames of first registered images; further, with any frame of the N frames of processed RGB Raw images as a reference, local registration processing can be performed on the M frames of first registered images.
• the global registration processing may refer to performing global registration on the M frames of processed NIR Raw images with any frame of the N frames of processed RGB Raw images as a reference; or, it may refer to performing global registration on the N frames of processed RGB Raw images with any frame of the M frames of processed NIR Raw images as a reference.
  • local registration processing is further performed on the basis of the global registration processing, so that the local details in the M frames of the first registration image are subjected to image registration processing again; the local detail information of the fused image can be improved.
• the image resolution of the M frames of fourth images can be adjusted to be the same as the image resolution of the N frames of third images; for example, with any third image of the N frames of third images as a reference, up-sampling or down-sampling processing is performed on the M frames of registered images to obtain the M frames of fourth images; or, with any fourth image of the M frames of fourth images as a reference, up-sampling or down-sampling processing is performed on the N frames of processed RGB Raw images to obtain the N frames of third images.
  • the third image of N frames and the fourth image of M frames are fused to obtain a fused image, including:
  • the third image of N frames and the fourth image of M frames are fused by an image processing model to obtain a fused image.
  • the image processing model is a pre-trained neural network.
  • a large amount of sample data and a loss function can be used to iteratively update the parameters of the neural network through the backpropagation algorithm to obtain an image processing model.
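The training described above (sample data, a loss function, iterative parameter updates) can be miniaturised to show the update loop. This toy fits a single linear weight vector by gradient descent on an L2 loss and is only an illustration: `train_fusion_model` and every parameter are invented, and the actual image processing model is a neural network trained by backpropagation.

```python
import numpy as np

def train_fusion_model(samples, targets, lr=0.1, epochs=200):
    """Toy stand-in for the training loop: one linear weight per input
    channel, fitted by gradient descent on the mean squared error."""
    w = np.zeros(samples.shape[1])
    for _ in range(epochs):
        pred = samples @ w
        # Gradient of mean((pred - targets)^2) with respect to w.
        grad = 2.0 * samples.T @ (pred - targets) / len(targets)
        w -= lr * grad
    return w
```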
  • Step S205 saving the target image.
  • a third image processing may be performed on the fused image to obtain a target image; the target image may refer to an image displayed on a display screen of an electronic device.
  • the fused image may refer to an image in RGB color space
  • the target image may refer to an image sent by an electronic device to be displayed on a screen
• the third image processing may include, but is not limited to: RGB domain image algorithms or YUV domain image algorithms;
• for details, refer to step S308 and step S309 shown in FIG. 6.
• the electronic device may also include an infrared flashlight; in a dark-light scene, the infrared flashlight may be turned on; when the infrared flashlight is turned on, the N frames of first images and the M frames of second images may be acquired; the dark-light scene refers to a shooting scene in which the amount of light entering the electronic device is less than a preset threshold.
• in a dark-light scene, the amount of light entering the electronic device is relatively small; after the electronic device turns on the infrared flashlight, the reflected light acquired by the second camera module increases, thereby increasing the amount of light entering the second camera module; the definition of the second images collected by the second camera module therefore increases; because the definition of the second images increases, the definition of the fourth images obtained by the second image processing increases; and because the definition of the fourth images increases, the definition of the fused image obtained by the image processing model from the third images and the fourth images increases.
• a larger brightness value of the electronic device indicates a greater amount of light entering the electronic device; the brightness value of the electronic device can therefore be used to determine the amount of entering light, and when the brightness value of the electronic device is less than a preset brightness threshold, the electronic device turns on the infrared flashlight.
• the brightness value is used to estimate the ambient brightness; in its calculation formula, Exposure is the exposure time, Aperture is the aperture size, Iso is the sensitivity, and Luma is the average value of Y in the XYZ color space of the image.
  • the first interface of the electronic device may further include a second control; in a dark scene, the electronic device detects a second operation on the second control; in response to the second operation, the electronic device may turn on an infrared flash.
• in this embodiment of the present application, the electronic device may include a first camera module and a second camera module, where the second camera module is a near-infrared camera module or an infrared camera module (for example, the acquired spectral range is near-infrared light, 700nm-1100nm); the first images are collected by the first camera module, and the second images are collected by the second camera module; image information included in the second images (for example, near-infrared images) cannot be obtained from the first images (for example, visible light images), and likewise, image information included in the third images cannot be obtained from the fourth images; therefore, by fusing the third images (for example, visible light images) with the fourth images (for example, near-infrared light images), the multi-spectral fusion of near-infrared light image information and visible light image information can be realized, so that the fused image includes more detail information; thus, the image processing method provided in this embodiment of the present application can perform image enhancement on the images acquired by the camera modules and improve image quality.
  • FIG. 6 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • the image processing method can be executed by the electronic device shown in FIG. 1 ; the image processing method includes step S301 to step S309 , and step S301 to step S309 will be described in detail below.
• the image processing method shown in FIG. 6 can be applied to the electronic device shown in FIG. 1; the electronic device includes a first camera module and a second camera module; the spectral range that the first camera module can acquire is visible light (400nm-700nm), and the spectral range that the second camera module can acquire is near-infrared light (700nm-1100nm).
  • Step S301 Obtain a first Raw image (for example, an example of the first image) through the first camera module.
• the first camera module may include a first lens and an image sensor, and the spectral range through which the first lens can pass is visible light (400nm-700nm).
  • the first lens may refer to a filter lens; the first lens may be used to absorb light of certain specific wavelength bands and allow light of visible light bands to pass through.
  • multiple frames of first Raw images may be acquired in step 301 .
  • Step S302 acquiring a second Raw image (an example of the second image) through the second camera module.
• the second camera module may include a second lens and an image sensor, and the spectral range through which the second lens can pass is near-infrared light (700nm-1100nm).
  • the second lens may refer to a filter lens; the second lens may be used to absorb light of certain specific wavelength bands and allow light of near-infrared wavelength bands to pass through.
• the second Raw image collected by the second camera module may be a single-channel image; the second Raw image is used to represent the intensity information of superimposed photons; for example, the second Raw image may be a single-channel grayscale image.
  • the second Raw images acquired in step 302 may refer to multiple frames of second Raw images (for example, M frames of second images).
  • step S301 and step S302 may be performed synchronously; that is, the first camera module and the second camera module may output frames synchronously, and obtain the first Raw image and the second Raw image respectively.
  • Step S303 black level correction and phase bad point correction.
  • a third Raw image (an example of a third image) is obtained by performing black level correction and phase defect correction on the first Raw image.
• black level correction and phase bad pixel correction are used above as examples; other image processing algorithms may also be performed on the first Raw image; for example, automatic white balance (AWB) processing or lens shading correction (LSC) may also be performed on the first Raw image. Lens shading correction is used to eliminate the inconsistency of color and brightness between the periphery and the center of the image caused by the lens optical system. It should be understood that automatic white balance processing and lens shading correction are used as examples of other image processing algorithms, and this application does not make any limitation on other image processing algorithms.
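A minimal sketch of the lens shading correction mentioned above, assuming a simple radial gain model: pixels are brightened in proportion to their squared distance from the image centre to compensate the lens fall-off. The `strength` parameter and the quadratic model are illustrative; real LSC uses per-channel, sensor-calibrated gain grids.

```python
import numpy as np

def lens_shading_correction(img, strength=0.4):
    """Apply a radial gain that grows from 1.0 at the image centre to
    1.0 + strength at the corners (illustrative LSC model)."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Normalised squared radius: 0 at centre, 1 at the corners.
    r2 = ((y - cy) ** 2 + (x - cx) ** 2) / (cy ** 2 + cx ** 2)
    gain = 1.0 + strength * r2
    return img * gain if img.ndim == 2 else img * gain[..., None]
```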
  • Step S304 black level correction and phase bad point correction.
  • a fourth Raw image (an example of a fifth image) is obtained by performing black level correction and phase defect correction on the second Raw image.
  • step S303 and step S304 may not have timing requirements, or step S303 and step S304 may also be executed simultaneously.
  • Step S305 acquiring a semantically segmented image.
• a semantic segmentation algorithm may be used to process the first frame of the third Raw image to obtain a semantic segmentation image.
  • the semantic segmentation algorithm may include a multi-instance segmentation algorithm; labels of various regions in the image may be output through the semantic segmentation algorithm.
  • a semantically segmented image can be obtained, and the semantically segmented image is used for the fusion processing of the image processing model in step S307; by introducing the semantically segmented image in the fusion process, some image regions can be selected from different images for Fusion processing, so as to increase the local detail information of the fusion image.
• a semantic segmentation algorithm may be used to process the first frame of the fourth Raw image to obtain a semantic segmentation image.
• the semantic segmentation image can be obtained through the fourth Raw image, that is, the near-infrared image; since the near-infrared image describes details better, the detail information of a semantic segmentation image obtained through the near-infrared image is richer.
  • Step S306 preprocessing.
  • preprocessing is performed on the third Raw image, the fourth Raw image and the semantic segmentation image.
  • the preprocessing may include performing upsampling processing and registration processing on the fourth Raw image.
  • the fourth Raw image may be upsampled and registered based on the third Raw image of the first frame to obtain the fifth image.
  • the preprocessing may also include feature splicing processing; the feature splicing processing refers to the processing of superimposing the channel numbers of images.
• if the resolution of the fourth Raw image is smaller than that of the third Raw image, up-sampling processing needs to be performed on the fourth Raw image so that its resolution is the same as that of the third Raw image; in addition, the fourth Raw image is collected by the second camera module and the third Raw image is collected by the first camera module; since the first camera module and the second camera module are arranged at different positions in the electronic device, there is a certain baseline distance between them, that is, there is a certain parallax between the image collected by the first camera module and the image collected by the second camera module, so registration processing needs to be performed on the images.
• it should be understood that if the resolution of the fourth Raw image is larger than that of the third Raw image, the preprocessing process may include down-sampling processing and registration processing; if the resolution of the fourth Raw image is equal to that of the third Raw image, the preprocessing process may include registration processing; this embodiment of the present application does not make any limitation on this.
• a preprocessing process including up-sampling and registration processing is used for illustration; the preprocessing in step S306 will be described in detail below in conjunction with FIG. 8.
• the fourth Raw image refers to the Raw image obtained after black level correction and phase bad pixel correction are performed on the second Raw image, where the second Raw image is the Raw image collected by the second camera module; the third Raw image refers to the Raw image obtained after black level correction and phase bad pixel correction are performed on the first Raw image, where the first Raw image is the Raw image collected by the first camera module.
  • Step S320 acquiring a fourth Raw image.
  • the resolution size of the fourth Raw image is 7M.
  • Step S330 acquiring a third Raw image.
  • the resolution size of the third Raw image is 10M.
  • Step S340 registration processing.
  • registration processing is performed on the fourth Raw image with the third Raw image as a reference.
• the registration processing in step S340 can be used to acquire, from the fourth Raw image, the pixels that are the same as those in the third Raw image; for example, the third Raw image is 10M, the fourth Raw image is 7M, and 80% of the pixels in the fourth Raw image are the same as those in the third Raw image; the registration processing can be used to obtain, from the 7M fourth Raw image, the 80% of pixels that are the same as those in the 10M third Raw image.
  • registration processing may be performed on multiple frames of fourth Raw images with the first frame of Raw images in the multiple frames of third Raw images as a reference.
• up-sampling processing is performed on the registered fourth Raw image to obtain a fifth Raw image; that is, the pixels of the fourth Raw image obtained through the registration in step S340 that are the same as those in the third Raw image are up-sampled to obtain a fifth Raw image with the same resolution as the third Raw image.
• image conversion may be performed on the fourth Raw image through an image transformation matrix (for example, a homography matrix), so that some pixels in the fourth Raw image are mapped to an image of the same size as the third Raw image; a homography matrix refers to a mapping between two planar projections of an image.
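The homography mapping described above can be sketched with inverse nearest-neighbour warping. This is an illustrative implementation (the function name and the nearest-neighbour sampling are choices of this sketch, and a real pipeline would estimate `H` from matched features and interpolate); pixels with no source mapping are left as the empty black pixels the text mentions.

```python
import numpy as np

def warp_homography(img, H, out_shape):
    """Warp img into an out_shape canvas via homography H using inverse
    mapping with nearest-neighbour sampling; unmapped pixels stay 0."""
    Hinv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Back-project every destination pixel into the source image.
    dst = np.stack([xs.ravel(), ys.ravel(),
                    np.ones(h * w)]).astype(np.float64)
    src = Hinv @ dst
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    out = np.zeros(out_shape, dtype=img.dtype)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out.reshape(-1)[valid] = img[sy[valid], sx[valid]]
    return out
```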
  • the black area in the third Raw image of 10M as shown in FIG. 8 represents an empty pixel.
  • the preprocessing may further include performing feature extraction and splicing (contact) on the third Raw image, the fifth Raw image and the semantically segmented image.
  • feature splicing process refers to the process of superimposing the channel numbers of images.
• for example, the fifth Raw image is a 3-channel image, the third Raw image is a 3-channel image, and the semantic segmentation image is a single-channel image; superimposing their channel numbers through feature splicing yields a 7-channel input.
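The channel superposition of feature splicing can be demonstrated directly; the spatial size below is illustrative.

```python
import numpy as np

# Feature splicing superimposes channel counts: 3 + 3 + 1 channels
# concatenate into a single 7-channel input for the image processing model.
third = np.zeros((4, 4, 3))   # 3-channel third Raw image
fifth = np.zeros((4, 4, 3))   # 3-channel fifth Raw image
seg = np.zeros((4, 4, 1))     # single-channel semantic segmentation image
spliced = np.concatenate([third, fifth, seg], axis=-1)
print(spliced.shape)  # (4, 4, 7)
```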
  • Step S307 image processing model.
  • the preprocessed image is input into the image processing model to obtain an output RGB image (an example of a fused image).
• after preprocessing, the N frames of third Raw images (an example of the third image), the semantic segmentation image, and the M frames of fifth Raw images (an example of the fourth image) can be input to the image processing model for fusion processing; the specific steps are shown in FIG. 9.
• alternatively, after preprocessing, the N frames of third Raw images (an example of the third image), the semantic segmentation image, and the sixth Raw image (an example of the fourth image) can be input to the image processing model for fusion processing, where the sixth Raw image is a single-frame image obtained by fusing the multiple frames of fifth Raw images with the third Raw image as a reference in FIG. 10; the specific steps are shown in FIG. 10.
• it should be understood that "the fifth Raw image/sixth Raw image" in FIG. 6 and FIG. 7 denotes the fifth Raw image or the sixth Raw image.
• the image processing model is a pre-trained neural network; for example, a large amount of sample data and a loss function can be used to iteratively update the parameters of the neural network through the backpropagation algorithm to obtain the image processing model. The image processing model can be used for fusion processing and demosaic processing; for example, the image processing model can perform fusion processing and demosaic processing on the input Raw images based on the semantic segmentation image to obtain an RGB image.
• the image processing model can also be used for denoising processing; since image noise mainly comes from a Poisson distribution and a Gaussian distribution, when denoising is performed through multiple frames of images, superimposing and averaging the frames drives the zero-mean Gaussian component approximately to 0; therefore, denoising through multiple frames of images can improve the denoising effect.
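The averaging argument above is easy to verify numerically; the frame count, noise level, and seed below are arbitrary.

```python
import numpy as np

# Numerical check of multi-frame denoising: averaging eight noisy
# captures of the same scene shrinks the zero-mean Gaussian noise.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
frames = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(8)]

single_err = np.abs(frames[0] - clean).mean()
stacked_err = np.abs(np.mean(frames, axis=0) - clean).mean()
print(stacked_err < single_err)  # True: the stacked frame is closer to clean
```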
• the images input into the image processing model may include the N frames of third Raw images (visible light images) and the M frames of fifth Raw images (near-infrared light images); or, the N frames of third Raw images (visible light images) and the sixth Raw image (near-infrared light image). When demosaic processing is performed, since the near-infrared light image is a grayscale image, it represents the true value of brightness; the visible light image is a Raw image in Bayer format, in which the brightness values are discontinuous, and the brightness of the discontinuous areas is usually predicted by an interpolation method; therefore, the true brightness values in the near-infrared image can be used to guide the demosaicing of the Bayer-format Raw image, which can effectively reduce the pseudo textures appearing in the image.
• since the spectral range of the near-infrared image is 700nm to 1100nm and the spectral range of the visible light image is 400nm to 700nm, the near-infrared image includes image information that cannot be obtained from the visible light image; therefore, by processing and then fusing the Raw image (visible light image) collected by the first camera module and the Raw image (near-infrared light image) collected by the second camera module, multi-spectral fusion of the image information of near-infrared light and the image information of visible light can be realized, so that the fused image includes more detail information.
• step S307 can also output a Raw image (Raw color space), and then the Raw image can be converted into an RGB image (RGB color space) through other steps.
  • RGB domain algorithm processing is performed on the RGB image.
• the RGB domain algorithm processing may include, but is not limited to:
  • Color correction matrix processing or three-dimensional lookup table processing, etc.
  • the color correction matrix (color correction matrix, CCM) is used to calibrate the accuracy of colors other than white.
• the three-dimensional look-up table (Look Up Table, LUT) is widely used in image processing; for example, a look-up table can be used for image color correction, image enhancement, or image gamma correction; for example, a LUT can be loaded in the image signal processor, and image processing can be performed on the original image according to the LUT to map the color style of the original image to that of another image, so as to achieve different image effects.
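• The CCM step can be sketched numerically. The 3x3 matrix below is a hypothetical example (the patent does not give concrete coefficients); its rows each sum to 1, which is the property that keeps white/neutral colors unchanged while correcting the other colors, consistent with the description above.

```python
import numpy as np

# Hypothetical color correction matrix; each row sums to 1 so neutral
# (white/gray) pixels are preserved while other colors are corrected.
ccm = np.array([[ 1.50, -0.30, -0.20],
                [-0.25,  1.40, -0.15],
                [-0.10, -0.40,  1.50]])

pixels = np.array([[0.5, 0.5, 0.5],      # neutral gray
                   [0.8, 0.2, 0.1]])     # a reddish pixel
corrected = pixels @ ccm.T               # apply the CCM to each RGB pixel

assert np.allclose(corrected[0], pixels[0])        # white/gray unchanged
assert not np.allclose(corrected[1], pixels[1])    # non-neutral color adjusted
```

A 3D LUT generalizes this idea: instead of one linear matrix, each RGB input is looked up (with trilinear interpolation) in a precomputed color cube.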
  • RGB image processing may be performed according to the semantically segmented image; for example, brightness processing may be performed on different regions in the RGB image according to the semantically segmented image.
  • color correction matrix processing and three-dimensional lookup table processing are used as examples for illustration; this application does not make any limitation on RGB image processing.
  • Step S309 YUV image processing.
  • the RGB image is converted into the YUV domain and processed by an algorithm in the YUV domain to obtain the target image.
• the YUV domain algorithm processing may include, but is not limited to:
• Global tone mapping (GTM) is used to solve the problem of uneven gray-value distribution in high-dynamic-range images.
  • Gamma processing is used to adjust the brightness, contrast and dynamic range of an image by adjusting the gamma curve.
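• A minimal sketch of the gamma step (illustrative only; the actual gamma curve used by the device is not specified in the text): values are normalized to [0, 1] and remapped along a power curve, which lifts or compresses midtones while leaving black and white fixed.

```python
import numpy as np

def gamma_correct(img, gamma):
    """Remap normalized pixel values along a gamma curve (toy example)."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

mid = np.array([0.25])
brighter = gamma_correct(mid, 2.2)   # gamma > 1 lifts midtones
assert brighter[0] > mid[0]
# The endpoints of the curve are fixed: black stays black, white stays white
assert gamma_correct(np.array([0.0, 1.0]), 2.2).tolist() == [0.0, 1.0]
```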
  • step S307, step S308 and step S309 may be executed in an image processing model.
  • the image processing method provided by the embodiment of the present application may be executed through the above steps S301 to S309.
• the electronic device may also include an infrared flash; when the electronic device is in a dark scene, that is, when the amount of light entering the electronic device is less than a preset threshold (for example, as judged according to the brightness value), the electronic device may execute step S310 shown in FIG. 7 to turn on the infrared flash; after the infrared flash is turned on, the first Raw image is acquired by the first camera module and the second Raw image is acquired by the second camera module, and steps S311 to S319 shown in FIG. 7 are executed; it should be understood that the relevant descriptions of steps S301 to S309 are applicable to steps S310 to S319, and will not be repeated here.
• a larger brightness value of the electronic device indicates a greater amount of light entering the electronic device; therefore, the brightness value of the electronic device can be used to determine the amount of incident light. When the brightness value of the electronic device is less than the preset brightness threshold, the electronic device turns on the infrared flash.
  • the brightness value is used to estimate the ambient brightness, and its specific calculation formula is as follows:
• where Exposure is the exposure time; Aperture is the aperture size; Iso is the sensitivity; and Luma is the average value of Y in the XYZ color space of the image.
• the electronic device can focus first after detecting the shooting instruction, and perform scene detection synchronously; after the dark scene is recognized and focusing is completed, the infrared flash can be turned on. After the infrared flash is turned on, the frames of the first Raw image and the second Raw image can be synchronized.
• in a dark scene, the amount of light entering the electronic device is relatively small; after the electronic device turns on the infrared flash, the reflected light acquired by the second camera module can be increased, thereby increasing the amount of light entering the second camera module; this increases the sharpness of the second Raw image collected by the second camera module; because the sharpness of the second Raw image increases, the sharpness of the fourth Raw image obtained from the second Raw image increases; and because the sharpness of the fourth Raw image increases, the sharpness of the fused image increases.
• the electronic device may include a first camera module and a second camera module, wherein the spectral range that the first camera module can acquire includes visible light (400nm to 700nm), and the spectral range that the second camera module can acquire is near-infrared light (700nm to 1100nm); the first image is collected by the first camera module, and the second image is collected by the second camera module; the second image (for example, a near-infrared light image) includes image information that cannot be obtained from the first image (for example, a visible light image); similarly, the fourth image includes image information that cannot be obtained from the third image; therefore, by fusing the third image (for example, a visible light image) and the fourth image (for example, a near-infrared light image), multi-spectral fusion of the image information of near-infrared light and the image information of visible light can be realized, so that the fused image includes more detail information and the detail information in the image can be enhanced.
  • the spectral range that the second camera module can acquire is near-infrared light
  • the infrared light image collected by the second camera module is a grayscale image
• the grayscale image is used to represent the actual value of brightness
  • the spectral range that the first camera module can obtain is visible light
  • the brightness value in the visible light image collected by the first camera module is discontinuous, and it is usually necessary to predict the discontinuous brightness value
  • Step S306 to step S307 shown in FIG. 6 will be described in detail below with reference to FIG. 9 and FIG. 10 .
• image processing may be performed on the multiple frames of first Raw images collected by the first camera module and the multiple frames of second Raw images collected by the second camera module, so as to obtain an image with enhanced detail information; the image processing may include, but is not limited to: noise reduction processing, demosaic processing, or fusion processing.
  • FIG. 9 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • the image processing method can be executed by the electronic device shown in FIG. 1; the image processing method includes steps S401 to S406, and the steps S401 to S406 will be described in detail below.
  • Step S401 acquiring multiple frames of a third Raw image (an example of a third image).
• multiple frames of first Raw images can be obtained through the first camera module (400nm to 700nm), and black level correction and phase defect correction processing can be performed on the multiple frames of first Raw images to obtain multiple frames of third Raw images.
  • Step S402 acquiring multiple frames of the fifth Raw image (an example of the fourth image).
• multiple frames of second Raw images can be acquired through the second camera module (700nm to 1100nm); black level correction and phase defect correction processing are performed on the multiple frames of second Raw images to obtain multiple frames of fourth Raw images; using the third Raw image as a reference, registration processing is performed on the fourth Raw images to obtain the fifth Raw images.
  • Step S403 acquiring a semantically segmented image.
  • a semantically segmented image can be obtained through a semantically segmented algorithm.
  • the first frame of the third Raw image among the multiple frames of the third Raw image may be processed by a semantic segmentation algorithm to obtain the semantic segmentation image.
  • the first frame of the fourth Raw image among the multiple frames of the fourth Raw image may be processed according to the semantic segmentation algorithm to obtain the semantic segmentation image.
• fusion processing can be performed on the multiple frames of the third Raw image and the multiple frames of the fifth Raw image to obtain a fused image; by introducing the semantic segmentation image in the fusion process, the local image information to be fused can be determined; for example, partial image information in the multiple frames of the third Raw image and the multiple frames of the fifth Raw image may be selected through the semantic segmentation image for fusion processing, so as to increase the local detail information of the fused image.
• since the third Raw image is, for example, a visible light image and the fifth Raw image is, for example, a near-infrared light image, multi-spectral fusion of near-infrared light image information and visible light image information can be realized, so that the fused image includes more detail information and the detail information in the image can be enhanced.
  • Step S404 feature splicing processing.
  • the multi-frame fifth Raw image, the multi-frame third Raw image and the semantic segmentation image are subjected to feature splicing processing to obtain multi-channel image features.
• the feature splicing process refers to concatenating images along the channel dimension, that is, superimposing the numbers of channels of the images.
• the fifth Raw image is a single-channel image; the third Raw image is a three-channel image; and the semantic segmentation image is a single-channel image.
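• The feature splicing step above amounts to a channel-wise concatenation. The sketch below assumes, for illustration, M = 4 single-channel fifth Raw images and N = 3 three-channel third Raw images (the actual frame counts are unspecified); the result is one tensor whose channel count is the sum of the inputs' channels.

```python
import numpy as np

h, w = 8, 8
nir_frames = [np.zeros((h, w, 1)) for _ in range(4)]  # M fifth Raw images (1 ch each)
rgb_frames = [np.zeros((h, w, 3)) for _ in range(3)]  # N third Raw images (3 ch each)
seg = np.zeros((h, w, 1))                             # semantic segmentation image

# Feature splicing: concatenate everything along the channel axis
features = np.concatenate(nir_frames + rgb_frames + [seg], axis=-1)
assert features.shape == (h, w, 4 * 1 + 3 * 3 + 1)    # 14 channels in this example
```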
  • Step S405 image processing model.
  • multi-channel image features are input to an image processing model for fusion processing.
  • the image processing model can be used to fuse images in the Raw color space; the image processing model is a pre-trained neural network.
  • a large amount of sample data and a loss function can be used to iteratively update the parameters of the neural network through the backpropagation algorithm to obtain an image processing model.
• the image processing model can also be used for denoising processing and demosaic processing; for example, the image processing model can perform denoising processing, demosaic processing, and fusion processing on the multiple frames of the third Raw image and the multiple frames of the fifth Raw image based on the semantic segmentation image.
• since the noise of an image mainly comes from a Poisson distribution and a Gaussian distribution, the Gaussian-distributed noise averages to approximately zero when multiple frames of images are superimposed and averaged; therefore, denoising through multiple frames of images can improve the image denoising effect.
• the multiple frames of the fifth Raw image are infrared light images; a near-infrared light image is a single-channel grayscale image, and the grayscale image represents the true value of brightness; the multiple frames of the third Raw image are visible light images; the brightness values in a visible light image are discontinuous, and the discontinuous brightness values usually need to be predicted; when the near-infrared light image (the true value of brightness) is used as a guide to demosaic the visible light image, the pseudo-textures appearing in the image can be effectively reduced.
  • the image processing model outputs an RGB image (an example of a fused image).
• the spectral range that the first camera module can acquire is visible light from 400nm to 700nm, and the spectral range that the second camera module can acquire is near-infrared light from 700nm to 1100nm; the image information included in the near-infrared image cannot be obtained from the visible light image; therefore, by processing the Raw images (visible light images) collected by the first camera module and the Raw images (near-infrared light images) collected by the second camera module and then performing fusion processing, multi-spectral fusion of the image information of near-infrared light and the image information of visible light can be achieved, so that the fused image includes more detail information; that is, through the image processing method provided in the embodiment of the present application, image enhancement can be performed on the image acquired by the main camera module to enhance the detail information in the image and improve the image quality.
• multi-frame noise reduction, super-resolution processing, or local registration processing can be performed on the multiple frames of the fifth Raw image according to the third Raw image to obtain the sixth Raw image (for example, a single frame of the sixth Raw image); compared with the fifth Raw image, the sixth Raw image may refer to a noise-free, locally registered Raw image; the sixth Raw image, the multiple frames of the third Raw image, and the semantic segmentation image are then fused by the image processing model.
• using the third Raw image as a reference, the fourth Raw image is globally registered by the method shown in FIG. 8 to obtain the fifth Raw image; the third Raw image is then used as a reference to further perform local registration on the fifth Raw image, and the local registration processing can enhance the detail information in the fifth Raw image, so that the local detail information of the fused image is enhanced.
  • FIG. 10 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • the image processing method can be executed by the electronic device shown in FIG. 1 ; the image processing method includes step S501 to step S510 , and step S501 to step S510 will be described in detail below.
  • Step S501 Acquire multiple frames of fifth Raw images (an example of the first registration image).
• multiple frames of second Raw images can be acquired through the second camera module; black level correction and phase defect correction processing are performed on the multiple frames of second Raw images to obtain multiple frames of fourth Raw images; using the third Raw image as a reference, registration processing is performed on the fourth Raw images to obtain the fifth Raw images.
• the second camera module may include a second lens, a second filter lens, and an image sensor; the spectral range that the second filter lens can pass is near-infrared light (700nm to 1100nm). The second filter lens may be used to absorb light of certain specific wavelength bands and allow light of the near-infrared band to pass through.
  • Step S502 acquiring a third Raw image (an example of the third image).
  • the third Raw image may refer to the first frame of Raw image in the multiple frames of the third Raw image.
  • multiple frames of first Raw images may be acquired by the first camera module, and black level correction and phase defect correction processing may be performed on the multiple frames of the first Raw images to obtain multiple frames of third Raw images.
  • Step S503 feature splicing processing.
  • feature splicing processing is performed on the fifth Raw image and the third Raw image in multiple frames to obtain image features.
  • Step S504 image processing.
• the image processing may include, but is not limited to, one or more of multi-frame noise reduction, super-resolution processing, local registration processing, or fusion processing.
• since the noise of an image mainly comes from a Poisson distribution and a Gaussian distribution, the Gaussian-distributed noise averages to approximately zero when multiple frames of images are superimposed and averaged; therefore, denoising through multiple frames of images can improve the image denoising effect.
  • Step S505 obtaining the sixth Raw image (an example of the fourth image).
  • the sixth Raw image may refer to a Raw image that is locally registered without noise.
  • Step S506 acquiring multiple frames of third Raw images.
  • multiple frames of first Raw images may be acquired by the first camera module, and black level correction and phase defect correction processing may be performed on the multiple frames of the first Raw images to obtain multiple frames of third Raw images.
• the first camera module may include a first lens, a first filter lens, and an image sensor; the spectral range that the first filter lens can pass is visible light (400nm to 700nm). The first filter lens may be used to absorb light of certain specific wavelength bands and allow light of the visible light band to pass through.
  • Step S507 acquiring a semantically segmented image.
  • the first frame of the third Raw image among the multiple frames of the third Raw image may be processed by a semantic segmentation algorithm to obtain the semantic segmentation image.
  • the first frame of the fourth Raw image among the multiple frames of the fourth Raw image may be processed according to the semantic segmentation algorithm to obtain the semantic segmentation image.
  • the semantic segmentation algorithm may include a multi-instance segmentation algorithm; labels of various regions in the image may be output through the semantic segmentation algorithm.
• fusion processing can be performed on the multiple frames of the third Raw image and the multiple frames of the fifth Raw image to obtain a fused image; by introducing the semantic segmentation image in the fusion process, the local image information to be fused can be determined; for example, partial image information in the multiple frames of the third Raw image and the multiple frames of the fifth Raw image may be selected through the semantic segmentation image for fusion processing, so as to increase the local detail information of the fused image.
• since the third Raw image is, for example, a visible light image and the fifth Raw image is, for example, a near-infrared light image, multi-spectral fusion of near-infrared light image information and visible light image information can be realized, so that the fused image includes more detail information and the detail information in the image can be enhanced.
  • Step S508 feature splicing processing.
• feature splicing processing is performed on the single frame of the sixth Raw image, the multiple frames of the third Raw image, and the semantic segmentation image to obtain multi-channel image features.
• the feature splicing process refers to concatenating images along the channel dimension, that is, superimposing the numbers of channels of the images.
• the sixth Raw image is a single-channel image; the third Raw image is a three-channel image; and the semantic segmentation image is a single-channel image.
  • Step S509 image processing model.
  • multi-channel image features are input to an image processing model for fusion processing.
  • the image processing model can be used to fuse images in the Raw color space; the image processing model is a pre-trained neural network.
• the image processing model can be used for denoising processing, demosaic processing, and fusion processing; for example, the image processing model can perform denoising processing, demosaic processing, and fusion processing on the sixth Raw image and the multiple frames of the third Raw image based on the semantic segmentation image.
  • Step S510 RGB image.
  • the image processing model outputs an RGB image (an example of a fused image).
• the electronic device includes a first camera module and a second camera module; the spectral range that the first camera module can obtain is visible light from 400nm to 700nm, and the spectral range that the second camera module can obtain is near-infrared light from 700nm to 1100nm; since the image information included in the near-infrared image cannot be obtained from the visible light image, processing and then fusing the Raw images (visible light images) collected by the first camera module and the Raw images (near-infrared light images) collected by the second camera module can realize multi-spectral fusion of near-infrared light image information and visible light image information, so that the fused image includes more detail information; that is, the image processing method provided in the embodiment of the present application can perform image enhancement on the image acquired by the main camera module, enhance the detail information in the image, and improve the image quality.
• the image processing method shown in FIG. 5 or FIG. 6 can be used to perform fusion processing on the images collected by the first camera module and the second camera module; in addition, in order to enhance the effect of the fusion processing, the image processing method shown in FIG. 11 can also be used to fuse the images collected by the first camera module and the second camera module; for example, the RGB image collected by the first camera module can be enhanced by the image collected by the second camera module.
  • FIG. 11 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • the image processing method can be executed by the electronic device shown in FIG. 1; the image processing method includes step S601 to step S619, and step S601 to step S619 will be described in detail below.
  • Step S601 acquiring a first Raw image.
  • the first Raw image can be acquired through the first camera module;
• the first camera module can include a first lens, a first filter lens, and an image sensor; the spectral range that the first filter lens can pass is visible light (400nm to 700nm).
  • Step S602 noise reduction processing.
  • noise reduction processing is performed on the first Raw image to obtain the first Raw image after noise reduction processing.
  • the noise information in the image can be effectively reduced by performing noise reduction processing on the first Raw image, so that when the fusion processing is performed on the first Raw image subsequently, the image quality of the fusion-processed image can be improved.
  • step S603 may be performed after step S601 is performed.
  • Step S603 demosaic processing.
  • demosaic processing is performed on the first Raw image after the noise reduction processing.
  • Step S604 RGB image.
  • demosaic processing is performed on the first Raw image after the noise reduction processing to obtain an RGB image.
  • Step S605 extract the V channel image from the HSV image.
  • the RGB image is converted into the HSV color space to obtain the HSV image; and the V channel image of the HSV image is extracted.
• in order to obtain the luminance channel corresponding to the RGB image, the RGB image may be converted to another color space.
• the above HSV color space is used as an example; a YUV color space, or another color space from which the brightness channel of an image can be obtained, may also be used.
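• Extracting the V channel of step S605 is particularly simple: in the standard RGB-to-HSV conversion, V (value) is just the per-pixel maximum of the R, G, and B components. A minimal sketch with hypothetical pixel values:

```python
import numpy as np

rgb = np.array([[[0.2, 0.6, 0.4],
                 [1.0, 0.0, 0.0]]])   # a 1x2 RGB image, values in [0, 1]

# In HSV, the V (value) channel is the per-pixel maximum of R, G, B
v = rgb.max(axis=-1)
assert v.tolist() == [[0.6, 1.0]]
```

For a YUV color space the luminance channel would instead be a weighted sum of R, G, and B, but the role in the pipeline (isolating brightness for layer decomposition) is the same.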
  • Step S606 filter processing.
  • the V channel image is processed through an edge-preserving smoothing filter.
• the edge-preserving smoothing filter may include, but is not limited to: a guided filter, a bilateral filter, and a least-squares filter.
  • edge information in the image can be effectively preserved during the filtering process.
  • Step S607 the first detail layer image.
  • the image of the V channel is processed by an edge-preserving smoothing filter to obtain the first detail layer image.
  • the detail layer of the image includes high-frequency information of the image; for example, the detail layer of the image includes edge information, texture information, and the like of the object.
  • Step S608 the first base layer image.
  • the image of the V channel is processed by an edge-preserving smoothing filter to obtain the image of the first base layer.
  • the base layer of the image includes low-frequency information of the image; for an image, the part except the detail layer is the base layer; for example, the base layer of the image includes content information within the edge of the object.
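• The base/detail decomposition of steps S607 and S608 can be sketched as follows. For brevity this sketch uses a plain box blur as a stand-in for the edge-preserving filter named in the text (a guided or bilateral filter would be used in practice); the key property shown is that base + detail exactly reconstructs the input.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple smoothing filter (stand-in for a guided/bilateral filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

v = np.random.default_rng(1).random((16, 16))  # hypothetical V-channel image
base = box_blur(v)       # base layer: low-frequency content
detail = v - base        # detail layer: high-frequency residual (edges, texture)

assert np.allclose(base + detail, v)   # the decomposition is exact
```

An edge-preserving filter improves on the box blur by keeping strong edges in the base layer, so the detail layer contains texture rather than halos around edges.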
  • Step S609 acquiring a second Raw image.
  • the second Raw image can be acquired through the second camera module;
• the second camera module can include a second lens, a second filter lens, and an image sensor; the spectral range that the second filter lens can pass is near-infrared light (700nm to 1100nm). The second Raw image collected by the second camera module may refer to a single-channel image; the second Raw image is used to represent the intensity information of the superimposed photons; for example, the second Raw image can be a single-channel grayscale image.
  • Step S610 noise reduction processing.
  • noise reduction processing is performed on the second Raw image to obtain the second Raw image after noise reduction processing.
  • the noise information in the image can be effectively reduced by performing noise reduction processing on the second Raw image, so that when the fusion processing is performed on the second Raw image later, the image quality of the fusion-processed image is improved.
  • step S612 may be performed after step S609 is performed.
• step S610 is used as an example for illustration; the second Raw image may also be converted into an NIR image in other ways, which is not limited in this application.
  • Step S611 NIR image.
  • conversion processing is performed on the second Raw image to obtain an NIR image.
  • Step S612 filter processing.
  • the NIR image is processed through an edge-preserving smoothing filter.
  • Step S613 the second detail layer image.
  • the NIR image is processed by an edge-preserving smoothing filter to obtain a second detail layer image.
  • Step S614 the second base layer image.
  • the NIR image is processed by an edge-preserving smoothing filter to obtain a second base layer image.
  • Step S615 acquiring a semantically segmented image.
• local area information in the second detail layer image can be obtained based on the semantic segmentation image; the partial image information in the second detail layer image is fused with the first detail layer image, so as to selectively achieve detail enhancement of local areas in the image.
• the second detail layer image is multiplied by the semantic segmentation image to obtain the detail layer information in the NIR image. Multiplying the second detail layer image by the semantic segmentation image may refer to multiplying the second detail layer image by the pixel values of the corresponding pixels in the semantic segmentation image. The second detail layer image includes high-frequency information in the NIR image; the local detail information in the image can be selectively enhanced according to the semantic segmentation image.
• the second detail layer includes all of the detail information in the NIR image; when a visible light image captures a scene, some image detail information may be lost for the parts of the scene that are far away from the electronic device; therefore, the semantic segmentation image can be multiplied by the second detail layer image to select local areas in the second detail layer image, and the local areas in the second detail layer image can be fused with the first detail layer image, so as to selectively enhance the details of local areas in the image.
  • Step S617 fusion processing.
  • fusion processing is performed on the detail layer information of the NIR image, the first detail layer image, and the first base layer image.
• the detail layer information of the NIR image is superimposed on the first detail layer image; and the first detail layer image is superimposed on the first base layer image.
  • the HSV image is obtained after fusion processing.
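• Steps S615 to S617 can be sketched end to end: mask the NIR detail layer with the segmentation image, then superimpose the layers. All arrays below are synthetic placeholders; the mask enhances only the right half of the frame to make the selectivity visible.

```python
import numpy as np

h, w = 16, 16
rng = np.random.default_rng(2)
base1 = rng.random((h, w))            # first base layer (from the V channel)
detail1 = rng.random((h, w)) * 0.1    # first detail layer (from the V channel)
detail2 = rng.random((h, w)) * 0.1    # second detail layer (from the NIR image)
seg = np.zeros((h, w))
seg[:, w // 2:] = 1.0                 # hypothetical mask: enhance right half only

nir_detail = detail2 * seg            # select local NIR detail via the mask
fused_v = base1 + detail1 + nir_detail  # superimpose the layers (step S617)

# Left half is untouched; right half gains the NIR detail
assert np.allclose(fused_v[:, : w // 2], (base1 + detail1)[:, : w // 2])
assert np.allclose(fused_v[:, w // 2:], (base1 + detail1 + detail2)[:, w // 2:])
```

The fused V channel would then replace the original V channel of the HSV image before the conversion back to RGB in step S619.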
  • the above HSV image may also be in other color spaces; for example, it may be an image in HSL color space.
  • the above-mentioned HSV image may also be an image in a YUV color space; or, an image in another color space from which a brightness channel can be extracted.
  • Step S619 RGB image.
  • the fused HSV image is converted into an RGB color space to obtain a fused RGB image.
• the RGB image collected by the first camera module and the NIR image collected by the second camera module are filtered through an edge-preserving smoothing filter to respectively obtain the detail layer image and base layer image included in the RGB image and those included in the NIR image.
  • the image processing method shown in FIG. 5, FIG. 6 or FIG. 11 can be used to perform fusion processing on the images collected by the first camera module and the second camera module;
• the image processing method shown in FIG. 12 can also be used to perform fusion processing on the images collected by the first camera module and the second camera module; for example, by fusing similar image information in the RGB image and the NIR image, the ghosting problem in the fused image can be effectively avoided.
• the low-frequency information of the image is enhanced; by converting the RGB image to the YUV color space, the high-frequency information of the Y channel is superimposed on the enhanced low-frequency information, so that the high-frequency part of the image is also enhanced.
  • Fig. 12 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • the image processing method can be executed by the electronic device shown in FIG. 1; the image processing method includes step S701 to step S715, and step S701 to step S715 will be described in detail below.
  • Step S701 acquiring a first Raw image.
  • the first Raw image can be acquired through the first camera module;
• the first camera module can include a first lens, a first filter lens, and an image sensor; the spectral range that the first filter lens can pass is visible light (400nm to 700nm).
  • Step S702 noise reduction processing.
  • noise reduction processing is performed on the first Raw image to obtain the first Raw image after noise reduction processing.
  • step S703 may be performed after step S701 is performed.
  • Step S703 demosaic processing, to obtain the first RGB image.
  • demosaic processing is performed on the first Raw image after the noise reduction processing to obtain the first RGB image.
  • Step S704 Process the first RGB image through a Gaussian low-pass filter to obtain low-frequency information in the first RGB image.
  • the first RGB image is processed through a Gaussian low-pass filter to filter out high-frequency detail features of the first RGB image to obtain low-frequency information of the first RGB image.
• the low-frequency information of an image refers to the areas where the gray value changes slowly in the image; for an image, the part other than the high-frequency information is the low-frequency information; for example, the low-frequency information of the image can include the content information within the edges of objects.
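• The Gaussian low-pass step of S704 can be sketched with a separable Gaussian blur (kernel size and sigma are illustrative assumptions): a single bright pixel is a purely high-frequency feature, and the filter spreads and attenuates it, leaving only slowly varying content.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def gaussian_lowpass(img, sigma=2.0):
    """Separable Gaussian blur: keeps slowly varying (low-frequency) content."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

img = np.zeros((32, 32))
img[16, 16] = 1.0                 # a single high-frequency spike
low = gaussian_lowpass(img)

assert low.max() < 0.1            # the spike is strongly attenuated
assert np.isclose(low.sum(), 1.0) # total energy is preserved, only spread out
```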
  • Step S705 acquiring a second Raw image.
  • the second Raw image can be acquired through the second camera module;
• the second camera module can include a second filter lens, a second lens and an image sensor, and the spectral range that the second filter lens passes is near-infrared light (700nm~1100nm).
• the second Raw image collected by the second camera module may refer to a single-channel image; the second Raw image is used to represent the superimposed intensity information of photons; for example, the second Raw image can be a single-channel grayscale image.
  • Step S706 noise reduction processing.
  • noise reduction processing is performed on the second Raw image.
  • step S708 may be performed after step S705 is performed.
• Step S707 obtaining an NIR image.
• obtaining the NIR image through the noise reduction processing of step S706 is used as an example for illustration; the second Raw image may also be converted into an NIR image in other ways; this application does not make any limitation thereto.
  • Step S708 performing registration processing on the NIR image to obtain a registered NIR image.
  • a registration process is performed on the NIR image with the first RGB image as a reference to obtain a registered NIR image.
• since the first camera module and the second camera module are arranged at different positions in the electronic device, there is a certain baseline distance between the first camera module and the second camera module; that is, there is a certain parallax between the image collected by the first camera module and the image collected by the second camera module, so the images collected by the two need to be registered.
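The application does not name a particular registration algorithm. As a simplified, hedged sketch, a purely global translation between the reference RGB frame and the NIR frame can be estimated by phase correlation; real registration for a stereo baseline would also need rotation handling and parallax-dependent local warping.

```python
import numpy as np

def estimate_shift(reference, moving):
    """Estimate the integer (dy, dx) roll to apply to `moving` so it aligns
    with `reference`, via phase correlation (normalized FFT cross-power)."""
    F_ref = np.fft.fft2(reference.astype(np.float64))
    F_mov = np.fft.fft2(moving.astype(np.float64))
    cross = F_ref * np.conj(F_mov)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices beyond the midpoint to negative shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx

# Synthetic check: displace an image and recover the aligning roll.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -2), axis=(0, 1))
dy, dx = estimate_shift(img, shifted)
aligned = np.roll(shifted, (dy, dx), axis=(0, 1))
assert np.allclose(aligned, img)
```

Applying the estimated roll to the NIR frame corresponds, in this toy setting, to producing the "registered NIR image" of step S708.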
• Step S709 fusion model processing.
  • the low-frequency information of the first RGB image and the registered NIR image are input to the fusion model for processing.
  • Step S710 outputting the second RGB image.
• the fusion model outputs a second RGB image.
  • the fusion model may refer to a neural network pre-trained by a large amount of sample data; the fusion model is used to perform fusion processing on input images and output a fusion processed image.
  • Step S711 extracting the image of the Y channel in the YUV image.
  • the second RGB image is converted into a YUV color space to obtain a first YUV image; and an image of a Y channel in the first YUV image is extracted.
• in order to obtain the luminance channel corresponding to the second RGB image, the second RGB image may be converted into the YUV color space to obtain the first YUV image, and the Y channel of the first YUV image may be extracted.
  • the RGB image can also be converted to other color spaces capable of extracting brightness channels, which is not limited in this application.
  • Step S712 converting the first RGB image to YUV.
  • the first RGB image is converted into a YUV color space to obtain a second YUV image.
  • Step S713 process the Y channel.
  • the Y channel of the second YUV image is processed.
• Gaussian blur is performed on the Y channel of the second YUV image to obtain a blurred image of the Y channel; the blurred image of the Y channel includes the low-frequency information of the Y channel; the blurred image of the Y channel is subtracted from the image of the Y channel to obtain the high-frequency information of the Y channel.
  • the high-frequency information of the image refers to the region in the image where the gray value changes rapidly; for example, the high-frequency information in the image includes edge information, texture information, etc. of the object.
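The high-frequency extraction of step S713 can be sketched as an unsharp-mask-style decomposition: blur the Y channel to obtain its low-frequency part, then subtract it from the original Y channel to isolate the high-frequency part. The sigma below is an illustrative assumption, not a value from the application.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_y_channel(y, sigma=3.0):
    """Split a luminance (Y) channel into low- and high-frequency parts:
    the Gaussian blur keeps slowly varying regions (low frequency), and the
    residual Y - blur(Y) keeps edges and texture (high frequency)."""
    y = y.astype(np.float32)
    low = gaussian_filter(y, sigma=sigma)
    high = y - low
    return low, high

# The decomposition is exact: low + high reconstructs Y.
y = np.linspace(0, 255, 64 * 64, dtype=np.float32).reshape(64, 64)
low, high = split_y_channel(y)
assert np.allclose(low + high, y, atol=1e-3)
```

A mode-dependent gain, as mentioned above, would then scale `high` before the Y-channel addition of step S714.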
  • different gain coefficients can be determined according to different camera modes selected by the user; the Y channel of the second YUV image can be processed according to the following formula:
  • Step S714 adding Y channels.
  • the Y channel of the first YUV image is added to the Y channel of the processed second YUV image to obtain the processed YUV image.
• in step S710, the low-frequency information of the NIR image and the first RGB image is fused to obtain the second RGB image; in steps S713 and S714, the high-frequency information part of the image is processed.
  • Step S715 obtaining a third RGB image.
  • the processed YUV image is converted into RGB color space to obtain a third RGB image.
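The conversions between RGB and the YUV color space used throughout steps S711 to S715 are not pinned down by the application; a common choice, assumed here for illustration, is the analog BT.601 matrix.

```python
import numpy as np

# BT.601 (analog) luma/chroma coefficients -- an assumption; the application
# does not fix a particular RGB <-> YUV conversion.
_RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                     [-0.14713, -0.28886,  0.436  ],
                     [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(rgb):
    """Convert an (..., 3) RGB array to YUV; channel 0 is the Y (luma) plane."""
    return rgb.astype(np.float64) @ _RGB2YUV.T

def yuv_to_rgb(yuv):
    """Invert the linear conversion to return to RGB."""
    return yuv @ np.linalg.inv(_RGB2YUV).T

# Round trip should recover the original RGB values.
rgb = np.random.default_rng(1).random((8, 8, 3))
assert np.allclose(yuv_to_rgb(rgb_to_yuv(rgb)), rgb)
```

Because the conversion is linear and invertible, the enhanced Y plane from step S714 can be recombined with the original U and V planes and mapped back to RGB without loss beyond floating-point rounding.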
• in the embodiment of the present application, the RGB image is collected by the first camera module and the NIR image is collected by the second camera module; the similar image information in the RGB image and the NIR image is fused, which effectively avoids ghosting problems in the fused image.
• for example, through image fusion processing of the low-frequency information in the RGB image and the NIR image, the low-frequency information of the image is enhanced; by converting the RGB image to the YUV color space, the high-frequency information of the Y channel is superimposed on the image whose low-frequency information has been enhanced, so that the high-frequency information part of the image is also enhanced; because only the similar image information in the RGB image and the NIR image is fused, ghosts appearing in the fused image can be effectively reduced.
  • FIG. 13 is a schematic diagram of the effect of the image processing method provided by the embodiment of the present application.
• (a) in Figure 13 is an output image obtained by an existing main camera module;
• (b) in Figure 13 is an output image obtained by the image processing method provided by the embodiment of the present application;
• the image shown in (a) in Figure 13 shows that the detailed information in the mountains is severely distorted; compared with the output image shown in (a) in Figure 13, the output image shown in (b) in Figure 13 has relatively rich detailed information and can clearly display the detailed information of the mountains; through the image processing method provided in the embodiment of the present application, the image obtained by the main camera module can be enhanced to improve the detailed information in the image.
• in a dark scene, the user can turn on the infrared flashlight in the electronic device, collect images through the main camera module and the near-infrared camera module, process the collected images with the image processing method provided by the embodiment of the application, and output the processed image or video.
  • FIG. 14 shows a graphical user interface (graphical user interface, GUI) of an electronic device.
• the GUI shown in (a) in FIG. 14 is the desktop 810 of the electronic device; when the electronic device detects that the user clicks the icon 820 of the camera application (application, APP) on the desktop 810, the camera application can be started, and another GUI shown in (b) in FIG. 14 is displayed; the GUI shown in (b) in FIG. 14 can be the display interface of the camera APP in camera mode, and can include a shooting interface 830; the shooting interface 830 can include a viewfinder frame 831 and controls; for example, the shooting interface 830 may include a control 832 for instructing shooting and a control 833 for instructing to turn on the infrared flash; in the preview state, a preview image may be displayed in real time in the viewfinder frame 831; the preview state can refer to the state after the user turns on the camera and before the user presses the photo/record button, in which the preview image can be displayed in the viewfinder frame in real time.
• after the operation of turning on the infrared flash is detected, the shooting interface shown in (c) in Figure 14 is displayed; the camera module collects images, the collected images are processed through the image processing method provided in the embodiment of the present application, and processed images with enhanced image quality are output.
  • Fig. 15 is a schematic diagram of an optical path of a shooting scene applicable to an embodiment of the present application.
• the electronic device also includes an infrared flashlight; in a dark scene, the electronic device can turn on the infrared flashlight; when the infrared flashlight is turned on, the light sources in the shooting environment can include street lights and the infrared flashlight; the shooting object reflects the light in the shooting environment, so that the electronic device obtains an image of the shooting object.
• when the infrared flashlight is turned on, the reflected light of the shooting object increases, so that the amount of light entering the near-infrared camera module in the electronic device increases; the detail information of the image captured by the near-infrared camera module is thereby increased, and the images collected by the main camera module and the near-infrared camera module are fused through the image processing method of the embodiment of the present application, so that the image acquired by the main camera module can be enhanced and the detail information in the image can be improved.
• in addition, the infrared flash is imperceptible to the user, so the detail information in the image is improved without the user's perception.
  • FIG. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 900 includes a display module 910 and a processing module 920 .
  • the electronic device includes a first camera module and a second camera module, and the second camera module is a near-infrared camera module or an infrared camera module.
• the display module 910 is used for displaying a first interface, and the first interface includes a first control; the processing module 920 is used for detecting a first operation on the first control; in response to the first operation, acquiring N frames of first images and M frames of second images, where the first images are images collected by the first camera module, the second images are images collected by the second camera module, and N and M are both positive integers greater than or equal to 1; obtaining a target image based on the N frames of first images and the M frames of second images; and saving the target image; wherein the processing module 920 is specifically used for:
• performing first image processing on the N frames of the first image to obtain N frames of the third image, where the image quality of the N frames of the third image is higher than the image quality of the N frames of the first image; performing second image processing on the M frames of the second image to obtain M frames of the fourth image, where the image quality of the M frames of the fourth image is higher than the image quality of the M frames of the second image; fusing the N frames of the third image and the M frames of the fourth image based on a semantically segmented image to obtain a fused image, where the semantically segmented image is obtained based on any frame image in the N frames of the first image or any frame image in the M frames of the second image, and the detailed information of the fused image is better than the detailed information of the N frames of the first image; and performing third image processing on the fused image to obtain the target image.
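The mask-guided fusion summarized above can be sketched as a per-pixel weighted blend, where a semantic segmentation mask selects how much NIR-derived detail is mixed into each region. This simple linear blend is an illustrative stand-in for the pre-trained fusion network described in the application.

```python
import numpy as np

def fuse_with_mask(rgb_y, nir, mask):
    """Blend an RGB luminance plane with an aligned NIR plane, weighted by a
    semantic-segmentation mask in [0, 1]: mask==1 takes the NIR value,
    mask==0 keeps the RGB value."""
    rgb_y = rgb_y.astype(np.float32)
    nir = nir.astype(np.float32)
    mask = np.clip(mask.astype(np.float32), 0.0, 1.0)
    return (1.0 - mask) * rgb_y + mask * nir

rgb_y = np.full((4, 4), 100.0)
nir = np.full((4, 4), 200.0)
mask = np.zeros((4, 4))
mask[:2] = 1.0   # e.g. a hypothetical "sky/mountain" segment favoring NIR detail
fused = fuse_with_mask(rgb_y, nir, mask)
assert np.all(fused[:2] == 200.0) and np.all(fused[2:] == 100.0)
```

In the application's scheme, the mask comes from a semantic segmentation algorithm applied to one of the input frames, so the blend weights follow scene content rather than fixed regions.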
  • the processing module 920 is specifically configured to:
  • the first registration processing is global registration processing.
  • the second registration processing is local registration processing.
  • the processing module 920 is specifically configured to:
• in some embodiments, the electronic device further includes an infrared flash lamp, and the processing module 920 is further configured to turn on the infrared flash lamp in a dark light scene, where the dark light scene refers to a shooting scene in which the amount of light entering the electronic device is less than a preset threshold; acquiring the N frames of the first image and the M frames of the second image includes: acquiring the N frames of the first image and the M frames of the second image with the infrared flash lamp turned on.
• in some embodiments, the first interface includes a second control, and the processing module 920 is specifically configured to detect a second operation on the second control; the infrared flashlight is turned on in response to the second operation.
  • the processing module 920 is specifically configured to:
  • the third image of N frames and the fourth image of M frames are fused by an image processing model to obtain the fused image, and the image processing model is a pre-trained neural network.
  • the semantically segmented image is obtained by processing the third image in the first frame of the N frames of third images by using a semantic segmentation algorithm.
• in some embodiments, the first interface refers to a photographing interface, and the first control refers to a control for instructing photographing;
• in some embodiments, the first interface refers to a video recording interface, and the first control refers to a control for instructing video recording;
• in some embodiments, the first interface refers to a video call interface, and the first control refers to a control for instructing a video call.
• it should be noted that a "module" here may be implemented in the form of software and/or hardware, which is not specifically limited.
  • a “module” may be a software program, a hardware circuit or a combination of both to realize the above functions.
• for example, the hardware circuitry may include application specific integrated circuits (ASICs), electronic circuits, processors for executing one or more software or firmware programs (such as shared processors, dedicated processors, or group processors), memory, merged logic circuits, and/or other suitable components that support the described functionality.
  • the units of each example described in the embodiments of the present application can be realized by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the present application.
  • FIG. 17 shows a schematic structural diagram of an electronic device provided by the present application.
  • the dotted line in FIG. 17 indicates that this unit or this module is optional; the electronic device 1000 can be used to implement the methods described in the foregoing method embodiments.
  • the electronic device 1000 includes one or more processors 1001, and the one or more processors 1001 can support the electronic device 1000 to implement the image processing method in the method embodiment.
  • Processor 1001 may be a general purpose processor or a special purpose processor.
  • the processor 1001 may be a central processing unit (central processing unit, CPU), a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic devices such as discrete gates, transistor logic devices, or discrete hardware components.
  • the processor 1001 can be used to control the electronic device 1000, execute software programs, and process data of the software programs.
  • the electronic device 1000 may further include a communication unit 1005, configured to implement signal input (reception) and output (send).
  • the electronic device 1000 can be a chip
  • the communication unit 1005 can be an input and/or output circuit of the chip, or the communication unit 1005 can be a communication interface of the chip, and the chip can be used as a component of a terminal device or other electronic devices .
  • the electronic device 1000 may be a terminal device, and the communication unit 1005 may be a transceiver of the terminal device, or the communication unit 1005 may be a transceiver circuit of the terminal device.
• the electronic device 1000 may include one or more memories 1002, on which a program 1004 is stored; the program 1004 may be run by the processor 1001 to generate instructions 1003, so that the processor 1001 executes the image processing method described in the above method embodiments according to the instructions 1003.
  • data may also be stored in the memory 1002 .
  • the processor 1001 may also read data stored in the memory 1002, the data may be stored in the same storage address as the program 1004, and the data may also be stored in a different storage address from the program 1004.
  • the processor 1001 and the memory 1002 may be set separately, or may be integrated together, for example, integrated on a system-on-chip (system on chip, SOC) of a terminal device.
  • the memory 1002 can be used to store the related program 1004 of the image processing method provided in the embodiment of the present application
  • the processor 1001 can be used to call the related program 1004 of the image processing method stored in the memory 1002 when performing image processing
• and execute the image processing method of the embodiment of the present application, for example: display a first interface, the first interface including a first control; detect a first operation on the first control; in response to the first operation, acquire N frames of the first image and M frames of the second image, where the first image is an image collected by the first camera module, the second image is an image collected by the second camera module, and N and M are both positive integers greater than or equal to 1; obtain a target image based on the N frames of the first image and the M frames of the second image; and save the target image; where obtaining the target image based on the N frames of the first image and the M frames of the second image includes: performing first image processing on the N frames of the first image to obtain N frames of the third image, where the image quality of the N frames of the third image is higher than that of the N frames of the first image; performing second image processing on the M frames of the second image to obtain M frames of the fourth image; fusing the N frames of the third image and the M frames of the fourth image based on a semantically segmented image to obtain a fused image; and performing third image processing on the fused image to obtain the target image.
  • the present application also provides a computer program product, which implements the method of any method embodiment in the present application when the computer program product is executed by the processor 1001 .
  • the computer program product can be stored in the memory 1002, such as the program 1004, and the program 1004 is finally converted into an executable object file that can be executed by the processor 1001 through processes such as preprocessing, compiling, assembling and linking.
  • the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a computer, the image processing method described in any method embodiment in the present application is implemented.
  • the computer program may be a high-level language program or an executable object program.
  • the computer-readable storage medium is, for example, the memory 1002 .
  • the memory 1002 may be a volatile memory or a nonvolatile memory, or, the memory 1002 may include both a volatile memory and a nonvolatile memory.
• the non-volatile memory can be read-only memory (read-only memory, ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • Volatile memory can be random access memory (RAM), which acts as external cache memory.
• by way of example and not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM) and direct rambus random access memory (direct rambus RAM, DR RAM).
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the embodiments of the electronic equipment described above are only illustrative.
• the division of the modules is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
• it should be understood that the sequence numbers of the above processes do not mean the order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
• if the functions described above are realized in the form of software function units and sold or used as independent products, they can be stored in a computer-readable storage medium.
• based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
• the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (read-only memory, ROM), random access memory (random access memory, RAM), magnetic disk, optical disc, or other media that can store program codes.

Abstract

An image processing method and an electronic device, which relate to the field of image processing. The image processing method is applied to the electronic device. The electronic device comprises a first camera module and a second camera module. The second camera module is a near-infrared camera module or an infrared camera module. The image processing method comprises: displaying a first interface, the first interface comprising a first control; detecting a first operation on the first control; in response to the first operation, obtaining N frames of a first image and M frames of a second image, wherein the first image is an image collected by the first camera module, the second image is an image collected by the second camera module, and N and M are both positive integers greater than or equal to 1; obtaining a target image on the basis of the N frames of the first image and the M frames of the second image; and storing the target image. On the basis of the technical scheme of the present application, image enhancement can be carried out on an image obtained by a main camera module in the electronic device, and the image quality is improved.

Description

Image processing method and electronic device

This application claims priority to the Chinese patent application No. 202210023611.2, entitled "Image Processing Method and Electronic Equipment", filed with the State Intellectual Property Office on January 10, 2022, the entire contents of which are incorporated herein by reference.

Technical Field

The present application relates to the field of image processing, and in particular, to an image processing method and electronic equipment.

Background

With the rapid development and wide application of multimedia technology and network technology, people use a large amount of image information in daily life and production activities. In some shooting scenes with poor lighting conditions, such as night scenes or dense fog scenes, the amount of light entering the electronic device is small, so that part of the image detail information is lost in the image acquired by the camera module of the main camera. To improve image quality, image enhancement processing can usually be used; image enhancement processing is a method for enhancing useful information in an image and improving its visual effect.

Therefore, how to enhance the image acquired by the camera module of the main camera and improve the image quality has become an urgent problem to be solved.

Summary of the Invention

The present application provides an image processing method and electronic equipment, which can perform image enhancement on images acquired by the camera module of a main camera to improve image quality.
In a first aspect, an image processing method is provided, applied to an electronic device, where the electronic device includes a first camera module and a second camera module, and the second camera module is a near-infrared camera module or an infrared camera module; the image processing method includes:

displaying a first interface, where the first interface includes a first control;

detecting a first operation on the first control;

in response to the first operation, acquiring N frames of first images and M frames of second images, where the first images are images collected by the first camera module, the second images are images collected by the second camera module, and N and M are both positive integers greater than or equal to 1;

obtaining a target image based on the N frames of first images and the M frames of second images; and

saving the target image;

where obtaining the target image based on the N frames of first images and the M frames of second images includes:

performing first image processing on the N frames of first images to obtain N frames of third images, where the image quality of the N frames of third images is higher than that of the N frames of first images;

performing second image processing on the M frames of second images to obtain M frames of fourth images, where the image quality of the M frames of fourth images is higher than that of the M frames of second images;

fusing the N frames of third images and the M frames of fourth images based on a semantically segmented image to obtain a fused image, where the semantically segmented image is obtained based on any frame image in the N frames of first images or any frame image in the M frames of second images, and the detail information of the fused image is better than that of the N frames of first images; and

performing third image processing on the fused image to obtain the target image.
可选地,第一相机模组可以为可见光相机模组,或者第一相机模组为其他可以获取可见光的相机模组;本申请对第一相机模组不作任何限定。Optionally, the first camera module may be a visible light camera module, or the first camera module may be other camera modules capable of obtaining visible light; this application does not make any limitation on the first camera module.
可选地,第一相机模组中可以包括第一镜片、第一镜头与图像传感器,第一镜片可以通过的光谱范围为可见光(400nm~700nm)。Optionally, the first camera module may include a first lens, a first lens and an image sensor, and the spectral range through which the first lens can pass is visible light (400nm-700nm).
应理解,第一镜片可以是指滤光镜片;第一镜片可以用于吸收某些特定波段的光,让可见光波段的光通过。It should be understood that the first lens may refer to a filter lens; the first lens may be used to absorb light of certain specific wavelength bands and allow light of visible light bands to pass through.
示例性地,第二相机模组中可以包括第二镜片、第二镜头与图像传感器,第二镜片可以通过的光谱范围为近红外光(700nm~1100nm)。Exemplarily, the second camera module may include a second lens, a second lens and an image sensor, and the spectral range that the second lens can pass is near-infrared light (700nm˜1100nm).
应理解,第二镜片可以是指滤光镜片;第二镜片可以用于吸收某些特定波段的光,让近红外光波段的光通过。It should be understood that the second lens may refer to a filter lens; the second lens may be used to absorb light of certain specific wavelength bands and allow light of near-infrared wavelength bands to pass through.
In the embodiments of the present application, the electronic device may include a first camera module and a second camera module, where the second camera module is a near-infrared camera module or an infrared camera module (for example, with an acquired spectral range of 700 nm to 1100 nm). A first image is captured by the first camera module, and a second image is captured by the second camera module. Because the second image (for example, a near-infrared image) includes image information that cannot be obtained from the first image (for example, a visible light image), and similarly the third image includes image information that cannot be obtained from the fourth image, fusing the third image (for example, a visible light image) with the fourth image (for example, a near-infrared image) achieves multi-spectral fusion of near-infrared image information and visible light image information, so that the fused image includes more detail information. Therefore, the image processing method provided in the embodiments of the present application can perform image enhancement on images acquired by the main camera module, enhancing the detail information in the images and improving image quality.
It should be understood that the image quality of the N frames of third images being higher than that of the N frames of first images may mean that the N frames of third images contain less noise than the N frames of first images; or it may mean that, when an image quality evaluation algorithm evaluates the N frames of third images and the N frames of first images, the evaluation result is that the image quality of the N frames of third images is higher than that of the N frames of first images; and so on. This is not limited in any way in this application.
It should also be understood that the image quality of the M frames of fourth images being higher than that of the M frames of second images may mean that the M frames of fourth images contain less noise than the M frames of second images; or it may mean that, when an image quality evaluation algorithm evaluates the M frames of fourth images and the M frames of second images, the evaluation result is that the image quality of the M frames of fourth images is higher than that of the M frames of second images; and so on. This is not limited in any way in this application.
It should also be understood that the detail information of the fused image being better than that of the N frames of first images may mean that the fused image contains more detail information than any one of the N frames of first images; or it may mean that the definition of the fused image is better than that of any one of the N frames of first images. For example, the detail information may include edge information and texture information of the photographed object.
In the embodiments of the present application, the N frames of third images and the M frames of fourth images may be fused based on a semantic segmentation image to obtain a fused image. Introducing the semantic segmentation image into the fusion processing makes it possible to determine the local image information to be fused; for example, local image information in the N frames of third images and the M frames of fourth images can be selected through the semantic segmentation image for fusion processing, thereby increasing the local detail information of the fused image. In addition, by fusing the third image (for example, a visible light image) with the fourth image (for example, a near-infrared image), multi-spectral fusion of near-infrared image information and visible light image information can be achieved, so that the fused image includes more detail information; that is, the detail information in the image is enhanced.
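As a hedged illustration only (the patent does not fix a specific fusion rule), the mask-guided fusion described above can be sketched as a per-pixel blend, where a binary semantic segmentation mask selects which regions take detail from the near-infrared frame. The array shapes and the simple alpha-blend rule are assumptions chosen for brevity:

```python
def fuse_with_mask(visible, nir, mask, alpha=0.5):
    """Blend a visible-light image with a near-infrared image.

    Where mask == 1 (e.g. pixels labeled "green scenery" or "distant
    scenery" by semantic segmentation), the NIR detail is weighted in
    with factor `alpha`; elsewhere the visible pixel is kept as-is.
    All inputs are equally sized 2-D grayscale arrays (nested lists).
    """
    h, w = len(visible), len(visible[0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                fused[y][x] = alpha * nir[y][x] + (1 - alpha) * visible[y][x]
            else:
                fused[y][x] = visible[y][x]
    return fused

# Tiny 2x2 example: only the top-left pixel is selected by the mask.
vis = [[100.0, 100.0], [100.0, 100.0]]
nir = [[200.0, 200.0], [200.0, 200.0]]
mask = [[1, 0], [0, 0]]
out = fuse_with_mask(vis, nir, mask)
# top-left pixel becomes 0.5 * 200 + 0.5 * 100 = 150; the rest stay 100
```

A learned model (as in the pre-trained neural network variant below) would replace this hand-written blend, but the role of the segmentation mask is the same: it localizes where NIR information is injected.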
Optionally, the M frames of fourth images are obtained by performing the second image processing on the second images captured by the near-infrared camera module or the infrared camera module; therefore, the M frames of fourth images include the reflection information of the photographed object under near-infrared light. Because near-infrared light is strongly reflected by green scenery, more detail information of green scenery is obtained in images captured by the near-infrared or infrared camera module; green-scenery image regions can be selected from the fourth images through the semantic segmentation image for fusion processing, thereby enhancing the detail information of green scenery in dark regions of the image.
Optionally, the M frames of fourth images are obtained by performing the second image processing on the second images captured by the near-infrared camera module or the infrared camera module. Because the spectral range that the near-infrared or infrared camera module can acquire is near-infrared light, whose wavelengths are relatively long, near-infrared light has a relatively strong diffraction capability. For scenes with clouds and fog, or scenes photographing distant objects, the images captured by the near-infrared or infrared camera module have a stronger sense of clarity; that is, they include more detail information of distant subjects (for example, texture information of distant mountains). Distant image regions selected from the fourth image through the semantic segmentation image can be fused with nearby image regions selected from the third image through the semantic segmentation image, thereby enhancing the detail information in the fused image.
With reference to the first aspect, in some implementations of the first aspect, the performing second image processing on the M frames of second images to obtain M frames of fourth images includes:
performing black level correction processing and/or phase defect pixel correction processing on the M frames of second images to obtain M frames of fifth images; and
performing a first registration process on the M frames of fifth images by using any third image of the N frames of third images as a reference, to obtain the M frames of fourth images.
Optionally, the global registration process may mean mapping each of the M frames of fourth images as a whole onto the first frame of the third images, using the first frame of the third images as a reference.
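As an illustrative sketch only (the patent does not prescribe an algorithm), a global registration that maps one whole frame onto a reference can be approximated by estimating a single whole-image translation and applying it to every pixel. The brute-force search over small integer offsets below is an assumption chosen for clarity; a real pipeline would typically estimate a full homography:

```python
def estimate_global_shift(ref, img, max_shift=2):
    """Find the integer (dy, dx) that best aligns `img` to `ref`
    by minimizing the sum of absolute differences over the overlap."""
    h, w = len(ref), len(ref[0])
    best = (0, 0)
    best_cost = float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost = 0.0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        cost += abs(ref[y][x] - img[sy][sx])
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

def apply_global_shift(img, dy, dx, fill=0):
    """Map the whole image with one translation (the simplest global
    mapping); pixels shifted in from outside are filled with `fill`."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = img[sy][sx]
    return out

# A 3x3 reference, and the same content shifted right by one pixel.
ref = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
img = [[0, 1, 2], [0, 4, 5], [0, 7, 8]]
dy, dx = estimate_global_shift(ref, img)
aligned = apply_global_shift(img, dy, dx)
```

The key property of global registration, in contrast to the local registration mentioned later, is that one transform is applied uniformly to the entire frame.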
Optionally, black level correction (BLC) is used to correct the black level, where the black level refers to the video signal level at which no line of bright output is produced on a display device that has undergone a certain calibration. Phase defect pixel correction (PDPC) may include phase defect correction (PDC) and bad pixel correction (BPC). The bad pixels handled by BPC are bright or dark points at random positions and are relatively few in number, and BPC can be implemented by a filtering algorithm. Compared with ordinary pixels, phase points are defective pixels at fixed positions and are relatively numerous; PDC needs to remove the phase points based on a known list of phase points.
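A minimal sketch of these two corrections, under assumed parameters (a uniform black level offset of 16 and a 3x3 median filter for BPC; neither value comes from the patent):

```python
def black_level_correction(raw, black_level=16):
    """Subtract the sensor's black level offset, clamping at zero."""
    return [[max(0, p - black_level) for p in row] for row in raw]

def bad_pixel_correction(raw, y, x):
    """Replace one defective pixel with the median of its valid
    3x3 neighborhood (a simple filtering approach to BPC; PDC would
    instead iterate over a known list of fixed phase-point positions)."""
    h, w = len(raw), len(raw[0])
    neighbors = [
        raw[ny][nx]
        for ny in range(y - 1, y + 2)
        for nx in range(x - 1, x + 2)
        if 0 <= ny < h and 0 <= nx < w and (ny, nx) != (y, x)
    ]
    neighbors.sort()
    return neighbors[len(neighbors) // 2]

raw = [[20, 20, 20], [20, 255, 20], [20, 20, 20]]  # one stuck bright pixel
corrected = black_level_correction(raw)             # every 20 becomes 4
fixed_center = bad_pixel_correction(raw, 1, 1)      # median of eight 20s
```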
Optionally, the second image processing may further include, but is not limited to:
automatic white balance (AWB) processing, lens shading correction (LSC), and the like.
Automatic white balance processing is used so that the camera can restore white to white at any color temperature; owing to the influence of color temperature, white paper appears yellowish at low color temperatures and bluish at high color temperatures, and the purpose of white balance is to make a white object appear white, with R = G = B, at any color temperature. Lens shading correction is used to eliminate the problem that the color and brightness at the periphery of an image are inconsistent with those at the image center due to the lens optical system.
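The R = G = B goal above can be illustrated with the classic gray-world assumption (one of many AWB strategies; its use here is an assumption for illustration, not the patent's method): scale the R and B channels so that all channel means match.

```python
def gray_world_awb(pixels):
    """Scale R and B so their means equal the G mean (gray-world AWB).

    `pixels` is a flat list of (r, g, b) tuples; returns corrected tuples.
    A neutral surface then comes out with R = G = B.
    """
    n = len(pixels)
    mean_r = sum(p[0] for p in pixels) / n
    mean_g = sum(p[1] for p in pixels) / n
    mean_b = sum(p[2] for p in pixels) / n
    gain_r = mean_g / mean_r
    gain_b = mean_g / mean_b
    return [(r * gain_r, g, b * gain_b) for r, g, b in pixels]

# A warm (yellowish) cast: R is uniformly too high, B too low.
warm = [(120, 100, 80), (240, 200, 160)]
balanced = gray_world_awb(warm)
```

After correction, the gray patches come out neutral: (100, 100, 100) and (200, 200, 200) up to floating-point rounding.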
It should be understood that the second image processing may include black level correction, phase defect pixel correction, and other Raw-domain image processing algorithms; automatic white balance processing and lens shading correction above merely exemplify the other Raw-domain image processing algorithms, which are not limited in any way in this application.
With reference to the first aspect, in some implementations of the first aspect, the performing the first registration process on the M frames of fifth images by using any third image of the N frames of third images as a reference, to obtain the M frames of fourth images, includes:
performing the first registration process and up-sampling processing on the M frames of fifth images by using any third image of the N frames of third images as a reference, to obtain the M frames of fourth images.
In the embodiments of the present application, the resolution of the fourth image can be adjusted to be the same as that of the third image, which facilitates the fusion processing of the N frames of third images and the M frames of fourth images.
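For illustration (the interpolation kernel is an assumption; the patent only requires that the resolutions match before fusion), nearest-neighbor up-sampling by an integer factor looks like:

```python
def upsample_nearest(img, factor=2):
    """Upscale a 2-D image by an integer factor with nearest-neighbor
    interpolation, so a lower-resolution NIR frame can be brought to
    the visible frame's resolution before fusion."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]  # repeat columns
        for _ in range(factor):                         # repeat rows
            out.append(list(wide))
    return out

small = [[1, 2], [3, 4]]
big = upsample_nearest(small)  # 4x4 result, each pixel duplicated 2x2
```

A production pipeline would more likely use bilinear or bicubic interpolation; nearest-neighbor is shown only because its result is easy to verify by eye.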
With reference to the first aspect, in some implementations of the first aspect, the performing second image processing on the M frames of second images to obtain M frames of fourth images includes:
performing black level correction processing and/or phase defect pixel correction processing on the M frames of second images to obtain M frames of fifth images;
performing a first registration process on the M frames of fifth images by using any third image of the N frames of third images as a reference, to obtain M frames of first registration images; and
performing a second registration process on the M frames of first registration images by using the arbitrary third image as a reference, to obtain the M frames of fourth images.
With reference to the first aspect, in some implementations of the first aspect, the performing a first registration process on the M frames of fifth images by using any third image of the N frames of third images as a reference, to obtain M frames of first registration images, includes:
performing the first registration process and up-sampling processing on the M frames of fifth images by using any third image of the N frames of third images as a reference, to obtain the M frames of first registration images.
With reference to the first aspect, in some implementations of the first aspect, the first registration process is a global registration process.
With reference to the first aspect, in some implementations of the first aspect, the second registration process is a local registration process.
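In contrast to global registration, which applies one mapping to the whole frame, local registration estimates a separate displacement per region. A hedged sketch using per-block matching (the block size, search range, and SAD cost are illustrative assumptions, not the patented method):

```python
def match_block(ref, img, y0, x0, size=2, search=1):
    """For the `size`x`size` block of `ref` at (y0, x0), find the
    offset (dy, dx) in `img`, within +/- `search`, with minimal sum of
    absolute differences. Local registration repeats this per block
    instead of using a single offset for the entire frame."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= y0 + dy and y0 + dy + size <= h
                    and 0 <= x0 + dx and x0 + dx + size <= w):
                continue
            cost = sum(
                abs(ref[y0 + y][x0 + x] - img[y0 + dy + y][x0 + dx + x])
                for y in range(size) for x in range(size)
            )
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# The 2x2 block at (0, 0) in `ref` appears shifted down by 1 in `img`.
ref = [[5, 6, 0], [7, 8, 0], [0, 0, 0]]
img = [[0, 0, 0], [5, 6, 0], [7, 8, 0]]
offset = match_block(ref, img, 0, 0)  # (dy, dx) = (1, 0)
```

Running this for every block yields a dense displacement field, which is what lets local registration correct parallax between the two camera modules that a single global transform cannot.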
With reference to the first aspect, in some implementations of the first aspect, the performing first image processing on the N frames of first images to obtain N frames of third images includes:
performing black level correction processing and/or phase defect pixel correction processing on the N frames of first images to obtain the N frames of third images.
Optionally, black level correction (BLC) is used to correct the black level, where the black level refers to the video signal level at which no line of bright output is produced on a display device that has undergone a certain calibration. Phase defect pixel correction (PDPC) may include phase defect correction (PDC) and bad pixel correction (BPC). The bad pixels handled by BPC are bright or dark points at random positions and are relatively few in number, and BPC can be implemented by a filtering algorithm. Compared with ordinary pixels, phase points are defective pixels at fixed positions and are relatively numerous; PDC needs to remove the phase points based on a known list of phase points.
Optionally, the first image processing may further include, but is not limited to:
automatic white balance (AWB) processing, lens shading correction (LSC), and the like.
Automatic white balance processing is used so that the camera can restore white to white at any color temperature; owing to the influence of color temperature, white paper appears yellowish at low color temperatures and bluish at high color temperatures, and the purpose of white balance is to make a white object appear white, with R = G = B, at any color temperature. Lens shading correction is used to eliminate the problem that the color and brightness at the periphery of an image are inconsistent with those at the image center due to the lens optical system.
It should be understood that the first image processing may include black level correction, phase defect pixel correction, and other Raw-domain image processing algorithms; automatic white balance processing and lens shading correction above merely exemplify the other Raw-domain image processing algorithms, which are not limited in any way in this application.
With reference to the first aspect, in some implementations of the first aspect, the electronic device further includes an infrared flash, and the image processing method further includes:
turning on the infrared flash in a dark-light scene, where the dark-light scene refers to a shooting scene in which the amount of light entering the electronic device is less than a preset threshold; and
the acquiring N frames of first images and M frames of second images in response to the first operation includes:
acquiring the N frames of first images and the M frames of second images while the infrared flash is turned on.
With reference to the first aspect, in some implementations of the first aspect, the first interface includes a second control, and the turning on the infrared flash in a dark-light scene includes:
detecting a second operation on the second control; and
turning on the infrared flash in response to the second operation.
In the embodiments of the present application, the infrared flash in the electronic device can be turned on. Because the electronic device may include the first camera module and the second camera module, when the infrared flash is on, the light reflected by the photographed object increases, so that the amount of light entering the second camera module increases; the detail information of the second image acquired through the second camera module therefore increases. By fusing the images captured by the first camera module and the second camera module with the image processing method of the embodiments of this application, image enhancement can be performed on the images acquired by the main camera module, improving the detail information in the image. In addition, the infrared flash is imperceptible to the user, so the detail information in the image is improved without the user noticing.
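A hedged sketch of the dark-light-scene check (the mean-luminance metric and the threshold value are assumptions for illustration; the patent only requires comparing the amount of incident light against a preset threshold):

```python
DARK_SCENE_THRESHOLD = 50  # assumed mean-luminance threshold on an 8-bit scale

def is_dark_scene(luma, threshold=DARK_SCENE_THRESHOLD):
    """Return True if the mean luminance of a preview frame falls below
    the preset threshold, i.e. the infrared flash should be turned on."""
    total = sum(sum(row) for row in luma)
    count = sum(len(row) for row in luma)
    return total / count < threshold

night = [[10, 12], [8, 14]]          # mean luminance 11  -> dark scene
daylight = [[180, 200], [190, 210]]  # mean luminance 195 -> not dark
```

In practice the decision could equally be driven by the auto-exposure statistics the ISP already computes; the point is only that the flash trigger is a threshold comparison on measured light.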
With reference to the first aspect, in some implementations of the first aspect, the performing fusion processing on the N frames of third images and the M frames of fourth images based on the semantic segmentation image to obtain a fused image includes:
performing fusion processing on the N frames of third images and the M frames of fourth images through an image processing model based on the semantic segmentation image to obtain the fused image, where the image processing model is a pre-trained neural network.
With reference to the first aspect, in some implementations of the first aspect, the first interface refers to a photographing interface, and the first control refers to a control used to indicate photographing.
Optionally, the first operation may refer to a tap operation on the control that indicates photographing in the photographing interface.
With reference to the first aspect, in some implementations of the first aspect, the first interface refers to a video recording interface, and the first control refers to a control used to indicate video recording.
Optionally, the first operation may refer to a tap operation on the control that indicates video recording in the video recording interface.
With reference to the first aspect, in some implementations of the first aspect, the first interface refers to a video call interface, and the first control refers to a control used to indicate a video call.
Optionally, the first operation may refer to a tap operation on the control that indicates a video call in the video call interface.
It should be understood that the foregoing uses a tap operation as an example of the first operation; the first operation may also include a voice instruction operation, or another operation that instructs the electronic device to take a photo or make a video call. The foregoing is merely an example and does not limit this application in any way.
According to a second aspect, an electronic device is provided, including one or more processors, a memory, a first camera module, and a second camera module, where the second camera module is a near-infrared camera module or an infrared camera module, the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to perform:
displaying a first interface, where the first interface includes a first control;
detecting a first operation on the first control;
acquiring, in response to the first operation, N frames of first images and M frames of second images, where the first images are images captured by the first camera module, the second images are images captured by the second camera module, and N and M are both positive integers greater than or equal to 1;
obtaining a target image based on the N frames of first images and the M frames of second images; and
saving the target image, where
the obtaining a target image based on the N frames of first images and the M frames of second images includes:
performing first image processing on the N frames of first images to obtain N frames of third images, where the image quality of the N frames of third images is higher than that of the N frames of first images;
performing second image processing on the M frames of second images to obtain M frames of fourth images, where the image quality of the M frames of fourth images is higher than that of the M frames of second images;
performing fusion processing on the N frames of third images and the M frames of fourth images based on a semantic segmentation image to obtain a fused image, where the semantic segmentation image is obtained based on any one of the N frames of first images or any one of the M frames of second images, and the detail information of the fused image is better than that of the N frames of first images; and
performing third image processing on the fused image to obtain the target image.
Optionally, the first camera module may be a visible light camera module, or the first camera module may be another camera module capable of acquiring visible light; this application does not limit the first camera module in any way.
Optionally, the first camera module may include a first lens element, a first lens, and an image sensor, where the spectral range that the first lens element can pass includes visible light (400 nm to 700 nm).
It should be understood that the first lens element may refer to a filter element; the first lens element may be used to absorb light in certain specific wavelength bands and let light in the visible band pass through.
Exemplarily, the second camera module may include a second lens element, a second lens, and an image sensor, where the spectral range that the second lens element can pass is near-infrared light (700 nm to 1100 nm).
It should be understood that the second lens element may refer to a filter element; the second lens element may be used to absorb light in certain specific wavelength bands and let light in the near-infrared band pass through.
In the embodiments of the present application, the electronic device may include a first camera module and a second camera module, where the second camera module is a near-infrared camera module or an infrared camera module (for example, with an acquired spectral range of 700 nm to 1100 nm). A first image is captured by the first camera module, and a second image is captured by the second camera module. Because the second image (for example, a near-infrared image) includes image information that cannot be obtained from the first image (for example, a visible light image), and similarly the third image includes image information that cannot be obtained from the fourth image, fusing the third image (for example, a visible light image) with the fourth image (for example, a near-infrared image) achieves multi-spectral fusion of near-infrared image information and visible light image information, so that the fused image includes more detail information. Therefore, the image processing method provided in the embodiments of the present application can perform image enhancement on images acquired by the main camera module, enhancing the detail information in the images and improving image quality.
It should be understood that the image quality of the N frames of third images being higher than that of the N frames of first images may mean that the N frames of third images contain less noise than the N frames of first images; or it may mean that, when an image quality evaluation algorithm evaluates the N frames of third images and the N frames of first images, the evaluation result is that the image quality of the N frames of third images is higher than that of the N frames of first images; and so on. This is not limited in any way in this application.
It should also be understood that the image quality of the M frames of fourth images being higher than that of the M frames of second images may mean that the M frames of fourth images contain less noise than the M frames of second images; or it may mean that, when an image quality evaluation algorithm evaluates the M frames of fourth images and the M frames of second images, the evaluation result is that the image quality of the M frames of fourth images is higher than that of the M frames of second images; and so on. This is not limited in any way in this application.
It should also be understood that the detail information of the fused image being better than that of the N frames of first images may mean that the fused image contains more detail information than any one of the N frames of first images; or it may mean that the definition of the fused image is better than that of any one of the N frames of first images. For example, the detail information may include edge information and texture information of the photographed object.
In the embodiments of the present application, the N frames of third images and the M frames of fourth images may be fused based on a semantic segmentation image to obtain a fused image. Introducing the semantic segmentation image into the fusion processing makes it possible to determine the local image information to be fused; for example, local image information in the N frames of third images and the M frames of fourth images can be selected through the semantic segmentation image for fusion processing, thereby increasing the local detail information of the fused image. In addition, by fusing the third image (for example, a visible light image) with the fourth image (for example, a near-infrared image), multi-spectral fusion of near-infrared image information and visible light image information can be achieved, so that the fused image includes more detail information; that is, the detail information in the image is enhanced.
Optionally, the M frames of fourth images are obtained by performing the second image processing on the second images captured by the near-infrared camera module or the infrared camera module; therefore, the M frames of fourth images include the reflection information of the photographed object under near-infrared light. Because near-infrared light is strongly reflected by green scenery, more detail information of green scenery is obtained in images captured by the near-infrared or infrared camera module; green-scenery image regions can be selected from the fourth images through the semantic segmentation image for fusion processing, thereby enhancing the detail information of green scenery in dark regions of the image.
Optionally, the M frames of fourth images are obtained by performing the second image processing on the second images captured by the near-infrared camera module or the infrared camera module. Because the spectral range that the near-infrared or infrared camera module can acquire is near-infrared light, whose wavelengths are relatively long, near-infrared light has a relatively strong diffraction capability. For scenes with clouds and fog, or scenes photographing distant objects, the images captured by the near-infrared or infrared camera module have a stronger sense of clarity; that is, they include more detail information of distant subjects (for example, texture information of distant mountains). Distant image regions selected from the fourth image through the semantic segmentation image can be fused with nearby image regions selected from the third image through the semantic segmentation image, thereby enhancing the detail information in the fused image.
With reference to the second aspect, in some implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
performing black level correction processing and/or phase defect pixel correction processing on the M frames of second images to obtain M frames of fifth images; and
performing a first registration process on the M frames of fifth images by using any third image of the N frames of third images as a reference, to obtain the M frames of fourth images.
Optionally, the global registration process may mean mapping each of the M frames of fourth images as a whole onto the first frame of the third images, using the first frame of the third images as a reference.
可选地,黑电平校正(black level correction,BLC)用于对黑电平进行校正处理,黑电平是指在经过一定校准的显示装置上,没有一行光亮输出的视频信号电平。相位坏点校正(phase defection pixel correction,PDPC)可以包括相位点校正(phase defection correction,PDC)与坏点校正(bad pixel correction,BPC);其中,BPC中的坏点是位置随机的亮点或暗点,数量相对比较少,BPC可以通过滤波算法实现;相对普通像素点而言,相位点就是固定位置的坏点,而且数量比较多;PDC需要通过已知的相位点列表进行相位点去除。Optionally, black level correction (black level correction, BLC) is used to correct the black level. The black level refers to the video signal level without a line of bright output on a calibrated display device. Phase defect pixel correction (PDPC) can include phase point correction (phase defect correction, PDC) and bad pixel correction (bad pixel correction, BPC); wherein, the bad pixels in BPC are randomly positioned bright or dark The number of points is relatively small, and BPC can be realized by filtering algorithm; compared with ordinary pixel points, phase points are bad points with fixed positions, and the number is relatively large; PDC needs to remove phase points through the known phase point list.
可选地,第二图像处理还可以包括但不限于:Optionally, the second image processing may also include but not limited to:
自动白平衡处理(Automatic white balance,AWB)、镜头阴影校正(Lens Shading Correction,LSC)等。Automatic white balance processing (Automatic white balance, AWB), lens shading correction (Lens Shading Correction, LSC), etc.
其中,自动白平衡处理用于使得白色在任何色温下相机均能把它还原成白;由于色温的影响,白纸在低色温下会偏黄,高色温下会偏蓝;白平衡的目的在于使得白色物体在任何色温下均为R=G=B呈现出白色。镜头阴影校正用于消除由于镜头光学系统原因造成的图像四周颜色以及亮度与图像中心不一致的问题。Automatic white balance processing enables the camera to restore white objects to white at any color temperature; because of the influence of color temperature, white paper looks yellowish at a low color temperature and bluish at a high one. The purpose of white balance is to make a white object satisfy R=G=B, and thus appear white, at any color temperature. Lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery and the center of an image that are caused by the lens optical system.
应理解,第二图像处理可以包括黑电平校正、相位坏点校正以及其他Raw域图像处理算法;上述以自动白平衡处理与镜头阴影校正对其他Raw域图像处理算法进行举例描述,本申请对其他Raw域图像处理算法不作任何限定。It should be understood that the second image processing may include black level correction, phase defect pixel correction, and other Raw-domain image processing algorithms; automatic white balance processing and lens shading correction are merely used above as examples of those other Raw-domain algorithms, and this application does not limit the other Raw-domain image processing algorithms in any way.
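A gray-world white balance is one simple way to realize the R=G=B goal described above; this is an illustrative sketch only, and practical AWB algorithms are considerably more elaborate.

```python
import numpy as np

def gray_world_awb(img):
    # Gray-world assumption: the scene averages to neutral gray, so scale the
    # R and B channels until their means match the G mean (R = G = B on average).
    img = img.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means [R, G, B]
    gains = means[1] / means                  # normalise each channel to green
    return img * gains

# A warm (yellowish) flat patch, as white paper looks at a low colour temperature.
patch = np.tile(np.array([200.0, 150.0, 100.0]), (2, 2, 1))
balanced = gray_world_awb(patch)
```

After balancing, the neutral patch ends up with equal R, G, and B values.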
结合第二方面,在第二方面的某些实现方式中,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:With reference to the second aspect, in some implementation manners of the second aspect, the one or more processors call the computer instructions so that the electronic device executes:
以所述N帧第三图像中的任意一帧第三图像为基准,对所述M帧第五图像进行所述第一配准处理与上采样处理,得到所述N帧第四图像。Taking any third image of the N frames of third images as a reference, performing the first registration processing and up-sampling processing on the M frames of fifth images to obtain the N frames of fourth images.
在本申请的实施例中,可以将第四图像的分辨率大小调整至与第三图像相同;从而便于对N帧第三图像与M帧第四图像进行融合处理。In the embodiment of the present application, the resolution of the fourth image can be adjusted to be the same as that of the third image; thus, it is convenient to perform fusion processing on N frames of the third image and M frames of the fourth image.
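The resolution matching can be as simple as the following nearest-neighbour upsampling sketch; real implementations would typically use bilinear or bicubic interpolation, and the 2x factor here is an assumption for illustration.

```python
import numpy as np

def upsample_nearest(img, factor=2):
    # Nearest-neighbour upsampling: duplicate rows, then columns, so the
    # NIR frame's resolution matches the main-camera frame before fusion.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

nir_small = np.arange(4, dtype=np.float32).reshape(2, 2)   # stand-in NIR frame
nir_up = upsample_nearest(nir_small, factor=2)             # now 4x4, like the RGB frame
```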
结合第二方面,在第二方面的某些实现方式中,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:With reference to the second aspect, in some implementation manners of the second aspect, the one or more processors call the computer instructions so that the electronic device executes:
对所述M帧第二图像进行黑电平校正处理和/或相位坏点校正处理,得到M帧第五图像;performing black level correction processing and/or phase defect correction processing on the M frames of second images to obtain M frames of fifth images;
以所述N帧第三图像中的任意一帧第三图像为基准,对所述M帧第五图像进行第一配准处理,得到M帧第一配准图像;Using any one of the third images in the N frames of third images as a reference, performing a first registration process on the M frames of the fifth image to obtain M frames of first registration images;
以所述任意一帧第三图像为基准,对所述M帧第一配准图像进行第二配准处理,得到所述M帧第四图像。Using the arbitrary third image as a reference, perform a second registration process on the M frames of the first registration image to obtain the M frames of the fourth image.
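For intuition, global registration of a whole frame onto a reference can be illustrated with phase correlation, which recovers a single translation for the entire image. This is a deliberately simplified stand-in: real global registration usually estimates a full homography, and the local (second) registration then refines per-region misalignments, for example with optical flow.

```python
import numpy as np

def global_register(ref, moving):
    # Phase correlation: the peak of the normalised cross-power spectrum
    # gives the one whole-image translation mapping `moving` onto `ref`.
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:          # interpret peaks in the upper half as negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))                              # reference frame
moving = np.roll(ref, shift=(-3, 5), axis=(0, 1))       # misaligned frame
dy, dx = global_register(ref, moving)
aligned = np.roll(moving, shift=(dy, dx), axis=(0, 1))  # globally registered frame
```

The estimated shift exactly undoes the synthetic misalignment in this circular-shift example.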
结合第二方面,在第二方面的某些实现方式中,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:With reference to the second aspect, in some implementation manners of the second aspect, the one or more processors call the computer instructions so that the electronic device executes:
以所述N帧第三图像中的任意一帧第三图像为基准,对所述M帧第五图像进行所述第一配准处理与上采样处理,得到所述M帧第一配准图像。Taking any third image of the N frames of third images as a reference, performing the first registration processing and up-sampling processing on the M frames of fifth images to obtain the M frames of first registration images.
结合第二方面,在第二方面的某些实现方式中,所述第一配准处理为全局配准处理。With reference to the second aspect, in some implementation manners of the second aspect, the first registration process is a global registration process.
结合第二方面,在第二方面的某些实现方式中,所述第二配准处理为局部配准处理。With reference to the second aspect, in some implementation manners of the second aspect, the second registration processing is local registration processing.
结合第二方面,在第二方面的某些实现方式中,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:With reference to the second aspect, in some implementation manners of the second aspect, the one or more processors call the computer instructions so that the electronic device executes:
对所述N帧第一图像进行黑电平校正处理和/或相位坏点校正处理,得到所述N帧第三图像。Perform black level correction processing and/or phase defect correction processing on the N frames of first images to obtain the N frames of third images.
可选地,黑电平校正(black level correction,BLC)用于对黑电平进行校正处理,黑电平是指在经过一定校准的显示装置上,没有一行光亮输出的视频信号电平。相位坏点校正(phase defect pixel correction,PDPC)可以包括相位点校正(phase defect correction,PDC)与坏点校正(bad pixel correction,BPC);其中,BPC中的坏点是位置随机的亮点或暗点,数量相对比较少,BPC可以通过滤波算法实现;相对普通像素点而言,相位点就是固定位置的坏点,而且数量比较多;PDC需要通过已知的相位点列表进行相位点去除。Optionally, black level correction (BLC) is used to correct the black level; the black level refers to the video signal level at which a calibrated display device produces no line of light output. Phase defect pixel correction (PDPC) may include phase point correction (PDC) and bad pixel correction (BPC). The bad pixels handled by BPC are bright or dark points at random positions and are relatively few in number, so BPC can be implemented with a filtering algorithm; compared with ordinary pixels, phase points are defective points at fixed positions and are relatively numerous, so PDC removes them using a known list of phase point positions.
可选地,第一图像处理还可以包括但不限于:Optionally, the first image processing may also include but not limited to:
自动白平衡处理(Automatic white balance,AWB)、镜头阴影校正(Lens Shading Correction,LSC)等。Automatic white balance processing (Automatic white balance, AWB), lens shading correction (Lens Shading Correction, LSC), etc.
其中,自动白平衡处理用于使得白色在任何色温下相机均能把它还原成白;由于色温的影响,白纸在低色温下会偏黄,高色温下会偏蓝;白平衡的目的在于使得白色物体在任何色温下均为R=G=B呈现出白色。镜头阴影校正用于消除由于镜头光学系统原因造成的图像四周颜色以及亮度与图像中心不一致的问题。Automatic white balance processing enables the camera to restore white objects to white at any color temperature; because of the influence of color temperature, white paper looks yellowish at a low color temperature and bluish at a high one. The purpose of white balance is to make a white object satisfy R=G=B, and thus appear white, at any color temperature. Lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery and the center of an image that are caused by the lens optical system.
应理解,第一图像处理可以包括黑电平校正、相位坏点校正以及其他Raw域图像处理算法;上述以自动白平衡处理与镜头阴影校正对其他Raw域图像处理算法进行举例描述,本申请对其他Raw域图像处理算法不作任何限定。It should be understood that the first image processing may include black level correction, phase defect pixel correction, and other Raw-domain image processing algorithms; automatic white balance processing and lens shading correction are merely used above as examples of those other Raw-domain algorithms, and this application does not limit the other Raw-domain image processing algorithms in any way.
结合第二方面,在第二方面的某些实现方式中,所述电子设备包括红外闪光灯,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:With reference to the second aspect, in some implementation manners of the second aspect, the electronic device includes an infrared flashlight, and the one or more processors call the computer instructions to make the electronic device execute:
在暗光场景下,开启所述红外闪光灯,所述暗光场景是指所述电子设备的进光量小于预设阈值的拍摄场景;In a dark light scene, turn on the infrared flashlight, the dark light scene refers to a shooting scene in which the amount of light entering the electronic device is less than a preset threshold;
所述响应于所述第一操作,获取N帧第一图像与M帧第二图像,包括:In response to the first operation, acquiring N frames of the first image and M frames of the second image includes:
在开启红外闪光灯的情况下,获取所述N帧第一图像与所述M帧第二图像。In the case of turning on the infrared flashlight, the N frames of the first image and the M frames of the second image are acquired.
结合第二方面,在第二方面的某些实现方式中,所述第一界面包括第二控件,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:With reference to the second aspect, in some implementation manners of the second aspect, the first interface includes a second control, and the one or more processors call the computer instructions to make the electronic device execute:
检测到对所述第二控件的第二操作;detecting a second operation on the second control;
响应于所述第二操作开启所述红外闪光灯。The infrared flashlight is turned on in response to the second operation.
在本申请的实施例中,可以开启电子设备中的红外闪光灯;由于电子设备中可以包括第一相机模组与第二相机模组,在红外闪光灯开启的情况下,拍摄对象的反射光增加,使得第二相机模组的进光量增加;从而使得通过第二相机模组获取的第二图像的细节信息增加;通过本申请实施例的图像处理方法对第一相机模组与第二相机模组采集的图像进行融合处理,能够对主摄像头相机模组获取的图像进行图像增强,提高图像中的细节信息。此外,红外闪光灯是用户无法感知的,在用户无感知的情况下,提高图像中的细节信息。In the embodiments of the present application, the infrared flashlight in the electronic device can be turned on. Since the electronic device may include the first camera module and the second camera module, when the infrared flashlight is on, the light reflected by the photographed object increases, so the amount of light entering the second camera module increases, which in turn increases the detail information of the second image acquired through the second camera module. By using the image processing method of the embodiments of the present application to fuse the images collected by the first camera module and the second camera module, the image acquired by the main camera module can be enhanced and its detail information improved. In addition, the infrared flashlight is imperceptible to the user, so the detail information in the image is improved without the user being aware of it.
结合第二方面,在第二方面的某些实现方式中,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:With reference to the second aspect, in some implementation manners of the second aspect, the one or more processors call the computer instructions so that the electronic device executes:
基于所述语义分割图像,通过图像处理模型对所述N帧第三图像与所述M帧第四图像进行融合处理,得到所述融合图像,所述图像处理模型为预先训练的神经网络。Based on the semantically segmented image, the third image of N frames and the fourth image of M frames are fused by an image processing model to obtain the fused image, and the image processing model is a pre-trained neural network.
结合第二方面,在第二方面的某些实现方式中,所述语义分割图像为通过语义分割算法对所述N帧第三图像中的第一帧第三图像进行处理得到的。With reference to the second aspect, in some implementation manners of the second aspect, the semantically segmented image is obtained by processing the third image in the first frame of the N frames of third images by using a semantic segmentation algorithm.
结合第二方面,在第二方面的某些实现方式中,所述第一界面是指拍照界面,所述第一控件是指用于指示拍照的控件。With reference to the second aspect, in some implementation manners of the second aspect, the first interface refers to a photographing interface, and the first control refers to a control for instructing photographing.
可选地,第一操作可以是指对拍照界面中指示拍照的控件的点击操作。Optionally, the first operation may refer to a click operation on a control indicating to take a photo in the photo taking interface.
结合第二方面,在第二方面的某些实现方式中,所述第一界面是指视频录制界面,所述第一控件是指用于指示录制视频的控件。With reference to the second aspect, in some implementation manners of the second aspect, the first interface refers to a video recording interface, and the first control refers to a control for instructing video recording.
可选地,第一操作可以是指对视频录制界面中指示录制视频的控件的点击操作。Optionally, the first operation may refer to a click operation on a control indicating to record a video in the video recording interface.
结合第二方面,在第二方面的某些实现方式中,所述第一界面是指视频通话界面,所述第一控件是指用于指示视频通话的控件。With reference to the second aspect, in some implementation manners of the second aspect, the first interface refers to a video call interface, and the first control refers to a control for instructing a video call.
可选地,第一操作可以是指对视频通话界面中指示视频通话的控件的点击操作。Optionally, the first operation may refer to a click operation on a control indicating a video call in the video call interface.
应理解,上述以第一操作为点击操作为例进行举例说明;第一操作还可以包括语音指示操作,或者其它的指示电子设备进行拍照或者视频通话的操作;上述为举例说明,并不对本申请作任何限定。It should be understood that the foregoing uses a click operation as an example of the first operation; the first operation may also be a voice instruction operation, or another operation instructing the electronic device to take a photo or make a video call. The foregoing is merely an example and does not limit this application in any way.
第三方面,提供了一种电子设备,包括用于执行第一方面或者第一方面中任一种图像处理方法的模块/单元。In a third aspect, an electronic device is provided, including a module/unit for executing the first aspect or any image processing method in the first aspect.
第四方面,提供一种电子设备,所述电子设备包括:一个或多个处理器、存储器、第一相机模组与第二相机模组;所述存储器与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行第一方面或者第一方面中的任一种图像处理方法。According to a fourth aspect, an electronic device is provided. The electronic device includes: one or more processors, a memory, a first camera module, and a second camera module; the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code includes computer instructions, and the one or more processors invoke the computer instructions so that the electronic device performs any image processing method of the first aspect or in the first aspect.
第五方面,提供了一种芯片系统,所述芯片系统应用于电子设备,所述芯片系统包括一个或多个处理器,所述处理器用于调用计算机指令以使得所述电子设备执行第一方面或第一方面中的任一种图像处理方法。In a fifth aspect, a chip system is provided. The chip system is applied to an electronic device and includes one or more processors, and the processors are configured to invoke computer instructions so that the electronic device performs any image processing method of the first aspect or in the first aspect.
第六方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序代码,当所述计算机程序代码被电子设备运行时,使得该电子设备执行第一方面或第一方面中的任一种图像处理方法。In a sixth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores computer program code, and when the computer program code is run by an electronic device, the electronic device is caused to perform any image processing method of the first aspect or in the first aspect.
第七方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码被电子设备运行时,使得该电子设备执行第一方面或第一方面中的任一种图像处理方法。In a seventh aspect, a computer program product is provided. The computer program product includes computer program code, and when the computer program code is run by an electronic device, the electronic device is caused to perform any image processing method of the first aspect or in the first aspect.
在本申请的实施例中,电子设备中可以包括第一相机模组与第二相机模组,其中,第二相机模组为近红外相机模组或者红外相机模组(例如,获取的光谱范围为700nm~1100nm);通过第一相机模组采集第一图像,通过第二相机模组采集第二图像;由于第二图像(例如,近红外图像)中包括的图像信息是第一图像中(例如,可见光图像)无法获取到的;同理,第三图像中包括的图像信息是第四图像无法获取到的;因此,通过对第三图像(例如,可见光图像)与第四图像(例如,近红外光图像)进行融合处理,可以实现近红外光的图像信息与可见光的图像信息的多光谱信息融合,使得融合后的图像中包括更多的细节信息;因此,通过本申请实施例提供的图像处理方法,能够对主摄像头相机模组获取的图像进行图像增强,增强图像中的细节信息,提高图像质量。In the embodiments of the present application, the electronic device may include a first camera module and a second camera module, where the second camera module is a near-infrared camera module or an infrared camera module (for example, with a captured spectral range of 700 nm to 1100 nm); a first image is collected through the first camera module, and a second image is collected through the second camera module. The second image (for example, a near-infrared image) includes image information that cannot be obtained from the first image (for example, a visible light image); likewise, the third image includes image information that cannot be obtained from the fourth image. Therefore, by fusing the third image (for example, a visible light image) with the fourth image (for example, a near-infrared image), multi-spectral fusion of near-infrared image information and visible light image information can be achieved, so that the fused image includes more detail information. The image processing method provided by the embodiments of the present application can thus perform image enhancement on the image acquired by the main camera module, enhance the detail information in the image, and improve image quality.
此外,在本申请的实施例中,由于第二相机模组可以获取的光谱范围为近红外光,通过第二相机模组采集的红外光图像是灰度图,灰度图像用于表示的是亮度的真实值;由于第一相机模组可以获取的光谱范围为可见光,通过第一相机模组采集的可见光图像中亮度值是不连续的,通常需要对不连续的亮度值进行预测;通过近红外光图像(亮度的真实值)作为引导对可见光图像进行去马赛克处理时,能够有效减少图像中出现的伪纹理。In addition, in the embodiments of the present application, since the spectral range that the second camera module can capture is near-infrared light, the infrared image collected by the second camera module is a grayscale image, and this grayscale image represents the true values of luminance. Since the spectral range that the first camera module can capture is visible light, the luminance values in the visible light image collected by the first camera module are discontinuous and usually need to be predicted. When the near-infrared image (the true luminance values) is used as a guide to demosaic the visible light image, false textures appearing in the image can be effectively reduced.
附图说明Description of drawings
图1是一种适用于本申请的电子设备的硬件系统的示意图;FIG. 1 is a schematic diagram of a hardware system applicable to an electronic device of the present application;
图2是一种适用于本申请的电子设备的软件系统的示意图;Fig. 2 is a schematic diagram of a software system applicable to the electronic device of the present application;
图3是一种适用于本申请实施例的应用场景的示意图;FIG. 3 is a schematic diagram of an application scenario applicable to an embodiment of the present application;
图4是一种适用于本申请实施例的应用场景的示意图;FIG. 4 is a schematic diagram of an application scenario applicable to an embodiment of the present application;
图5是本申请实施例提供的图像处理方法的示意性流程图;Fig. 5 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
图6是本申请实施例提供的图像处理方法的示意性流程图;FIG. 6 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
图7是本申请实施例提供的图像处理方法的示意性流程图;Fig. 7 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
图8是本申请实施例提供的第一配准处理与的上采样处理的示意图;Fig. 8 is a schematic diagram of the first registration process and the up-sampling process provided by the embodiment of the present application;
图9是本申请实施例提供的图像处理方法的示意性流程图;FIG. 9 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
图10是本申请实施例提供的图像处理方法的示意性流程图;FIG. 10 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
图11是本申请实施例提供的图像处理方法的示意性流程图;Fig. 11 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
图12是本申请实施例提供的图像处理方法的示意性流程图;Fig. 12 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
图13是根据是本申请实施例提供的图像处理方法的效果示意图;Fig. 13 is a schematic diagram showing the effect of the image processing method provided by the embodiment of the present application;
图14是一种适用于本申请实施例的图形用户界面的示意图;Fig. 14 is a schematic diagram of a graphical user interface applicable to the embodiment of the present application;
图15是一种适用于本申请实施例的拍摄场景的光路示意图;Fig. 15 is a schematic diagram of an optical path of a shooting scene applicable to an embodiment of the present application;
图16是本申请实施例提供的一种电子设备的结构示意图;Fig. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
图17是本申请实施例提供的一种电子设备的结构示意图。FIG. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
具体实施方式Detailed ways
在本申请的实施例中,以下术语“第一”、“第二”、“第三”、“第四”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。In the embodiments of the present application, the following terms "first", "second", "third", and "fourth" are used for description purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating Indicates the number of technical characteristics.
为了便于对本申请实施例的理解,首先对本申请实施例中涉及的相关概念进行简要说明。In order to facilitate the understanding of the embodiments of the present application, firstly a brief description is given of related concepts involved in the embodiments of the present application.
1、近红外光(near infrared,NIR)1. Near infrared light (near infrared, NIR)
近红外光是指介于可见光与中红外光之间的电磁波;可以将近红外光区划分为近红外短波(780nm~1100nm)和近红外长波(1100nm~2526nm)两个区域。Near-infrared light refers to electromagnetic waves between visible light and mid-infrared light; the near-infrared light region can be divided into two regions: near-infrared short-wave (780nm-1100nm) and near-infrared long-wave (1100nm-2526nm).
2、主摄相机模组2. Main camera module
主摄相机模组是指接收光谱范围为可见光的相机模组;例如,主摄相机模组中包括的传感器接收的光谱范围为400nm~700nm。The main camera module refers to a camera module that receives visible light in a spectral range; for example, the sensor included in the main camera module receives a spectral range of 400nm to 700nm.
3、近红外相机模组3. Near-infrared camera module
近红外相机模组是指接收光谱范围为近红外光的相机模组;例如,近红外相机模组中包括的传感器接收的光谱范围为700nm~1100nm。A near-infrared camera module refers to a camera module that receives near-infrared light in a spectral range; for example, a sensor included in a near-infrared camera module receives a spectral range of 700 nm to 1100 nm.
4、图像的高频信息4. High-frequency information of images
图像的高频信息是指图像中灰度值变化剧烈的区域;例如,图像中的高频信息包括物体的边缘信息、纹理信息等。The high-frequency information of an image refers to the region where the gray value changes drastically in the image; for example, the high-frequency information in the image includes edge information, texture information, etc. of an object.
5、图像的低频信息5. Low-frequency information of images
图像的低频信息是指图像中灰度值变化缓慢的区域;对于一幅图像而言,除去高频信息外的部分为低频信息;例如,图像的低频信息可以包括物体边缘以内的内容信息。The low-frequency information of the image refers to the area where the gray value changes slowly in the image; for an image, the part except the high-frequency information is low-frequency information; for example, the low-frequency information of the image can include the content information within the edge of the object.
6、图像的细节层6. The detail layer of the image
图像的细节层中包括图像的高频信息;例如,图像的细节层包括物体的边缘信息、纹理信息等。The detail layer of the image includes high-frequency information of the image; for example, the detail layer of the image includes edge information, texture information, etc. of the object.
7、图像的基础层7. The base layer of the image
图像的基础层中包括图像的低频信息;对于一幅图像来说,除去细节层外的部分为基础层;例如,图像的基础层包括物体边缘以内的内容信息。The base layer of the image includes the low-frequency information of the image; for an image, the part except the detail layer is the base layer; for example, the base layer of the image includes the content information within the edge of the object.
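The base/detail split described in points 6 and 7 can be produced with any low-pass filter; a box blur is enough to illustrate (an illustrative sketch only, not a specific algorithm of this application).

```python
import numpy as np

def box_blur(img, k=3):
    # A simple box filter stands in for any low-pass filter.
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

img = np.zeros((6, 6), dtype=np.float32)
img[:, 3:] = 1.0                 # a vertical edge: high-frequency content
base = box_blur(img)             # base layer: low-frequency information
detail = img - base              # detail layer: edges and texture
recon = base + detail            # the two layers sum back to the image
```

Flat regions end up entirely in the base layer, while the edge shows up in the detail layer.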
8、图像配准(Image registration)8. Image registration
图像配准是指就是将不同时间、不同传感器(成像设备)或者不同条件下(天候、照度、摄像位置和角度等)获取的两幅或多幅图像进行匹配、叠加的过程。Image registration refers to the process of matching and superimposing two or more images acquired at different times, different sensors (imaging devices) or under different conditions (weather, illumination, camera position and angle, etc.).
9、亮度值(Lighting Value,LV)9. Lighting Value (LV)
亮度值用于估计环境亮度,其具体计算公式如下:The brightness value is used to estimate the ambient brightness, and its specific calculation formula is as follows:
(原文此处为亮度值计算公式的图像,未能在本文本中复现。The original publication presents the LV calculation formula as an image here, which is not reproduced in this text.)
其中,Exposure为曝光时间;Aperture为光圈大小;Iso为感光度;Luma为图像在XYZ空间中,Y的平均值。Among them, Exposure is the exposure time; Aperture is the aperture size; Iso is the sensitivity; Luma is the average value of Y in the XYZ space of the image.
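A commonly used concrete form of this LV estimate, in terms of the four quantities just defined, looks like the following; the exact constants (in particular the divisor 46) are an assumption for illustration and are not taken from this application.

```python
import math

def lighting_value(aperture, exposure_s, iso, luma):
    # Assumed form: LV = 10 * log2(Aperture^2 / Exposure * 100 / Iso * Luma / 46).
    # Larger aperture area, shorter exposure, lower ISO, or brighter scene
    # content (Luma) all push the estimated ambient brightness upward.
    return 10.0 * math.log2(aperture ** 2 / exposure_s * 100.0 / iso * luma / 46.0)

# Doubling the scene luma at fixed exposure settings raises LV by 10 * log2(2) = 10.
lv_dim = lighting_value(aperture=2.0, exposure_s=1 / 50, iso=100, luma=23.0)
lv_bright = lighting_value(aperture=2.0, exposure_s=1 / 50, iso=100, luma=46.0)
```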
10、颜色校正矩阵(color correction matrix,CCM)10. Color correction matrix (color correction matrix, CCM)
颜色校正矩阵用于校准除白色以外其他颜色的准确度。A color correction matrix is used to calibrate the accuracy of colors other than white.
11、三维查找表(Three dimension look up table,3D LUT)11. Three dimension look up table (Three dimension look up table, 3D LUT)
三维查找表广泛应用于图像处理;例如,查找表可以用于图像颜色校正、图像增强或者图像伽马校正等;例如,可以在图像信号处理器中加载LUT,根据LUT表可以对原始图像进行图像处理,实现原始图像帧的像素值映射改变图像的颜色风格,从而实现不同的图像效果。Three-dimensional lookup tables are widely used in image processing; for example, lookup tables can be used for image color correction, image enhancement, or image gamma correction; for example, LUTs can be loaded in image signal processors, and the original image can be processed according to the LUT table Processing, realize the pixel value mapping of the original image frame and change the color style of the image, so as to achieve different image effects.
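The pixel-value-mapping idea behind a LUT can be shown with a one-dimensional table; a 3D LUT extends the same lookup to (R, G, B) triples, usually with trilinear interpolation between stored entries. The curve below is an arbitrary example, not one used by this application.

```python
import numpy as np

# A 256-entry 1D LUT implementing a brightening tone curve (square root).
lut = (np.sqrt(np.linspace(0.0, 1.0, 256)) * 255).astype(np.uint8)

img = np.array([[0, 64, 255]], dtype=np.uint8)
mapped = lut[img]        # every pixel value is replaced by its table entry
```

Black and white are preserved while midtones are lifted, changing the image's tonal style with a single indexed lookup.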
12、全局色调映射(Global tone Mapping,GTM)12. Global tone mapping (GTM)
全局色调映射用于解决高动态图像的灰度值分布不均匀的问题。Global tone mapping is used to solve the problem of uneven distribution of gray values in high dynamic images.
13、伽马处理13. Gamma processing
伽马处理用于通过调整伽马曲线来调整图像的亮度、对比度与动态范围等。Gamma processing is used to adjust the brightness, contrast and dynamic range of an image by adjusting the gamma curve.
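For instance, applying a standard power-law gamma curve adjusts brightness as described; the value 2.2 is a typical but assumed choice.

```python
import numpy as np

def apply_gamma(img, gamma=2.2):
    # img is normalised to [0, 1]; an exponent of 1/gamma lifts the midtones,
    # increasing perceived brightness while leaving black and white fixed.
    return np.power(img, 1.0 / gamma)

levels = np.array([0.0, 0.25, 1.0])
adjusted = apply_gamma(levels)
```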
14、神经网络14. Neural network
神经网络是指将多个单一的神经单元联结在一起形成的网络,即一个神经单元的输出可以是另一个神经单元的输入;每个神经单元的输入可以与前一层的局部接受域相连,来提取局部接受域的特征,局部接受域可以是由若干个神经单元组成的区域。A neural network is a network formed by connecting multiple single neural units, that is, the output of one neural unit can be the input of another; the input of each neural unit can be connected to a local receptive field of the previous layer to extract the features of that local receptive field, and a local receptive field can be a region composed of several neural units.
15、反向传播算法15. Back propagation algorithm
神经网络可以采用误差反向传播(back propagation,BP)算法在训练过程中修正初始的神经网络模型中参数的大小,使得神经网络模型的重建误差损失越来越小。具体地,前向传递输入信号直至输出会产生误差损失,通过反向传播误差损失信息来更新初始的神经网络模型中参数,从而使误差损失收敛。反向传播算法是以误差损失为主导的反向传播运动,旨在得到最优的神经网络模型的参数,例如权重矩阵。The neural network can use the error back propagation (back propagation, BP) algorithm to correct the size of the parameters in the initial neural network model during the training process, so that the reconstruction error loss of the neural network model becomes smaller and smaller. Specifically, passing the input signal forward until the output will generate an error loss, and updating the parameters in the initial neural network model by backpropagating the error loss information, so that the error loss converges. The backpropagation algorithm is a backpropagation movement dominated by error loss, aiming to obtain the optimal parameters of the neural network model, such as the weight matrix.
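The forward-pass / error-loss / backward-update loop described above can be reduced to a single weight. This is a toy illustration of the back propagation idea only, not the training procedure of any model in this application.

```python
# Train y = w * x to hit a target by propagating the error loss backward.
w = 0.0                      # initial parameter of the "network"
x, target = 1.0, 3.0
lr = 0.1                     # step size for each parameter update
losses = []
for _ in range(100):
    y = w * x                          # forward pass: input to output
    loss = (y - target) ** 2           # error loss at the output
    grad = 2.0 * (y - target) * x      # gradient of the loss w.r.t. w
    w -= lr * grad                     # update w so the loss shrinks
    losses.append(loss)
```

The loss converges and the weight approaches the value that reconstructs the target, which is the aim of the back propagation movement.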
下面将结合附图,对本申请实施例中图像处理方法与电子设备进行描述。The image processing method and the electronic device in the embodiment of the present application will be described below with reference to the accompanying drawings.
图1示出了一种适用于本申请的电子设备的硬件系统。Fig. 1 shows a hardware system applicable to the electronic equipment of this application.
电子设备100可以是手机、智慧屏、平板电脑、可穿戴电子设备、车载电子设备、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、投影仪等等,本申请实施例对电子设备100的具体类型不作任何限制。The electronic device 100 may be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, a vehicle-mounted electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a projector, or the like; the embodiments of the present application do not impose any limitation on the specific type of the electronic device 100.
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, and A subscriber identification module (subscriber identification module, SIM) card interface 195 and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
需要说明的是,图1所示的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图1所示的部件更多或更少的部件,或者,电子设备100可以包括图1所示的部件中某些部件的组合,或者,电子设备100可以包括图1所示的部件中某些部件的子部件。图1所示的部件可以以硬件、软件、或软件和硬件的组合实现。It should be noted that the structure shown in FIG. 1 does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than those shown in FIG. 1, a combination of some of the components shown in FIG. 1, or sub-components of some of the components shown in FIG. 1. The components shown in FIG. 1 may be implemented in hardware, software, or a combination of software and hardware.
处理器110可以包括一个或多个处理单元。例如,处理器110可以包括以下处理单元中的至少一个:应用处理器(application processor,AP)、调制解调处理器、图形处理器(graphics processing unit,GPU)、图像信号处理器(image signal processor,ISP)、控制器、视频编解码器、数字信号处理器(digital signal processor,DSP)、基带处理器、神经网络处理器(neural-network processing unit,NPU)。其中,不同的处理单元可以是独立的器件,也可以是集成的器件。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and a neural-network processing unit (NPU). Different processing units may be independent devices or integrated devices. The controller can generate operation control signals according to the instruction operation code and timing signals, and control instruction fetching and instruction execution.
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
示例性地，处理器110可以用于执行本申请实施例的图像处理方法；例如，显示第一界面，第一界面包括第一控件；检测到对第一控件的第一操作；响应于第一操作，获取N帧第一图像与M帧第二图像，第一图像为第一相机模组采集的图像，第二图像为第二相机模组采集的图像，N和M均为大于或者等于1的正整数；基于N帧第一图像和M帧第二图像，得到目标图像；保存目标图像；其中，基于N帧第一图像和M帧第二图像，得到目标图像，包括：对N帧第一图像进行第一图像处理，得到N帧第三图像，N帧第三图像的图像质量高于N帧第一图像的图像质量；对M帧第二图像进行第二图像处理，得到M帧第四图像，M帧第四图像的图像质量高于M帧第二图像的图像质量；基于语义分割图像，对N帧第三图像和M帧第四图像进行融合处理，得到融合图像，语义分割图像为基于N帧第一图像中任一帧图像或者M帧第二图像中任一帧图像得到的，融合图像的细节信息优于N帧第一图像的细节信息；对融合图像进行第三图像处理，得到目标图像。Exemplarily, the processor 110 may be configured to execute the image processing method of the embodiments of the present application, for example: displaying a first interface, the first interface including a first control; detecting a first operation on the first control; in response to the first operation, acquiring N frames of first images and M frames of second images, where the first images are images collected by the first camera module, the second images are images collected by the second camera module, and N and M are both positive integers greater than or equal to 1; obtaining a target image based on the N frames of first images and the M frames of second images; and saving the target image. Obtaining the target image based on the N frames of first images and the M frames of second images includes: performing first image processing on the N frames of first images to obtain N frames of third images, the image quality of the N frames of third images being higher than that of the N frames of first images; performing second image processing on the M frames of second images to obtain M frames of fourth images, the image quality of the M frames of fourth images being higher than that of the M frames of second images; performing fusion processing on the N frames of third images and the M frames of fourth images based on a semantic segmentation image to obtain a fused image, where the semantic segmentation image is obtained from any one of the N frames of first images or any one of the M frames of second images, and the detail information of the fused image is better than that of the N frames of first images; and performing third image processing on the fused image to obtain the target image.
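The flow recited above (acquire N + M frames, enhance each burst, fuse under a semantic mask, post-process) can be sketched as follows. This is a minimal illustrative sketch in numpy, not the device's actual algorithms: the two enhancement steps are stood in for by burst averaging, the fusion by a binary mask blend, and the third image processing by a clamp; all function names are hypothetical.

```python
import numpy as np

def enhance(frames):
    # Stand-in for the first/second image processing: averaging the burst
    # suppresses noise, so the outputs' quality exceeds the inputs'.
    avg = np.mean(np.stack(frames), axis=0)
    return [avg] * len(frames)

def fuse(visible, nir, seg_mask):
    # Stand-in for semantic-segmentation-guided fusion: take NIR-derived
    # pixels where the mask selects them, visible-light pixels elsewhere.
    return np.where(seg_mask, nir, visible)

def capture_to_target(first_frames, second_frames, seg_mask):
    third = enhance(first_frames)    # N frames -> N third images
    fourth = enhance(second_frames)  # M frames -> M fourth images
    fused = fuse(third[0], fourth[0], seg_mask)
    return np.clip(fused, 0.0, 1.0)  # stand-in third image processing
```

The binary mask is only the simplest case; the embodiments below also describe selecting different regions from the two bursts per semantic class.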
图1所示的各模块间的连接关系只是示意性说明,并不构成对电子设备100的各模块间的连接关系的限定。可选地,电子设备100的各模块也可以采用上述实施例中多种连接方式的组合。The connection relationship between the modules shown in FIG. 1 is only a schematic illustration, and does not constitute a limitation on the connection relationship between the modules of the electronic device 100 . Optionally, each module of the electronic device 100 may also adopt a combination of various connection modes in the foregoing embodiments.
电子设备100的无线通信功能可以通过天线1、天线2、移动通信模块150、无线通信模块160、调制解调处理器以及基带处理器等器件实现。The wireless communication function of the electronic device 100 may be realized by components such as the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, and a baseband processor.
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。 Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
电子设备100可以通过GPU、显示屏194以及应用处理器实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The electronic device 100 can realize the display function through the GPU, the display screen 194 and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
显示屏194可以用于显示图像或视频。Display 194 may be used to display images or video.
电子设备100可以通过ISP、摄像头193、视频编解码器、GPU、显示屏194以及应用处理器等实现拍摄功能。The electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 , and the application processor.
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP可以对图像的噪点、亮度和色彩进行算法优化,ISP还可以优化拍摄场景的曝光和色温等参数。在一些实施例中,ISP可以设置在摄像头193中。The ISP is used for processing the data fed back by the camera 193 . For example, when taking a picture, open the shutter, the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can optimize the algorithm of image noise, brightness and color, and ISP can also optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be located in the camera 193 .
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号，之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的红绿蓝(red green blue,RGB)，YUV等格式的图像信号。在一些实施例中，电子设备100可以包括1个或N个摄像头193，N为大于1的正整数。Camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as red-green-blue (RGB) or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1、MPEG2、MPEG3和MPEG4。Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3 and MPEG4.
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x轴、y轴和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。例如,当快门被按下时,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航和体感游戏等场景。The gyro sensor 180B can be used to determine the motion posture of the electronic device 100 . In some embodiments, the angular velocity of the electronic device 100 around three axes (ie, x-axis, y-axis and z-axis) may be determined by the gyro sensor 180B. The gyro sensor 180B can be used for image stabilization. For example, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse motion to achieve anti-shake. The gyro sensor 180B can also be used in scenarios such as navigation and somatosensory games.
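The anti-shake compensation described above (shake angle measured by the gyroscope, converted into a lens displacement driven in the opposite direction) can be illustrated with a simplified pinhole-camera model. The formula below is an assumption for illustration only, not the device's actual calibration: an angular shake of θ displaces the image on the sensor by roughly f·tan(θ) for focal length f.

```python
import math

def ois_compensation_mm(shake_angle_deg, focal_length_mm):
    # Simplified model (an assumption): an angular shake of theta shifts
    # the image on the sensor by about f * tan(theta); the lens group is
    # moved by the same amount in the opposite direction to cancel it.
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))
```

For a hypothetical 5 mm lens and a 0.5-degree shake, the required compensation is on the order of a few hundredths of a millimeter.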
加速度传感器180E可检测电子设备100在各个方向上(一般为x轴、y轴和z轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。加速度传感器180E还可以用于识别电子设备100的姿态,作为横竖屏切换和计步器等应用程序的输入参数。The acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally x-axis, y-axis and z-axis). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor 180E can also be used to identify the posture of the electronic device 100 as an input parameter for application programs such as horizontal and vertical screen switching and pedometer.
距离传感器180F用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,例如在拍摄场景中,电子设备100可以利用距离传感器180F测距以实现快速对焦。The distance sensor 180F is used to measure distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 100 can use the distance sensor 180F for distance measurement to achieve fast focusing.
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。The ambient light sensor 180L is used for sensing ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures. The ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现解锁、访问应用锁、拍照和接听来电等功能。The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement functions such as unlocking, accessing the application lock, taking pictures, and answering incoming calls.
触摸传感器180K,也称为触控器件。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,触摸屏也称为触控屏。触摸传感器180K用于检测作用于其上或其附近的触摸操作。触摸传感器180K可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,并且与显示屏194设置于不同的位置。The touch sensor 180K is also referred to as a touch device. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a touch screen. The touch sensor 180K is used to detect a touch operation on or near it. The touch sensor 180K may transmit the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 194 . In some other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 and disposed at a different position from the display screen 194 .
上文详细描述了电子设备100的硬件系统，下面介绍电子设备100的软件系统。The hardware system of the electronic device 100 has been described in detail above, and the software system of the electronic device 100 is introduced below.
图2是本申请实施例提供的装置的软件系统的示意图。Fig. 2 is a schematic diagram of the software system of the device provided by the embodiment of the present application.
如图2所示,系统架构中可以包括应用层210、应用框架层220、硬件抽象层230、驱动层240以及硬件层250。As shown in FIG. 2 , the system architecture may include an application layer 210 , an application framework layer 220 , a hardware abstraction layer 230 , a driver layer 240 and a hardware layer 250 .
应用层210可以包括相机；可选地，应用层210还可以包括图库、日历、通话、地图、导航、WLAN、蓝牙、音乐、视频、短信息等应用程序。The application layer 210 may include a camera; optionally, the application layer 210 may also include applications such as Gallery, Calendar, Phone, Map, Navigation, WLAN, Bluetooth, Music, Video, and Messaging.
应用框架层220为应用层的应用程序提供应用程序编程接口(application programming interface,API)和编程框架;应用框架层可以包括一些预定义的函数。The application framework layer 220 provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer; the application framework layer may include some predefined functions.
例如,应用框架层220可以包括相机访问接口;相机访问接口中可以包括相机管理与相机设备。其中,相机管理可以用于提供管理相机的访问接口;相机设备可以用于提供访问相机的接口。For example, the application framework layer 220 may include a camera access interface; the camera access interface may include camera management and camera equipment. Wherein, the camera management can be used to provide an access interface for managing the camera; the camera device can be used to provide an interface for accessing the camera.
硬件抽象层230用于将硬件抽象化。比如，硬件抽象层可以包括相机抽象层以及其他硬件设备抽象层；相机硬件抽象层可以调用相机算法库中的算法。The hardware abstraction layer 230 is used to abstract the hardware. For example, the hardware abstraction layer may include a camera abstraction layer and abstraction layers for other hardware devices; the camera hardware abstraction layer can call the algorithms in the camera algorithm library.
例如,相机算法库中可以包括用于图像处理的软件算法。For example, a library of camera algorithms may include software algorithms for image processing.
驱动层240用于为不同硬件设备提供驱动。例如,驱动层可以包括相机设备驱动;数字信号处理器驱动、图形处理器驱动或者中央处理器驱动。The driver layer 240 is used to provide drivers for different hardware devices. For example, the driver layer may include a camera device driver; a digital signal processor driver, a graphics processor driver, or a central processing unit driver.
硬件层250可以包括相机设备以及其他硬件设备。The hardware layer 250 may include camera devices as well as other hardware devices.
例如,硬件层250包括相机设备、数字信号处理器、图形处理器或者中央处理器;示例性地,相机设备中可以包括图像信号处理器,图像信号处理器可以用于图像处理。For example, the hardware layer 250 includes a camera device, a digital signal processor, a graphics processor or a central processing unit; for example, the camera device may include an image signal processor, and the image signal processor may be used for image processing.
目前，终端设备上的主摄像头相机模组获取的光谱范围为可见光(400nm~700nm)；在一些拍照场景中，例如，光照条件不好的拍摄场景中，比如，夜晚场景或者浓雾场景中，由于拍摄场景的光线条件较差，电子设备的进光量较少，导致主摄像头相机模组获取的图像中存在部分图像细节信息丢失的问题。At present, the spectral range captured by the main camera module on a terminal device is visible light (400nm-700nm). In some photographing scenarios with poor lighting conditions, such as night scenes or dense fog scenes, the amount of light entering the electronic device is small, so some image detail information is lost in the image captured by the main camera module.
有鉴于此，本申请实施例提供了一种图像处理方法，应用于电子设备；电子设备中可以包括主摄像头相机模组与近红外相机模组，其中，主摄像头相机模组可以获取的光谱范围包括可见光(400nm~700nm)；近红外相机模组可以获取的光谱范围为近红外光(700nm~1100nm)；由于近红外相机模组采集的图像包括拍摄对象对近红外光的反射信息；通过对主摄像头模组采集的图像与近红外相机模组采集的图像进行融合处理，可以实现近红外光的图像信息与可见光的图像信息的多光谱信息融合，使得融合后的图像中包括更多的细节信息；因此，通过本申请实施例提供的图像处理方法，能够增强图像中的细节信息。In view of this, embodiments of the present application provide an image processing method applied to an electronic device. The electronic device may include a main camera module and a near-infrared camera module, where the spectral range that the main camera module can capture includes visible light (400nm-700nm), and the spectral range that the near-infrared camera module can capture is near-infrared light (700nm-1100nm). Since the image collected by the near-infrared camera module includes the reflection of near-infrared light by the photographed object, fusing the image collected by the main camera module with the image collected by the near-infrared camera module achieves multi-spectral fusion of near-infrared image information and visible-light image information, so that the fused image includes more detail information. Therefore, the image processing method provided by the embodiments of the present application can enhance the detail information in the image.
下面结合图3对本申请实施例提供的图像处理方法的应用场景进行举例说明。The application scenario of the image processing method provided by the embodiment of the present application will be illustrated below with reference to FIG. 3 .
示例性地,本申请实施例中的图像处理方法可以应用于拍照领域(例如,单景拍照、双景拍照等)、录制视频领域、视频通话领域或者其他图像处理领域;由于本申请实施例中采用的是双相机模组,双相机模组包括可以获取可见光的相机模组与可以获取近红外光的相机模组(例如,近红外相机模组,或者,红外相机模组);对获取的可见光图像与近红外光图像进行图像处理与融合处理,得到画质增强的图像;通过本申请实施例中的图像处理方法对图像进行处理,能够增强图像中的细节信息,提高图像质量。Exemplarily, the image processing method in the embodiment of the present application can be applied to the field of photography (for example, single-view photography, dual-view photography, etc.), recording video field, video call field or other image processing fields; A dual-camera module is used, and the dual-camera module includes a camera module that can obtain visible light and a camera module that can obtain near-infrared light (for example, a near-infrared camera module, or an infrared camera module); Visible light images and near-infrared light images are processed and fused to obtain images with enhanced image quality; image processing methods in the embodiments of the present application can enhance detailed information in the images and improve image quality.
在一个示例中，如图3所示，本申请实施例应用于阳光下拍摄风景(例如，云雾场景)时，由于近红外相机模组可以获取的光谱范围为近红外光，与可见光光谱范围相比近红外相机模组可以获取的光谱的波长较长，因此绕射能力较强，例如，波长较长的光谱的穿透性更强，采集的图像的画面通透感更强；图3所示为通过主摄像头相机模组与近红外相机模组采集图像后通过本申请实施例提供的图像处理方法得到的图像；图3所示的图像的细节信息较丰富，可以清晰的显示山脉的细节信息；通过本申请实施例提供的图像处理方法可以对主摄像头模组获取的图像进行图像增强，增强图像中的细节信息。In one example, as shown in FIG. 3, when the embodiment of the present application is applied to shooting scenery in sunlight (for example, a cloud and fog scene), the spectral range that the near-infrared camera module can capture is near-infrared light, whose wavelength is longer than that of the visible-light range; longer-wavelength light diffracts and penetrates more strongly, so the collected image looks clearer and more transparent. FIG. 3 shows an image obtained by the image processing method provided by the embodiment of the present application after images are collected by the main camera module and the near-infrared camera module; the image shown in FIG. 3 is rich in detail information and can clearly show the details of the mountains. Through the image processing method provided by the embodiment of the present application, the image captured by the main camera module can be enhanced to strengthen the detail information in the image.
示例性地，图3所示的终端设备可以包括第一相机模组、第二相机模组以及红外闪光灯；其中，第一相机模组可以获取的光谱范围为可见光(400nm~700nm)；第二相机模组可以获取的光谱范围为近红外光(700nm~1100nm)。Exemplarily, the terminal device shown in FIG. 3 may include a first camera module, a second camera module, and an infrared flash, where the spectral range that the first camera module can capture is visible light (400nm-700nm), and the spectral range that the second camera module can capture is near-infrared light (700nm-1100nm).
在一个示例中，本申请实施例应用于包括绿色景物的场景拍照时，对于进光量较少的暗光区域，由于近红外光对绿色景物的反射率较高，因此通过主摄像头相机模组与近红外相机模组拍摄得到的绿色景物的细节信息更多，能够增强图像中暗光区域中绿色景物的细节信息。In one example, when the embodiment of the present application is applied to photographing a scene that includes green scenery, for low-light areas with less incoming light, since near-infrared light has a high reflectivity for green scenery, the green scenery captured by the main camera module and the near-infrared camera module contains more detail information, which can enhance the detail information of green scenery in low-light areas of the image.
在一个示例中，本申请实施例应用于夜景人像拍摄时，可以开启电子设备中的红外闪光灯，例如，人像可以包括拍摄对象面部的脸、眼睛、鼻子、嘴巴、耳朵、眉毛等；由于电子设备中包括主摄像头相机模组与近红外相机模组，在红外闪光灯开启的情况下，拍摄对象的反射光增加，使得近红外相机模组的进光量增加；从而使得通过近红外相机模组拍摄的人像的细节信息增加，通过本申请实施例的图像处理方法对主摄像头相机模组与近红外相机模组采集的图像进行融合处理，能够对主摄像头相机模组获取的图像进行图像增强，提高图像中的细节信息。此外，红外闪光灯是用户无法感知的，在用户无感知的情况下，提高图像中的细节信息。In one example, when the embodiment of the present application is applied to night-scene portrait shooting, the infrared flash in the electronic device may be turned on; for example, a portrait may include the subject's face, eyes, nose, mouth, ears, eyebrows, and so on. Since the electronic device includes both a main camera module and a near-infrared camera module, turning on the infrared flash increases the light reflected by the subject and thus the amount of light entering the near-infrared camera module, so the portrait captured by the near-infrared camera module contains more detail information. By fusing the images collected by the main camera module and the near-infrared camera module through the image processing method of the embodiment of the present application, the image captured by the main camera module can be enhanced and its detail information improved. In addition, the infrared flash is imperceptible to the user, so the detail information in the image is improved without the user noticing.
可选地,电子设备在检测到食物或者人像时可以关闭近红外相机模组。Optionally, the electronic device can turn off the near-infrared camera module when detecting food or a portrait.
例如,在食物拍摄场景中可以包括多个食物,近红外相机模组可以采集多个食物中部分食物的图像;例如,多个食物可以为桃子、苹果或者西瓜等,近红外相机模组可以采集桃子和苹果的图像,且不采集西瓜的图像。For example, a food shooting scene may include multiple foods, and the near-infrared camera module may collect images of some of the foods in the multiple foods; for example, the multiple foods may be peaches, apples, or watermelons, etc., and the near-infrared camera module may collect Images of peaches and apples are collected, and images of watermelons are not collected.
可选地，近红外相机模组可以显示提示信息，提示用户是否开启近红外相机模组；在用户授权开启近红外相机模组后，近红外相机模组才能够开启采集图像。Optionally, prompt information may be displayed asking the user whether to enable the near-infrared camera module; the near-infrared camera module can start collecting images only after the user authorizes enabling it.
在一个示例中，本申请的图像处理方法可以应用于折叠屏终端设备中；例如，折叠屏终端设备可以包括外屏与内屏；在折叠屏终端设备的外屏与内屏之间的夹角为零度时，可以在外屏上显示预览图像，如图4中的(a)所示；在折叠屏终端设备的外屏与内屏之间的夹角为锐角时，可以在外屏上显示预览图像，如图4中的(b)所示；在折叠屏终端设备的外屏与内屏之间的夹角为钝角时，可以在内屏上的一侧显示预览图像，另一侧显示用于指示拍摄的控件，如图4中的(c)所示；在折叠屏终端设备的外屏与内屏之间的夹角为180度时，可以在内屏上显示预览图像，如图4中的(d)所示；上述预览图像可以是通过本申请实施例提供的图像处理方法对采集的图像进行处理得到的。示例性地，图4所示的折叠屏终端设备可以包括第一相机模组、第二相机模组以及红外闪光灯；其中，第一相机模组可以获取的光谱范围为可见光(400nm~700nm)；第二相机模组可以获取的光谱范围为近红外光(700nm~1100nm)。In one example, the image processing method of the present application may be applied to a foldable-screen terminal device; for example, the foldable-screen terminal device may include an outer screen and an inner screen. When the angle between the outer screen and the inner screen is zero degrees, a preview image may be displayed on the outer screen, as shown in (a) of FIG. 4; when the angle is acute, a preview image may be displayed on the outer screen, as shown in (b) of FIG. 4; when the angle is obtuse, a preview image may be displayed on one side of the inner screen and a control for instructing shooting on the other side, as shown in (c) of FIG. 4; when the angle is 180 degrees, a preview image may be displayed on the inner screen, as shown in (d) of FIG. 4. The above preview image may be obtained by processing the collected images through the image processing method provided by the embodiment of the present application. Exemplarily, the foldable-screen terminal device shown in FIG. 4 may include a first camera module, a second camera module, and an infrared flash, where the spectral range that the first camera module can capture is visible light (400nm-700nm), and the spectral range that the second camera module can capture is near-infrared light (700nm-1100nm).
应理解,上述为对应用场景的举例说明,并不对本申请的应用场景作任何限定。It should be understood that the foregoing is an illustration of an application scenario, and does not limit the application scenario of the present application in any way.
下面结合图5与图15对本申请实施例提供的图像处理方法进行详细描述。The image processing method provided by the embodiment of the present application will be described in detail below with reference to FIG. 5 and FIG. 15 .
图5是本申请实施例提供的图像处理方法的示意图。该图像处理方法可以由图1所示的电子设备执行;该方法200包括步骤S201至步骤S205,下面分别对步骤S201至步骤S205进行详细的描述。FIG. 5 is a schematic diagram of an image processing method provided by an embodiment of the present application. The image processing method can be executed by the electronic device shown in FIG. 1; the method 200 includes step S201 to step S205, which will be described in detail below respectively.
应理解，图5所示的图像处理方法应用于电子设备，电子设备中包括第一相机模组与第二相机模组，第二相机模组为近红外相机模组或者红外相机模组(例如，获取的光谱范围为700nm~1100nm)。It should be understood that the image processing method shown in FIG. 5 is applied to an electronic device that includes a first camera module and a second camera module, where the second camera module is a near-infrared camera module or an infrared camera module (for example, the acquired spectral range is 700nm-1100nm).
可选地,第一相机模组可以为可见光相机模组(例如,获取的光谱范围为400nm~700nm),或者第一相机模组为其他可以获取可见光的相机模组。Optionally, the first camera module may be a visible light camera module (for example, the acquired spectral range is 400nm-700nm), or the first camera module may be other camera modules capable of acquiring visible light.
步骤S201、显示第一界面,第一界面包括第一控件。Step S201, displaying a first interface, where the first interface includes a first control.
可选地,第一界面可以是指电子设备的拍照界面,第一控件可以是指拍照界面中用于指示拍照的控件,如图3或者图4所示。Optionally, the first interface may refer to the photographing interface of the electronic device, and the first control may refer to a control in the photographing interface for instructing photographing, as shown in FIG. 3 or FIG. 4 .
可选地,第一界面可以是指电子设备的视频录制界面,第一控件可以是指视频录制界面中用于指示录制视频的控件。Optionally, the first interface may refer to a video recording interface of the electronic device, and the first control may refer to a control in the video recording interface for instructing to record a video.
可选地,第一界面可以是指电子设备的视频通话界面,第一控件可以是指视频通话界面用于指示视频通话的控件。Optionally, the first interface may refer to a video call interface of the electronic device, and the first control may refer to a control on the video call interface used to indicate a video call.
步骤S202、检测到对第一控件的第一操作。Step S202, detecting a first operation on the first control.
可选地,第一操作可以是指对拍照界面中指示拍照的控件的点击操作。Optionally, the first operation may refer to a click operation on a control indicating to take a photo in the photo taking interface.
可选地,第一操作可以是指对视频录制界面中指示录制视频的控件的点击操作。Optionally, the first operation may refer to a click operation on a control indicating to record a video in the video recording interface.
可选地,第一操作可以是指对视频通话界面中指示视频通话的控件的点击操作。Optionally, the first operation may refer to a click operation on a control indicating a video call in the video call interface.
应理解，上述以第一操作为点击操作为例进行举例说明；第一操作还可以包括语音指示操作，或者其它的指示电子设备进行拍照或者视频通话的操作；上述为举例说明，并不对本申请作任何限定。It should be understood that the click operation is described above merely as an example of the first operation; the first operation may also include a voice instruction operation, or another operation instructing the electronic device to take a photo or make a video call; the above is merely an example and does not limit this application in any way.
步骤S203、响应于第一操作,获取N帧第一图像与M帧第二图像。Step S203, in response to the first operation, acquire N frames of the first image and M frames of the second image.
其中，N帧第一图像可以是通过第一相机模组采集的图像，M帧第二图像是通过所述第二相机模组采集的图像；第二相机模组为近红外相机模组或者红外相机模组(例如，获取的光谱范围为700nm~1100nm)；N、M为大于1的正整数。The N frames of first images may be images collected by the first camera module, and the M frames of second images are images collected by the second camera module; the second camera module is a near-infrared camera module or an infrared camera module (for example, the acquired spectral range is 700nm-1100nm); N and M are positive integers greater than 1.
可选地,第一图像与第二图像可以是指Raw颜色空间的图像。Optionally, the first image and the second image may refer to images in Raw color space.
例如,第一图像可以是指Raw颜色空间的RGB图像;第二图像可以是指Raw颜色空间的NIR图像。For example, the first image may refer to an RGB image in a Raw color space; the second image may refer to an NIR image in a Raw color space.
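For reference, a Raw-domain RGGB image such as the first image mentioned above stores one color sample per pixel in a repeating 2×2 Bayer tile (R G on the top row, G B on the bottom). The sketch below separates such a mosaic into its color planes; it is illustrative only, since a real ISP reconstructs full-resolution color by demosaicing (interpolation) rather than by simple plane extraction.

```python
import numpy as np

def split_rggb(raw):
    # RGGB tile layout:  R G
    #                    G B
    r = raw[0::2, 0::2]                           # red samples
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average of the two greens
    b = raw[1::2, 1::2]                           # blue samples
    return r, g, b
```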
步骤S204、基于N帧第一图像和M帧第二图像,得到目标图像。Step S204, based on the N frames of the first image and the M frames of the second image, the target image is obtained.
其中,基于N帧第一图像和M帧第二图像,得到目标图像可以包括以下步骤:Wherein, based on the first image of N frames and the second image of M frames, obtaining the target image may include the following steps:
对N帧第一图像进行第一图像处理，得到N帧第三图像，N帧第三图像的图像质量高于N帧第一图像的图像质量；对M帧第二图像进行第二图像处理，得到M帧第四图像，M帧第四图像的图像质量高于所述M帧第二图像的图像质量；基于语义分割图像，对N帧第三图像和M帧第四图像进行融合处理，得到融合图像，所述语义分割图像为基于N帧第一图像中任一帧图像或者M帧第二图像中任一帧图像得到的，融合图像的细节信息优于N帧第一图像的细节信息；对融合图像进行第三图像处理，得到目标图像。Performing first image processing on the N frames of first images to obtain N frames of third images, the image quality of the N frames of third images being higher than that of the N frames of first images; performing second image processing on the M frames of second images to obtain M frames of fourth images, the image quality of the M frames of fourth images being higher than that of the M frames of second images; performing fusion processing on the N frames of third images and the M frames of fourth images based on a semantic segmentation image to obtain a fused image, where the semantic segmentation image is obtained from any one of the N frames of first images or any one of the M frames of second images, and the detail information of the fused image is better than that of the N frames of first images; and performing third image processing on the fused image to obtain the target image.
应理解，N帧第三图像的图像质量高于N帧第一图像的图像质量可以是指N帧第三图像中的噪声少于N帧第一图像中的噪声；或者，通过图像质量的评估算法对N帧第三图像与N帧第一图像进行评估，得到的评估结果为N帧第三图像的图像质量高于N帧第一图像等，本申请对此不作任何限定。图像质量的评估例如可以包括像曝光、清晰度、颜色、质感、噪音、防手抖、闪光灯、对焦和/或伪像等方面的评估。It should be understood that the image quality of the N frames of third images being higher than that of the N frames of first images may mean that the N frames of third images contain less noise than the N frames of first images; or the N frames of third images and the N frames of first images may be evaluated by an image quality evaluation algorithm, with the result that the image quality of the N frames of third images is higher, and so on; this application does not limit this. Evaluation of image quality may include, for example, evaluation of exposure, sharpness, color, texture, noise, anti-shake, flash, focus, and/or artifacts.
还应理解，M帧第四图像的图像质量高于M帧第二图像的图像质量可以是指M帧第四图像中的噪声少于M帧第二图像中的噪声；或者，通过图像质量的评估算法对M帧第四图像与M帧第二图像进行评估，得到的评估结果为M帧第四图像的图像质量高于M帧第二图像等，本申请对此不作任何限定。It should also be understood that the image quality of the M frames of fourth images being higher than that of the M frames of second images may mean that the M frames of fourth images contain less noise than the M frames of second images; or the M frames of fourth images and the M frames of second images may be evaluated by an image quality evaluation algorithm, with the result that the image quality of the M frames of fourth images is higher, and so on; this application does not limit this.
还应理解，融合图像的细节信息优于N帧第一图像的细节信息可以是指融合图像中的细节信息多于N帧第一图像中任意一帧第一图像中的细节信息；或者，融合图像的细节信息优于N帧第一图像的细节信息可以是指融合图像的清晰度优于N帧第一图像中任意一帧第一图像的清晰度。也可以是其他情况，本申请不进行限定。例如，细节信息可以包括拍摄对象的边缘信息、纹理信息等(例如，发丝边缘，人脸细节，衣服褶皱、大量树木的每颗树木边缘，绿植的枝叶脉络等)。It should also be understood that the detail information of the fused image being better than that of the N frames of first images may mean that the fused image contains more detail information than any one of the N frames of first images; or it may mean that the definition of the fused image is better than that of any one of the N frames of first images. Other cases are also possible, which is not limited in this application. For example, the detail information may include edge information and texture information of the photographed object (for example, hair-strand edges, facial details, clothing folds, the edge of each tree among many trees, the veins of branches and leaves of green plants, and so on).
在本申请的实施例中，可以基于语义分割图像，对N帧第三图像和M帧第四图像进行融合处理，得到融合图像；通过在融合处理中引入语义分割图像可以确定融合的局部图像信息；比如，可以通过语义分割图像选取N帧第三图像与M帧第四图像中的局部图像信息进行融合处理，从而能够增加融合图像的局部细节信息。此外，通过对第三图像(例如，可见光图像)与第四图像(例如，近红外光图像)进行融合处理，可以实现近红外光的图像信息与可见光的图像信息的多光谱信息融合，使得融合后的图像中包括更多的细节信息，能够增强图像中的细节信息。In the embodiments of the present application, fusion processing may be performed on the N frames of third images and the M frames of fourth images based on a semantic segmentation image to obtain a fused image; introducing the semantic segmentation image into the fusion processing makes it possible to determine which local image information is fused. For example, local image information in the N frames of third images and the M frames of fourth images can be selected through the semantic segmentation image for fusion processing, thereby increasing the local detail information of the fused image. In addition, by fusing the third image (for example, a visible-light image) with the fourth image (for example, a near-infrared image), multi-spectral fusion of near-infrared image information and visible-light image information can be achieved, so that the fused image includes more detail information, enhancing the detail information in the image.
例如，M帧第四图像为近红外相机模组或者红外相机模组采集的第二图像进行第二图像处理得到的；因此，M帧第四图像包括拍摄对象对近红外光的反射信息；由于近红外光对绿色景物的反射率较高，因此通过近红外相机模组或者红外模组拍摄得到的绿色景物的细节信息更多；通过语义分割图像可以从第四图像中选取绿色景物图像区域进行融合处理，从而能够增强图像中暗光区域中绿色景物的细节信息。For example, the M frames of fourth images are obtained by performing second image processing on the second images collected by the near-infrared camera module or the infrared camera module; therefore, the M frames of fourth images include the reflection of near-infrared light by the photographed object. Since near-infrared light has a high reflectivity for green scenery, the green scenery captured by the near-infrared or infrared camera module contains more detail information; the green-scenery image area can be selected from the fourth image based on the semantic segmentation image for fusion processing, thereby enhancing the detail information of green scenery in low-light areas of the image.
For example, the M frames of fourth images are obtained by performing the second image processing on the second images collected by the near-infrared camera module or the infrared camera module. Because the spectral range that the near-infrared camera module or the infrared camera module can acquire is near-infrared light, and the wavelength of the near-infrared spectrum is relatively long, near-infrared light has a relatively strong diffraction capability. For shooting scenes with cloud or fog, or scenes containing distant objects, the images collected by the near-infrared camera module or the infrared camera module therefore appear clearer, that is, they include more detail information of distant objects (for example, texture information of distant mountains). Distant image regions selected from the fourth images through the semantic segmentation image may be fused with nearby image regions selected from the third images through the semantic segmentation image, so that the detail information in the fused image is enhanced.
Optionally, the N frames of first images may be N frames of Raw images (for example, RGGB images) collected by the first camera module; the first image processing may be performed on the N frames of first images to obtain the N frames of third images, where the first image processing may include black level correction processing and/or phase defect pixel correction processing.
Black level correction (BLC) is used to correct the black level, which is the video signal level at which a calibrated display device produces no luminance output. Phase defect pixel correction (PDPC) may include phase defect correction (PDC) and bad pixel correction (BPC). The bad pixels handled by BPC are bright or dark pixels at random positions and are relatively few in number, so BPC can be implemented by a filtering algorithm; compared with ordinary pixels, phase pixels are defective pixels at fixed positions and are relatively numerous, so PDC removes phase pixels by using a known list of phase pixel positions.
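The BLC and filtering-based BPC steps described above can be sketched as follows; this is a minimal illustration, assuming a uint16 sensor output, a hypothetical black level of 64, a 3×3 median window, and an outlier threshold of 200 (none of these values come from this application, and the list-based PDC removal is omitted).

```python
import numpy as np

def black_level_correction(raw, black_level=64):
    # Subtract the assumed sensor black level; clip negatives to zero.
    return np.clip(raw.astype(np.int32) - black_level, 0, None).astype(np.uint16)

def bad_pixel_correction(raw, threshold=200):
    # BPC as a simple filtering algorithm: a pixel that deviates strongly
    # from the median of its 3x3 neighbourhood is treated as a randomly
    # positioned bright/dark defect and replaced by that median.
    padded = np.pad(raw.astype(np.int32), 1, mode="edge")
    out = raw.copy()
    h, w = raw.shape
    for y in range(h):
        for x in range(w):
            med = np.median(padded[y:y + 3, x:x + 3])
            if abs(int(raw[y, x]) - med) > threshold:
                out[y, x] = med
    return out

raw = np.full((5, 5), 100, dtype=np.uint16)
raw[2, 2] = 1023  # one stuck bright pixel
corrected = bad_pixel_correction(black_level_correction(raw))
```

After black level subtraction the flat background becomes 100 - 64 = 36, and the stuck pixel is detected and replaced by its neighbourhood median.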
Optionally, the first image processing may further include, but is not limited to:
automatic white balance (AWB) processing, lens shading correction (LSC), and the like.
Automatic white balance processing is used so that the camera can restore white objects to white at any color temperature. Due to the influence of color temperature, white paper appears yellowish at low color temperatures and bluish at high color temperatures; the purpose of white balance is to make a white object appear white, with R=G=B, at any color temperature. Lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center that are caused by the lens optical system.
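The gray-world assumption gives a compact illustration of this AWB goal (the actual AWB algorithm used in the pipeline is not specified in this application): each channel is rescaled so that a neutral surface comes out with R=G=B.

```python
import numpy as np

def gray_world_awb(rgb):
    # Gray-world AWB: assume the scene averages to neutral gray, so the
    # per-channel means should be equal; scale R and B toward the green mean.
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means[1] / means
    return rgb * gains

# A gray scene under a bluish (high colour temperature) cast:
img = np.full((4, 4, 3), 1.0) * np.array([80.0, 100.0, 125.0])
balanced = gray_world_awb(img)
```

After correction every pixel of the gray scene has equal R, G and B values.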
It should be understood that the first image processing may include black level correction, phase defect pixel correction, and other Raw domain image processing algorithms; automatic white balance processing and lens shading correction are described above merely as examples of the other Raw domain image processing algorithms, and this application does not impose any limitation on them.
Optionally, performing the second image processing on the M frames of second images to obtain the M frames of fourth images includes:
performing black level correction processing and/or phase defect pixel correction processing on the M frames of second images to obtain M frames of fifth images; and taking any one of the N frames of third images as a reference, performing first registration processing on the M frames of fifth images to obtain the M frames of fourth images.
Black level correction (BLC) is used to correct the black level, which is the video signal level at which a calibrated display device produces no luminance output. Phase defect pixel correction (PDPC) may include phase defect correction (PDC) and bad pixel correction (BPC). The bad pixels handled by BPC are bright or dark pixels at random positions and are relatively few in number, so BPC can be implemented by a filtering algorithm; compared with ordinary pixels, phase pixels are defective pixels at fixed positions and are relatively numerous, so PDC removes phase pixels by using a known list of phase pixel positions.
Optionally, the second image processing may further include, but is not limited to:
automatic white balance (AWB) processing, lens shading correction (LSC), and the like.
Automatic white balance processing is used so that the camera can restore white objects to white at any color temperature. Due to the influence of color temperature, white paper appears yellowish at low color temperatures and bluish at high color temperatures; the purpose of white balance is to make a white object appear white, with R=G=B, at any color temperature. Lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center that are caused by the lens optical system.
It should be understood that the second image processing may include black level correction, phase defect pixel correction, and other Raw domain image processing algorithms; automatic white balance processing and lens shading correction are described above merely as examples of the other Raw domain image processing algorithms, and this application does not impose any limitation on them.
In one example, the first camera module is a visible light camera module, and the second camera module is a near-infrared camera module or an infrared camera module. N frames of RGB Raw images may be collected through the first camera module, and M frames of NIR Raw images may be collected through the second camera module. Black level correction processing and/or phase defect pixel correction processing are performed on the N frames of RGB Raw images to obtain N frames of processed RGB Raw images, and on the M frames of NIR Raw images to obtain M frames of processed NIR Raw images. Because the two camera modules are not arranged at the same position, there is a certain baseline distance between the first camera module and the second camera module for the same shooting scene; therefore, taking any one of the N frames of processed RGB Raw images as a reference, global registration processing may be performed on the M frames of processed NIR Raw images to obtain M frames of registered images. Based on the semantic segmentation image, fusion processing is performed on the N frames of processed RGB Raw images and the M frames of registered images to obtain a fused image. Optionally, for the specific steps, refer to FIG. 9 below.
Optionally, the global registration processing may be global registration processing performed on the M frames of processed NIR Raw images by taking any one of the N frames of processed RGB Raw images as a reference; alternatively, it may be global registration processing performed on the N frames of processed RGB Raw images by taking any one of the M frames of processed NIR Raw images as a reference.
Optionally, to facilitate the fusion processing, the image resolution of the M frames of fourth images may be adjusted to be the same as that of the N frames of third images. For example, taking any one of the N frames of third images as a reference, up-sampling or down-sampling processing may be performed on the M frames of registered images to obtain the M frames of fourth images; alternatively, taking any one of the M frames of fourth images as a reference, up-sampling or down-sampling processing may be performed on the N frames of processed RGB Raw images to obtain the N frames of third images.
Optionally, for the specific procedure of obtaining the M frames of fourth images by performing up-sampling or down-sampling processing on the M frames of registered images, taking any one of the N frames of third images as a reference, refer to FIG. 8 below.
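The resolution-matching step can be illustrated with a nearest-neighbour resampler; real pipelines would typically use bilinear or bicubic interpolation, and the frame sizes below are toy stand-ins for the 7M/10M example discussed later.

```python
import numpy as np

def match_resolution(nir, target_shape):
    # Nearest-neighbour resampling of the NIR frame to the reference
    # (visible-light) resolution, so the two frames can be fused pixelwise.
    th, tw = target_shape
    h, w = nir.shape
    ys = np.arange(th) * h // th
    xs = np.arange(tw) * w // tw
    return nir[ys[:, None], xs[None, :]]

nir = np.arange(4, dtype=np.float32).reshape(2, 2)  # small "7M-style" frame
up = match_resolution(nir, (4, 4))                  # match "10M-style" reference
```

The same function performs down-sampling when the target shape is smaller than the input.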
Optionally, performing the second image processing on the M frames of second images to obtain the M frames of fourth images includes:
performing black level correction processing and/or phase defect pixel correction processing on the M frames of second images to obtain M frames of fifth images; taking any one of the N frames of third images as a reference, performing first registration processing on the M frames of fifth images to obtain M frames of first registered images; and taking the same third image as a reference, performing second registration processing on the M frames of first registered images to obtain the M frames of fourth images.
In one example, the first camera module may be a visible light camera module, and the second camera module may be a near-infrared camera module or an infrared camera module. N frames of RGB Raw images may be collected through the first camera module, and M frames of NIR Raw images may be collected through the second camera module. Black level correction processing and/or phase defect pixel correction processing are performed on the N frames of RGB Raw images to obtain N frames of processed RGB Raw images, and on the M frames of NIR Raw images to obtain M frames of processed NIR Raw images. Because the two camera modules are not arranged at the same position, there is a certain baseline distance between the first camera module and the second camera module for the same shooting scene; therefore, taking any one of the N frames of processed RGB Raw images as a reference, global registration processing may be performed on the M frames of processed NIR Raw images to obtain M frames of first registered images. Further, taking any one of the N frames of processed RGB Raw images as a reference, local registration processing may be performed on the M frames of first registered images to obtain second registered images; based on the semantic segmentation image, fusion processing is performed on the N frames of processed RGB Raw images and the second registered images to obtain a fused image. Optionally, for the specific steps, refer to FIG. 10 below.
Optionally, the global registration processing may be global registration processing performed on the M frames of processed NIR Raw images by taking any one of the N frames of processed RGB Raw images as a reference; alternatively, it may be global registration processing performed on the N frames of processed RGB Raw images by taking any one of the M frames of processed NIR Raw images as a reference.
Optionally, local registration processing is further performed on the basis of the global registration processing, so that the local details in the M frames of first registered images undergo image registration again; this can improve the local detail information in the fused image.
Optionally, to facilitate the fusion processing, the image resolution of the M frames of fourth images may be adjusted to be the same as that of the N frames of third images. For example, taking any one of the N frames of third images as a reference, up-sampling or down-sampling processing may be performed on the M frames of registered images to obtain the M frames of fourth images; alternatively, taking any one of the M frames of fourth images as a reference, up-sampling or down-sampling processing may be performed on the N frames of processed RGB Raw images to obtain the N frames of third images.
Optionally, for the specific procedure of obtaining the M frames of fourth images by performing up-sampling or down-sampling processing on the M frames of registered images, taking any one of the N frames of third images as a reference, refer to FIG. 8 below.
Optionally, performing fusion processing on the N frames of third images and the M frames of fourth images based on the semantic segmentation image to obtain the fused image includes:
based on the semantic segmentation image, performing fusion processing on the N frames of third images and the M frames of fourth images through an image processing model to obtain the fused image.
示例性地,图像处理模型为预先训练的神经网络。Exemplarily, the image processing model is a pre-trained neural network.
例如,可以通过大量的样本数据与损失函数通过反向传播算法迭代更新神经网络的参数,得到图像处理模型。For example, a large amount of sample data and a loss function can be used to iteratively update the parameters of the neural network through the backpropagation algorithm to obtain an image processing model.
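The learned fusion model itself is not detailed in this excerpt; as a purely illustrative stand-in, the sketch below shows how a segmentation mask can gate which regions take NIR detail, using single-channel inputs and a fixed blending weight (both assumptions, not part of the application).

```python
import numpy as np

def mask_guided_fusion(visible, nir, seg_mask, nir_weight=0.7):
    # Where the mask marks regions the NIR frame resolves better
    # (e.g. green vegetation or distant scenery), blend in NIR detail;
    # elsewhere keep the visible-light image unchanged.
    w = np.where(seg_mask, nir_weight, 0.0)
    return (1.0 - w) * visible + w * nir

visible = np.full((2, 2), 100.0)
nir = np.full((2, 2), 200.0)
mask = np.array([[True, False], [False, False]])  # one "vegetation" pixel
fused = mask_guided_fusion(visible, nir, mask)
```

Only the masked pixel picks up NIR detail; the rest of the frame keeps the visible-light values, which is the region-selection idea the semantic segmentation image provides to the model.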
步骤S205、保存目标图像。Step S205, saving the target image.
可选地,可以对融合图像进行第三图像处理,可以得到目标图像;目标图像可以是指电子设备的显示屏中显示的图像。Optionally, a third image processing may be performed on the fused image to obtain a target image; the target image may refer to an image displayed on a display screen of an electronic device.
示例性地,融合图像可以是指RGB颜色空间的图像;目标图像可以是指电子设备发送至屏幕中显示的图像;第三图像处理可以包括但不限于:RGB域图像算法,或者YUV域图像算法;可选地,可以参见后续图6所示的步骤S308与步骤S309。Exemplarily, the fused image may refer to an image in RGB color space; the target image may refer to an image sent by an electronic device to be displayed on a screen; the third image processing may include but not limited to: RGB domain image algorithm, or YUV domain image algorithm ; Optionally, refer to step S308 and step S309 shown in FIG. 6 .
Optionally, the electronic device may further include an infrared flash. In a dark-light scene, the infrared flash may be turned on; with the infrared flash turned on, the N frames of first images and the M frames of second images may be acquired. The dark-light scene refers to a shooting scene in which the amount of light entering the electronic device is less than a preset threshold.
It should be understood that, in a dark-light scene, the amount of light entering the electronic device is relatively small. After the electronic device turns on the infrared flash, the reflected light received by the second camera module increases, and therefore the amount of light entering the second camera module increases, so that the definition of the second images collected by the second camera module increases. As the definition of the second images increases, the definition of the fourth images obtained through the second image processing increases; and as the definition of the fourth images increases, the definition of the fused image obtained by processing the third images and the fourth images through the image processing model increases.
Optionally, a larger brightness value of the electronic device indicates a larger amount of light entering the electronic device; therefore, the amount of incident light can be determined from the brightness value of the electronic device, and when the brightness value of the electronic device is less than a preset brightness threshold, the electronic device turns on the infrared flash.
The brightness value is used to estimate the ambient brightness; its specific calculation formula is as follows:
Figure PCTCN2022138808-appb-000002 (the brightness value formula is provided as an image)
Here, Exposure is the exposure time; Aperture is the aperture size; Iso is the sensitivity; and Luma is the average value of Y of the image in the XYZ color space.
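The exact brightness-value formula appears only as a figure in the application (PCTCN2022138808-appb-000002); the sketch below combines the same four variables in an assumed APEX-style form purely to illustrate the thresholding logic, and the threshold value is likewise hypothetical.

```python
import math

def lighting_value(exposure, aperture, iso, luma, luma_ref=50.0):
    # Assumed APEX-style combination of Exposure, Aperture, Iso and Luma;
    # NOT the actual formula from the application (which is image-only here).
    return (math.log2(aperture ** 2 / exposure)
            - math.log2(iso / 100.0)
            + math.log2(luma / luma_ref))

PRESET_BRIGHTNESS_THRESHOLD = 5.0  # hypothetical "dark scene" threshold

lv = lighting_value(exposure=1 / 30, aperture=1.8, iso=800, luma=20.0)
turn_on_ir_flash = lv < PRESET_BRIGHTNESS_THRESHOLD
```

With these example exposure parameters the estimated brightness falls below the threshold, so the infrared flash would be turned on.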
可选地,电子设备的第一界面中还可以包括第二控件;在暗光场景下,电子设备检测到对第二控件的第二操作;响应于第二操作电子设备可以开启红外闪光灯。Optionally, the first interface of the electronic device may further include a second control; in a dark scene, the electronic device detects a second operation on the second control; in response to the second operation, the electronic device may turn on an infrared flash.
In the embodiments of this application, the electronic device may include the first camera module and the second camera module, where the second camera module is a near-infrared camera module or an infrared camera module; for example, the spectral range it can acquire is near-infrared light (700 nm to 1100 nm). The first images are collected through the first camera module, and the second images are collected through the second camera module. Because the second images (for example, near-infrared images) include image information that cannot be obtained from the first images (for example, visible light images), and, similarly, the third images include image information that cannot be obtained from the fourth images, fusing the third images (for example, visible light images) with the fourth images (for example, near-infrared light images) can realize multi-spectral fusion of near-infrared image information and visible-light image information, so that the fused image includes more detail information. Therefore, the image processing method provided in the embodiments of this application can enhance the detail information in an image.
图6是本申请实施例提供的图像处理方法的示意图。该图像处理方法可以由图1所示的电子设备执行;该图像处理方法包括步骤S301至步骤S309,下面分别对步骤S301至步骤S309进行详细的描述。FIG. 6 is a schematic diagram of an image processing method provided by an embodiment of the present application. The image processing method can be executed by the electronic device shown in FIG. 1 ; the image processing method includes step S301 to step S309 , and step S301 to step S309 will be described in detail below.
It should be understood that the image processing method shown in FIG. 6 can be applied to the electronic device shown in FIG. 1. The electronic device includes the first camera module and the second camera module; the spectral range that the first camera module can acquire is visible light (400 nm to 700 nm), and the spectral range that the second camera module can acquire is near-infrared light (700 nm to 1100 nm).
步骤S301、通过第一相机模组获取第一Raw图像(例如,第一图像的一个示例)。Step S301. Obtain a first Raw image (for example, an example of the first image) through the first camera module.
Exemplarily, the first camera module may include a first optical filter, a first lens, and an image sensor; the spectral range that the first optical filter can pass is visible light (400 nm to 700 nm).
It should be understood that the first optical filter may be a filter lens; the first optical filter may be used to absorb light of certain specific wavelength bands and allow light in the visible-light band to pass through.
可选地,步骤301中可以获取多帧第一Raw图像(例如,N帧第一图像)。Optionally, multiple frames of first Raw images (for example, N frames of first images) may be acquired in step 301 .
步骤S302、通过第二相机模组获取第二Raw图像(第二图像的一个示例)。Step S302, acquiring a second Raw image (an example of the second image) through the second camera module.
Exemplarily, the second camera module may include a second optical filter, a second lens, and an image sensor; the spectral range that the second optical filter can pass is near-infrared light (700 nm to 1100 nm).
It should be understood that the second optical filter may be a filter lens; the second optical filter may be used to absorb light of certain specific wavelength bands and allow light in the near-infrared band to pass through.
It should be noted that, in the embodiments of this application, the second Raw image collected by the second camera module may be a single-channel image; the second Raw image represents the superimposed intensity information of photons; for example, the second Raw image may be a single-channel grayscale image.
可选地,步骤302中获取的第二Raw图像可以是指多帧第二Raw图像(例如,M帧第二图像)。Optionally, the second Raw images acquired in step 302 may refer to multiple frames of second Raw images (for example, M frames of second images).
可选地,上述步骤S301与步骤S302可以是同步执行的;即第一相机模组与第二相机模组可以同步出帧,分别获取第一Raw图像与第二Raw图像。Optionally, the above step S301 and step S302 may be performed synchronously; that is, the first camera module and the second camera module may output frames synchronously, and obtain the first Raw image and the second Raw image respectively.
步骤S303、黑电平校正与相位坏点校正。Step S303, black level correction and phase bad point correction.
示例性地,对第一Raw图像进行黑电平校正与相位坏点校正,得到第三Raw图像(第三图像的一个示例)。Exemplarily, a third Raw image (an example of a third image) is obtained by performing black level correction and phase defect correction on the first Raw image.
Black level correction (BLC) is used to correct the black level, which is the video signal level at which a calibrated display device produces no luminance output. Phase defect pixel correction (PDPC) may include phase defect correction (PDC) and bad pixel correction (BPC); the bad pixels handled by BPC are bright or dark pixels at random positions and are relatively few in number. BPC can be implemented by a filtering algorithm, whereas PDC removes phase pixels by using a known list of phase pixel positions.
Optionally, black level correction and phase defect pixel correction are described above merely as examples; other image processing algorithms may also be performed on the first Raw image. For example, automatic white balance (AWB) processing or lens shading correction (LSC) may also be performed on the first Raw image.
Automatic white balance processing is used so that the camera can restore white objects to white at any color temperature. Due to the influence of color temperature, white paper appears yellowish at low color temperatures and bluish at high color temperatures; the purpose of white balance is to make a white object appear white, with R=G=B, at any color temperature. Lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center that are caused by the lens optical system. It should be understood that automatic white balance processing and lens shading correction are described above merely as examples of other image processing algorithms, and this application does not impose any limitation on the other image processing algorithms.
步骤S304、黑电平校正与相位坏点校正。Step S304, black level correction and phase bad point correction.
示例性地,对第二Raw图像进行黑电平校正与相位坏点校正,得到第四Raw图像(第五图像的一个示例)。Exemplarily, a fourth Raw image (an example of a fifth image) is obtained by performing black level correction and phase defect correction on the second Raw image.
可选地,步骤S303与步骤S304可以没有时序要求,或者,步骤S303与步骤S304也可以是同时执行的。Optionally, step S303 and step S304 may not have timing requirements, or step S303 and step S304 may also be executed simultaneously.
步骤S305、获取语义分割图像。Step S305, acquiring a semantically segmented image.
Exemplarily, the first frame of third Raw image may be processed by a semantic segmentation algorithm to obtain the semantic segmentation image.
Optionally, the semantic segmentation algorithm may include a multi-instance segmentation algorithm; the semantic segmentation algorithm can output a label for each region in the image. In the embodiments of this application, the semantic segmentation image can be obtained and is used in the fusion processing performed by the image processing model in step S307; by introducing the semantic segmentation image into the fusion processing, partial image regions can be selected from different images for fusion processing, thereby increasing the local detail information of the fused image.
Exemplarily, the first frame of fourth Raw image may be processed by the semantic segmentation algorithm to obtain the semantic segmentation image.
Optionally, in the embodiments of this application, the semantic segmentation image may be obtained from the fourth Raw image, that is, the near-infrared image; because near-infrared images describe details relatively well, a semantic segmentation image obtained from the near-infrared image contains richer detail information.
步骤S306、预处理。Step S306, preprocessing.
示例性地,对第三Raw图像、第四Raw图像以及语义分割图像进行预处理。Exemplarily, preprocessing is performed on the third Raw image, the fourth Raw image and the semantic segmentation image.
Optionally, the preprocessing may include performing up-sampling processing and registration processing on the fourth Raw image. For example, the fourth Raw image may be up-sampled and registered by taking the first frame of third Raw image as a reference to obtain a fifth image.
Optionally, the preprocessing may further include feature concatenation processing, which refers to stacking images along the channel dimension so that the numbers of channels are superimposed.
It should be understood that, because the resolution of the fourth Raw image is smaller than that of the third Raw image, up-sampling processing needs to be performed on the fourth Raw image so that the fourth Raw image has the same resolution as the third Raw image. In addition, the fourth Raw image is collected by the second camera module, and the third Raw image is collected by the first camera module; because the first camera module and the second camera module are arranged at different positions in the electronic device, there is a certain baseline distance between the first camera module and the second camera module, that is, there is a certain parallax between the images collected by the two camera modules, so registration processing needs to be performed on the collected images.
It should be understood that up-sampling processing is used above as an example. If the resolution of the fourth Raw image is greater than that of the third Raw image, the preprocessing may include down-sampling processing and registration processing; if the resolution of the fourth Raw image is equal to that of the third Raw image, the preprocessing may include registration processing; this is not limited in the embodiments of this application.
Illustratively, the following description assumes that the preprocessing includes upsampling and registration; the preprocessing in step S306 is described in detail below with reference to FIG. 8.

It should be understood that the fourth Raw image refers to the Raw image obtained by performing black level correction and phase defect pixel correction on the second Raw image, where the second Raw image is the Raw image captured by the second camera module; the third Raw image refers to the Raw image obtained by performing black level correction and phase defect pixel correction on the first Raw image, where the first Raw image is the Raw image captured by the first camera module.
Step S320: acquire the fourth Raw image.

For example, the resolution of the fourth Raw image is 7M.

Step S330: acquire the third Raw image.

For example, the resolution of the third Raw image is 10M.

Step S340: registration.

For example, the fourth Raw image is registered with the third Raw image as a reference.
Illustratively, the registration in step S340 may be used to extract, from the fourth Raw image, the pixels it shares with the third Raw image. For example, if the third Raw image is 10M, the fourth Raw image is 7M, and 80% of the pixels are common to both images, the registration may be used to extract from the 7M fourth Raw image the 80% of pixels shared with the 10M third Raw image. Optionally, the multiple frames of the fourth Raw image may be registered with the first frame among the multiple frames of the third Raw image as a reference.
Step S350: correction.

For example, correction is performed on the fourth Raw image to obtain the fifth Raw image.

Illustratively, step S350 is used to upsample the pixels of the fourth Raw image selected in step S340 as shared with the third Raw image, to obtain a fifth Raw image with the same resolution as the third Raw image.

Optionally, the fourth Raw image may be transformed through an image transformation matrix (for example, a homography matrix), so that some pixels in the fourth Raw image are mapped onto an image of the same size as the third Raw image; here, a homography matrix describes the mapping between two planar projections of an image. Illustratively, the black region in the 10M third Raw image shown in FIG. 8 represents empty pixels.
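As an illustration of the homography-based mapping described above, the sketch below warps a source image onto a larger canvas so that unmapped positions remain empty (the black region in FIG. 8). The translation-only homography and the nearest-pixel rounding are illustrative assumptions, not values from this application.

```python
import numpy as np

def warp_homography(src, H, out_h, out_w):
    """Map src pixels onto an (out_h, out_w) canvas through homography H.
    Positions that receive no source pixel stay 0 ("empty pixels")."""
    dst = np.zeros((out_h, out_w), dtype=src.dtype)
    ys, xs = np.mgrid[0:src.shape[0], 0:src.shape[1]]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # homogeneous coords
    mapped = H @ pts
    mx = np.round(mapped[0] / mapped[2]).astype(int)
    my = np.round(mapped[1] / mapped[2]).astype(int)
    ok = (mx >= 0) & (mx < out_w) & (my >= 0) & (my < out_h)
    dst[my[ok], mx[ok]] = src[ys.ravel()[ok], xs.ravel()[ok]]
    return dst

H = np.array([[1.0, 0.0, 2.0],   # pure translation by (2, 1), a trivial homography
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
src = np.ones((4, 4), dtype=np.uint8)
canvas = warp_homography(src, H, 8, 8)
print(int(canvas.sum()))  # 16: all 4x4 source pixels land inside the 8x8 canvas
```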
Optionally, the preprocessing may further include feature extraction and concatenation (concat) of the third Raw image, the fifth Raw image, and the semantic segmentation image.

It should be understood that feature concatenation refers to stacking images along the channel dimension.

For example, assuming the fifth Raw image is a 3-channel image, the third Raw image is a 3-channel image, and the semantic segmentation image is a single-channel image, a 7-channel image is obtained after feature extraction and concatenation.
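The channel stacking in this example can be expressed directly with array concatenation; the H x W x C memory layout is an assumption for illustration.

```python
import numpy as np

h, w = 4, 6
fifth_raw = np.zeros((h, w, 3))   # 3-channel image
third_raw = np.zeros((h, w, 3))   # 3-channel image
seg_mask  = np.zeros((h, w, 1))   # single-channel semantic segmentation image

# Feature concatenation: stack along the channel axis, 3 + 3 + 1 = 7 channels.
stacked = np.concatenate([fifth_raw, third_raw, seg_mask], axis=-1)
print(stacked.shape)  # (4, 6, 7)
```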
Step S307: image processing model.

Illustratively, the preprocessed images are input into the image processing model to obtain an output RGB image (an example of the fused image).

In one example, after preprocessing, N frames of the third Raw image (an example of the third image), the semantic segmentation image, and M frames of the fifth Raw image (an example of the fourth image) may be input into the image processing model for fusion; the specific steps are shown in FIG. 9.

In one example, after preprocessing, N frames of the third Raw image (an example of the third image), the semantic segmentation image, and the sixth Raw image (an example of the fourth image) may be input into the image processing model for fusion, where the sixth Raw image is a single-frame image obtained in FIG. 10 by fusing multiple frames of the fifth Raw image with the third Raw image as a reference; the specific steps are shown in FIG. 10.

It can be understood that "fifth Raw image/sixth Raw image" in FIG. 6 and FIG. 7 denotes either the fifth Raw image or the sixth Raw image.
Optionally, the image processing model is a pre-trained neural network.

For example, the parameters of the neural network may be iteratively updated through a backpropagation algorithm using a large amount of sample data and a loss function, to obtain the image processing model.

Optionally, the image processing model may be used for fusion and demosaicing; for example, the image processing model may perform fusion and demosaicing on the input Raw images based on the semantic segmentation image, to obtain an RGB image.
Optionally, the image processing model may also be used for denoising. Since image noise mainly follows Poisson and Gaussian distributions, when denoising is performed over multiple frames, the Gaussian component approaches zero as the frames are stacked and averaged; therefore, denoising over multiple frames improves the denoising effect.
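The effect described above, that averaging multiple frames suppresses zero-mean Gaussian noise, can be demonstrated with a small numerical sketch; the noise level and frame count are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)                 # noise-free reference frame
frames = [clean + rng.normal(0.0, 10.0, clean.shape) for _ in range(16)]

single_err = np.abs(frames[0] - clean).mean()            # error of one noisy frame
avg_err = np.abs(np.mean(frames, axis=0) - clean).mean() # error after stacking/averaging
print(single_err > avg_err)  # True: averaging 16 frames shrinks the Gaussian noise ~4x
```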
Illustratively, in the embodiments of this application, the images input into the image processing model may include N frames of the third Raw image (visible light images) and M frames of the fifth Raw image (near-infrared images), or N frames of the third Raw image (visible light images) and the sixth Raw image (a near-infrared image). During demosaicing, the infrared image is a grayscale image and thus represents the true luminance values, while the visible light image is a Bayer-pattern Raw image in which luminance values are discontinuous, and the luminance of the discontinuous regions is usually predicted by interpolation; therefore, the true luminance values in the near-infrared image can guide the demosaicing of the Bayer-pattern Raw image, which effectively reduces false textures in the image.

In the embodiments of this application, the spectral range of the near-infrared image is 700nm to 1100nm and that of the visible light image is 400nm to 700nm; the image information included in the near-infrared image cannot be obtained from the visible light image. Therefore, by processing and then fusing the Raw image (visible light image) captured by the first camera module with the Raw image (near-infrared image) captured by the second camera module, multispectral fusion of near-infrared image information and visible light image information can be achieved, so that the fused image includes more detail information.

Optionally, step S307 may also output a Raw image (Raw color space), and the Raw image is then converted into an RGB image (RGB color space) through other steps.
Step S308: RGB image processing.

Illustratively, RGB-domain algorithm processing is performed on the RGB image.

Optionally, the RGB-domain algorithm processing may include, but is not limited to:

color correction matrix processing, three-dimensional lookup table processing, and the like.

Here, the color correction matrix (color correction matrix, CCM) is used to calibrate the accuracy of colors other than white. The three-dimensional lookup table (Look Up Table, LUT) is widely used in image processing; for example, a lookup table may be used for image color correction, image enhancement, image gamma correction, and so on. For example, a LUT may be loaded in the image signal processor, and image processing may be performed on the original image according to the LUT, mapping the original image to the color style of other images so as to achieve different image effects.
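As an illustration of color correction matrix processing, the sketch below applies an assumed 3x3 CCM whose rows each sum to 1, so that white is preserved while other colors are adjusted, consistent with the description above; the matrix values are illustrative only and are not from this application.

```python
import numpy as np

# Assumed CCM: identity plus small off-diagonal terms; each row sums to 1.0,
# so white (1, 1, 1) maps to itself while other colors are corrected.
ccm = np.array([[ 1.10, -0.05, -0.05],
                [-0.05,  1.10, -0.05],
                [-0.05, -0.05,  1.10]])

def apply_ccm(rgb, m):
    """Apply a color correction matrix to an H x W x 3 RGB image in [0, 1]."""
    out = rgb.reshape(-1, 3) @ m.T
    return np.clip(out, 0.0, 1.0).reshape(rgb.shape)

white = np.ones((2, 2, 3))
print(apply_ccm(white, ccm)[0, 0])  # [1. 1. 1.]
```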
Optionally, the RGB image processing may be performed according to the semantic segmentation image; for example, brightness processing may be applied to different regions of the RGB image according to the semantic segmentation image.

It should be noted that color correction matrix processing and three-dimensional lookup table processing are used above only as examples; this application imposes no limitation on the RGB image processing.
Step S309: YUV image processing.

Illustratively, the RGB image is converted into the YUV domain and YUV-domain algorithm processing is performed to obtain the target image.

Optionally, the YUV-domain algorithm processing may include, but is not limited to:

global tone mapping processing, gamma processing, and the like.

Here, global tone mapping (Global tone Mapping, GTM) is used to address the problem of unevenly distributed gray values in high-dynamic-range images. Gamma processing is used to adjust the brightness, contrast, and dynamic range of an image by adjusting the gamma curve.
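Gamma processing as described above can be sketched as a simple power-law curve applied to a normalized image; the gamma value 2.2 is an illustrative assumption.

```python
import numpy as np

def apply_gamma(img, gamma):
    """Gamma processing on a normalized [0, 1] image: out = in ** (1 / gamma).
    Endpoints 0 and 1 are unchanged; mid-tones are brightened when gamma > 1."""
    return np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)

x = np.array([0.0, 0.25, 1.0])
y = apply_gamma(x, 2.2)
print(y[1] > x[1], y[0], y[2])  # True 0.0 1.0
```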
It should be noted that global tone mapping and gamma processing are used above only as examples; this application imposes no limitation on the YUV image processing.

Optionally, some or all of step S307, step S308, and step S309 may be executed in the image processing model.

It should be understood that, when the electronic device is in a non-dark-light scene, the image processing method provided by the embodiments of this application may be executed through steps S301 to S309 above.

Optionally, the electronic device may further include an infrared flash. When the electronic device is in a dark-light scene, that is, when the amount of light entering the electronic device is less than a preset threshold (which may be determined, for example, according to a brightness value), the electronic device may execute step S310 shown in FIG. 7 to turn on the infrared flash; after the infrared flash is turned on, the first Raw image is acquired through the first camera module, the second Raw image is acquired through the second camera module, and steps S311 to S319 shown in FIG. 7 are executed. It should be understood that the descriptions of steps S301 to S309 apply to steps S310 to S319, and details are not repeated here.

Optionally, since a larger brightness value of the electronic device indicates that more light enters the electronic device, the amount of light entering the electronic device may be determined from its brightness value; when the brightness value of the electronic device is less than a preset brightness threshold, the electronic device turns on the infrared flash.

Here, the brightness value is used to estimate the ambient brightness, and its specific calculation formula is as follows:
[Equation image PCTCN2022138808-appb-000003: brightness value calculation formula]
Here, Exposure is the exposure time, Aperture is the aperture size, Iso is the sensitivity, and Luma is the average value of Y of the image in the XYZ color space. Illustratively, in a dark-light scene, the electronic device may focus first after detecting a shooting instruction and perform scene detection in parallel; after the dark-light scene is recognized and focusing is completed, the infrared flash may be turned on, and after the infrared flash is turned on, the first Raw image and the second Raw image can be output frame-synchronized.

It should be understood that, in a dark-light scene, little light enters the electronic device. After the electronic device turns on the infrared flash, the reflected light received by the second camera module increases, thereby increasing the amount of light entering the second camera module, so that the second Raw image captured by the second camera module becomes sharper; since the second Raw image is sharper, the fourth Raw image obtained from the second Raw image is sharper; and since the fourth Raw image is sharper, the fused image is sharper.

In the embodiments of this application, the electronic device may include a first camera module and a second camera module, where the spectral range that the first camera module can acquire includes visible light (400nm to 700nm), and the spectral range that the second camera module can acquire is near-infrared light (700nm to 1100nm). A first image is captured through the first camera module and a second image through the second camera module; the image information included in the second image (for example, a near-infrared image) cannot be obtained from the first image (for example, a visible light image); similarly, the image information included in the third image cannot be obtained from the fourth image. Therefore, by fusing the third image (for example, a visible light image) with the fourth image (for example, a near-infrared image), multispectral fusion of near-infrared image information and visible light image information can be achieved, so that the fused image includes more detail information; thus, the image processing method provided by the embodiments of this application can enhance the detail information in the image.

In addition, in the embodiments of this application, since the spectral range that the second camera module can acquire is near-infrared light, the infrared image captured by the second camera module is a grayscale image, which represents the true luminance values; since the spectral range that the first camera module can acquire is visible light, the luminance values in the visible light image captured by the first camera module are discontinuous, and the discontinuous luminance values usually need to be predicted. Using the near-infrared image (true luminance values) as a guide when demosaicing the visible light image can effectively reduce false textures in the image.
Steps S306 to S307 shown in FIG. 6 are described in detail below with reference to FIG. 9 and FIG. 10.

Implementation 1

In one example, image processing may be performed on the multiple frames of the first Raw image captured by the first camera module and the multiple frames of the second Raw image captured by the second camera module, to obtain an image with enhanced detail information; the image processing may include, but is not limited to, noise reduction, demosaicing, or fusion.

FIG. 9 is a schematic diagram of an image processing method provided by an embodiment of this application. The image processing method may be executed by the electronic device shown in FIG. 1; the image processing method includes steps S401 to S406, which are described in detail below.
Step S401: acquire multiple frames of the third Raw image (an example of the third image).

Illustratively, multiple frames of the first Raw image may be acquired through the first camera module (400nm to 700nm), and black level correction and phase defect pixel correction may be performed on the multiple frames of the first Raw image to obtain multiple frames of the third Raw image.

Step S402: acquire multiple frames of the fifth Raw image (an example of the fourth image).

Illustratively, multiple frames of the second Raw image may be acquired through the second camera module (700nm to 1100nm); black level correction and phase defect pixel correction are performed on the multiple frames of the second Raw image to obtain multiple frames of the fourth Raw image; the fourth Raw image is registered with the third Raw image as a reference to obtain the fifth Raw image.

Step S403: acquire the semantic segmentation image.

Illustratively, the semantic segmentation image may be obtained through a semantic segmentation algorithm.

Optionally, the first frame of the third Raw image among the multiple frames of the third Raw image may be processed by the semantic segmentation algorithm to obtain the semantic segmentation image.

Optionally, the first frame of the fourth Raw image among the multiple frames of the fourth Raw image may be processed according to the semantic segmentation algorithm to obtain the semantic segmentation image.

In the embodiments of this application, the multiple frames of the third Raw image and the multiple frames of the fifth Raw image may be fused based on the semantic segmentation image to obtain the fused image. Introducing the semantic segmentation image into the fusion makes it possible to determine which local image information is fused; for example, local image information in the multiple frames of the third Raw image and the multiple frames of the fifth Raw image may be selected through the semantic segmentation image for fusion, thereby increasing the local detail information of the fused image. In addition, by fusing the third Raw image (for example, a visible light image) with the fifth Raw image (for example, a near-infrared image), multispectral fusion of near-infrared image information and visible light image information can be achieved, so that the fused image includes more detail information and the detail information in the image is enhanced.
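The idea of using the semantic segmentation image to select local regions for fusion can be sketched as a mask-weighted blend. The binary mask and the constant-valued images below are illustrative assumptions, not the fusion actually performed by the image processing model.

```python
import numpy as np

def fuse_with_mask(visible, nir, mask):
    """Keep visible-light pixels everywhere, but take near-infrared detail
    only in the regions the segmentation mask marks (mask == 1)."""
    mask = mask.astype(visible.dtype)
    return mask * nir + (1.0 - mask) * visible

vis = np.full((4, 4), 0.2)                 # stand-in for a third Raw (visible) image
nir = np.full((4, 4), 0.9)                 # stand-in for a fifth Raw (near-infrared) image
mask = np.zeros((4, 4)); mask[:2, :] = 1   # top half flagged by an assumed region label
fused = fuse_with_mask(vis, nir, mask)
print(fused[0, 0], fused[3, 3])  # 0.9 0.2
```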
Step S404: feature concatenation.

Illustratively, feature concatenation is performed on the multiple frames of the fifth Raw image, the multiple frames of the third Raw image, and the semantic segmentation image to obtain multi-channel image features.

It should be understood that feature concatenation refers to stacking images along the channel dimension.

For example, assuming the fifth Raw image is a single-channel image, the third Raw image is a 3-channel image, and the semantic segmentation image is a single-channel image, 5-channel image features can be obtained after feature concatenation.

Step S405: image processing model.

Illustratively, the multi-channel image features are input into the image processing model for fusion.

Illustratively, the image processing model may be used to fuse images in the Raw color space; the image processing model is a pre-trained neural network.

For example, the parameters of the neural network may be iteratively updated through a backpropagation algorithm using a large amount of sample data and a loss function, to obtain the image processing model.

Optionally, the image processing model may also be used for denoising and demosaicing; for example, the image processing model may perform denoising, demosaicing, and fusion on the multiple frames of the third Raw image and the multiple frames of the fifth Raw image based on the semantic segmentation image.

It should be understood that, since image noise mainly follows Poisson and Gaussian distributions, when denoising is performed over multiple frames, the Gaussian component approaches zero as the frames are stacked and averaged; therefore, denoising over multiple frames improves the denoising effect.

In the embodiments of this application, the multiple frames of the fifth Raw image are infrared images; a near-infrared image is a single-channel grayscale image, which represents the true luminance values; the multiple frames of the third Raw image are visible light images, in which luminance values are discontinuous and the discontinuous luminance values usually need to be predicted. Using the near-infrared image (true luminance values) as a guide when demosaicing the visible light image can effectively reduce false textures in the image.

Step S406: RGB image.

Illustratively, the image processing model outputs an RGB image (an example of the fused image).

In the embodiments of this application, the spectral range that the first camera module can acquire is visible light of 400nm to 700nm, and the spectral range that the second camera module can acquire is near-infrared light of 700nm to 1100nm; the image information included in the near-infrared image cannot be obtained from the visible light image. Therefore, by processing and then fusing the Raw image (visible light image) captured by the first camera module with the Raw image (near-infrared image) captured by the second camera module, multispectral fusion of near-infrared image information and visible light image information can be achieved, so that the fused image includes more detail information. That is, through the image processing method provided by the embodiments of this application, image enhancement can be performed on the images acquired by the main camera module, enhancing the detail information in the image and improving image quality.
Implementation 2

In one example, multi-frame noise reduction, super-resolution processing, or local registration processing (for example, an example of the second registration processing) may be performed on the multiple frames of the fifth Raw image according to the third Raw image, to obtain a sixth Raw image (for example, a single-frame sixth Raw image). Compared with the fifth Raw image, the sixth Raw image may refer to a noise-free, locally registered Raw image. The sixth Raw image, the multiple frames of the third Raw image, and the semantic segmentation image are then fused through the image processing model.

It should be understood that, in Implementation 1, the fourth Raw image is globally registered with the third Raw image as a reference through the method shown in FIG. 8 to obtain the fifth Raw image; in Implementation 2, the fifth Raw image is further locally registered with the third Raw image as a reference. The local registration can enhance the detail information in the fifth Raw image, so that the local detail information of the fused image is enhanced.

FIG. 10 is a schematic diagram of an image processing method provided by an embodiment of this application. The image processing method may be executed by the electronic device shown in FIG. 1; the image processing method includes steps S501 to S510, which are described in detail below.

Step S501: acquire multiple frames of the fifth Raw image (an example of the first registration image).

Illustratively, multiple frames of the second Raw image may be acquired through the second camera module; black level correction and phase defect pixel correction are performed on the multiple frames of the second Raw image to obtain multiple frames of the fourth Raw image; the fourth Raw image is registered with the third Raw image as a reference to obtain the fifth Raw image.
For example, the second camera module may include a second lens element, a second lens, and an image sensor; the spectral range that the second lens element can pass is near-infrared light (700nm to 1100nm).

It should be understood that the second lens element may refer to a filter element; the second lens element may be used to absorb light of certain specific wavelength bands and let light in the near-infrared band pass through.

Step S502: acquire the third Raw image (an example of the third image).

Optionally, the third Raw image may refer to the first frame among the multiple frames of the third Raw image.

Illustratively, multiple frames of the first Raw image may be acquired through the first camera module, and black level correction and phase defect pixel correction may be performed on the multiple frames of the first Raw image to obtain multiple frames of the third Raw image.

Step S503: feature concatenation.

Illustratively, feature concatenation is performed on the multiple frames of the fifth Raw image and the third Raw image to obtain image features.

It should be understood that the feature concatenation in S503 is the same as that in S404 and is not described again here.
步骤S504、图像处理。Step S504, image processing.
例如,对图像特征进行图像处理。For example, perform image processing on image features.
示例性地,图像处理可以包括但不限于:多帧降噪、超分辨率处理、局部配准处理或者融合处理中的一项或者多项。Exemplarily, the image processing may include but not limited to: one or more of multi-frame noise reduction, super-resolution processing, local registration processing, or fusion processing.
应理解,由于图像的噪声主要来源于泊松分布与高斯分布,通过多帧图像进行去 噪处理时,多帧图像进行叠加取平均时高斯分布可以近似为0;因此,通过多帧图像进行去噪处理可以提高图像的去噪效果。It should be understood that since the noise of the image mainly comes from the Poisson distribution and the Gaussian distribution, when performing denoising processing through multiple frames of images, the Gaussian distribution can be approximately 0 when multiple frames of images are superimposed and averaged; therefore, denoising through multiple frames of images Noise processing can improve the image denoising effect.
步骤S505、得到第六Raw图像(第四图像的一个示例)。Step S505, obtaining the sixth Raw image (an example of the fourth image).
应理解，第六Raw图像与第五Raw图像相比，第六Raw图像可以是指无噪声局部配准的Raw图像。It should be understood that, compared with the fifth Raw image, the sixth Raw image may refer to a noise-free, locally registered Raw image.
步骤S506、获取多帧第三Raw图像。Step S506, acquiring multiple frames of third Raw images.
示例性地,可以通过第一相机模组获取多帧第一Raw图像,对多帧第一Raw图像进行黑电平校正与相位坏点校正处理,得到多帧第三Raw图像。Exemplarily, multiple frames of first Raw images may be acquired by the first camera module, and black level correction and phase defect correction processing may be performed on the multiple frames of the first Raw images to obtain multiple frames of third Raw images.
例如，第一相机模组中可以包括第一镜片、第一镜头与图像传感器，第一镜片可以通过的光谱范围为可见光(400nm~700nm)。For example, the first camera module may include a first filter lens, a first lens and an image sensor, and the spectral range that the first filter lens passes is visible light (400nm-700nm).
应理解,第一镜片可以是指滤光镜片;第一镜片可以用于吸收某些特定波段的光,让可见光波段的光通过。It should be understood that the first lens may refer to a filter lens; the first lens may be used to absorb light of certain specific wavelength bands and allow light of visible light bands to pass through.
步骤S507、获取语义分割图像。Step S507, acquiring a semantically segmented image.
可选地,可以通过语义分割算法对多帧第三Raw图像中的第一帧第三Raw图像进行处理,得到语义分割图像。Optionally, the first frame of the third Raw image among the multiple frames of the third Raw image may be processed by a semantic segmentation algorithm to obtain the semantic segmentation image.
可选地,可以根据语义分割算法对多帧第四Raw图像中的第一帧第四Raw图像进行处理,得到语义分割图像。Optionally, the first frame of the fourth Raw image among the multiple frames of the fourth Raw image may be processed according to the semantic segmentation algorithm to obtain the semantic segmentation image.
可选地,语义分割算法可以包括多实例分割算法;通过语义分割算法可以输出图像中各个区域的标签。Optionally, the semantic segmentation algorithm may include a multi-instance segmentation algorithm; labels of various regions in the image may be output through the semantic segmentation algorithm.
在本申请的实施例中，可以基于语义分割图像，对多帧第三Raw图像和多帧第五Raw图像进行融合处理，得到融合图像；通过在融合处理中引入语义分割图像可以确定融合的局部图像信息；比如，可以通过语义分割图像选取多帧第三Raw图像与多帧第五Raw图像中的局部图像信息进行融合处理，从而能够增加融合图像的局部细节信息。此外，通过对第三Raw图像(例如，可见光图像)与第五Raw图像(例如，近红外光图像)进行融合处理，可以实现近红外光的图像信息与可见光的图像信息的多光谱信息融合，使得融合后的图像中包括更多的细节信息，能够增强图像中的细节信息。In the embodiments of the present application, fusion processing may be performed on the multiple frames of third Raw images and the multiple frames of fifth Raw images based on the semantically segmented image to obtain a fused image; by introducing the semantically segmented image into the fusion processing, the local image information to be fused can be determined; for example, local image information in the multiple frames of third Raw images and the multiple frames of fifth Raw images may be selected through the semantically segmented image for fusion processing, so that the local detail information of the fused image can be increased. In addition, by performing fusion processing on the third Raw image (for example, a visible light image) and the fifth Raw image (for example, a near-infrared light image), multispectral fusion of the near-infrared image information and the visible light image information can be realized, so that the fused image includes more detail information and the detail information in the image is enhanced.
步骤S508、特征拼接处理。Step S508, feature splicing processing.
示例性地，对单帧第六Raw图像、多帧第三Raw图像与语义分割图像进行特征拼接处理，得到多通道的图像特征。Exemplarily, feature splicing processing is performed on the single-frame sixth Raw image, the multiple frames of third Raw images and the semantically segmented image to obtain multi-channel image features.
应理解，特征拼接处理是指对图像的通道数量进行叠加的处理。It should be understood that the feature splicing processing refers to processing of stacking the numbers of channels of the images.
例如,假设第六Raw图像为单通道图像、第三Raw图像为3通道图像、语义分割图像为单通道图像,则通过特征提取与拼接处理后得到5通道图像。For example, assuming that the sixth Raw image is a single-channel image, the third Raw image is a 3-channel image, and the semantic segmentation image is a single-channel image, then a 5-channel image is obtained after feature extraction and splicing.
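The 1 + 3 + 1 = 5 channel example above is plain channel-wise concatenation, which can be sketched as follows (array sizes are made up for illustration):

```python
import numpy as np

h, w = 32, 32
sixth_raw = np.zeros((h, w, 1), dtype=np.float32)  # single-channel sixth Raw image
third_raw = np.zeros((h, w, 3), dtype=np.float32)  # 3-channel third Raw image
seg_map = np.zeros((h, w, 1), dtype=np.float32)    # single-channel semantic map

# Feature splicing: stack the inputs along the channel axis.
features = np.concatenate([sixth_raw, third_raw, seg_map], axis=-1)
assert features.shape == (h, w, 5)
```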
步骤S509、图像处理模型。Step S509, image processing model.
示例性地,将多通道的图像特征输入至图像处理模型进行融合处理。Exemplarily, multi-channel image features are input to an image processing model for fusion processing.
示例性地,图像处理模型可以用于对Raw颜色空间的图像进行融合处理;图像处理模型为预先训练的神经网络。Exemplarily, the image processing model can be used to fuse images in the Raw color space; the image processing model is a pre-trained neural network.
可选地，图像处理模型可以用于去噪处理、去马赛克处理与融合处理；例如，图像处理模型可以基于语义分割图像对多帧第三Raw图像与多帧第五Raw图像进行去噪处理、去马赛克处理与融合处理。Optionally, the image processing model may be used for denoising processing, demosaic processing and fusion processing; for example, the image processing model may perform denoising processing, demosaic processing and fusion processing on the multiple frames of third Raw images and the multiple frames of fifth Raw images based on the semantically segmented image.
步骤S510、RGB图像。Step S510, RGB image.
示例性地,图像处理模型输出RGB图像(融合图像的一个示例)。Exemplarily, the image processing model outputs an RGB image (an example of a fused image).
在本申请的实施例中，电子设备中包括第一相机模组与第二相机模组；第一相机模组可以获取的光谱范围为可见光400nm~700nm，第二相机模组可以获取的光谱范围为近红外光700nm~1100nm；由于近红外图像中包括的图像信息是可见光图像中无法获取到的；因此通过对第一相机模组采集的Raw图像(可见光图像)与第二相机模组采集的Raw图像(近红外光图像)处理后进行融合处理，可以实现近红外光的图像信息与可见光的图像信息的多光谱信息融合，使得融合后的图像中包括更多的细节信息；即通过本申请实施例提供的图像处理方法，能够对主摄像头相机模组获取的图像进行图像增强，增强图像中的细节信息，提高图像质量。In the embodiments of the present application, the electronic device includes a first camera module and a second camera module; the spectral range that the first camera module can acquire is visible light (400nm-700nm), and the spectral range that the second camera module can acquire is near-infrared light (700nm-1100nm); since the image information included in a near-infrared image cannot be obtained from a visible light image, by processing and then fusing the Raw image (visible light image) collected by the first camera module and the Raw image (near-infrared light image) collected by the second camera module, multispectral fusion of the near-infrared image information and the visible light image information can be realized, so that the fused image includes more detail information; that is, the image processing method provided in the embodiments of the present application can perform image enhancement on the image acquired by the main camera module, enhance the detail information in the image, and improve the image quality.
可选地，在本申请的实施例中可以采用图5或者图6所示的图像处理方法对第一相机模组与第二相机模组采集的图像进行融合处理；此外，为了增强融合处理后图像中的局部细节信息，还可以采用图11所示的图像处理方法对第一相机模组与第二相机模组采集的图像进行融合处理；例如，通过第一相机采集的RGB图像，通过第二相机采集的NIR图像；通过语义分割图像在NIR图像的细节层中选择局部区域，通过NIR图像的细节层中的局部区域与RGB图像的细节层、RGB图像的基础层进行融合，从而实现可选择性的对可见光图像中局部区域的细节增强。Optionally, in the embodiments of the present application, the image processing method shown in FIG. 5 or FIG. 6 may be used to perform fusion processing on the images collected by the first camera module and the second camera module; in addition, in order to enhance the local detail information in the fused image, the image processing method shown in FIG. 11 may also be used to perform fusion processing on the images collected by the first camera module and the second camera module; for example, an RGB image is collected by the first camera and an NIR image is collected by the second camera; a local region is selected in the detail layer of the NIR image through the semantically segmented image, and the local region in the detail layer of the NIR image is fused with the detail layer and the base layer of the RGB image, thereby realizing selective detail enhancement of local regions in the visible light image.
图11是本申请实施例提供的图像处理方法的示意图。该图像处理方法可以由图1所示的电子设备执行;该图像处理方法包括步骤S601至步骤S619,下面分别对步骤S601至步骤S619进行详细的描述。FIG. 11 is a schematic diagram of an image processing method provided by an embodiment of the present application. The image processing method can be executed by the electronic device shown in FIG. 1; the image processing method includes step S601 to step S619, and step S601 to step S619 will be described in detail below.
步骤S601、获取第一Raw图像。Step S601, acquiring a first Raw image.
示例性地，可以通过第一相机模组获取第一Raw图像；例如，第一相机模组中可以包括第一镜片、第一镜头与图像传感器，第一镜片可以通过的光谱范围为可见光(400nm~700nm)。Exemplarily, the first Raw image may be acquired through the first camera module; for example, the first camera module may include a first filter lens, a first lens and an image sensor, and the spectral range that the first filter lens passes is visible light (400nm-700nm).
步骤S602、降噪处理。Step S602, noise reduction processing.
示例性地,对第一Raw图像进行降噪处理,得到降噪处理后的第一Raw图像。Exemplarily, noise reduction processing is performed on the first Raw image to obtain the first Raw image after noise reduction processing.
示例性地,通过对第一Raw图像进行降噪处理可以有效的降低图像中的噪声信息,从而使得后续通过第一Raw图像进行融合处理时,提高融合处理后图像的图像质量。Exemplarily, the noise information in the image can be effectively reduced by performing noise reduction processing on the first Raw image, so that when the fusion processing is performed on the first Raw image subsequently, the image quality of the fusion-processed image can be improved.
可选地,可以在执行步骤S601后执行步骤S603。Optionally, step S603 may be performed after step S601 is performed.
步骤S603、去马赛克处理。Step S603, demosaic processing.
示例性地,对降噪处理后的第一Raw图像进行去马赛克处理。Exemplarily, demosaic processing is performed on the first Raw image after the noise reduction processing.
应理解,上述以步骤S602与步骤S603进行举例说明;也可以通过其他方式将第一Raw图像转换为RGB图像;本申请对此不作任何限定。It should be understood that the above steps S602 and S603 are used as examples for illustration; the first Raw image may also be converted into an RGB image in other ways; this application does not make any limitation thereto.
步骤S604、RGB图像。Step S604, RGB image.
示例性地,对降噪处理后的第一Raw图像进行去马赛克处理,得到RGB图像。Exemplarily, demosaic processing is performed on the first Raw image after the noise reduction processing to obtain an RGB image.
步骤S605、HSV图像中提取V通道图像。Step S605, extract the V channel image from the HSV image.
示例性地,将RGB图像转换至HSV颜色空间得到HSV图像;提取HSV图像的V通道图像。Exemplarily, the RGB image is converted into the HSV color space to obtain the HSV image; and the V channel image of the HSV image is extracted.
在本申请的实施例中，为了获取RGB图像对应的亮度通道，可以将RGB图像转换至其他颜色空间，从而获取RGB图像对应的亮度通道。In the embodiments of the present application, in order to obtain the luminance channel corresponding to the RGB image, the RGB image may be converted to another color space from which the luminance channel corresponding to the RGB image can be obtained.
可选地，上述以HSV颜色空间进行举例说明；还可以是YUV颜色空间，或者其他能够获取图像的亮度通道的颜色空间。Optionally, the above uses the HSV color space as an example; it may also be the YUV color space, or another color space from which the luminance channel of the image can be obtained.
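Extracting the V (value) channel of step S605 needs no full color-space library: in the common 8-bit HSV convention, V is simply the per-pixel maximum of R, G and B. A minimal sketch (array names are illustrative):

```python
import numpy as np

rgb = np.random.default_rng(1).integers(0, 256, (48, 48, 3), dtype=np.uint8)

# V (value) channel of HSV: the per-pixel maximum over the R, G, B planes.
v_channel = rgb.max(axis=2)

assert v_channel.shape == (48, 48)
assert v_channel.dtype == np.uint8
```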
步骤S606、滤波器处理。Step S606, filter processing.
例如,将V通道图像通过保边平滑滤波器进行处理。For example, the V channel image is processed through an edge-preserving smoothing filter.
示例性地,保边平滑滤波器可以包括但不限于:导向滤波器、双边滤波器、最小二乘法滤波器。Exemplarily, the edge-preserving smoothing filter may include, but not limited to: a guided filter, a bilateral filter, and a least-squares filter.
应理解,在本申请的实施例中通过保边平滑滤波器对V通道图像进行处理时,在滤波过程中能够有效的保留图像中的边缘信息。It should be understood that, in the embodiment of the present application, when the V-channel image is processed by an edge-preserving smoothing filter, edge information in the image can be effectively preserved during the filtering process.
步骤S607、第一细节层图像。Step S607, the first detail layer image.
例如,V通道的图像通过保边平滑滤波器的处理得到第一细节层图像。For example, the image of the V channel is processed by an edge-preserving smoothing filter to obtain the first detail layer image.
示例性地,图像的细节层中包括图像的高频信息;例如,图像的细节层包括物体的边缘信息、纹理信息等。Exemplarily, the detail layer of the image includes high-frequency information of the image; for example, the detail layer of the image includes edge information, texture information, and the like of the object.
步骤S608、第一基础层图像。Step S608, the first base layer image.
示例性地,V通道的图像通过保边平滑滤波器的处理得到第一基础层图像。Exemplarily, the image of the V channel is processed by an edge-preserving smoothing filter to obtain the image of the first base layer.
示例性地,图像的基础层中包括图像的低频信息;对于一幅图像来说,除去细节层外的部分为基础层;例如,图像的基础层包括物体边缘以内的内容信息。Exemplarily, the base layer of the image includes low-frequency information of the image; for an image, the part except the detail layer is the base layer; for example, the base layer of the image includes content information within the edge of the object.
步骤S609、获取第二Raw图像。Step S609, acquiring a second Raw image.
示例性地，可以通过第二相机模组获取第二Raw图像；例如，第二相机模组中可以包括第二镜片、第二镜头与图像传感器，第二镜片可以通过的光谱范围为近红外光(700nm~1100nm)。Exemplarily, the second Raw image may be acquired through the second camera module; for example, the second camera module may include a second filter lens, a second lens and an image sensor, and the spectral range that the second filter lens passes is near-infrared light (700nm-1100nm).
需要说明的是，在本申请的实施例中第二相机模组采集的第二Raw图像可以是指单通道的图像；第二Raw图像用于表示光子叠加在一起的强度信息；例如，第二Raw图像可以是单通道的灰度图像。It should be noted that, in the embodiments of the present application, the second Raw image collected by the second camera module may refer to a single-channel image; the second Raw image is used to represent the superimposed intensity information of photons; for example, the second Raw image may be a single-channel grayscale image.
步骤S610、降噪处理。Step S610, noise reduction processing.
示例性地,对第二Raw图像进行降噪处理,得到降噪处理后的第二Raw图像。Exemplarily, noise reduction processing is performed on the second Raw image to obtain the second Raw image after noise reduction processing.
示例性地,通过对第二Raw图像进行降噪处理可以有效的降低图像中的噪声信息,从而使得后续通过第二Raw图像进行融合处理时,提高融合处理后图像的图像质量。Exemplarily, the noise information in the image can be effectively reduced by performing noise reduction processing on the second Raw image, so that when the fusion processing is performed on the second Raw image later, the image quality of the fusion-processed image is improved.
可选地,可以在执行步骤S609后执行步骤S612。Optionally, step S612 may be performed after step S609 is performed.
应理解,上述以步骤S610进行举例说明;也可以通过其他方式将第二Raw图像转换为NIR图像;本申请对此不作任何限定。It should be understood that the above step S610 is used as an example for illustration; the second Raw image may also be converted into an NIR image in other ways; this application does not make any limitation thereto.
步骤S611、NIR图像。Step S611, NIR image.
示例性地,对第二Raw图像进行转换处理,得到NIR图像。Exemplarily, conversion processing is performed on the second Raw image to obtain an NIR image.
步骤S612、滤波器处理。Step S612, filter processing.
示例性地,将NIR图像通过保边平滑滤波器进行处理。Exemplarily, the NIR image is processed through an edge-preserving smoothing filter.
步骤S613、第二细节层图像。Step S613, the second detail layer image.
示例性地,NIR图像通过保边平滑滤波器的处理得到第二细节层图像。Exemplarily, the NIR image is processed by an edge-preserving smoothing filter to obtain a second detail layer image.
步骤S614、第二基础层图像。Step S614, the second base layer image.
示例性地,NIR图像通过保边平滑滤波器的处理得到第二基础层图像。Exemplarily, the NIR image is processed by an edge-preserving smoothing filter to obtain a second base layer image.
步骤S615、获取语义分割图像。Step S615, acquiring a semantically segmented image.
在本申请的实施例中，可以基于语义分割图像获取第二细节层图像中局部区域信息；通过第二细节层图像中局部的图像信息与第一细节层图像进行融合处理，从而实现可选择性的对图像中局部区域的细节增强。In the embodiments of the present application, local region information in the second detail layer image may be obtained based on the semantically segmented image; the local image information in the second detail layer image is fused with the first detail layer image, thereby realizing selective detail enhancement of local regions in the image.
步骤S616、相乘。Step S616, multiply.
示例性地,将第二细节层图像与语义分割图像进行相乘,得到NIR图像中的细节层信息。Exemplarily, the second detail level image is multiplied by the semantic segmentation image to obtain detail level information in the NIR image.
例如,将第二细节层图像与语义分割图像进行相乘可以是指将第二细节层图像与语义分割图像中对应的像素点的像素值进行相乘。For example, multiplying the second level-of-detail image by the semantic segmentation image may refer to multiplying the second level-of-detail image by pixel values of corresponding pixels in the semantic segmentation image.
示例性地,第二细节层图像中包括NIR图像中的高频信息;可以根据语义分割图像选择性的增强图像中的局部细节信息。Exemplarily, the second detail layer image includes high-frequency information in the NIR image; local detail information in the image can be selectively enhanced according to the semantic segmentation image.
应理解，第二细节层中包括NIR图像中的所有细节信息；由于可见光图像在拍摄景物时，对与电子设备距离较远的景物可能会存在部分图像细节信息丢失；因此，可以通过语义分割图像与第二细节层相乘选择出第二细节层图像中的局部区域，通过第二细节层图像中的局部区域与第一细节层图像进行融合，从而实现可选择性的对图像中局部区域的细节增强。It should be understood that the second detail layer includes all the detail information in the NIR image; when a visible light image captures a scene, some image detail information may be lost for scenery far away from the electronic device; therefore, the semantically segmented image can be multiplied with the second detail layer to select a local region in the second detail layer image, and the local region in the second detail layer image is fused with the first detail layer image, thereby realizing selective detail enhancement of local regions in the image.
步骤S617、融合处理。Step S617, fusion processing.
示例性地,对NIR图像的细节层信息、第一细节层图像与第一基础层图像进行融合处理。Exemplarily, fusion processing is performed on the detail layer information of the NIR image, the first detail layer image, and the first base layer image.
示例性地,将NIR图像的细节层信息叠加至第一细节层图像中;将第一细节层图像与第一基础层图像进行叠加。Exemplarily, layer-of-detail information of the NIR image is superimposed on the first layer-of-detail image; and the first layer-of-detail image is superimposed on the first base layer image.
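Steps S616 and S617 amount to gating the NIR detail layer with the segmentation mask and adding the layers back together. A sketch with made-up arrays (in the real pipeline these come from the filtering steps above):

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 40, 40
first_base = rng.random((h, w))     # base layer of the RGB V channel
first_detail = rng.random((h, w))   # detail layer of the RGB V channel
second_detail = rng.random((h, w))  # detail layer of the NIR image
seg_mask = (rng.random((h, w)) > 0.5).astype(np.float64)  # 1 = region to enhance

# Step S616: gate the NIR detail layer with the semantic segmentation mask.
nir_local_detail = second_detail * seg_mask

# Step S617: superimpose the gated NIR detail onto the first detail layer,
# then superimpose the result onto the first base layer.
fused_v = first_base + first_detail + nir_local_detail

# Outside the selected regions the result is the plain RGB reconstruction.
outside = seg_mask == 0
assert np.allclose(fused_v[outside], (first_base + first_detail)[outside])
```

This shows why the enhancement is selective: pixels where the mask is 0 are untouched, and only masked regions receive extra NIR detail.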
步骤S618、HSV图像。Step S618, HSV image.
示例性地,融合处理后得到HSV图像。Exemplarily, the HSV image is obtained after fusion processing.
可选地,上述HSV图像还可以是其他颜色空间;比如,可以是HSL颜色空间的图像。Optionally, the above HSV image may also be in other color spaces; for example, it may be an image in HSL color space.
可选地,上述HSV图像还可以是YUV颜色空间的图像;或者,其他能抽离出亮度通道的颜色空间的图像。Optionally, the above-mentioned HSV image may also be an image in a YUV color space; or, an image in another color space from which a brightness channel can be extracted.
步骤S619、RGB图像。Step S619, RGB image.
示例性地,将融合处理后的HSV图像转换至RGB颜色空间,得到融合处理后的RGB图像。Exemplarily, the fused HSV image is converted into an RGB color space to obtain a fused RGB image.
在本申请的实施例中，通过保边平滑滤波器对第一相机模组采集的RGB图像与第二相机模组采集的NIR图像进行滤波，分别得到RGB图像中包括的第一细节层图像与第一基础层图像；NIR图像中包括的第二细节层图像与第二基础层图像，由于近红外光的光谱范围比可见光的光谱范围更宽，使得近红外图像中可以获取更多的图像细节；因此，可以通过语义分割图像在第二细节层图像中选择局部区域，通过第二细节层图像中的局部区域与第一细节层图像、第一基础层图像进行融合，从而实现可选择性的对可见光图像中局部区域的细节增强。In the embodiments of the present application, the RGB image collected by the first camera module and the NIR image collected by the second camera module are filtered through an edge-preserving smoothing filter to obtain the first detail layer image and the first base layer image included in the RGB image, and the second detail layer image and the second base layer image included in the NIR image; since the spectral range of near-infrared light is wider than that of visible light, more image details can be obtained in the near-infrared image; therefore, a local region can be selected in the second detail layer image through the semantically segmented image, and the local region in the second detail layer image is fused with the first detail layer image and the first base layer image, thereby realizing selective detail enhancement of local regions in the visible light image.
可选地，在本申请的实施例中可以采用图5、图6或者图11所示的图像处理方法对第一相机模组与第二相机模组采集的图像进行融合处理；此外，在本申请的实施例中，为了降低融合处理后图像中出现的鬼影问题，还可以采用图12所示的图像处理方法对第一相机模组与第二相机模组采集的图像进行融合处理；例如，通过对RGB图像与NIR图像中相似的图像信息进行融合处理，有效地避免融合处理后图像中出现鬼影问题。例如，通过对RGB图像中的低频信息与NIR图像进行图像融合处理，使得图像的低频信息部分增强；通过将RGB图像转换至YUV颜色空间，将Y通道的高频信息与低频信息部分增强的图像进行叠加，使得对图像的高频信息部分也得到增强。Optionally, in the embodiments of the present application, the image processing method shown in FIG. 5, FIG. 6 or FIG. 11 may be used to perform fusion processing on the images collected by the first camera module and the second camera module; in addition, in the embodiments of the present application, in order to reduce the ghosting problem in the fused image, the image processing method shown in FIG. 12 may also be used to perform fusion processing on the images collected by the first camera module and the second camera module; for example, by fusing similar image information in the RGB image and the NIR image, the ghosting problem in the fused image is effectively avoided. For example, by performing image fusion processing on the low-frequency information in the RGB image and the NIR image, the low-frequency part of the image is enhanced; by converting the RGB image to the YUV color space and superimposing the high-frequency information of the Y channel onto the image whose low-frequency part has been enhanced, the high-frequency part of the image is also enhanced.
图12是本申请实施例提供的图像处理方法的示意图。该图像处理方法可以由图1所示的电子设备执行;该图像处理方法包括步骤S701至步骤S715,下面分别对步骤S701至步骤S715进行详细的描述。Fig. 12 is a schematic diagram of an image processing method provided by an embodiment of the present application. The image processing method can be executed by the electronic device shown in FIG. 1; the image processing method includes step S701 to step S715, and step S701 to step S715 will be described in detail below.
步骤S701、获取第一Raw图像。Step S701, acquiring a first Raw image.
示例性地，可以通过第一相机模组获取第一Raw图像；例如，第一相机模组中可以包括第一镜片、第一镜头与图像传感器，第一镜片可以通过的光谱范围为可见光(400nm~700nm)。Exemplarily, the first Raw image may be acquired through the first camera module; for example, the first camera module may include a first filter lens, a first lens and an image sensor, and the spectral range that the first filter lens passes is visible light (400nm-700nm).
步骤S702、降噪处理。Step S702, noise reduction processing.
示例性地,对第一Raw图像进行降噪处理,得到降噪处理后的第一Raw图像。Exemplarily, noise reduction processing is performed on the first Raw image to obtain the first Raw image after noise reduction processing.
可选地,可以在执行步骤S701后执行步骤S703。Optionally, step S703 may be performed after step S701 is performed.
步骤S703、去马赛克处理,得到第一RGB图像。Step S703, demosaic processing, to obtain the first RGB image.
示例性地,对降噪处理后的第一Raw图像进行去马赛克处理,得到第一RGB图像。Exemplarily, demosaic processing is performed on the first Raw image after the noise reduction processing to obtain the first RGB image.
应理解,上述以步骤S702与步骤S703进行举例说明;也可以通过其他方式将第一Raw图像转换为RGB图像;本申请对此不作任何限定。It should be understood that the above steps S702 and S703 are used as examples; the first Raw image may also be converted into an RGB image in other ways; this application does not make any limitation thereto.
步骤S704、通过高斯低通滤波器对第一RGB图像进行处理,得到第一RGB图像中的低频信息。Step S704: Process the first RGB image through a Gaussian low-pass filter to obtain low-frequency information in the first RGB image.
示例性地,通过高斯低通滤波器对第一RGB图进行处理,过滤掉第一RGB图像的高频细节特征,得到第一RGB图像的低频信息。Exemplarily, the first RGB image is processed through a Gaussian low-pass filter to filter out high-frequency detail features of the first RGB image to obtain low-frequency information of the first RGB image.
应理解，图像的低频信息是指图像中灰度值变化缓慢的区域；对于一幅图像来说，除去高频信息外的部分为低频信息；例如，图像的低频信息可以包括物体边缘以内的内容信息。It should be understood that the low-frequency information of an image refers to regions in the image where the gray value changes slowly; for an image, the part other than the high-frequency information is the low-frequency information; for example, the low-frequency information of the image may include the content information within the edges of objects.
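The Gaussian low-pass filtering of step S704 can be sketched with a separable Gaussian kernel (a minimal illustration; kernel size and sigma are assumptions):

```python
import numpy as np

def gaussian_lowpass(img, sigma=2.0):
    # Separable Gaussian low-pass filter: keeps the slowly varying
    # (low-frequency) content and removes high-frequency detail.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # Filter rows, then columns (a 2-D Gaussian is separable).
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

img = np.random.default_rng(4).random((64, 64))
low_freq = gaussian_lowpass(img)   # low-frequency information of the image
high_freq = img - low_freq         # the detail the low-pass filter removed

# The low-pass output varies less than the input.
assert low_freq.std() < img.std()
```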
步骤S705、获取第二Raw图像。Step S705, acquiring a second Raw image.
示例性地，可以通过第二相机模组获取第二Raw图像；例如，第二相机模组中可以包括第二镜片、第二镜头与图像传感器，第二镜片可以通过的光谱范围为近红外光(700nm~1100nm)。Exemplarily, the second Raw image may be acquired through the second camera module; for example, the second camera module may include a second filter lens, a second lens and an image sensor, and the spectral range that the second filter lens passes is near-infrared light (700nm-1100nm).
需要说明的是，在本申请的实施例中第二相机模组采集的第二Raw图像可以是指单通道的图像；第二Raw图像用于表示光子叠加在一起的强度信息；例如，第二Raw图像可以是单通道的灰度图像。It should be noted that, in the embodiments of the present application, the second Raw image collected by the second camera module may refer to a single-channel image; the second Raw image is used to represent the superimposed intensity information of photons; for example, the second Raw image may be a single-channel grayscale image.
步骤S706、降噪处理。Step S706, noise reduction processing.
示例性地,对第二Raw图像进行降噪处理。Exemplarily, noise reduction processing is performed on the second Raw image.
可选地,可以在执行步骤S705后执行步骤S708。Optionally, step S708 may be performed after step S705 is performed.
步骤S707、NIR图像。Step S707, NIR image.
应理解,上述以步骤S706进行举例说明;也可以通过其他方式将第二Raw图像转换为NIR图像;本申请对此不作任何限定。It should be understood that the above step S706 is used as an example for illustration; the second Raw image may also be converted into an NIR image in other ways; this application does not make any limitation thereto.
步骤S708、对NIR图像进行配准处理,得到配准处理后的NIR图像。Step S708, performing registration processing on the NIR image to obtain a registered NIR image.
示例性地,以第一RGB图像为基准对NIR图像进行配准处理,得到配准处理后的NIR图像。Exemplarily, a registration process is performed on the NIR image with the first RGB image as a reference to obtain a registered NIR image.
应理解，由于第一相机模组与第二相机模组分别设置在电子设备中的不同位置，因此第一相机模组与第二相机模组之间存在一定的基线距离，即通过第一相机模组采集的图像与通过第二相机模组采集的图像之间存在一定的视差，需要对两者采集的图像进行配准处理。It should be understood that since the first camera module and the second camera module are arranged at different positions in the electronic device, there is a certain baseline distance between the two camera modules; that is, there is a certain parallax between the image collected by the first camera module and the image collected by the second camera module, so the images collected by the two modules need to be registered.
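As a toy sketch of registration, phase correlation recovers a global translation between two frames; real RGB-to-NIR registration under parallax additionally needs feature matching and local warping, which this does not model:

```python
import numpy as np

def estimate_shift(ref, moving):
    # Phase correlation: the peak of the normalized cross-power spectrum's
    # inverse FFT gives the translation of `moving` relative to `ref`.
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

ref = np.random.default_rng(5).random((64, 64))
moving = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulated misalignment

assert estimate_shift(ref, moving) == [3, -5]
```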
步骤S709、融合模型。Step S709, merging models.
示例性地,将第一RGB图像的低频信息与配准处理后的NIR图像输入至融合模型进行处理。Exemplarily, the low-frequency information of the first RGB image and the registered NIR image are input to the fusion model for processing.
步骤S710、输出第二RGB图像。Step S710, outputting the second RGB image.
例如,融合模型输出第二RGB图像。For example, the fused model outputs a second RGB image.
示例性地,融合模型可以是指通过大量样本数据预先训练的神经网络;融合模型用于对输入的图像进行融合处理,输出融合处理后的图像。Exemplarily, the fusion model may refer to a neural network pre-trained by a large amount of sample data; the fusion model is used to perform fusion processing on input images and output a fusion processed image.
应理解，NIR图像中包括的大量的图像低频信息；由于NIR图像与第一RGB图像对应的光谱范围不同，通过NIR图像与第一RGB图像的低频信息进行融合能够使得NIR图像的低频信息对第一RGB图像的低频信息进行补充；从而对第一RGB图像中的低频信息进行增强。此外，通过相似的图像信息进行融合，即NIR图像中包括的低频信息与第一RGB图像的低频信息进行融合，能够有效地降低融合处理后图像中出现的鬼影。It should be understood that the NIR image includes a large amount of low-frequency image information; since the NIR image and the first RGB image correspond to different spectral ranges, fusing the low-frequency information of the NIR image with that of the first RGB image allows the low-frequency information of the NIR image to supplement the low-frequency information of the first RGB image, thereby enhancing the low-frequency information in the first RGB image. In addition, by fusing similar image information, that is, by fusing the low-frequency information included in the NIR image with the low-frequency information of the first RGB image, ghosting in the fused image can be effectively reduced.
步骤S711、提取YUV图像中Y通道的图像。Step S711, extracting the image of the Y channel in the YUV image.
示例性地,将第二RGB图像转换至YUV颜色空间,得到第一YUV图像;提取第一YUV图像中Y通道的图像。Exemplarily, the second RGB image is converted into a YUV color space to obtain a first YUV image; and an image of a Y channel in the first YUV image is extracted.
在本申请的实施例中,为了获取RGB图像对应的亮度通道,可以将RGB图像转换至YUV颜色空间,得到第一YUV图像,提取第一YUV图像的Y通道。In the embodiment of the present application, in order to obtain the luminance channel corresponding to the RGB image, the RGB image may be converted into the YUV color space to obtain the first YUV image, and the Y channel of the first YUV image may be extracted.
应理解,上述是以YUV颜色空间进行举例说明;还可以将RGB图像转换至其他能够抽取亮度通道的颜色空间,本申请对此不作任何限定。It should be understood that the YUV color space is used as an example for illustration above; the RGB image can also be converted to other color spaces capable of extracting brightness channels, which is not limited in this application.
步骤S712、第一RGB图像转YUV。Step S712, converting the first RGB image to YUV.
示例性地,将第一RGB图像转换至YUV颜色空间,得到第二YUV图像。Exemplarily, the first RGB image is converted into a YUV color space to obtain a second YUV image.
步骤S713、对Y通道进行处理。Step S713, process the Y channel.
例如,对第二YUV图像的Y通道进行处理。For example, the Y channel of the second YUV image is processed.
示例性地，在第二YUV图像的Y通道进行高斯模糊，得到Y通道的模糊图像(Y通道的blur图像)；Y通道的模糊图像中包括Y通道的低频信息；通过Y通道的图像减去Y通道的模糊图像，得到Y通道的高频信息。Exemplarily, Gaussian blur is performed on the Y channel of the second YUV image to obtain a blurred Y-channel image; the blurred Y-channel image includes the low-frequency information of the Y channel; the blurred Y-channel image is subtracted from the Y-channel image to obtain the high-frequency information of the Y channel.
应理解,图像的高频信息是指图像中灰度值变化剧烈的区域;例如,图像中的高频信息包括物体的边缘信息、纹理信息等。It should be understood that the high-frequency information of the image refers to the region in the image where the gray value changes rapidly; for example, the high-frequency information in the image includes edge information, texture information, etc. of the object.
可选地,可以根据用户选择的不同拍照模式,确定不同的增益系数;可以根据以下公式对第二YUV图像的Y通道进行处理:Optionally, different gain coefficients can be determined according to different camera modes selected by the user; the Y channel of the second YUV image can be processed according to the following formula:
处理后的Y通道=(Y通道的图像-Y通道的模糊图像)*增益系数。Processed Y channel=(Y channel image−Y channel blurred image)*gain factor.
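The formula above, and the Y-channel addition of step S714, can be sketched as follows (a box blur stands in for the Gaussian blur, and the gain value is an assumption standing in for the mode-dependent gain coefficient):

```python
import numpy as np

def process_y(y, gain=1.5, k=5):
    # Implements: processed Y = (Y image - blurred Y image) * gain.
    pad = k // 2
    padded = np.pad(y, pad, mode="edge")
    blur = np.zeros_like(y, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            blur += padded[dy:dy + y.shape[0], dx:dx + y.shape[1]]
    blur /= k * k                    # low-frequency part of the Y channel
    return (y - blur) * gain         # amplified high-frequency part

y_second = np.random.default_rng(6).random((32, 32))  # Y of second YUV image
y_first = np.random.default_rng(7).random((32, 32))   # Y of first YUV image

# Step S714: add the processed Y channel to the fused image's Y channel.
y_out = y_first + process_y(y_second)
assert y_out.shape == (32, 32)
```

Note that a constant (edge-free) Y channel yields a processed Y of zero, so only high-frequency content is boosted, matching the intent of steps S713 and S714.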
步骤S714、Y通道相加。Step S714, adding Y channels.
示例性地,将第一YUV图像的Y通道与处理后的第二YUV图像的Y通道进行相加得到处理后的YUV图像。Exemplarily, the Y channel of the first YUV image is added to the Y channel of the processed second YUV image to obtain the processed YUV image.
应理解,在步骤S710中对NIR图像与第一RGB图像的低频信息进行了融合处理,得到第二RGB图像;在步骤S713与步骤S714中需要对图像的高频信息部分进行处理。It should be understood that in step S710, the low-frequency information of the NIR image and the first RGB image are fused to obtain the second RGB image; in steps S713 and S714, the high-frequency information part of the image needs to be processed.
步骤S715、得到第三RGB图像。Step S715, obtaining a third RGB image.
示例性地,将处理后的YUV图像转换至RGB颜色空间,得到第三RGB图像。Exemplarily, the processed YUV image is converted into RGB color space to obtain a third RGB image.
在本申请的实施例中，通过第一相机模组采集RGB图像，通过第二相机模组采集NIR图像，通过对RGB图像与NIR图像中相似的图像信息进行融合处理，有效地避免融合处理后图像中出现鬼影问题；例如，通过对RGB图像中的低频信息与NIR图像进行图像融合处理，使得图像的低频信息部分增强；通过将RGB图像转换至YUV颜色空间，将Y通道的高频信息与低频信息部分增强的图像进行叠加，使得对图像的高频信息部分也得到增强；由于是通过对RGB图像与NIR图像中的相似的图像信息进行融合处理，因此能够有效地降低融合处理后图像中出现的鬼影。In the embodiments of the present application, an RGB image is collected by the first camera module and an NIR image is collected by the second camera module, and fusion processing is performed on similar image information in the RGB image and the NIR image, effectively avoiding the ghosting problem in the fused image; for example, by performing image fusion processing on the low-frequency information in the RGB image and the NIR image, the low-frequency part of the image is enhanced; by converting the RGB image to the YUV color space and superimposing the high-frequency information of the Y channel onto the image whose low-frequency part has been enhanced, the high-frequency part of the image is also enhanced; since similar image information in the RGB image and the NIR image is fused, ghosting in the fused image can be effectively reduced.
图13是本申请实施例提供的图像处理方法的效果示意图。FIG. 13 is a schematic diagram of the effect of the image processing method provided by an embodiment of the present application.
如图13所示，图13中的(a)是通过现有的主摄像头相机模组得到的输出图像；图13中的(b)是通过本申请实施例提供的图像处理方法得到的输出图像；如图13中的(a)所示的图像可以看出山脉中的细节信息出现了严重失真；与图13中的(a)所示的输出图像相比，图13中的(b)所示的输出图像的细节信息较丰富，可以清晰的显示山脉的细节信息；通过本申请实施例提供的图像处理方法可以对主摄像头相机模组获取的图像进行图像增强，提高图像中的细节信息。As shown in FIG. 13, (a) in FIG. 13 is an output image obtained by an existing main camera module, and (b) in FIG. 13 is an output image obtained by the image processing method provided in an embodiment of the present application; it can be seen from the image shown in (a) in FIG. 13 that the detail information of the mountains is severely distorted; compared with the output image shown in (a) in FIG. 13, the output image shown in (b) in FIG. 13 has richer detail information and can clearly display the details of the mountains; through the image processing method provided in the embodiments of the present application, image enhancement can be performed on the image acquired by the main camera module, improving the detail information in the image.
In an example, in a dark-light scene, the user can turn on the infrared flash of the electronic device; images are collected by the main-camera module and the near-infrared camera module, the collected images are processed by the image processing method provided by the embodiment of the present application, and the processed image or video is output.
FIG. 14 shows a graphical user interface (GUI) of an electronic device.
The GUI shown in (a) in FIG. 14 is a desktop 810 of the electronic device. When the electronic device detects that the user taps an icon 820 of a camera application (APP) on the desktop 810, the camera application can be started, and another GUI shown in (b) in FIG. 14 is displayed. The GUI shown in (b) in FIG. 14 may be a display interface of the camera APP in photographing mode, and may include a shooting interface 830; the shooting interface 830 may include a viewfinder frame 831 and controls. For example, the shooting interface 830 may include a control 832 for instructing photographing and a control 833 for instructing the infrared flash to be turned on. In the preview state, a preview image can be displayed in the viewfinder frame 831 in real time; the preview state refers to the period after the user opens the camera and before the user presses the photo/video button, during which the preview image can be displayed in the viewfinder frame in real time.
After the electronic device detects that the user taps the control 833 for instructing the infrared flash to be turned on, the shooting interface shown in (c) in FIG. 14 is displayed. With the infrared flash on, images are collected by the main-camera module and the near-infrared camera module, the collected images are processed by the image processing method provided by the embodiment of the present application, and a processed image with enhanced image quality is output.
FIG. 15 is a schematic diagram of an optical path of a shooting scene applicable to an embodiment of the present application.
As shown in FIG. 15, the electronic device further includes an infrared flash. In a dark-light scene, the electronic device can turn on the infrared flash; with the infrared flash on, the illumination in the shooting environment may include street lamps and the infrared flash. The subject reflects the illumination in the shooting environment, so that the electronic device obtains an image of the subject.
In the embodiments of the present application, with the infrared flash on, the light reflected by the subject increases, so that the amount of light entering the near-infrared camera module of the electronic device increases, and the detail information of the images captured by the near-infrared camera module therefore increases. By fusing the images collected by the main-camera module and the near-infrared camera module with the image processing method of the embodiments of the present application, the image acquired by the main-camera module can be enhanced and its detail information improved. In addition, the infrared flash is imperceptible to the user, so the detail information in the image is improved without the user noticing.
It should be understood that the foregoing examples are intended to help a person skilled in the art understand the embodiments of the present application, rather than to limit the embodiments of the present application to the specific values or specific scenarios illustrated. A person skilled in the art can obviously make various equivalent modifications or changes based on the foregoing examples, and such modifications or changes also fall within the scope of the embodiments of the present application.
The image processing method provided by the embodiments of the present application has been described in detail above with reference to FIG. 1 to FIG. 15; the apparatus embodiments of the present application will be described in detail below with reference to FIG. 16 and FIG. 17. It should be understood that the apparatuses in the embodiments of the present application can perform the foregoing methods of the embodiments of the present application; for the specific working processes of the following products, reference may be made to the corresponding processes in the foregoing method embodiments.
FIG. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. The electronic device 900 includes a display module 910 and a processing module 920. The electronic device includes a first camera module and a second camera module, and the second camera module is a near-infrared camera module or an infrared camera module.
The display module 910 is configured to display a first interface, where the first interface includes a first control. The processing module 920 is configured to: detect a first operation on the first control; in response to the first operation, acquire N frames of first images and M frames of second images, where the first images are images collected by the first camera module, the second images are images collected by the second camera module, and N and M are both positive integers greater than or equal to 1; obtain a target image based on the N frames of first images and the M frames of second images; and save the target image. The processing module 920 is specifically configured to:
perform first image processing on the N frames of first images to obtain N frames of third images, where the image quality of the N frames of third images is higher than that of the N frames of first images; perform second image processing on the M frames of second images to obtain M frames of fourth images, where the image quality of the M frames of fourth images is higher than that of the M frames of second images; perform fusion processing on the N frames of third images and the M frames of fourth images based on a semantic segmentation image to obtain a fused image, where the semantic segmentation image is obtained based on any one of the N frames of first images or any one of the M frames of second images, and the detail information of the fused image is better than that of the N frames of first images; and perform third image processing on the fused image to obtain the target image.
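The processing flow just described (first/second image processing, segmentation-guided fusion, then a final processing stage) can be outlined as below. Everything here is a schematic stand-in: the stub functions, the black-level constant, and the thresholding "segmentation" are assumptions for illustration only; the actual first/second/third image processing of the application and its pre-trained fusion network are not specified here.

```python
import numpy as np

def first_image_processing(frame, black_level=16.0):
    """Stand-in for the RAW-domain cleanup of a main-camera frame."""
    return np.clip(frame.astype(np.float64) - black_level, 0.0, None)

def second_image_processing(frame, black_level=16.0):
    """Stand-in for the cleanup (and registration) of an NIR frame."""
    return np.clip(frame.astype(np.float64) - black_level, 0.0, None)

def semantic_segmentation(frame, thresh=128.0):
    """Toy binary mask; the application uses a semantic segmentation algorithm."""
    return frame > thresh

def fuse(thirds, fourths, seg, w_fg=0.6, w_bg=0.2):
    """Segmentation-guided fusion: weight the NIR contribution per region."""
    t = np.mean(thirds, axis=0)
    f = np.mean(fourths, axis=0)
    w = np.where(seg, w_fg, w_bg)
    return (1.0 - w) * t + w * f

def get_target_image(first_frames, second_frames):
    thirds = [first_image_processing(f) for f in first_frames]     # N frames
    fourths = [second_image_processing(s) for s in second_frames]  # M frames
    seg = semantic_segmentation(thirds[0])
    fused = fuse(thirds, fourths, seg)
    return fused  # a real pipeline applies further "third image processing" here
```

The per-region weights let the pipeline take more NIR detail where it helps (for example distant landscape) and less where it would shift colors, which is the role the semantic segmentation image plays in the fusion step.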
Optionally, in an embodiment, the processing module 920 is specifically configured to:
perform black level correction processing and/or phase defective pixel correction processing on the M frames of second images to obtain M frames of fifth images; and
perform first registration processing on the M frames of fifth images by using any one of the N frames of third images as a reference, to obtain the M frames of fourth images.
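A minimal sketch of the two corrections named above, under assumed sensor constants (black level 64, white level 1023, i.e. a 10-bit RAW) and a precomputed defect mask; the application does not fix these values, and the median-based repair is just one common way to implement defective-pixel correction.

```python
import numpy as np

def black_level_correct(raw, black_level=64.0, white_level=1023.0):
    """Subtract the sensor black level and normalise to [0, 1]."""
    return np.clip(raw.astype(np.float64) - black_level, 0.0, None) / (
        white_level - black_level)

def defective_pixel_correct(img, defect_mask):
    """Replace each flagged pixel by the median of its healthy 3x3 neighbours."""
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(defect_mask)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        patch = img[y0:y1, x0:x1]
        good = ~defect_mask[y0:y1, x0:x1]
        out[y, x] = np.median(patch[good])
    return out
```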
Optionally, in an embodiment, the processing module 920 is specifically configured to:
perform the first registration processing and upsampling processing on the M frames of fifth images by using any one of the N frames of third images as a reference, to obtain the M frames of fourth images.
Optionally, in an embodiment, the processing module 920 is specifically configured to:
perform black level correction processing and/or phase defective pixel correction processing on the M frames of second images to obtain M frames of fifth images;
perform first registration processing on the M frames of fifth images by using any one of the N frames of third images as a reference, to obtain M frames of first registration images; and
perform second registration processing on the M frames of first registration images by using the same third image as a reference, to obtain the M frames of fourth images.
Optionally, in an embodiment, the processing module 920 is specifically configured to:
perform the first registration processing and upsampling processing on the M frames of fifth images by using any one of the N frames of third images as a reference, to obtain the M frames of first registration images.
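Where the NIR frames have a lower resolution than the main-camera frames, the upsampling step mentioned above can be as simple as the following; nearest-neighbour is shown for brevity, a real pipeline would more likely use bilinear or bicubic interpolation, and the factor of 2 is an assumption.

```python
import numpy as np

def upsample_nearest(img, factor=2):
    """Nearest-neighbour upsampling of a single-channel image."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```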
Optionally, in an embodiment, the first registration processing is global registration processing.
Optionally, in an embodiment, the second registration processing is local registration processing.
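One way to picture the global-then-local scheme: estimate a single shift for the whole frame first, then refine small residual shifts per block. Brute-force integer translations stand in for the real global registration (e.g. homography-based) and local registration (e.g. optical-flow-based); the block size, search ranges, and function names are all illustrative assumptions.

```python
import numpy as np

def best_shift(ref, mov, max_shift):
    """Exhaustive integer-translation search minimising mean squared error."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

def register_global_then_local(ref, mov, block=8, coarse=4, fine=1):
    """First registration: one global shift for the whole frame.
    Second registration: small per-block refinement on top of it."""
    gy, gx = best_shift(ref, mov, coarse)
    mov = np.roll(np.roll(mov, gy, axis=0), gx, axis=1)
    out = mov.copy()
    h, w = ref.shape
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            r = ref[y0:y0 + block, x0:x0 + block]
            m = mov[y0:y0 + block, x0:x0 + block]
            ly, lx = best_shift(r, m, fine)
            out[y0:y0 + block, x0:x0 + block] = np.roll(
                np.roll(m, ly, axis=0), lx, axis=1)
    return out
```

Running the cheap global pass first shrinks the residual motion, so the local pass only has to search a very small window per block.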
Optionally, in an embodiment, the processing module 920 is specifically configured to:
perform black level correction processing and/or phase defective pixel correction processing on the N frames of first images to obtain the N frames of third images.
Optionally, in an embodiment, the electronic device further includes an infrared flash, and the processing module 920 is further configured to:
turn on the infrared flash in a dark-light scene, where the dark-light scene refers to a shooting scene in which the amount of light entering the electronic device is less than a preset threshold;
the acquiring N frames of first images and M frames of second images in response to the first operation includes:
acquiring the N frames of first images and the M frames of second images with the infrared flash on.
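The dark-light check above can be sketched as a threshold on the preview frame's mean luminance; the threshold value and the capture callback are illustrative assumptions, not values taken from the application.

```python
import numpy as np

def is_dark_scene(preview_luma, threshold=40.0):
    """Dark-light scene: mean incoming light below a preset threshold."""
    return float(np.mean(preview_luma)) < threshold

def capture(preview_luma, capture_fn):
    """Turn the IR flash on for dark scenes, then grab the N + M frames."""
    ir_flash_on = is_dark_scene(preview_luma)
    return capture_fn(ir_flash_on=ir_flash_on)
```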
Optionally, in an embodiment, the first interface includes a second control, and the processing module 920 is specifically configured to:
detect a second operation on the second control; and
turn on the infrared flash in response to the second operation.
Optionally, in an embodiment, the processing module 920 is specifically configured to:
perform fusion processing on the N frames of third images and the M frames of fourth images through an image processing model based on the semantic segmentation image, to obtain the fused image, where the image processing model is a pre-trained neural network.
Optionally, in an embodiment, the semantic segmentation image is obtained by processing the first frame of the N frames of third images with a semantic segmentation algorithm.
Optionally, in an embodiment, the first interface is a photographing interface, and the first control is a control for instructing photographing.
Optionally, in an embodiment, the first interface is a video recording interface, and the first control is a control for instructing video recording.
Optionally, in an embodiment, the first interface is a video call interface, and the first control is a control for instructing a video call.
It should be noted that the electronic device 900 described above is embodied in the form of functional modules. The term "module" here may be implemented in the form of software and/or hardware, which is not specifically limited.
For example, a "module" may be a software program, a hardware circuit, or a combination of the two that implements the foregoing functions. The hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (for example, a shared processor, a dedicated processor, or a group processor) and a memory, a merged logic circuit, and/or other suitable components supporting the described functions.
Therefore, the units in the examples described in the embodiments of the present application can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present application.
FIG. 17 shows a schematic structural diagram of an electronic device provided by the present application. The dashed lines in FIG. 17 indicate that the unit or module is optional; the electronic device 1000 can be used to implement the methods described in the foregoing method embodiments.
The electronic device 1000 includes one or more processors 1001, and the one or more processors 1001 can support the electronic device 1000 in implementing the image processing method in the method embodiments. The processor 1001 may be a general-purpose processor or a special-purpose processor. For example, the processor 1001 may be a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or another programmable logic device such as a discrete gate, a transistor logic device, or a discrete hardware component.
The processor 1001 can be configured to control the electronic device 1000, execute software programs, and process data of the software programs. The electronic device 1000 may further include a communication unit 1005, configured to implement input (reception) and output (transmission) of signals.
For example, the electronic device 1000 may be a chip, and the communication unit 1005 may be an input and/or output circuit of the chip, or the communication unit 1005 may be a communication interface of the chip, and the chip may serve as a component of a terminal device or another electronic device.
For another example, the electronic device 1000 may be a terminal device, and the communication unit 1005 may be a transceiver of the terminal device, or the communication unit 1005 may be a transceiver circuit of the terminal device.
The electronic device 1000 may include one or more memories 1002 on which a program 1004 is stored. The program 1004 can be run by the processor 1001 to generate instructions 1003, so that the processor 1001 performs, according to the instructions 1003, the image processing method described in the foregoing method embodiments.
Optionally, data may also be stored in the memory 1002.
Optionally, the processor 1001 may also read data stored in the memory 1002; the data may be stored at the same storage address as the program 1004, or at a different storage address from the program 1004.
The processor 1001 and the memory 1002 may be provided separately or integrated together, for example, integrated on a system on chip (SOC) of a terminal device.
For example, the memory 1002 can be used to store the program 1004 related to the image processing method provided in the embodiments of the present application, and the processor 1001 can be used to call the program 1004 related to the image processing method stored in the memory 1002 when performing image processing, so as to perform the image processing method of the embodiments of the present application, for example: displaying a first interface, where the first interface includes a first control; detecting a first operation on the first control; in response to the first operation, acquiring N frames of first images and M frames of second images, where the first images are images collected by the first camera module, the second images are images collected by the second camera module, and N and M are both positive integers greater than or equal to 1; obtaining a target image based on the N frames of first images and the M frames of second images; and saving the target image. Obtaining the target image based on the N frames of first images and the M frames of second images includes: performing first image processing on the N frames of first images to obtain N frames of third images, where the image quality of the N frames of third images is higher than that of the N frames of first images; performing second image processing on the M frames of second images to obtain M frames of fourth images, where the image quality of the M frames of fourth images is higher than that of the M frames of second images; performing fusion processing on the N frames of third images and the M frames of fourth images based on a semantic segmentation image to obtain a fused image, where the semantic segmentation image is obtained based on any one of the N frames of first images or any one of the M frames of second images, and the detail information of the fused image is better than that of the N frames of first images; and performing third image processing on the fused image to obtain the target image.
The present application further provides a computer program product, which, when executed by the processor 1001, implements the method of any method embodiment of the present application.
The computer program product may be stored in the memory 1002, for example, as the program 1004, which is finally converted, through processes such as preprocessing, compiling, assembling, and linking, into an executable object file that can be executed by the processor 1001.
The present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a computer, the image processing method described in any method embodiment of the present application is implemented. The computer program may be a high-level language program or an executable object program.
The computer-readable storage medium is, for example, the memory 1002. The memory 1002 may be a volatile memory or a non-volatile memory, or the memory 1002 may include both a volatile memory and a non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example rather than limitation, many forms of RAM are available, such as a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchlink dynamic random access memory (SLDRAM), and a direct rambus random access memory (DR RAM).
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the embodiments of the electronic device described above are merely illustrative; for example, the division of the modules is merely a logical function division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation processes of the embodiments of the present application.
In addition, the term "and/or" in this document merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the following three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" in this document generally indicates an "or" relationship between the associated objects.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims. In short, the foregoing descriptions are merely preferred embodiments of the technical solutions of the present application and are not intended to limit the protection scope of the present application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall be included within the protection scope of the present application.

Claims (19)

  1. 一种图像处理方法,其特征在于,应用于电子设备,所述电子设备包括第一相机模组与第二相机模组,所述第二相机模组为近红外相机模组或者红外相机模组,所述图像处理方法包括:An image processing method, characterized in that it is applied to an electronic device, the electronic device includes a first camera module and a second camera module, the second camera module is a near-infrared camera module or an infrared camera module , the image processing method includes:
    displaying a first interface, wherein the first interface includes a first control;
    detecting a first operation on the first control;
    in response to the first operation, acquiring N frames of first images and M frames of second images, wherein the first images are images captured by the first camera module, the second images are images captured by the second camera module, and N and M are both positive integers greater than or equal to 1;
    obtaining a target image based on the N frames of first images and the M frames of second images; and
    saving the target image; wherein
    the obtaining a target image based on the N frames of first images and the M frames of second images comprises:
    performing first image processing on the N frames of first images to obtain N frames of third images, wherein the image quality of the N frames of third images is higher than that of the N frames of first images;
    performing second image processing on the M frames of second images to obtain M frames of fourth images, wherein the image quality of the M frames of fourth images is higher than that of the M frames of second images;
    performing fusion processing on the N frames of third images and the M frames of fourth images based on a semantic segmentation image to obtain a fused image, wherein the semantic segmentation image is obtained based on any one of the N frames of first images or any one of the M frames of second images, and the detail information of the fused image is better than that of the N frames of first images; and
    performing third image processing on the fused image to obtain the target image.
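The pipeline of claim 1 — enhance N visible frames and M near-infrared frames separately, then fuse them under the guidance of a semantic segmentation image — can be illustrated with a minimal numpy sketch. This is illustrative only, not the patented implementation: burst averaging stands in for the unspecified "first/second image processing", a fixed 50/50 blend stands in for the fusion, and clipping stands in for the "third image processing".

```python
import numpy as np

def simple_enhance(frames):
    """Stand-in for the claimed 'first/second image processing':
    average the burst to reduce noise (a hypothetical quality improvement)."""
    return np.mean(np.stack(frames), axis=0)

def fuse_with_segmentation(visible, nir, seg_mask):
    """Blend the near-infrared frame into the visible frame only where the
    segmentation mask marks regions expected to benefit from NIR detail."""
    mask = seg_mask.astype(np.float64)
    return mask * (0.5 * visible + 0.5 * nir) + (1.0 - mask) * visible

def pipeline(first_frames, second_frames, seg_mask):
    third = simple_enhance(first_frames)    # N first images -> enhanced 'third' image
    fourth = simple_enhance(second_frames)  # M second images -> enhanced 'fourth' image
    fused = fuse_with_segmentation(third, fourth, seg_mask)
    return np.clip(fused, 0.0, 1.0)         # placeholder 'third image processing'
```

All array shapes, blend weights, and function names here are assumptions made for the sketch.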
  2. The image processing method according to claim 1, wherein the performing second image processing on the M frames of second images to obtain M frames of fourth images comprises:
    performing black level correction processing and/or phase defective pixel correction processing on the M frames of second images to obtain M frames of fifth images; and
    performing first registration processing on the M frames of fifth images, using any one of the N frames of third images as a reference, to obtain the M frames of fourth images.
  3. The image processing method according to claim 2, wherein the performing first registration processing on the M frames of fifth images, using any one of the N frames of third images as a reference, to obtain the M frames of fourth images comprises:
    performing the first registration processing and up-sampling processing on the M frames of fifth images, using any one of the N frames of third images as a reference, to obtain the M frames of fourth images.
  4. The image processing method according to claim 1, wherein the performing second image processing on the M frames of second images to obtain M frames of fourth images comprises:
    performing black level correction processing and/or phase defective pixel correction processing on the M frames of second images to obtain M frames of fifth images;
    performing first registration processing on the M frames of fifth images, using any one of the N frames of third images as a reference, to obtain M frames of first registered images; and
    performing second registration processing on the M frames of first registered images, using the same third image as a reference, to obtain the M frames of fourth images.
  5. The image processing method according to claim 4, wherein the performing first registration processing on the M frames of fifth images, using any one of the N frames of third images as a reference, to obtain M frames of first registered images comprises:
    performing the first registration processing and up-sampling processing on the M frames of fifth images, using any one of the N frames of third images as a reference, to obtain the M frames of first registered images.
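The up-sampling step in claims 3 and 5 suggests the second (NIR/IR) camera may deliver a lower resolution than the main camera, so its frames are scaled up to the reference resolution before registration and fusion. A minimal nearest-neighbour sketch (the integer scale factor is a hypothetical ratio; a real pipeline would likely use bilinear or learned interpolation):

```python
import numpy as np

def upsample_nearest(img, factor):
    """Nearest-neighbour up-sampling by an integer factor:
    each pixel is repeated factor times along both axes."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```

For example, a 2x2 NIR patch becomes a 4x4 patch with `factor=2`, matching a reference frame captured at twice the resolution.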
  6. The image processing method according to any one of claims 2 to 5, wherein the first registration processing is global registration processing.
  7. The image processing method according to claim 4 or 5, wherein the second registration processing is local registration processing.
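Claims 6 and 7 distinguish a global registration pass (one transform for the whole frame) from a local one (per-region refinement). The patent does not disclose the algorithms; as a hedged illustration, global registration can be reduced to estimating a single translation by phase correlation — a deliberate simplification, since a real global registration would typically solve a full homography, and local registration would then warp per block:

```python
import numpy as np

def global_register_translation(reference, moving):
    """Estimate a global integer translation via phase correlation and
    shift the moving image onto the reference."""
    f_ref = np.fft.fft2(reference)
    f_mov = np.fft.fft2(moving)
    cross = f_ref * np.conj(f_mov)
    cross /= np.abs(cross) + 1e-12          # normalised cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # interpret shifts larger than half the image as negative (wrap-around)
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return np.roll(moving, shift=(dy, dx), axis=(0, 1)), (dy, dx)
```

This recovers the shift between a third image (reference) and a fifth image (moving) when their misalignment is a pure translation.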
  8. The image processing method according to any one of claims 1 to 7, wherein the performing first image processing on the N frames of first images to obtain N frames of third images comprises:
    performing black level correction processing and/or phase defective pixel correction processing on the N frames of first images to obtain the N frames of third images.
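Black level correction, named in claims 2, 4, and 8, subtracts the sensor's dark-signal offset before any further processing. A minimal sketch — the 10-bit levels below are hypothetical; real values come from per-sensor calibration data:

```python
import numpy as np

def black_level_correction(raw, black_level=64, white_level=1023):
    """Subtract the sensor's black level offset, clip negative residue,
    and renormalise the raw values to [0, 1]."""
    corrected = raw.astype(np.float64) - black_level
    corrected = np.clip(corrected, 0, None)
    return corrected / (white_level - black_level)
```

Phase defective pixel correction (the other option in these claims) would instead replace PDAF pixel sites by interpolating their neighbours; it is omitted here for brevity.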
  9. The image processing method according to any one of claims 1 to 8, wherein the electronic device further includes an infrared flash, and the image processing method further comprises:
    turning on the infrared flash in a dim-light scene, wherein the dim-light scene refers to a shooting scene in which the amount of light entering the electronic device is less than a preset threshold; and
    the acquiring N frames of first images and M frames of second images in response to the first operation comprises:
    acquiring the N frames of first images and the M frames of second images while the infrared flash is turned on.
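The dim-light trigger of claim 9 compares the "amount of entering light" against a preset threshold. As a hedged sketch, mean preview luminance is used below as a proxy for that quantity; a real device would more likely read exposure time and ISO metadata from the sensor, and both the proxy and the threshold value are assumptions:

```python
import numpy as np

def is_dark_scene(preview_frame, threshold=0.1):
    """Decide whether to fire the infrared flash: True when the mean
    luminance of the normalised preview frame falls below the threshold."""
    return float(np.mean(preview_frame)) < threshold
```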
  10. The image processing method according to claim 9, wherein the first interface includes a second control, and the turning on the infrared flash in a dim-light scene comprises:
    detecting a second operation on the second control; and
    turning on the infrared flash in response to the second operation.
  11. The image processing method according to any one of claims 1 to 10, wherein the performing fusion processing on the N frames of third images and the M frames of fourth images based on the semantic segmentation image to obtain a fused image comprises:
    performing fusion processing on the N frames of third images and the M frames of fourth images through an image processing model based on the semantic segmentation image to obtain the fused image, wherein the image processing model is a pre-trained neural network.
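Claim 11 assigns the fusion to a pre-trained neural network, whose weights and architecture are not disclosed. As a hand-crafted stand-in for what such a network might learn, the sketch below weights each pixel by relative local detail (gradient magnitude), preferring the NIR-derived fourth image where it carries more texture, and only inside the semantically segmented region:

```python
import numpy as np

def detail_weight(img):
    """Simple detail measure: gradient magnitude, a hand-crafted
    stand-in for features a trained fusion network would learn."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def model_free_fusion(third, fourth, seg_mask):
    """Inside the segmented region, prefer whichever frame carries more
    local detail; outside it, keep the visible (third) image untouched."""
    w = detail_weight(fourth) / (detail_weight(third) + detail_weight(fourth) + 1e-12)
    fused_region = (1.0 - w) * third + w * fourth
    m = seg_mask.astype(np.float64)
    return m * fused_region + (1.0 - m) * third
```

The weighting rule, epsilon, and mask semantics are all assumptions of this sketch, not the patented model.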
  12. The image processing method according to any one of claims 1 to 11, wherein the semantic segmentation image is obtained by processing the first one of the N frames of third images with a semantic segmentation algorithm.
  13. The image processing method according to any one of claims 1 to 12, wherein the first interface is a photographing interface, and the first control is a control for instructing photographing.
  14. The image processing method according to any one of claims 1 to 12, wherein the first interface is a video recording interface, and the first control is a control for instructing video recording.
  15. The image processing method according to any one of claims 1 to 12, wherein the first interface is a video call interface, and the first control is a control for instructing a video call.
  16. An electronic device, comprising:
    one or more processors and a memory;
    wherein the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to execute the image processing method according to any one of claims 1 to 15.
  17. A chip system, wherein the chip system is applied to an electronic device, the chip system includes one or more processors, and the processors are configured to invoke computer instructions to cause the electronic device to execute the image processing method according to any one of claims 1 to 15.
  18. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to execute the image processing method according to any one of claims 1 to 15.
  19. A computer program product, wherein the computer program product includes computer program code which, when executed by a processor, causes the processor to execute the image processing method according to any one of claims 1 to 15.
PCT/CN2022/138808 2022-01-10 2022-12-13 Image processing method and electronic device WO2023130922A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210023611.2 2022-01-10
CN202210023611.2A CN115550570B (en) 2022-01-10 2022-01-10 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
WO2023130922A1 true WO2023130922A1 (en) 2023-07-13

Family

ID=84723591

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/138808 WO2023130922A1 (en) 2022-01-10 2022-12-13 Image processing method and electronic device

Country Status (2)

Country Link
CN (1) CN115550570B (en)
WO (1) WO2023130922A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115908221B (en) * 2023-03-08 2023-12-08 荣耀终端有限公司 Image processing method, electronic device and storage medium
CN116051425B (en) * 2023-03-21 2023-08-04 杭州微影软件有限公司 Infrared image processing method and device, electronic equipment and storage medium
CN116994338B (en) * 2023-09-25 2024-01-12 四川中交信通网络科技有限公司 Site paperless auditing management system based on behavior recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108449555A (en) * 2018-05-04 2018-08-24 北京化工大学 Image interfusion method and system
CN110766706A (en) * 2019-09-26 2020-02-07 深圳市景阳信息技术有限公司 Image fusion method and device, terminal equipment and storage medium
CN111586314A (en) * 2020-05-25 2020-08-25 浙江大华技术股份有限公司 Image fusion method and device and computer storage medium
CN113507558A (en) * 2020-03-24 2021-10-15 华为技术有限公司 Method and device for removing image glare, terminal equipment and storage medium
WO2021253173A1 (en) * 2020-06-15 2021-12-23 深圳市大疆创新科技有限公司 Image processing method and apparatus, and inspection system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6497579B2 (en) * 2014-07-25 2019-04-10 日本電気株式会社 Image composition system, image composition method, image composition program
KR102584187B1 (en) * 2016-03-30 2023-10-05 삼성전자주식회사 Electronic device and method for processing image
US10827140B2 (en) * 2016-10-17 2020-11-03 Huawei Technologies Co., Ltd. Photographing method for terminal and terminal
CN106780392B (en) * 2016-12-27 2020-10-02 浙江大华技术股份有限公司 Image fusion method and device
US10796403B2 (en) * 2017-09-14 2020-10-06 The Regents Of The University Of Colorado, A Body Corporate Thermal-depth fusion imaging
US11037312B2 (en) * 2019-06-29 2021-06-15 Intel Corporation Technologies for thermal enhanced semantic segmentation of two-dimensional images
CN110930440B (en) * 2019-12-09 2023-06-27 Oppo广东移动通信有限公司 Image alignment method, device, storage medium and electronic equipment
CN113542573A (en) * 2020-04-14 2021-10-22 华为技术有限公司 Photographing method and electronic equipment
CN113364975B (en) * 2021-05-10 2022-05-20 荣耀终端有限公司 Image fusion method and electronic equipment
CN113810600B (en) * 2021-08-12 2022-11-11 荣耀终端有限公司 Terminal image processing method and device and terminal equipment
CN113781377A (en) * 2021-11-03 2021-12-10 南京理工大学 Infrared and visible light image fusion method based on antagonism semantic guidance and perception

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUABING ZHOU, HOU JILEI, WU WEI, ZHANG YANDUO, WU YUNTAO, MA JIAYI: "Infrared and Visible Image Fusion Based on Semantic Segmentation", JOURNAL OF COMPUTER RESEARCH AND DEVELOPMENT, vol. 58, no. 2, 8 February 2021 (2021-02-08), pages 436 - 443, XP093076893 *

Also Published As

Publication number Publication date
CN115550570A (en) 2022-12-30
CN115550570B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
WO2023130922A1 (en) Image processing method and electronic device
WO2018176925A1 (en) Hdr image generation method and apparatus
US11321830B2 (en) Image detection method and apparatus and terminal
US20200322530A1 (en) Electronic device and method for controlling camera using external electronic device
US20190268536A1 (en) Electronic device and method for correcting image corrected in first image processing scheme in external electronic device in second image processing scheme
US20240119566A1 (en) Image processing method and apparatus, and electronic device
CN116744120B (en) Image processing method and electronic device
KR20230098575A (en) Frame Processing and/or Capture Command Systems and Techniques
CN116437198B (en) Image processing method and electronic equipment
CN115767290B (en) Image processing method and electronic device
WO2023124202A1 (en) Image processing method and electronic device
CN116668862B (en) Image processing method and electronic equipment
WO2023060921A1 (en) Image processing method and electronic device
US20240129446A1 (en) White Balance Processing Method and Electronic Device
CN116055895A (en) Image processing method and related device
WO2023015985A1 (en) Image processing method and electronic device
WO2023124201A1 (en) Image processing method and electronic device
CN116258633A (en) Image antireflection method, training method and training device for image antireflection model
CN109447925B (en) Image processing method and device, storage medium and electronic equipment
CN116668838B (en) Image processing method and electronic equipment
CN115767287B (en) Image processing method and electronic equipment
CN115526786B (en) Image processing method and related device
CN115955611B (en) Image processing method and electronic equipment
CN116029914B (en) Image processing method and electronic equipment
CN115426458B (en) Light source detection method and related equipment thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22918371

Country of ref document: EP

Kind code of ref document: A1