WO2018137267A1 - Image processing method and terminal device - Google Patents

Image processing method and terminal device

Info

Publication number
WO2018137267A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
frame
terminal device
fusion
camera sensor
Prior art date
Application number
PCT/CN2017/074827
Other languages
English (en)
French (fr)
Inventor
孙涛
朱聪超
杨永兴
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN201780065469.5A priority Critical patent/CN109863742B/zh
Publication of WO2018137267A1 publication Critical patent/WO2018137267A1/zh


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Definitions

  • The present application relates to the field of communications technologies, and in particular, to an image processing method and a terminal device.
  • As users' needs continue to grow, terminal devices integrate more and more functions. At present, most terminal devices on the market can provide users with functions such as making calls, sending text messages, browsing the Internet, and taking photos.
  • A terminal device can implement the photographing function through a camera sensor integrated on the device. However, the camera sensor integrated on a terminal device is generally small, so its photosensitive area is limited and its pixel size is small, and the amount of light entering the camera sensor under low illumination is therefore insufficient. As a result, the image captured by the terminal device is of poor quality (for example, the image is noisy and its brightness is low), and the user experience is low.
  • In view of this, the present application provides an image processing method and a terminal device, to solve the technical problem that the image captured by the terminal device is of poor quality when the user uses the terminal device to shoot a dimly lit scene, which leads to a poor user experience.
  • In a first aspect, the present application provides an image processing method, including: acquiring at least one frame of a first image and at least one frame of a second image that are alternately and continuously output by a camera sensor, where the resolution of the first image is the same as the resolution corresponding to the current photographing mode, the resolution of the first image is N times the resolution of the second image, and N is an integer greater than 1; the camera sensor outputs each frame of the first image using a first exposure parameter and outputs each frame of the second image using a second exposure parameter, and the first exposure parameter is greater than the second exposure parameter; and performing image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
  • According to the method, when the user takes a picture under low illumination using the terminal device, the terminal device may acquire at least one frame of the first image and at least one frame of the second image that are alternately and continuously output by the camera sensor, where the first image is mainly used to provide the detail information of the current shooting scene, and the second image is mainly used to provide the brightness information of the current shooting scene. The terminal device can then perform image fusion according to the at least one frame of the first image and the at least one frame of the second image, so that the brightness and the sharpness of the fused image obtained by the terminal device are improved. When the fused image is presented to the user, the user sees an image with higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thereby improves the user experience.
  • In a possible implementation, before the acquiring of the at least one frame of the first image and the at least one frame of the second image that are alternately and continuously output by the camera sensor, the method further includes: determining a photographing parameter of the camera sensor according to a preview image output by the camera sensor, where the photographing parameter includes: a size of the first image, a frame count of the first image, a frame count of the second image, an exposure parameter of the first image, an exposure parameter of the second image, and an alternating sequence of the first image and the second image; and instructing the camera sensor to alternately and continuously output the at least one frame of the first image and the at least one frame of the second image according to the photographing parameter.
  • In this way, by instructing the camera sensor to alternately and continuously output the at least one frame of the first image and the at least one frame of the second image, the terminal device can reduce relative local motion between multiple frames of the first image and between multiple frames of the second image. In addition, the photographing time can be reduced, the photographing speed can be increased, and the user experience can be improved.
  • In a possible implementation, the at least one frame of the first image includes one frame of the first image, and the at least one frame of the second image includes one frame of the second image. In this case, performing image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain the fused image includes: performing image fusion on the first image and the second image to obtain the fused image.
  • In a possible implementation, before the performing image fusion on the first image and the second image to obtain the fused image, the method further includes: converting the first image from the Bayer format to the YUV format to obtain a converted first image, and converting the second image from the Bayer format to the YUV format to obtain a converted second image. Performing image fusion on the first image and the second image to obtain the fused image then includes: performing image fusion on the converted first image and the converted second image to obtain the fused image.
  • In a possible implementation, the at least one frame of the first image includes multiple frames of the first image, and the at least one frame of the second image includes multiple frames of the second image. Performing image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain the fused image includes: performing time-domain noise reduction on the multiple frames of the first image to obtain a third image; performing time-domain noise reduction on the multiple frames of the second image to obtain a fourth image; and performing image fusion on the third image and the fourth image to obtain the fused image.
  • In a possible implementation, before the performing time-domain noise reduction on the multiple frames of the first image to obtain the third image and performing time-domain noise reduction on the multiple frames of the second image to obtain the fourth image, the method further includes: converting the multiple frames of the first image from the Bayer format to the YUV format to obtain multiple converted first images, and converting the multiple frames of the second image from the Bayer format to the YUV format to obtain multiple converted second images. The time-domain noise reduction is then performed on the converted first images to obtain the third image, and on the converted second images to obtain the fourth image.
  • In a possible implementation, performing image fusion on the third image and the fourth image to obtain the fused image includes: downsampling the third image according to the size of the fourth image to obtain a downsampled third image, where the size of the downsampled third image is the same as the size of the fourth image; performing exposure fusion on the downsampled third image and the fourth image to obtain a high dynamic range (HDR) image; upsampling the HDR image according to the size of the third image to obtain an upsampled HDR image; and fusing the upsampled HDR image with a detail image of the third image to obtain the fused image, where the detail image of the third image includes the high-frequency components of the third image. (A sketch of this pipeline is given below.)
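  • For orientation, the following is a minimal Python/OpenCV sketch of this fusion pipeline, not the patent's literal implementation: the function name is invented, the inputs are assumed to be 3-channel uint8 images, and OpenCV's Mertens fusion stands in for the brightness-weighted exposure fusion described later in the text.

```python
import cv2
import numpy as np

def fuse_third_and_fourth(third, fourth):
    """Sketch of the claimed pipeline: downsample, exposure-fuse, upsample,
    then backfill detail. `third` is the full-size denoised image and
    `fourth` the smaller, brighter denoised image (both 3-channel uint8)."""
    h4, w4 = fourth.shape[:2]
    h3, w3 = third.shape[:2]
    # Downsample the third image to the fourth image's size.
    third_small = cv2.resize(third, (w4, h4), interpolation=cv2.INTER_AREA)
    # Exposure-fuse the two same-size frames (Mertens fusion as a stand-in).
    hdr_small = cv2.createMergeMertens().process([third_small, fourth])
    hdr_small = np.clip(hdr_small * 255.0, 0, 255).astype(np.uint8)
    # Upsample the HDR image back to the third image's size.
    hdr_up = cv2.resize(hdr_small, (w3, h3), interpolation=cv2.INTER_LINEAR)
    # Detail image: third image minus its blurred (down/up-sampled) copy,
    # i.e. its high-frequency components; backfill it into the HDR image.
    blurred = cv2.resize(third_small, (w3, h3), interpolation=cv2.INTER_LINEAR)
    detail = cv2.subtract(third, blurred)
    return cv2.add(hdr_up, detail)
```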
  • In this way, when the user takes a picture under low illumination using the terminal device, the terminal device performs time-domain noise reduction on the acquired multiple frames of the first image and the second image that are alternately and continuously output by the camera sensor, and then fuses the results, so that the brightness and the sharpness of the fused image obtained by the terminal device are improved. When the fused image is presented to the user, the user can see an image with higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thereby improves the user experience.
  • the method further includes: acquiring the detail image of the third image according to the third image.
  • In this way, the terminal device can acquire the detail image of the third image, which includes the high-frequency components of the third image, so that after the upsampled HDR image is fused with the detail image of the third image, the detail information of the entire shooting scene can be backfilled into the upsampled HDR image, improving the sharpness of the upsampled HDR image.
  • In a possible implementation, acquiring the detail image of the third image according to the third image includes: upsampling the downsampled third image according to the size of the third image to obtain an upsampled third image; and performing image subtraction on the upsampled third image and the third image to obtain the detail image of the third image.
  • In a possible implementation, before the performing exposure fusion on the downsampled third image and the fourth image to obtain the high dynamic range HDR image, the method further includes: performing image registration on the fourth image by using the downsampled third image as a reference, to obtain a registered fourth image; and performing ghost correction on the registered fourth image according to the downsampled third image, to obtain a corrected fourth image. Performing exposure fusion on the downsampled third image and the fourth image to obtain the HDR image then includes: performing exposure fusion on the downsampled third image and the corrected fourth image to obtain the HDR image.
  • In this way, when the user takes a picture under low illumination using the terminal device, the terminal device performs time-domain noise reduction on the acquired multiple frames of the first image and the second image that are alternately and continuously output by the camera sensor, to obtain a third image that mainly provides the detail information of the current shooting scene and a fourth image that mainly provides the brightness information of the shooting scene. Before performing image fusion on the downsampled third image and the fourth image, the terminal device performs image registration and ghost correction on the fourth image by using the downsampled third image as a reference, so that the terminal device fuses the downsampled third image with the registered and ghost-corrected fourth image; the image fusion effect is therefore better, and the sharpness of the fused image obtained by the terminal device is further improved.
  • In a possible implementation, performing ghost correction on the registered fourth image according to the downsampled third image to obtain the corrected fourth image includes: reducing the brightness of the registered fourth image to the brightness of the downsampled third image to obtain a brightness-reduced fourth image; performing an image difference calculation on the downsampled third image and the brightness-reduced fourth image to obtain the absolute difference corresponding to each pixel of the brightness-reduced fourth image; taking the pixels whose absolute difference is greater than a preset threshold as the ghost of the registered fourth image; raising the brightness of the downsampled third image to the brightness of the registered fourth image to obtain a brightness-raised third image; and replacing the ghost pixels of the registered fourth image with the corresponding pixels of the brightness-raised third image to obtain the corrected fourth image. (A sketch follows.)
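  • A minimal single-channel NumPy sketch of this ghost correction follows; the mean-ratio brightness matching and the threshold value are illustrative assumptions, since the text does not fix a particular brightness-adjustment method.

```python
import numpy as np

def correct_ghost(third_ds, fourth_reg, threshold=25.0):
    """third_ds: downsampled third image; fourth_reg: registered fourth image.
    Both are same-size single-channel uint8 arrays."""
    t = third_ds.astype(np.float32)
    f = fourth_reg.astype(np.float32)
    # Reduce the fourth image's brightness to the third image's level
    # (simple mean-ratio matching; an assumption for this sketch).
    scale = t.mean() / max(f.mean(), 1e-6)
    f_dark = f * scale
    # Per-pixel absolute difference; large differences mark ghosts.
    ghost = np.abs(t - f_dark) > threshold
    # Raise the third image's brightness to the fourth image's level,
    # then replace the ghost pixels of the fourth image with it.
    t_bright = np.clip(t / max(scale, 1e-6), 0, 255)
    corrected = f.copy()
    corrected[ghost] = t_bright[ghost]
    return corrected.astype(np.uint8)
```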
  • In a possible implementation, fusing the upsampled HDR image with the detail image of the third image to obtain the fused image includes: determining the sensitivity ISO of the camera sensor; determining a gain coefficient according to the ISO of the camera sensor; multiplying the pixel value of each pixel of the detail image of the third image by the gain coefficient to obtain a processed detail image; and adding the processed detail image to the upsampled HDR image to obtain the fused image.
  • In a possible implementation, the method further includes: performing spatial-domain noise reduction on the fused image to obtain a spatially denoised image. In this way, the noise of the image can be further reduced.
  • In a possible implementation, the first image is a full-size image.
  • In a second aspect, the present application provides a terminal device, including: an acquiring module, configured to acquire at least one frame of a first image and at least one frame of a second image that are alternately and continuously output by a camera sensor, where the resolution of the first image is the same as the resolution corresponding to the current photographing mode, the resolution of the first image is N times the resolution of the second image, and N is an integer greater than 1; the camera sensor outputs each frame of the first image using a first exposure parameter and outputs each frame of the second image using a second exposure parameter, and the first exposure parameter is greater than the second exposure parameter; and a fusion module, configured to perform image fusion on the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
  • In a possible implementation, the terminal device further includes: a determining module, configured to determine a photographing parameter of the camera sensor according to a preview image output by the camera sensor before the acquiring module acquires the at least one frame of the first image and the at least one frame of the second image that are alternately and continuously output by the camera sensor, where the photographing parameter includes: a size of the first image, a frame count of the first image, a frame count of the second image, an exposure parameter of the first image, an exposure parameter of the second image, and an alternating sequence of the first image and the second image; and an indicating module, configured to instruct the camera sensor to alternately and continuously output the at least one frame of the first image and the at least one frame of the second image according to the photographing parameter.
  • In a possible implementation, the at least one frame of the first image includes one frame of the first image, and the at least one frame of the second image includes one frame of the second image. The fusion module is specifically configured to perform image fusion on the first image and the second image to obtain the fused image.
  • In a possible implementation, the terminal device further includes: a first format conversion module, configured to, before the fusion module performs image fusion on the first image and the second image to obtain the fused image, convert the first image from the Bayer format to the YUV format to obtain a converted first image, and convert the second image from the Bayer format to the YUV format to obtain a converted second image. The fusion module is specifically configured to perform image fusion on the converted first image and the converted second image to obtain the fused image.
  • In a possible implementation, the at least one frame of the first image includes multiple frames of the first image, and the at least one frame of the second image includes multiple frames of the second image. The fusion module is specifically configured to perform time-domain noise reduction on the multiple frames of the first image to obtain a third image, perform time-domain noise reduction on the multiple frames of the second image to obtain a fourth image, and perform image fusion on the third image and the fourth image to obtain the fused image.
  • In a possible implementation, the terminal device further includes: a second format conversion module, configured to, before the fusion module performs time-domain noise reduction on the multiple frames of the first image to obtain the third image and on the multiple frames of the second image to obtain the fourth image, convert the multiple frames of the first image from the Bayer format to the YUV format to obtain multiple converted first images, and convert the multiple frames of the second image from the Bayer format to the YUV format to obtain multiple converted second images. The fusion module is specifically configured to perform time-domain noise reduction on the converted first images to obtain the third image, and perform time-domain noise reduction on the converted second images to obtain the fourth image.
  • In a possible implementation, the fusion module includes: a downsampling unit, configured to downsample the third image according to the size of the fourth image to obtain a downsampled third image, where the size of the downsampled third image is the same as the size of the fourth image; an exposure fusion unit, configured to perform exposure fusion on the downsampled third image and the fourth image to obtain a high dynamic range (HDR) image; an upsampling unit, configured to upsample the HDR image according to the size of the third image to obtain an upsampled HDR image; and a fusion unit, configured to fuse the upsampled HDR image with a detail image of the third image to obtain the fused image, where the detail image of the third image includes the high-frequency components of the third image.
  • In a possible implementation, the fusion module further includes: an acquiring unit, configured to acquire the detail image of the third image according to the third image before the fusion unit fuses the upsampled HDR image with the detail image of the third image to obtain the fused image.
  • In a possible implementation, the acquiring unit is specifically configured to upsample the downsampled third image according to the size of the third image to obtain an upsampled third image, and perform image subtraction on the upsampled third image and the third image to obtain the detail image of the third image.
  • In a possible implementation, the fusion module further includes: an image registration unit, configured to, before the exposure fusion unit performs exposure fusion on the downsampled third image and the fourth image to obtain the high dynamic range HDR image, perform image registration on the fourth image by using the downsampled third image as a reference, to obtain a registered fourth image; and a ghost correction unit, configured to perform ghost correction on the registered fourth image according to the downsampled third image, to obtain a corrected fourth image. The exposure fusion unit is specifically configured to perform exposure fusion on the downsampled third image and the corrected fourth image to obtain the HDR image.
  • In a possible implementation, the ghost correction unit is specifically configured to: reduce the brightness of the registered fourth image to the brightness of the downsampled third image to obtain a brightness-reduced fourth image; perform an image difference calculation on the downsampled third image and the brightness-reduced fourth image to obtain the absolute difference corresponding to each pixel of the brightness-reduced fourth image; take the pixels whose absolute difference is greater than a preset threshold as the ghost of the registered fourth image; raise the brightness of the downsampled third image to the brightness of the registered fourth image to obtain a brightness-raised third image; and replace the ghost pixels of the registered fourth image with the corresponding pixels of the brightness-raised third image to obtain the corrected fourth image.
  • In a possible implementation, the fusion unit is specifically configured to: determine the sensitivity ISO of the camera sensor; determine a gain coefficient according to the ISO of the camera sensor; multiply the pixel value of each pixel of the detail image of the third image by the gain coefficient to obtain a processed detail image; and add the processed detail image to the upsampled HDR image to obtain the fused image.
  • In a possible implementation, the terminal device further includes: a spatial-domain noise reduction module, configured to perform spatial-domain noise reduction on the fused image after the fusion unit fuses the upsampled HDR image with the detail image of the third image to obtain the fused image, to obtain a spatially denoised image.
  • In a possible implementation, the first image is a full-size image.
  • In a third aspect, the present application provides a terminal device, including a processor and a memory, where the memory is configured to store computer-executable program code, and the program code includes instructions; when the processor executes the instructions, the instructions cause the terminal device to perform the image processing method according to the first aspect or any possible implementation of the first aspect.
  • A fourth aspect of the present application provides a terminal device, including at least one processing element (or chip) configured to perform the method of the first aspect.
  • A fifth aspect of the present application provides a program that, when executed by a processor, performs the method of the first aspect.
  • A sixth aspect of the present application provides a program product, such as a computer-readable storage medium, including the program of the fifth aspect.
  • A seventh aspect of the present application provides a computer-readable storage medium having instructions stored therein that, when run on a computer, cause the computer to perform the method of the first aspect.
  • According to the image processing method and the terminal device provided by the present application, when the user takes a picture under low illumination using the terminal device, the terminal device can acquire at least one frame of the first image and at least one frame of the second image that are alternately and continuously output by the camera sensor, where the first image mainly provides the detail information of the current shooting scene and the second image mainly provides the brightness information of the current shooting scene. The terminal device can then perform image fusion according to the at least one frame of the first image and the at least one frame of the second image, so that the brightness and the sharpness of the fused image obtained by the terminal device are improved. When the fused image is presented to the user, the user sees an image with higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thereby improves the user experience.
  • FIG. 1 is a schematic diagram of a terminal device in the prior art
  • FIG. 2 is a schematic diagram showing the working principle of a binning mode of a camera sensor in the prior art
  • FIG. 3 is a schematic flowchart diagram of an image processing method provided by the present application.
  • FIG. 4 is a schematic flow chart of another image processing method provided by the present application.
  • FIG. 5 is a schematic diagram of a camera sensor provided by the present application.
  • FIG. 6 is a schematic flowchart diagram of still another image processing method provided by the present application.
  • FIG. 7 is a schematic flowchart diagram of still another image processing method provided by the present application.
  • FIG. 8 is a schematic flowchart diagram of still another image processing method provided by the present application.
  • FIG. 9 is a schematic diagram of a first image shown in the present application.
  • FIG. 10 is a schematic diagram of a second image shown in the present application.
  • FIG. 11 is a schematic diagram of a spatially denoised image shown in the present application.
  • FIG. 12 is a schematic flowchart diagram of still another image processing method provided by the present application.
  • FIG. 13 is a schematic structural diagram of a terminal device according to the present application.
  • FIG. 14 is a schematic structural diagram of another terminal device provided by the present application.
  • FIG. 15 is a schematic structural diagram of still another terminal device provided by the present application.
  • FIG. 16 is a schematic structural diagram of still another terminal device provided by the present application.
  • FIG. 17 is a schematic structural diagram of still another terminal device provided by the present application.
  • FIG. 18 is a schematic structural diagram of still another terminal device provided by the present application.
  • FIG. 19 is a structural block diagram of a terminal device provided by the present application, using a mobile phone as an example.
  • The terminal device in the present application may be a wireless terminal or a wired terminal. A wireless terminal may be a device that provides voice and/or other service data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem. The wireless terminal may communicate with one or more core networks via a radio access network (RAN), and may be a mobile terminal, such as a mobile phone (or a "cellular" phone) or a computer with a mobile terminal; for example, it may be a portable, pocket-sized, handheld, computer-built-in, or in-vehicle mobile device that exchanges voice and/or data with the radio access network. A wireless terminal may also be called a system, a subscriber unit, a subscriber station, a mobile station, a mobile, a remote station, a remote terminal, an access terminal, a user terminal, a user agent, or a user device, which is not limited herein.
  • In the present application, "plural" means two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
  • FIG. 1 is a schematic diagram of a terminal device in the prior art. At present, most terminal devices implement the photographing function through a camera sensor integrated on the terminal device. The camera sensor referred to here may be a front camera sensor of the terminal device or a rear camera sensor of the terminal device. FIG. 1 shows an example in which the terminal device is a mobile phone.
  • As described above, the photosensitive area of the camera sensor is limited and its pixel size is small, so the amount of light entering the camera sensor under low illumination may be insufficient, and the image output by the camera sensor under low illumination is of poor quality (for example, the image is noisy and its brightness is low). Therefore, when the user uses the terminal device to shoot a dimly lit scene (for example, a night scene), the image output by the camera sensor of the terminal device is of poor quality, and the user experience is low when the terminal device presents the image to the user.
  • In the first solution, the terminal device uses a fill light on the terminal device to brighten the shooting scene and increase the amount of light entering the camera sensor, thereby increasing the brightness of the image output by the camera sensor. The fill light may be, for example, a rear flash or a light-emitting diode (LED) lamp. When the camera sensor takes a picture, the terminal device can illuminate the shooting scene with the fill light to brighten it, thereby increasing the amount of light entering the camera sensor and, in turn, the brightness of the image output by the camera sensor.
  • However, the fill light can only illuminate a near scene and cannot illuminate a distant one, so the distant part of the image output by the camera sensor is still dark. The image presented to the user by the terminal device is therefore still poor, and the user experience is low.
  • In the second solution, the terminal device increases the brightness of the image output by the camera sensor by operating the camera sensor in the binning mode.
  • FIG. 2 is a schematic diagram of the working principle of the binning mode of the camera sensor in the prior art.
  • In the binning mode, the camera sensor combines the pixel values of multiple adjacent pixels of the same color in the captured image and uses them as one pixel. That is, the pixel values of multiple adjacent green (G) pixels in the image are combined and used as one pixel; the pixel values of multiple adjacent red (R) pixels are combined and used as one pixel; and the pixel values of multiple adjacent blue (B) pixels are combined and used as one pixel. The adjacent pixels mentioned here may be adjacent in the horizontal direction, adjacent in the vertical direction, or both.
  • FIG. 2 shows an example in which the pixel values of two pixels adjacent in the horizontal direction and two pixels adjacent in the vertical direction are combined and used as one pixel; the four pixels merged into the same pixel are marked with the same pattern in FIG. 2.
  • Taking the image captured by the camera sensor on the left side of FIG. 2 as an example, when the camera sensor operates in the binning mode, it combines the pixel values of the two horizontally adjacent and two vertically adjacent pixels in the image, and outputs the image shown on the right side of FIG. 2, which may be referred to as a binning image. The size of the binning image obtained after the pixel combination is reduced to a quarter of the image on the left side of FIG. 2 (that is, the original image), and the resolution of the binning image also falls to a quarter of the original image.
  • In this way, the equivalent photosensitive area per pixel of the image is increased and the sensitivity to light is improved, thereby increasing the brightness of the image output by the camera sensor under low illumination.
  • However, merging pixels reduces the resolution of the image while increasing its brightness, so the high-frequency information of the image is lost (that is, image detail is lost) and the sharpness of the image is reduced. For example, combining the pixel values of four adjacent same-color pixels as shown in FIG. 2 reduces the resolution of the right image in FIG. 2 to a quarter of that of the left image, so the sharpness of the image output by the camera sensor is reduced, the image presented to the user by the terminal device is still poor, and the user experience is low. (A minimal sketch of such binning is given below.)
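  • As a concrete illustration, 2x2 binning of one Bayer color plane can be sketched as follows in Python/NumPy; averaging the four merged pixels is an assumption, since a sensor may instead sum the charges in the analog domain.

```python
import numpy as np

def bin_2x2(plane):
    """Combine each 2x2 block of same-color pixels into one pixel.
    `plane` is one Bayer color plane with even height and width."""
    h, w = plane.shape
    blocks = plane.reshape(h // 2, 2, w // 2, 2).astype(np.float32)
    # Average the four merged pixels; output is 1/4 the size (and resolution).
    return blocks.mean(axis=(1, 3)).astype(plane.dtype)

full = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
binned = bin_2x2(full)   # shape (4, 4): a quarter of the original size
```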
  • In view of the above problems, the present application provides an image processing method, to solve the technical problem that the image presented by the terminal device to the user is poor when the user uses the terminal device to shoot a dimly lit scene.
  • The technical solutions of the present application are described below with reference to some embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some of them.
  • FIG. 3 is a schematic flowchart diagram of an image processing method provided by the present application. As shown in FIG. 3, the method may include:
  • S101: Acquire at least one frame of a first image and at least one frame of a second image that are alternately and continuously output by a camera sensor. Specifically, the terminal device may acquire at least one frame of the first image and at least one frame of the second image that are alternately and continuously output by the camera sensor. That is to say, the first image and the second image acquired by the terminal device are images output while the same camera sensor of the terminal device shoots the same scene, so the first image and the second image contain the same current shooting scene. The camera sensor mentioned above may be a front camera sensor of the terminal device or a rear camera sensor of the terminal device.
  • The resolution of the first image is the same as the resolution corresponding to the photographing mode currently selected by the user on the terminal device. The resolution of the first image is N times the resolution of the second image, where N is an integer greater than 1; that is, the size of the first image is N times the size of the second image. In other words, the size of the first image matches the resolution corresponding to the current photographing mode, so the first image may also be called a full-size image at that resolution. The second image is a binning image relative to the first image, that is, an image obtained by combining pixels. Therefore, the sharpness of the first image is higher than that of the second image, but the brightness of the first image is lower than that of the second image. The first image is thus mainly used to provide the detail information of the current shooting scene (that is, the high-frequency components of the first image), and the second image is mainly used to provide the brightness information of the current shooting scene (that is, the low-frequency components of the second image).
  • This embodiment does not limit the manner in which the camera sensor alternately and continuously outputs the at least one frame of the first image and the at least one frame of the second image. For example, the camera sensor may first output a frame of the first image and then a frame of the second image, alternating continuously in this order, or it may first output a frame of the second image and then a frame of the first image, alternating continuously in that order.
  • Since the second image is a binning image relative to the first image, that is, an image obtained by combining pixels, the brightness of the second image is higher than that of the first image.
  • In addition, the camera sensor may output the first image and the second image with different exposure parameters; for example, each frame of the first image is output using a first exposure parameter, and each frame of the second image is output using a second exposure parameter that is smaller than the first exposure parameter. The specific values of the first exposure parameter and the second exposure parameter may be determined according to the current sensitivity (ISO) of the camera sensor, and details are not described here.
  • S102: Perform image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
  • Specifically, the terminal device may perform image fusion on the at least one frame of the first image and the at least one frame of the second image, that is, fuse the first image, which has higher sharpness, with the second image, which has higher brightness. In this way, the sharpness of the first image and the brightness of the second image can be combined in one frame of image, so both the brightness and the sharpness of the fused image obtained by the terminal device are improved.
  • According to the image processing method provided by the present application, when the user takes a picture under low illumination using the terminal device, the terminal device can acquire at least one frame of the first image and at least one frame of the second image that are alternately and continuously output by the camera sensor, where the first image mainly provides the detail information of the current shooting scene and the second image mainly provides the brightness information of the current shooting scene. The terminal device can then perform image fusion according to the at least one frame of the first image and the at least one frame of the second image, so that the brightness and the sharpness of the fused image obtained by the terminal device are improved. When the fused image is presented to the user, the user sees an image with higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thereby improves the user experience.
  • FIG. 4 is a schematic flowchart diagram of another image processing method provided by the present application.
  • This embodiment relates to how the terminal device instructs the camera sensor to alternately and continuously output the at least one frame of the first image and the at least one frame of the second image. As shown in FIG. 4, before S101, the method may further include:
  • S201: Determine a photographing parameter of the camera sensor according to a preview image output by the camera sensor. Specifically, when the user takes a picture using the terminal device, if the terminal device determines, by analyzing the preview image currently output by the camera sensor, that the camera sensor is currently shooting under low illumination, the terminal device may determine the photographing parameter of the camera sensor. The photographing parameter mentioned here is a parameter that the terminal device needs to use when performing the current photographing operation. The photographing parameter may include: a size of the first image, a frame count of the first image, a frame count of the second image, an exposure parameter of the first image, an exposure parameter of the second image, an alternating sequence of the first image and the second image, and the like. The alternating sequence of the first image and the second image may be a preset alternating sequence, or may be an alternating sequence randomly assigned to the camera sensor by the terminal device.
  • FIG. 5 is a schematic diagram of a camera sensor provided by the present application. FIG. 5 shows an example in which the camera sensor outputs 4 frames of the first image and 4 frames of the second image in an alternating sequence of first outputting the first image and then outputting the second image. Those skilled in the art can understand that the camera sensor may also output the 4 frames of the first image and the 4 frames of the second image in an alternating sequence of first outputting the second image and then outputting the first image, which is not limited here.
  • Specifically, the terminal device may determine the resolution and size of the first image from the resolution corresponding to the photographing mode currently selected on the terminal device, and then determine the resolution and size of the second image based on the multiple N between the resolution of the first image and the resolution of the second image.
  • The terminal device may determine the current ISO of the camera sensor according to the preview image output by the camera sensor, and determine the frame count of the first image and the frame count of the second image from a correspondence between the ISO and these frame counts. It should be noted that the darker the light of the current shooting scene, the higher the ISO and the noisier the image output by the camera sensor, so the terminal device needs more frames of images for image processing; a higher ISO therefore corresponds to larger frame counts of the first image and the second image. For example, an ISO of 500 may correspond to 2 frames of the first image and 2 frames of the second image, and an ISO of 1000 may correspond to 3 frames of the first image and 3 frames of the second image. Although the above example uses the same frame count for the first image and the second image, those skilled in the art can understand that the frame counts of the first image and the second image may be different.
  • The terminal device may determine the exposure parameter of the first image and the exposure parameter of the second image according to the brightness of the preview image output by the camera sensor in an existing calculation manner, and details are not described here. The exposure parameters mentioned here may include: ISO, exposure time, frame rate, and the like.
  • S202: Instruct the camera sensor to alternately and continuously output the at least one frame of the first image and the at least one frame of the second image according to the photographing parameter.
  • Specifically, after determining the photographing parameter, the terminal device may instruct the camera sensor to alternately and continuously output the at least one frame of the first image and the at least one frame of the second image according to the photographing parameter. For example, before the camera sensor outputs each frame of image, the terminal device may send the exposure parameter and the size corresponding to that frame to the camera sensor according to the alternating sequence, so that the camera sensor can correctly, alternately, and continuously output each frame of the first image and the second image.
  • Because the terminal device instructs the camera sensor to output the multiple frames of the first image and the second image in an alternating and continuous manner, relative local motion between the multiple frames of the first image and between the multiple frames of the second image can be reduced. In addition, the photographing time can be reduced, the photographing speed can be increased, and the user experience can be improved.
  • In addition, because the resolution of the first image is different from the resolution of the second image, in order to keep the picture seen on the screen consistent while the user takes a picture using the terminal device, each frame of the first image may be displayed on the screen while the second image is not displayed, which improves the user experience.
  • It should be noted that the terminal device may perform steps S201-S202 in software, in hardware, or in a combination of software and hardware. The hardware mentioned here may be, for example, an image signal processor (ISP), and the software mentioned here may be, for example, an automatic exposure (AE) module.
  • According to the image processing method provided by the present application, when the user takes a picture under low illumination using the terminal device, the terminal device can determine the photographing parameter of the camera sensor according to the preview image output by the camera sensor, and then instruct the camera sensor, through the photographing parameter, to alternately and continuously output at least one frame of the first image, which mainly provides the detail information of the current shooting scene, and at least one frame of the second image, which mainly provides the brightness information of the current shooting scene. The terminal device can then perform image fusion according to the at least one frame of the first image and the at least one frame of the second image, so that the brightness and the sharpness of the fused image obtained by the terminal device are improved. When the terminal device presents the fused image to the user, the user can view an image with higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thereby improves the user experience.
  • The following embodiment relates to the process in which the terminal device performs image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain the fused image. Specifically, S102 may include the following two cases:
  • In the first case, the terminal device acquires one frame of the first image and one frame of the second image that are alternately and continuously output by the camera sensor. In this case, the terminal device may directly perform image fusion on the frame of the first image and the frame of the second image to obtain the fused image. Optionally, the terminal device may also first perform format conversion (that is, a demosaicing operation) on the frame of the first image and the frame of the second image, and then perform image fusion on the converted first image and the converted second image to obtain the fused image.
  • In the second case, the terminal device acquires multiple frames of the first image and multiple frames of the second image that are alternately and continuously output by the camera sensor. In this case, the terminal device may directly perform image fusion on the multiple frames of the first image and the multiple frames of the second image. Optionally, the terminal device may instead perform time-domain noise reduction on the multiple frames of the first image to obtain a third image, perform time-domain noise reduction on the multiple frames of the second image to obtain a fourth image, and then perform image fusion on the third image and the fourth image to obtain the fused image.
  • Because the amount of light entering the camera sensor under low illumination is small, the image output by the camera sensor under low illumination is noisy. The terminal device can therefore reduce the image noise by performing time-domain noise reduction on the multiple frames of the first image and the multiple frames of the second image output by the camera sensor, that is, by averaging pixels across different frames in the time domain (as sketched below), so that the noise of the resulting third image and fourth image is small, and the noise of the fused image obtained by performing image fusion on the third image and the fourth image is also small.
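  • A minimal sketch of such time-domain noise reduction, averaging co-located pixels across frames; plain frame averaging assumes the frames are already aligned, while a real implementation would also handle motion.

```python
import numpy as np

def temporal_denoise(frames):
    """Average a list of same-size uint8 frames pixel-wise in the time
    domain; averaging N frames reduces noise variance roughly N-fold."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# e.g. third = temporal_denoise(first_images)
#      fourth = temporal_denoise(second_images)
```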
  • Optionally, before the time-domain noise reduction, the terminal device may also perform format conversion, that is, convert the multiple frames of the first image from the Bayer format to the YUV format to obtain multiple converted first images, and convert the multiple frames of the second image from the Bayer format to the YUV format to obtain multiple converted second images (a sketch of the conversion follows). The terminal device can then perform time-domain noise reduction on the converted first images to obtain the third image, perform time-domain noise reduction on the converted second images to obtain the fourth image, and perform image fusion on the third image and the fourth image to obtain the fused image.
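  • With OpenCV, this format conversion of one frame can be sketched as follows; the BayerBG pattern is an assumption, since the actual pattern depends on the sensor.

```python
import cv2

def bayer_to_yuv(raw):
    """Demosaic a single-channel uint8 Bayer frame and convert it to YUV."""
    bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)  # demosaicing operation
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
```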
  • According to the image processing method provided by the present application, when the user takes a picture under low illumination using the terminal device, the terminal device can acquire at least one frame of the first image and at least one frame of the second image that are alternately and continuously output by the camera sensor, where the first image mainly provides the detail information of the current shooting scene and the second image mainly provides the brightness information of the current shooting scene. The terminal device can then perform image fusion according to the at least one frame of the first image and the at least one frame of the second image, so that the brightness and the sharpness of the fused image obtained by the terminal device are improved. When the terminal device presents the fused image to the user, the user can view an image with higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thereby improves the user experience.
  • FIG. 6 is a schematic flowchart of still another image processing method provided by the present application. This embodiment uses the third image and the fourth image as an example to describe the process in which the terminal device performs image fusion. Those skilled in the art can understand that if the terminal device acquires one frame of the first image and one frame of the second image, the terminal device may also perform image fusion on them in the following manner; the implementation principle is similar and is not repeated here.
  • As described above, the third image is the time-domain denoised image of the multiple frames of the first image, and the fourth image is the time-domain denoised image of the multiple frames of the second image. Therefore, in the image fusion process, the third image mainly provides the detail information of the current shooting scene (that is, the high-frequency components of the first image), and the fourth image mainly provides the brightness information of the current shooting scene (that is, the low-frequency components of the second image).
  • As shown in FIG. 6, the method includes:
  • S301: Downsample the third image according to the size of the fourth image to obtain a downsampled third image. Specifically, the third image acquired by the terminal device is obtained by performing time-domain noise reduction on the multiple frames of the first image, and the fourth image is obtained by performing time-domain noise reduction on the multiple frames of the second image. The size of the third image is the same as that of the first image, and the size of the fourth image is the same as that of the second image, so the third image and the fourth image differ in size. The terminal device may therefore downsample the third image according to the size of the fourth image to reduce the size of the third image, so that the size of the downsampled third image is the same as the size of the fourth image.
  • S302: Perform exposure fusion on the downsampled third image and the fourth image to obtain an HDR image.
  • Specifically, the terminal device may perform exposure fusion on the two frames of images having the same size (that is, the downsampled third image and the fourth image); in other words, the downsampled third image, which has higher sharpness, is exposure-fused with the fourth image, which has higher brightness. In this way, the sharpness of the downsampled third image and the brightness of the fourth image can be combined in one frame of image, so the overall brightness of the high dynamic range (HDR) image obtained by the terminal device after the exposure fusion is improved.
  • This embodiment does not limit the implementation manner in which the terminal device performs exposure fusion on the downsampled third image and the fourth image. For example, the terminal device may adopt an exposure fusion method that uses image brightness as the weight for calculating parameters. Specifically, using the central brightness value 128 as a reference, the terminal device may assign a weight to each pixel of the downsampled third image, and assign weights to each pixel of the fourth image in the same manner. The terminal device may then multiply the pixel value of each pixel of the downsampled third image by that pixel's weight to obtain a processed third image, multiply the pixel value of each pixel of the fourth image by that pixel's weight to obtain a processed fourth image, and perform image addition on the processed third image and the processed fourth image to obtain the HDR image, which completes the exposure fusion. In this way, darker pixels in the downsampled third image can be lifted by the brighter pixels of the fourth image, and overexposed pixels in the fourth image can be compensated by the pixels of the downsampled third image, so the HDR image obtained by the terminal device has neither overly dark nor overly bright areas, and the brightness of the HDR image is improved as a whole. The weight values may range, for example, between 0 and 1, and the correspondence between weight and brightness may be determined according to the needs of the user. (A sketch follows.)
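  • A single-channel sketch of this exposure fusion follows; the Gaussian-shaped weight around the central brightness value 128, its width, and the normalization of the weights are illustrative assumptions, since the text only requires weights referenced to 128 in the range 0 to 1.

```python
import numpy as np

def exposure_fuse(third_ds, fourth, sigma=50.0):
    """Brightness-weighted exposure fusion of two same-size uint8 luminance
    images: pixels whose brightness is close to 128 get the largest weights."""
    def weights(img):
        x = img.astype(np.float32)
        return np.exp(-((x - 128.0) ** 2) / (2.0 * sigma ** 2))  # in (0, 1]
    w3, w4 = weights(third_ds), weights(fourth)
    # Multiply each image by its per-pixel weight and add; dividing by the
    # weight sum keeps the result in range (an added assumption).
    fused = (third_ds * w3 + fourth * w4) / (w3 + w4 + 1e-6)
    return np.clip(fused, 0, 255).astype(np.uint8)
```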
  • S303: Upsample the HDR image according to the size of the third image to obtain an upsampled HDR image. Specifically, the size of the HDR image obtained by performing exposure fusion on the downsampled third image and the fourth image is the same as the size of the fourth image. Therefore, the terminal device needs to upsample the HDR image according to the size of the third image to enlarge the HDR image, so that the size of the upsampled HDR image is the same as the size of the third image. In this way, the size of the upsampled HDR image is adapted to the resolution corresponding to the photographing mode currently selected by the user on the terminal device.
  • S304: Fuse the upsampled HDR image with the detail image of the third image to obtain a fused image.
  • Specifically, the downsampling in S301 loses the high-frequency components of the third image (that is, the detail information of the current shooting scene), so the sharpness of the downsampled third image is lower than that of the original third image. Consequently, the sharpness of the HDR image obtained by exposure-fusing the downsampled third image with the fourth image is also lower than that of the third image, and the sharpness of the upsampled HDR image is still low. Therefore, the terminal device may fuse the upsampled HDR image with the detail image of the third image, which includes the high-frequency components of the third image, so that the detail information of the entire shooting scene is backfilled into the upsampled HDR image, improving the sharpness of the upsampled HDR image.
  • In this way, both the brightness and the sharpness of the fused image obtained by the terminal device are improved, so that when the terminal device presents the fused image to the user, the user can view an image with higher sharpness and brightness, which improves the user experience.
  • the embodiment does not limit the implementation manner in which the terminal device fuses the upsampled HDR image with the detail image of the third image.
  • the terminal device may directly perform image addition calculation on the upsampled HDR image and the third image detail image to obtain a fused image.
As one implementable manner, the terminal device may first determine the sensitivity (ISO) of the camera sensor currently operating under low illumination, and then determine a gain coefficient adapted to that ISO. The terminal device may then multiply the pixel value of each pixel of the detail image by the gain coefficient to enhance the detail image and obtain the processed detail image. Finally, the terminal device performs image addition on the processed detail image and the upsampled HDR image to obtain the fused image. Because the detail image is enhanced, before fusion, with a gain coefficient matched to the camera sensor's current ISO, the sharpness of the fused image is increased. In specific implementations, the terminal device may determine the camera sensor's current sensitivity under low illumination from the image currently previewed by the camera sensor, and may determine the gain coefficient corresponding to the camera sensor's ISO according to a mapping relationship between ISO and gain coefficient. The mapping relationship between ISO and gain coefficient may be set according to actual conditions; for example, the gain coefficient may be 1.5 when the ISO is less than or equal to 500, 1.4 when the ISO is greater than 500 and less than or equal to 1000, 1.3 when the ISO is greater than 1000 and less than or equal to 1500, 1.2 when the ISO is greater than 1500 and less than or equal to 2000, and 1.1 when the ISO is greater than 2000.
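A small sketch of this step is shown below, using the example ISO breakpoints quoted above; the function names are hypothetical and the inputs are assumed to be 8-bit arrays.

```python
import numpy as np

def gain_for_iso(iso):
    # Example ISO-to-gain mapping taken from the description above.
    if iso <= 500:
        return 1.5
    if iso <= 1000:
        return 1.4
    if iso <= 1500:
        return 1.3
    if iso <= 2000:
        return 1.2
    return 1.1

def fuse_detail(hdr_up, detail, iso):
    # Enhance the detail image with the ISO-matched gain, add it to the
    # upsampled HDR image, and clip back to the 8-bit range.
    enhanced = detail.astype(np.float32) * gain_for_iso(iso)
    fused = hdr_up.astype(np.float32) + enhanced
    return np.clip(fused, 0, 255).astype(np.uint8)
```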
The detail image of the third image may be acquired by the terminal device from the third image before S304 is performed. This embodiment does not limit the manner of acquiring the detail image of the third image. For example, the terminal device may perform a Fourier transform on the third image, remove the low-frequency components, and retain the high-frequency components of the third image; by then performing an inverse Fourier transform on the image that retains only the high-frequency components, the detail image of the third image can be obtained. Optionally, in another implementation of the present application, the terminal device may instead upsample the downsampled third image according to the size of the third image to obtain an upsampled third image. Since the upsampled third image is blurrier than the third image, the terminal device can obtain the detail image of the third image by performing image subtraction on the upsampled third image and the third image.
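The second route can be sketched as follows, assuming the downsampled third image from the earlier step is available; the signed floating-point difference is kept so that both bright and dark detail survive.

```python
import cv2
import numpy as np

def detail_image(third, third_down):
    # Upsample the downsampled third image back to the third image's size;
    # the down/up round trip removes high frequencies, so this is blurrier.
    third_up = cv2.resize(third_down, (third.shape[1], third.shape[0]),
                          interpolation=cv2.INTER_LINEAR)
    # Subtracting the blurred version leaves the high-frequency detail.
    return third.astype(np.float32) - third_up.astype(np.float32)
```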
Optionally, after fusing the upsampled HDR image with the detail image of the third image to obtain the fused image, the terminal device may further perform spatial-domain noise reduction on the fused image to reduce the image noise further. For example, the terminal device may apply a non-local means denoising algorithm to the fused image, or may use any spatial-domain noise reduction method in the prior art, which is not described here again.
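As one possible realization, OpenCV ships a non-local means denoiser; the sketch below assumes a single-channel 8-bit image and illustrative filter parameters (for 3-channel input, cv2.fastNlMeansDenoisingColored is the corresponding call).

```python
import cv2

def spatial_denoise(fused):
    # Non-local means denoising of a single-channel 8-bit image; h controls
    # the filter strength, and the window sizes are common defaults.
    return cv2.fastNlMeansDenoising(fused, None, h=10,
                                    templateWindowSize=7,
                                    searchWindowSize=21)
```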
With the image processing method provided by the present application, when the user takes a picture under low illumination, the terminal device performs time-domain noise reduction on the multiple frames of the first image and of the second image output alternately and continuously by the camera sensor, obtaining a third image that mainly provides the detail information of the current shooting scene and a fourth image that mainly provides the brightness information of the shooting scene. The third image and the fourth image can then be used for image fusion, so that both the brightness and the sharpness of the fused image obtained by the terminal device are improved. As a result, when the terminal device presents the fused image to the user, the user can view an image of higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thus the user experience.
FIG. 7 is a schematic flowchart of still another image processing method provided by the present application. As shown in FIG. 7, before the foregoing S302, the method may further include:

S401. Using the downsampled third image as a reference, perform image registration on the fourth image to obtain the image-registered fourth image.

Specifically, before exposure-fusing the downsampled third image and the fourth image, the terminal device may first perform image registration on the fourth image with the downsampled third image as the reference, so that the same features in the downsampled third image and the fourth image are aligned. In this way, when the terminal device subsequently fuses the downsampled third image and the fourth image, the same features can be fused together accurately, which improves the fusion result.
In specific implementations, the terminal device may perform image registration on the fourth image using a Speeded Up Robust Features (SURF) registration method. Of course, the terminal device may also perform image registration on the fourth image using any image registration method in the prior art, which is not described here again.
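A feature-based registration sketch is shown below. SURF itself ships in opencv-contrib (cv2.xfeatures2d.SURF_create); to keep the sketch runnable with stock OpenCV, ORB is used here as a stand-in feature detector, which is a substitution for illustration rather than the method named above. Single-channel inputs are assumed.

```python
import cv2
import numpy as np

def register_to_reference(ref, moving):
    # Detect keypoints and compute descriptors in both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    kp_mov, des_mov = orb.detectAndCompute(moving, None)
    # Match descriptors (moving -> reference) and keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_mov, des_ref),
                     key=lambda m: m.distance)[:200]
    src = np.float32([kp_mov[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Robustly estimate a homography and warp the moving image onto the reference.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = ref.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```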
S402. Perform ghost correction on the image-registered fourth image according to the downsampled third image to obtain the corrected fourth image.

Specifically, after performing image registration on the fourth image with the downsampled third image as the reference to obtain the image-registered fourth image, the terminal device may further perform ghost correction on the image-registered fourth image according to the downsampled third image to obtain the corrected fourth image. The ghosts referred to here are the double images formed in the fused image, when the downsampled third image and the image-registered fourth image are exposure-fused, by objects that move between the downsampled third image and the image-registered fourth image. In this implementation, the terminal device may exposure-fuse the downsampled third image with the corrected fourth image to obtain the HDR image, so that the edges of each object in the resulting HDR image are clear and no ghosting occurs, which further improves the sharpness of the HDR image.
This embodiment does not limit the manner in which the terminal device performs ghost correction on the image-registered fourth image according to the downsampled third image. Optionally, the terminal device may first reduce the brightness of the image-registered fourth image to the brightness of the downsampled third image to obtain a brightness-reduced fourth image. The terminal device then performs an image difference calculation on the downsampled third image and the brightness-reduced fourth image to obtain the absolute difference corresponding to each pixel of the brightness-reduced fourth image. If the absolute difference corresponding to a certain pixel is greater than a preset threshold, the terminal device can locate that pixel in the image-registered fourth image; that location is one ghost of the image-registered fourth image. In this way, all ghosts of the image-registered fourth image can be found. After obtaining all the ghosts, the terminal device may raise the brightness of the downsampled third image according to the brightness of the image-registered fourth image to obtain a brightness-raised third image. The terminal device can then replace the ghosts of the image-registered fourth image with the corresponding pixels of the brightness-raised third image to obtain the corrected fourth image. Since the ghosts are corrected using pixels from a third image whose brightness matches that of the image-registered fourth image, the corrected fourth image still retains its original brightness.
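Under stated assumptions, this rule can be sketched as follows: single-channel inputs, a simple mean-ratio model for the brightness matching (the description does not fix a particular brightness-matching method), and a hypothetical threshold value.

```python
import numpy as np

def correct_ghosts(third_down, fourth_reg, threshold=25.0):
    t = third_down.astype(np.float32)
    f = fourth_reg.astype(np.float32)
    # Lower the registered fourth image to the third image's brightness.
    f_dimmed = f * (t.mean() / max(f.mean(), 1e-6))
    # Pixels whose absolute difference exceeds the threshold count as ghosts.
    ghost_mask = np.abs(t - f_dimmed) > threshold
    # Raise the third image to the fourth image's brightness.
    t_bright = np.clip(t * (f.mean() / max(t.mean(), 1e-6)), 0, 255)
    # Replace ghost pixels with brightness-raised third-image pixels, so the
    # corrected fourth image keeps its original brightness elsewhere.
    corrected = f.copy()
    corrected[ghost_mask] = t_bright[ghost_mask]
    return np.clip(corrected, 0, 255).astype(np.uint8)
```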
With the image processing method provided by the present application, when the user takes a picture under low illumination, the terminal device performs time-domain noise reduction on the multiple frames of the first image and of the second image output alternately and continuously by the camera sensor, obtaining a third image that mainly provides the detail information of the current shooting scene and a fourth image that mainly provides the brightness information of the shooting scene. Before fusing the downsampled third image with the fourth image, the terminal device performs image registration and ghost correction on the fourth image with the downsampled third image as the reference. As a result, when the terminal device fuses the registered and ghost-corrected fourth image with the downsampled third image, the image fusion works better, and the sharpness of the fused image obtained by the terminal device is further improved.
The image processing method provided by the present application is described below with two examples.

Example 1: FIG. 8 is a schematic flowchart of still another image processing method provided by the present application. In this example, the terminal device acquires multiple frames of the first image and multiple frames of the second image output alternately and continuously by the camera sensor. As shown in FIG. 8, the method may include:
S501. Determine the photographing parameters of the camera sensor according to the preview image output by the camera sensor.

S502. Instruct the camera sensor, according to the photographing parameters, to alternately and continuously output multiple frames of the first image and multiple frames of the second image.

S503. Acquire the multiple frames of the first image and the multiple frames of the second image output alternately and continuously by the camera sensor.

S504. Convert the multiple frames of the first image from the Bayer format to the YUV format to obtain multiple frames of the format-converted first image.

Specifically, when an existing terminal device performs the photographing function, the captured image is mostly presented to the user in the JPEG format. In the prior art, limited by the chip technology of the terminal device (for example, bandwidth and processing-speed limitations), the terminal device cannot quickly convert an image from the Bayer format directly into the JPEG format that can be presented to the user, which cannot satisfy the smoothness requirement of the photographing process. Therefore, existing terminal devices need to first convert the image format from the Bayer format to the YUV format, and then convert the YUV format to the JPEG format.

Correspondingly, in this embodiment, since the multiple frames of the first image and of the second image output by the camera sensor are all in the Bayer format, the image format needs to be converted from the Bayer format to the YUV format in the course of the image processing method. This operation may be performed after the images are fused, or before the images are fused. If performed after fusion, the operation is mostly executed by a software module of the terminal device; if performed before fusion, it is mostly executed by the ISP of the terminal device. Since the ISP executes faster than the terminal device's software module, the latter approach can improve the photographing efficiency of the terminal device. In specific implementations, the ISP of the terminal device can convert the multiple frames of the first image from the Bayer format to the YUV format by performing a demosaicing operation on them, obtaining the multiple frames of the format-converted first image.

It can be understood by those skilled in the art that, if future chip technology supports converting images from the Bayer format directly into the JPEG format in real time, the Bayer-to-YUV conversion may be omitted when performing the embodiments shown in the present application.
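For illustration, the demosaic-plus-colour-space step can be expressed with OpenCV as below; in the method above this conversion runs on the ISP, and the BGGR Bayer pattern chosen here is an assumption.

```python
import cv2

def bayer_to_yuv(raw):
    # Demosaic the raw Bayer mosaic (pattern assumed BGGR) to BGR, then
    # convert the result to YUV.
    bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
```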
S505. Convert the multiple frames of the second image from the Bayer format to the YUV format to obtain multiple frames of the format-converted second image.

It should be noted that steps S504 and S505 may be performed in either order.
S506. Perform time-domain noise reduction on the multiple frames of the format-converted first image to obtain the third image.

Specifically, the terminal device can reduce the noise of the first image by performing time-domain noise reduction on the multiple frames of the first image output by the camera sensor, that is, by averaging co-located pixels across different frames in the time domain, so that the resulting third image has less noise. In specific implementations, the terminal device may apply an existing time-domain noise reduction method to the multiple frames of the format-converted first image, for example, sequentially performing global image registration, local ghost detection, and time-domain fusion on them to obtain the third image, which is not described here again.
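A minimal stand-in for the time-domain fusion step is sketched below, assuming the frames are already globally registered and ghost-checked as described: averaging co-located pixels across frames suppresses zero-mean sensor noise.

```python
import numpy as np

def temporal_denoise(frames):
    # frames: list of aligned 8-bit images with identical shapes.
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    # Per-pixel mean over the frame axis; noise variance drops roughly as 1/N.
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)
```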
S507. Perform time-domain noise reduction on the multiple frames of the format-converted second image to obtain the fourth image.

It should be noted that steps S506 and S507 may be performed in either order.

S508. Downsample the third image according to the size of the fourth image to obtain the downsampled third image.
S509. Using the downsampled third image as a reference, perform image registration on the fourth image to obtain the image-registered fourth image.

S510. Perform ghost correction on the image-registered fourth image according to the downsampled third image to obtain the corrected fourth image.

S511. Perform exposure fusion on the downsampled third image and the corrected fourth image to obtain the HDR image.

S512. Upsample the HDR image according to the size of the third image to obtain the upsampled HDR image.

S513. Acquire the detail image of the third image according to the third image.

S514. Fuse the upsampled HDR image with the detail image of the third image to obtain the fused image.

S515. Perform spatial-domain noise reduction on the fused image to obtain the spatially denoised image.

At this point, the entire image processing process is completed.
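Tying together the hypothetical helpers from the earlier sketches, steps S508-S515 could be orchestrated as below for single-channel (luma) inputs; this is a sketch of the data flow under the assumptions stated earlier, not the ISP implementation.

```python
def low_light_pipeline(third, fourth, iso):
    third_down = downsample_to(third, fourth)               # S508
    fourth_reg = register_to_reference(third_down, fourth)  # S509
    fourth_cor = correct_ghosts(third_down, fourth_reg)     # S510
    hdr = exposure_fuse(third_down, fourth_cor)             # S511
    hdr_up = upsample_to(hdr, third)                        # S512
    detail = detail_image(third, third_down)                # S513
    fused = fuse_detail(hdr_up, detail, iso)                # S514
    return spatial_denoise(fused)                           # S515
```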
In this way, when the user takes a picture under low illumination, the terminal device obtains an image with higher brightness, higher sharpness, and less noise by performing the above image processing, so that when the terminal device presents this image to the user, the user can view an image of higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thus the user experience.
FIG. 9 is a schematic diagram of a first image shown in the present application, FIG. 10 is a schematic diagram of a second image shown in the present application, and FIG. 11 is a schematic diagram of a spatially denoised image shown in the present application.
FIG. 11 shows the image obtained through the above steps S501-S515. Comparing FIG. 11 with FIG. 9 and FIG. 10 shows that both the overall brightness and the sharpness of the spatially denoised image in FIG. 11 are greatly improved, and no overexposed and/or overly dark areas appear anywhere in the image. Therefore, when the terminal device presents this image of higher brightness, higher sharpness, and less noise to the user, the user can view an image of higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thus the user experience. It can be understood by those skilled in the art that FIG. 9 to FIG. 11 are only used to illustrate the improvement in image sharpness and brightness achieved by the image processing method provided by the present application, and do not limit the color or content of the processed images.
Optionally, to further improve the brightness of the first image and the second image, the terminal device may, while instructing the camera sensor according to the photographing parameters to alternately and continuously output the at least one frame of the first image and the at least one frame of the second image, simultaneously turn on the fill light of the terminal device, so as to further raise the brightness of the first image and the second image through supplementary lighting. In this way, when the subsequent image processing is performed based on the first image and the second image, the brightness of both the foreground and the background in the resulting denoised image is improved without local overexposure or over-darkness, which further improves the photographing effect of the terminal device under low illumination.
Optionally, when the noise of the images output by the camera sensor is small, the camera sensor may be instructed to output only one frame of the first image and one frame of the second image, so that the time-domain noise reduction process in the above example can be omitted. The following therefore describes the image processing method by taking as an example the case in which the terminal device acquires one frame of the first image and one frame of the second image output alternately and continuously by the camera sensor.

Example 2: FIG. 12 is a schematic flowchart of still another image processing method provided by the present application. In this example, the terminal device acquires one frame of the first image and one frame of the second image output alternately and continuously by the camera sensor. As shown in FIG. 12, the method may include:

S601. Determine the photographing parameters of the camera sensor according to the preview image output by the camera sensor.

S602. Instruct the camera sensor, according to the photographing parameters, to alternately and continuously output one frame of the first image and one frame of the second image.
S603. Acquire the one frame of the first image and the one frame of the second image output alternately and continuously by the camera sensor.

S604. Convert the first image from the Bayer format to the YUV format to obtain the format-converted first image.

S605. Convert the second image from the Bayer format to the YUV format to obtain the format-converted second image.

It should be noted that steps S604 and S605 may be performed in either order.

S606. Downsample the format-converted first image according to the size of the format-converted second image to obtain the downsampled first image.
S607. Using the downsampled first image as a reference, perform image registration on the format-converted second image to obtain the image-registered second image.

S608. Perform ghost correction on the image-registered second image according to the downsampled first image to obtain the corrected second image.

S609. Perform exposure fusion on the downsampled first image and the corrected second image to obtain the HDR image.

S610. Upsample the HDR image according to the size of the first image to obtain the upsampled HDR image.

S611. Fuse the upsampled HDR image with the detail image of the first image to obtain the fused image.
With the image processing method provided by the present application, when the user takes a picture under low illumination, the terminal device can acquire at least one frame of the first image and at least one frame of the second image output alternately and continuously by the camera sensor, where the first image mainly provides the detail information of the current shooting scene and the second image mainly provides the brightness information of the current shooting scene. The terminal device can then perform image fusion according to the at least one frame of the first image and the at least one frame of the second image, so that both the brightness and the sharpness of the fused image obtained by the terminal device are improved. When the terminal device presents the fused image to the user, the user can view an image of higher sharpness and brightness, which improves the photographing effect of the terminal device under low illumination and thus the user experience.
It should be noted that the image processing method provided by the present application is applicable not only to application scenes in which the terminal device shoots with the front camera sensor, but also to application scenes in which the terminal device uses the rear camera sensor. Optionally, the method of the present application is also applicable to application scenes in which the terminal device uses dual camera sensors. In one implementation, the terminal device may process the image output by each camera sensor through the steps of S301-S315, and then further fuse the two spatially denoised images obtained in this way using an existing fusion mode, to obtain an image with high sharpness and brightness. Alternatively, the terminal device may process only the image output by one of the dual camera sensors using the steps of S301-S315, and use the other camera sensor to apply special effects (for example, blurring) to the image, which is not described here again.
FIG. 13 is a schematic structural diagram of a terminal device provided by the present application. As shown in FIG. 13, the foregoing terminal device may include:
an obtaining module 11, configured to obtain at least one frame of the first image and at least one frame of the second image output alternately and continuously by the camera sensor, where the resolution of the first image is the same as the resolution corresponding to the current photographing mode, the resolution of the first image is N times the resolution of the second image, and N is an integer greater than 1; the camera sensor outputs each frame of the first image using a first exposure parameter, the camera sensor outputs each frame of the second image using a second exposure parameter, and the first exposure parameter is greater than the second exposure parameter; for example, the first image may be a full-size image; and

a fusion module 12, configured to perform image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
The terminal device provided by the present application may perform the foregoing method embodiments; the implementation principles and technical effects are similar and are not described here again.

FIG. 14 is a schematic structural diagram of another terminal device provided by the present application. As shown in FIG. 14, on the basis of the block diagram shown in FIG. 13, the terminal device further includes:
a determining module 13, configured to determine, before the obtaining module 11 obtains the at least one frame of the first image and the at least one frame of the second image output alternately and continuously by the camera sensor, the photographing parameters of the camera sensor according to the preview image output by the camera sensor, where the photographing parameters include: the size of the first image, the number of frames of the first image, the number of frames of the second image, the exposure parameter of the first image, the exposure parameter of the second image, and the alternating order of the first image and the second image; and

an indicating module 14, configured to instruct the camera sensor, according to the photographing parameters, to alternately and continuously output the at least one frame of the first image and the at least one frame of the second image.
Optionally, when the at least one frame of the first image includes one frame of the first image and the at least one frame of the second image includes one frame of the second image, the fusion module 12 may be specifically configured to perform image fusion on the first image and the second image to obtain the fused image.
FIG. 15 is a schematic structural diagram of still another terminal device provided by the present application. As shown in FIG. 15, on the basis of the block diagram shown in FIG. 13, the terminal device further includes:
a first format conversion module 15, configured to, before the fusion module 12 performs image fusion on the first image and the second image to obtain the fused image, convert the first image from the Bayer format to the YUV format to obtain the format-converted first image, and convert the second image from the Bayer format to the YUV format to obtain the format-converted second image;

where the fusion module 12 is specifically configured to perform image fusion on the format-converted first image and the format-converted second image to obtain the fused image.
FIG. 16 is a schematic structural diagram of still another terminal device provided by the present application. As shown in FIG. 16, on the basis of the block diagram shown in FIG. 13, the terminal device further includes:

a second format conversion module 16, configured to, before the fusion module 12 performs time-domain noise reduction on the multiple frames of the first image to obtain the third image and performs time-domain noise reduction on the multiple frames of the second image to obtain the fourth image, convert the multiple frames of the first image from the Bayer format to the YUV format to obtain the multiple frames of the format-converted first image, and convert the multiple frames of the second image from the Bayer format to the YUV format to obtain the multiple frames of the format-converted second image;

where the fusion module 12 is specifically configured to perform time-domain noise reduction on the multiple frames of the format-converted first image to obtain the third image, and perform time-domain noise reduction on the multiple frames of the format-converted second image to obtain the fourth image.
FIG. 17 is a schematic structural diagram of still another terminal device provided by the present application. As shown in FIG. 17, when the at least one frame of the first image includes multiple frames of the first image and the at least one frame of the second image includes multiple frames of the second image, the fusion module 12 may be specifically configured to: perform time-domain noise reduction on the multiple frames of the first image to obtain the third image, and perform time-domain noise reduction on the multiple frames of the second image to obtain the fourth image; and perform image fusion on the third image and the fourth image to obtain the fused image.
Optionally, the fusion module 12 may include:

a downsampling unit 121, configured to downsample the third image according to the size of the fourth image to obtain the downsampled third image, where the downsampled third image has the same size as the fourth image;

an exposure fusion unit 122, configured to perform exposure fusion on the downsampled third image and the fourth image to obtain the high dynamic range HDR image;

an upsampling unit 123, configured to upsample the HDR image according to the size of the third image to obtain the upsampled HDR image; and

a fusion unit 124, configured to fuse the upsampled HDR image with the detail image of the third image to obtain the fused image, where the detail image of the third image includes the high-frequency component of the third image.
Optionally, the fusion unit 124 may be specifically configured to: determine the sensitivity ISO of the camera sensor; determine the gain coefficient according to the ISO of the camera sensor; multiply the pixel value of each pixel of the detail image of the third image by the gain coefficient to obtain the processed detail image; and perform image addition on the processed detail image and the upsampled HDR image to obtain the fused image.
Optionally, the fusion module 12 may further include:

an obtaining unit 125, configured to, before the fusion unit 124 fuses the upsampled HDR image with the detail image of the third image to obtain the fused image, acquire the detail image of the third image according to the third image.

For example, the obtaining unit 125 may be specifically configured to upsample the downsampled third image according to the size of the third image to obtain the upsampled third image, and to perform image subtraction on the upsampled third image and the third image to obtain the detail image of the third image.
Optionally, the fusion module 12 may further include:

an image registration unit 126, configured to, before the exposure fusion unit 122 performs exposure fusion on the downsampled third image and the fourth image to obtain the high dynamic range HDR image, perform image registration on the fourth image using the downsampled third image as a reference, to obtain the image-registered fourth image; and

a ghost correction unit 127, configured to perform ghost correction on the image-registered fourth image according to the downsampled third image to obtain the corrected fourth image.

For example, the ghost correction unit 127 may be specifically configured to: reduce the brightness of the image-registered fourth image to the brightness of the downsampled third image to obtain the brightness-reduced fourth image; perform an image difference calculation on the downsampled third image and the brightness-reduced fourth image to obtain the absolute difference corresponding to each pixel of the brightness-reduced fourth image; take the pixels whose absolute difference is greater than the preset threshold as the ghosts of the image-registered fourth image; raise the brightness of the downsampled third image according to the brightness of the image-registered fourth image to obtain the brightness-raised third image; and replace the ghosts of the image-registered fourth image with the pixels of the brightness-raised third image to obtain the corrected fourth image.
In this case, the exposure fusion unit 122 is specifically configured to perform exposure fusion on the downsampled third image and the corrected fourth image to obtain the HDR image.

Optionally, the terminal device may further include:
a spatial-domain noise reduction module 17, configured to, after the fusion unit 124 fuses the upsampled HDR image with the detail image of the third image to obtain the fused image, perform spatial-domain noise reduction on the fused image to obtain the spatially denoised image.
The terminal device provided by the present application may perform the foregoing method embodiments; the implementation principles and technical effects are similar and are not described here again.

FIG. 18 is a schematic structural diagram of still another terminal device provided by the present application.
As shown in FIG. 18, the terminal device may include a processor 21 (for example, a CPU) and a memory 22. The memory 22 may include a high-speed RAM and may also include a non-volatile memory (NVM), for example at least one magnetic disk storage, and the memory 22 may store various instructions for performing various processing functions and implementing the method steps of the present application.
Optionally, the terminal device involved in the present application may further include: a receiver 23, a transmitter 24, a power supply 25, a communication bus 26, and a communication port 27. The receiver 23 and the transmitter 24 may be integrated in the transceiver of the terminal device, or may be independent transceiving antennas on the terminal device. The communication bus 26 is used to implement communication connections between the components. The communication port 27 is used to implement connection and communication between the terminal device and other peripheral devices.
In the present application, the memory 22 is configured to store computer-executable program code, the program code including instructions. When the processor 21 executes the instructions, the instructions cause the terminal device to perform the foregoing method embodiments; the implementation principles and technical effects are similar and are not described here again.
FIG. 19 is a structural block diagram of the terminal device provided by the present application when the terminal device is a mobile phone. As shown in FIG. 19, the mobile phone may include: a radio frequency (RF) circuit 1110, a memory 1120, an input unit 1130, a display unit 1140, a sensor 1150, an audio circuit 1160, a wireless fidelity (WiFi) module 1170, a processor 1180, a power supply 1190, and other components. It will be understood by those skilled in the art that the structure of the handset shown in FIG. 19 does not constitute a limitation on the handset, which may include more or fewer components than illustrated, combine some components, or arrange the components differently.
The RF circuit 1110 can be used for receiving and transmitting signals during the transmission or reception of information or during a call; in particular, after receiving downlink information from a base station, it passes the information to the processor 1180 for processing, and it sends uplink data to the base station. Generally, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1110 can also communicate with the network and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1120 can be used to store software programs and modules, and the processor 1180 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system and the applications required by at least one function (such as a sound playing function, an image playing function, and the like), and the data storage area can store data created according to the use of the mobile phone (such as audio data, a phone book, and the like). In addition, the memory 1120 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The input unit 1130 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132.
The touch panel 1131, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 1131 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1180, and can receive commands from the processor 1180 and execute them.
In addition, the touch panel 1131 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1131, the input unit 1130 may also include other input devices 1132. Specifically, the other input devices 1132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons and switch buttons), a trackball, a mouse, a joystick, and the like.
The display unit 1140 can be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The display unit 1140 may include a display panel 1141. Optionally, the display panel 1141 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
Further, the touch panel 1131 can be overlaid on the display panel 1141; when the touch panel 1131 detects a touch operation on or near it, it transmits the operation to the processor 1180 to determine the type of the touch event, and the processor 1180 then provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in FIG. 19 the touch panel 1131 and the display panel 1141 are shown as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1131 and the display panel 1141 may be integrated to implement the input and output functions of the phone.
The handset may also include at least one type of sensor 1150, such as a light sensor, a motion sensor, and other sensors.
Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1141 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1141 and/or the backlight when the mobile phone moves to the ear. As one kind of motion sensor, the acceleration sensor can detect the magnitude of acceleration in each direction (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile phone (such as horizontal/vertical screen switching and related games).
The mobile phone can also be configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors, which are not described here again.

The audio circuit 1160, a speaker 1161, and a microphone 1162 can provide an audio interface between the user and the handset.
On one hand, the audio circuit 1160 can transmit the electrical signal converted from the received audio data to the speaker 1161, which converts it into a sound signal for output; on the other hand, the microphone 1162 converts the collected sound signal into an electrical signal, which the audio circuit 1160 receives and converts into audio data; the audio data is then output to the processor 1180 for processing, after which it is transmitted to another mobile phone via the RF circuit 1110, or output to the memory 1120 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1170, the mobile phone can help the user send and receive e-mails, browse web pages, and access streaming media, providing the user with wireless broadband Internet access. Although FIG. 19 shows the WiFi module 1170, it can be understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the present application.
The processor 1180 is the control center of the handset; it connects the various parts of the entire handset using various interfaces and lines, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 1120 and invoking the data stored in the memory 1120, thereby monitoring the phone as a whole.
Optionally, the processor 1180 may include one or more processing units; for example, the processor 1180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1180.
The handset also includes a power supply 1190 (such as a battery) that powers the various components. Preferably, the power supply can be logically coupled to the processor 1180 via a power management system, so as to manage charging, discharging, and power consumption through the power management system.
The mobile phone can also include a camera 1200, which can be a front camera or a rear camera. The mobile phone may further include a Bluetooth module, a GPS module, and the like, which are not described here again.

In the present application, the processor 1180 included in the mobile phone may be used to perform the foregoing image processing method embodiments; the implementation principles and technical effects are similar and are not described here again.
The foregoing embodiments may be implemented, in whole or in part, in the form of a computer program product. A computer program product includes one or more computer instructions. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (for example, a floppy disk, a hard disk, or magnetic tape), optical media (for example, a DVD), or semiconductor media (for example, a solid state disk (SSD)), and the like.

Abstract

本申请提供一种图像处理方法和终端设备,该方法包括:获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像;其中,第一图像的分辨率与当前拍照模式所对应的分辨率相同,第一图像的分辨率为第二图像的分辨率的N倍,N为大于1的整数;摄像头传感器采用第一曝光参数输出每帧第一图像,摄像头传感器采用第二曝光参数输出每帧第二图像,第一曝光参数大于第二曝光参数;根据至少一帧第一图像和至少一帧第二图像,进行图像融合,得到融合后的图像。本申请提供的图像处理方法和终端设备,能够提升终端设备在低照度下的拍照效果,提高了用户体验。

Description

图像处理方法和终端设备 技术领域
本申请涉及通信技术,尤其涉及一种图像处理方法和终端设备。
背景技术
随着用户需求的不断提升,终端设备所集成的功能也越来越多。目前,市面上大多数的终端设备可以为用户提供如下功能:拨打电话、发送短信、上网、拍照等。
终端设备可以通过集成在终端设备上的摄像头传感器来实现拍照功能。现有技术中,为了不影响终端设备的体积,通常集成在终端设备上的摄像头传感器较小,使得摄像头传感器的感光面积有限、像素尺寸较小,进而使得摄像头传感器在低照度下的进光量不足。
因此,当用户使用终端设备拍摄光线较暗的场景时,终端设备所拍摄的图像的效果较差(例如:图像的噪声较大、亮度较低等),使得用户体验较低。
发明内容
本申请提供一种图像处理方法和终端设备,用于解决现有技术中用户使用终端设备拍摄光线较暗的场景时,终端设备所拍摄的图像的效果较差,使得用户体验较低的技术问题。
第一方面,本申请提供一种图像处理方法,方法包括:获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像;其中,第一图像的分辨率与当前拍照模式所对应的分辨率相同,第一图像的分辨率为第二图像的分辨率的N倍,N为大于1的整数;摄像头传感器采用第一曝光参数输出每帧第一图像,摄像头传感器采用第二曝光参数输出每帧第二图像,第一曝光参数大于第二曝光参数;根据至少一帧第一图像和至少一帧第二图像,进行图像融合,得到融合后的图像。
通过第一方面提供的图像处理方法,用户在使用终端设备在低照度下拍照时,终端设备可以获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像,其中,第一图像主要用于提供当前拍摄场景的细节信息,第二图像主要用于提供当前拍摄场景的亮度信息,从而使得终端设备可以根据该至少一帧第一图像和至少一帧第二图像,进行图像融合处理,使得终端设备所得到的融合后的图像的亮度和清晰度均得到了提升,从而使得终端设备在将该融合后的图像呈现给用户时,可以使用户观看到清晰度和亮度较高的图像,提升了终端设备在低照度下的拍照效果,进而提高了用户体验。
可选的,获取摄像头传感器交替输出的至少一帧第一图像和至少一帧第二图像之前,方法还包括:
根据摄像头传感器输出的预览图像,确定摄像头传感器的拍照参数;拍照参数包括:第一图像的尺寸、第一图像的帧数、第二图像的帧数、第一图像的曝光参数、第二图像的曝光参数、第一图像和第二图像的交替顺序;
根据拍照参数,指示摄像头传感器交替并连续输出至少一帧第一图像和至少一帧第二 图像。
通过该可能的实施方式提供的图像处理方法,终端设备通过指示摄像头传感器交替并连续输出至少一帧第一图像和至少一帧第一图像,可以减少第一图像之间的相对局部运动,以及,减少多帧第二图像之间的相对局部运动。同时,通过指示摄像头传感器连续输出的方式,能够减少拍照时间,提升拍照速度,进而提高用户体验。
可选的,至少一帧第一图像包括:一帧第一图像,至少一帧第二图像包括:一帧第二图像;则根据至少一帧第一图像和至少一帧第二图像进行图像融合,得到融合后的图像,包括:对第一图像和第二图像进行图像融合,得到融合后的图像。
可选的,对第一图像和第二图像进行图像融合,得到融合后的图像之前,方法还包括:将第一图像从拜耳Bayer格式转换为YUV格式,得到转换格式后的第一图像,并将第二图像从拜耳Bayer格式转换为YUV格式,得到转换格式后的第二图像;对第一图像和第二图像进行图像融合,得到融合后的图像,包括:对转换格式后的第一图像和转换格式后的第二图像进行图像融合,得到融合后的图像。
可选的,至少一帧第一图像包括:多帧第一图像,至少一帧第二图像包括:多帧第二图像;则根据至少一帧第一图像和至少一帧第二图像进行图像融合,得到融合后的图像,包括:对多帧第一图像进行时域降噪,得到第三图像,并对多帧第二图像进行时域降噪,得到第四图像;对第三图像和第四图像进行图像融合,得到融合后的图像。
可选的,对多帧第一图像进行时域降噪,得到第三图像,并对多帧第二图像进行时域降噪,得到第四图像之前,还包括:将多帧第一图像从拜耳Bayer格式转换为YUV格式,得到多帧转换格式后的第一图像,并将多帧第二图像从Bayer格式转换为YUV格式,得到多帧转换格式后的第二图像;对多帧第一图像进行时域降噪,得到第三图像,并对多帧第二图像进行时域降噪,得到第四图像,包括:对多帧转换格式后的第一图像进行时域降噪,得到第三图像,并对多帧转换格式后第二图像进行时域降噪,得到第四图像。
可选的,对第三图像和第四图像进行图像融合,得到融合后的图像,包括:根据第四图像的尺寸,对第三图像进行下采样,得到下采样后的第三图像;下采样后的第三图像的尺寸与第四图像的尺寸相同;将下采样后的第三图像和第四图像进行曝光融合,得到高动态范围HDR图像;根据第三图像的尺寸,将HDR图像进行上采样,得到上采样后的HDR图像;将上采样后的HDR图像与第三图像的细节图像融合,得到融合后的图像;其中,第三图像的细节图像包括第三图像的高频分量。
通过该可能的实施方式提供的图像处理方法,用户在使用终端设备在低照度下拍照时,终端设备通过对所获取的摄像头传感器交替并连续输出的多帧第一图像和第二图像进行时域降噪,得到主要用于提供当前拍摄场景的细节信息的第三图像,以及,主要用于提供拍摄场景的亮度信息的第四图像之后,可以使用该第三图像和第四图像进行图像融合处理,使得终端设备所得到的融合后的图像的亮度和清晰度均得到了提升,从而使得终端设备在将该融合后的图像呈现给用户时,可以使用户观看到清晰度和亮度较高的图像,提升了终端设备在低照度下的拍照效果,进而提高了用户体验。
可选的,将上采样后的HDR图像与第三图像的细节图像融合,得到融合后的图像之前,方法还包括:根据第三图像,获取第三图像的细节图像。
通过该可能的实施方式提供的图像处理方法,终端设备可以获取到“包括第三图像的 高频分量”的第三图像的细节图像,从而使得终端设备在将上采样后的HDR图像与该第三图像的细节图像进行融合后,可以将拍摄场景整体的细节信息回填至上采样后的HDR图像中,提高了上采样后的HDR图像的清晰度。
示例性的,根据第三图像,获取第三图像的细节图像,包括:根据第三图像的尺寸,对下采样后的第三图像进行上采样,得到上采样后的第三图像;对上采样后的第三图像与第三图像进行图像相减计算,得到第三图像的细节图像。
可选的,将下采样后的第三图像和第四图像进行曝光融合,得到高动态范围HDR图像之前,方法还包括:以下采样后的第三图像为参考,对第四图像进行图像配准,得到图像配准后的第四图像;根据下采样后的第三图像,对图像配准后的第四图像进行鬼影矫正,得到矫正后的第四图像;将下采样后的第三图像和第四图像进行曝光融合,得到高动态范围HDR图像,包括:将下采样后的第三图像与矫正后的第四图像进行曝光融合,得到HDR图像。
通过该可能的实施方式提供的图像处理方法,用户在使用终端设备在低照度下拍照时,终端设备通过对所获取的摄像头传感器交替并连续输出的多帧第一图像和第二图像进行时域降噪,得到主要用于提供当前拍摄场景的细节信息的第三图像,以及,主要用于提供拍摄场景的亮度信息的第四图像之后,可以在对下采样后的第三图像和第四图像进行图像融合之前,先以下采样后的第三图像为基准,对第四图像进行图像配准和鬼影矫正,从而使得终端设备在使用进行图像配准和鬼影矫正后的第四图像与下采样后的第三图像进行图像融合时,图像融合的效果较好,进而使得终端设备所得到的融合后的图像的清晰度进一步得到了提升。
示例性的,根据下采样后的第三图像,对图像配准后的第四图像进行鬼影矫正,得到矫正后的第四图像,包括:将图像配准后的第四图像的亮度降低至下采样后的第三图像的亮度,得到降低亮度后的第四图像;对下采样后的第三图像和降低亮度后的第四图像进行图像求差计算,得到降低亮度后的第四图像的每个像素点对应的差异绝对值;将差异绝对值大于预设阈值的像素点作为图像配准后的第四图像的鬼影;根据图像配准后的第四图像的亮度,提升下采样后的第三图像的亮度,得到提升亮度后的第三图像;使用提升亮度后的第三图像中的像素点替换图像配准后的第四图像的鬼影,得到矫正后的第四图像。
示例性的,将上采样后的HDR图像与第三图像的细节图像融合,得到融合后的图像,包括:确定摄像头传感器的感光度ISO;根据摄像头传感器的ISO,确定增益系数;将第三图像的细节图像的每个像素点的像素值与增益系数相乘,得到处理后的细节图像;将处理后的细节图像与上采样后的HDR图像进行图像相加计算,得到融合后的图像。
可选的,将上采样后的HDR图像与第三图像的细节图像融合,得到融合后的图像之后,方法还包括:对融合后的图像进行空域降噪,得到空域降噪后的图像。
通过该可能的实施方式提供的图像处理方法,可以通过对融合后的图像进行空域降噪的方式,进一步降低图像的噪声。
示例性的,第一图像为全尺寸图像。
第二方面,本申请提供一种终端设备,终端设备包括:获取模块,用于获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像;其中,第一图像的分辨率与当前拍照模式所对应的分辨率相同,第一图像的分辨率为第二图像的分辨率的N倍,N 为大于1的整数;摄像头传感器采用第一曝光参数输出每帧第一图像,摄像头传感器采用第二曝光参数输出每帧第二图像,第一曝光参数大于第二曝光参数;融合模块,用于根据至少一帧第一图像和至少一帧第二图像,进行图像融合,得到融合后的图像。
可选的,终端设备,还包括:确定模块,用于在获取模块获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像之前,根据摄像头传感器输出的预览图像,确定摄像头传感器的拍照参数;拍照参数包括:第一图像的尺寸、第一图像的帧数、第二图像的帧数、第一图像的曝光参数、第二图像的曝光参数、第一图像和第二图像的交替顺序;指示模块,用于根据拍照参数,指示摄像头传感器交替并连续输出至少一帧第一图像和至少一帧第二图像。
可选的,至少一帧第一图像包括:一帧第一图像,至少一帧第二图像包括:一帧第二图像;融合模块,具体用于对第一图像和第二图像进行图像融合,得到融合后的图像。
可选的,终端设备,还包括:第一格式转换模块,用于在融合模块对第一图像和第二图像进行图像融合,得到融合后的图像之前,将第一图像从拜耳Bayer格式转换为YUV格式,得到转换格式后的第一图像,并将第二图像从拜耳Bayer格式转换为YUV格式,得到转换格式后的第二图像;融合模块,具体用于对转换格式后的第一图像和转换格式后的第二图像进行图像融合,得到融合后的图像。
可选的,至少一帧第一图像包括:多帧第一图像,至少一帧第二图像包括:多帧第二图像;融合模块,具体用于对多帧第一图像进行时域降噪,得到第三图像,并对多帧第二图像进行时域降噪,得到第四图像;对第三图像和第四图像进行图像融合,得到融合后的图像。
可选的,终端设备,还包括:第二格式转换模块,用于在融合模块对多帧第一图像进行时域降噪,得到第三图像,并对多帧第二图像进行时域降噪,得到第四图像之前,将多帧第一图像从拜耳Bayer格式转换为YUV格式,得到多帧转换格式后的第一图像,并将多帧第二图像从Bayer格式转换为YUV格式,得到多帧转换格式后的第二图像;融合模块,具体用于对多帧转换格式后的第一图像进行时域降噪,得到第三图像,并对多帧转换格式后第二图像进行时域降噪,得到第四图像。
可选的,融合模块,包括:下采样单元,用于根据第四图像的尺寸,对第三图像进行下采样,得到下采样后的第三图像;下采样后的第三图像的尺寸与第四图像的尺寸相同;曝光融合单元,用于将下采样后的第三图像和第四图像进行曝光融合,得到高动态范围HDR图像;上采样单元,用于根据第三图像的尺寸,将HDR图像进行上采样,得到上采样后的HDR图像;融合单元,用于将上采样后的HDR图像与第三图像的细节图像融合,得到融合后的图像;其中,第三图像的细节图像包括第三图像的高频分量。
可选的,融合模块,还包括:获取单元,用于在融合单元将上采样后的HDR图像与第三图像的细节图像融合,得到融合后的图像之前,根据第三图像,获取第三图像的细节图像。
示例性的,获取单元,具体用于根据第三图像的尺寸,对下采样后的第三图像进行上采样,得到上采样后的第三图像;对上采样后的第三图像与第三图像进行图像相减计算,得到第三图像的细节图像。
可选的,融合模块,还包括:图像配准单元,用于曝光融合单元将下采样后的第三图 像和第四图像进行曝光融合,得到高动态范围HDR图像之前,以下采样后的第三图像为参考,对第四图像进行图像配准,得到图像配准后的第四图像;鬼影矫正单元,用于根据下采样后的第三图像,对图像配准后的第四图像进行鬼影矫正,得到矫正后的第四图像;曝光融合单元,具体用于将下采样后的第三图像与矫正后的第四图像进行曝光融合,得到HDR图像。
示例性的,鬼影矫正单元,具体用于将图像配准后的第四图像的亮度降低至下采样后的第三图像的亮度,得到降低亮度后的第四图像;对下采样后的第三图像和降低亮度后的第四图像进行图像求差计算,得到降低亮度后的第四图像的每个像素点对应的差异绝对值;将差异绝对值大于预设阈值的像素点作为图像配准后的第四图像的鬼影;根据图像配准后的第四图像的亮度,提升下采样后的第三图像的亮度,得到提升亮度后的第三图像;使用提升亮度后的第三图像中的像素点替换图像配准后的第四图像的鬼影,得到矫正后的第四图像。
示例性的,融合单元,具体用于确定摄像头传感器的感光度ISO;根据摄像头传感器的ISO,确定增益系数;将第三图像的细节图像的每个像素点的像素值与增益系数相乘,得到处理后的细节图像;将处理后的细节图像与上采样后的HDR图像进行图像相加计算,得到融合后的图像。
可选的,终端设备还包括:
空域降噪模块,用于在融合单元将上采样后的HDR图像与第三图像的细节图像融合,得到融合后的图像之后,对融合后的图像进行空域降噪,得到空域降噪后的图像。
示例性的,第一图像为全尺寸图像。
上述第二方面以及第二方面的各可能的实施方式所提供的终端设备,其有益效果可以参见上述第一方面和第一方面的各可能的实施方式所带来的有益效果,在此不再赘述。
第三方面,本申请提供一种终端设备,终端设备包括:处理器、存储器;
其中,存储器用于存储计算机可执行程序代码,程序代码包括指令;当处理器执行指令时,指令使终端设备执行如第一方面和第一方面的各可能的实施方式任一项的图像处理方法。
上述第三方面所提供的终端设备,其有益效果可以参见上述第一方面和第一方面的各可能的实施方式所带来的有益效果,在此不再赘述。
本申请第四方面提供一种终端设备,包括用于执行以上第一方面的方法的至少一个处理元件(或芯片)。
本申请第五方面提供一种程序,该程序在被处理器执行时用于执行以上第一方面的方法。
本申请第六方面提供一种程序产品,例如计算机可读存储介质,包括第五方面的程序。
本申请第七方面提供一种计算机可读存储介质,计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述第一方面的方法。
本申请提供的图像处理方法和终端设备,用户在使用终端设备在低照度下拍照时,终端设备可以获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像, 其中,第一图像主要用于提供当前拍摄场景的细节信息,第二图像主要用于提供当前拍摄场景的亮度信息,从而使得终端设备可以根据该至少一帧第一图像和至少一帧第二图像,进行图像融合处理,使得终端设备所得到的融合后的图像的亮度和清晰度均得到了提升,从而使得终端设备在将该融合后的图像呈现给用户时,可以使用户观看到清晰度和亮度较高的图像,提升了终端设备在低照度下的拍照效果,进而提高了用户体验。
附图说明
图1为现有技术中的终端设备的示意图;
图2为现有技术中的摄像头传感器binning模式的工作原理示意图;
图3为本申请提供的一种图像处理方法的流程示意图;
图4为本申请提供的另一种图像处理方法的流程示意图;
图5为本申请提供的摄像头传感器的出图示意图;
图6为本申请提供的又一种图像处理方法的流程示意图;
图7为本申请提供的又一种图像处理方法的流程示意图;
图8为本申请提供的又一种图像处理方法的流程示意图;
图9为本申请示出的一种第一图像的示意图;
图10为本申请示出的一种第二图像的示意图;
图11为本申请示出的一种空域降噪后的图像的示意图;
图12为本申请提供的又一种图像处理方法的流程示意图;
图13为本申请提供的一种终端设备的结构示意图;
图14为本申请提供的另一种终端设备的结构示意图;
图15为本申请提供的又一种终端设备的结构示意图;
图16为本申请提供的又一种终端设备的结构示意图;
图17为本申请提供的又一种终端设备的结构示意图;
图18为本申请提供的又一种终端设备的结构示意图;
图19为申请提供的终端设备为手机时的结构框图。
具体实施方式
以下,对本申请中的部分用语进行解释说明,以便于本领域技术人员理解:
终端:可以是无线终端也可以是有线终端,无线终端可以是指向用户提供语音和/或其他业务数据连通性的设备,具有无线连接功能的手持式设备、或连接到无线调制解调器的其他处理设备。无线终端可以经无线接入网(Radio Access Network,RAN)与一个或多个核心网进行通信,无线终端可以是移动终端,如移动电话(或称为“蜂窝”电话)和具有移动终端的计算机,例如,可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动装置,它们与无线接入网交换语言和/或数据。例如,个人通信业务(Personal Communication Service,PCS)电话、无绳电话、会话发起协议(Session Initiation Protocol,SIP)话机、无线本地环路(Wireless Local Loop,WLL)站、个人数字助理(Personal Digital Assistant,PDA)等设备。无线终端也可以称为系统、订户单元 (Subscriber Unit)、订户站(Subscriber Station),移动站(Mobile Station)、移动台(Mobile)、远程站(Remote Station)、远程终端(Remote Terminal)、接入终端(Access Terminal)、用户终端(User Terminal)、用户代理(User Agent)、用户设备(User Device or User Equipment),在此不作限定。
本申请中,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
图1为现有技术中的终端设备的示意图。现有技术中,大多数终端设备通过集成在终端设备上的摄像头传感器来实现拍照功能。其中,这里所说的摄像头传感器可以为终端设备的前置摄像头传感器,还可以为终端设备的后置摄像头传感器。图1示出的是以终端设备为手机为例的示意图。
如图1所示,由于集成在终端设备上的摄像头传感器较小,使得摄像头传感器的感光面积有限、像素尺寸较小,所以摄像头传感器在低照度下的进光量会不足,导致摄像头传感器在低照度下输出的图像的效果较差(例如:图像的噪声较大、亮度较低等)。因此,当用户使用终端设备拍摄光线较暗的场景(例如:夜景)时,因终端设备的摄像头传感器所输出的图像的效果较差,从而使得终端设备在将该图像呈现给用户时,用户体验较低。
目前,存在如下几种解决方案,具体地:
第一种方案:终端设备通过终端设备上的补光灯提升拍摄场景的亮度,以增大摄像头传感器的进光量,进而提高摄像头传感器输出的图像的亮度。
具体的,大多数终端设备上都有设置有补光灯,例如:后置闪光灯、前置发光二极管(Light Emitting Diode,LED)灯等。因此,终端设备在摄像头传感器拍照时,可以使用补光灯对拍摄场景进行补光,提亮拍摄场景的亮度,从而提高摄像头传感器的进光量,进而提高摄像头传感器输出的图像的亮度。
然而,由于补光灯的补光范围有限,使得补光灯只能对近景补光,无法对远景补光,导致摄像头传感器输出的图像中的远景部分仍然较暗,使得终端设备呈现给用户的图像的效果仍然较差,用户体验较低。
第二种方案:终端设备通过将摄像头传感器工作在binning模式,提高摄像头传感器输出的图像的亮度。
具体的,图2为现有技术中的摄像头传感器binning模式的工作原理示意图。如图2所示,摄像头传感器的binning模式为:将摄像头传感器所拍摄的图像中多个相邻的、且相同的像元的像素合并,作为一个像素使用。即,将图像中多个相邻的绿色(Green,G)像元的像素合并,作为一个像素使用;将图像中多个相邻的红色(Red,R)像元的像素合并,作为一个像素使用;将图像中多个相邻的蓝色(Blue,B)像元的像素合并,作为一个像素使用。其中,这里所说的相邻的像元可以为水平方向上相邻的像元,也可以为竖直方向上相邻的像元,还可以既包括水平方向上相邻的像元,又包括竖直方向上相邻的像元。
图2中示出的是以水平方向上相邻的2个像元的像素、以及,竖直方向上相邻的2个像元的像素合并,作为一个像素使用的示意图。为了便于理解,在图2中将合并为同一个像素的4个像元采用相同的线条标识。以图2中左侧的图像为摄像头传感器拍摄的图像为 例,则当摄像头传感器工作在binning模式时,摄像头传感器可以将该图像中水平方向上相邻的2个像元的像素、以及,竖直方向上相邻的2个像元的像素合并,得到如图2中右侧所示的图像,进而输出图2中右侧所示的图像。其中,图2中右侧所示的图像可以称为binning图像。由于图2中将4个相同像元的像素进行了合并,因此,像素合并后所得到的binning图像的尺寸降至图2中左侧图像(即原图)的四分之一,同时,binning图像的分辨率也会降至图2中左侧图像(即原图)的四分之一。
通过这种合并像素的方式,可以提升图像的感光面积,提升暗处对光感应的灵敏度,进而可以改善摄像头传感器在低照度下输出的图像的亮度。然而,摄像头传感器工作在binning模式时,在通过提升图像的亮度的同时,会导致像素合并后所得到的图像的分辨率降低,使得图像的高频信息丢失(即图像的细节丢失),导致图像的清晰度降低。例如:图2所示的将4个相邻的相同的像元的像素和并的方式,会将图2中右侧图像的分辨率降至左侧图像的四分之一,使得摄像头传感器所输出的图像的清晰度降低,进而使得终端设备呈现给用户的图像的效果仍然较差,用户体验较低。
根据上述描述可知,终端设备在采用上述任一方案时,终端设备呈现给用户的图像的效果仍然较差,用户体验较低。因此,考虑到上述问题,本申请提供一种图像处理方法,用于解决现有技术中用户在使用终端设备拍摄光线较暗的场景时,终端设备呈现给用户的图像的效果较差的技术问题。下面以一些实施例对本申请的技术方案进行说明。下面这几个实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。
图3为本申请提供的一种图像处理方法的流程示意图。如图3所示,该方法可以包括:
S101、获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像。
具体的,在本申请中,当用户使用终端设备在低照度下拍照时,即,当终端设备处于在低照度下(即终端设备当前所拍摄的场景的光线较暗)拍照的状态时,终端设备可以获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像。也就是说,终端设备所获取的第一图像和第二图像为终端设备的同一摄像头传感器当前拍摄同一场景时输出的图像,即第一图像和第二图像包含当前同一拍摄场景。其中,上述所说的同一摄像头传感器可以为终端设备的前置摄像头传感器,也可以为终端设备的后置摄像头传感器。
其中,上述第一图像的分辨率,与,用户当前在终端设备上所选择的拍照模式对应的分辨率相同。上述第一图像的分辨率为第二图像的分辨率的N倍,N为大于1的整数。即第一图像的尺寸为第二图像的尺寸的N倍。也就是说,第一图像的尺寸即为与当前拍照模式对应的分辨率匹配的尺寸,也可以称为该分辨率下的全尺寸图像。而第二图像为相对于第一图像的binning图像,即第二图像为经过像素合并后得到的图像。因此,本申请中的第一图像的清晰度高于第二图像的清晰度,但是第一图像的亮度低于第二图像的亮度。所以第一图像主要用于提供当前拍摄场景的细节信息(即第一图像的高频分量),第二图像主要用于提供当前拍摄场景的亮度信息(即第二图像的低频分量)。
本实施例不限定上述摄像头传感器交替并连续输出至少一帧第一图像和至少一帧第二图像的方式,例如:摄像头传感器可以先输出一帧第一图像后输出一帧第二图像的方式, 交替并连续输出至少一帧第一图像和至少一帧第二图像,还可以先输出一帧第二图像后输出一帧第一图像的方式,交替输出至少一帧第一图像和至少一帧第二图像。需要说明的是,由于第二图像为相对于第一图像的binning图像,即第二图像为经过像素合并后得到的图像,因此,第二图像的亮度高于第一图像。为了避免上述摄像头传感器输出的至少一帧第二图像出现过曝的问题,上述摄像头传感器可以不同的曝光参数输出第一图像和第二图像,例如:采用第一曝光参数输出每帧第一图像,采用小于第一曝光参数的第二曝光参数输出每帧第二图像。其中,第一曝光参数和第二曝光参数的具体取值可以根据摄像头传感器当前的感光值(ISO)确定,对此不再赘述。
S102、根据至少一帧第一图像和至少一帧第二图像,进行图像融合,得到融合后的图像。
具体的,上述终端设备在获取到至少一帧第一图像和至少一帧第二图像之后,可以对该至少一帧第一图像和至少一帧第二图像进行图像融合。即,根据清晰度较高的至少一帧第一图像与亮度较高的至少一帧第二图像进行图像融合。通过这种方式,可以将第一图像的清晰度和第二图像的亮度融合在一帧图像上,从而使得终端设备在执行完图像融合后,所得到的融合后的图像的亮度和清晰度得到提升。
本申请提供的图像处理方法,用户在使用终端设备在低照度下拍照时,终端设备可以获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像,其中,第一图像主要用于提供当前拍摄场景的细节信息,第二图像主要用于提供当前拍摄场景的亮度信息,从而使得终端设备可以根据该至少一帧第一图像和至少一帧第二图像,进行图像融合处理,使得终端设备所得到的融合后的图像的亮度和清晰度均得到了提升,从而使得终端设备在将该融合后的图像呈现给用户时,可以使用户观看到清晰度和亮度较高的图像,提升了终端设备在低照度下的拍照效果,进而提高了用户体验。
图4为本申请提供的另一种图像处理方法的流程示意图。本实施例涉及的是上述终端设备如何指示摄像头传感器交替并连续输出至少一帧第一图像和至少一帧第二图像。如图4所示,则在上述S101之前,该方法还可以包括:
S201、根据摄像头传感器输出的预览图像,确定摄像头传感器的拍照参数。
具体的,在本实施例中,用户在使用终端设备拍照时,若终端设备对摄像头传感器当前输出的预览图像分析确定,摄像头传感器当前处于低照度拍摄状态时,终端设备可以确定摄像头传感器的拍照参数。其中,这里所说的拍照参数可以为用户当前在使用终端设备进行拍照时,终端设备执行一次拍照操作所需要使用的参数。该拍照参数可以包括:第一图像的尺寸、第一图像的帧数、第二图像的帧数、第一图像的曝光参数、第二图像的曝光参数、第一图像和第二图像的交替顺序等。
上述第一图像和第二图像的交替顺序可以为预设的交替顺序,还可以为终端设备随机为摄像头传感器分配的交替顺序等。图5为本申请提供的摄像头传感器的出图示意图。如图5所示,图5示出的是摄像头传感器以先输出第一图像后输出第二图像的交替顺序,输出4帧第一图像和4帧第二图像的出图示意图。本领域技术人员可以理解的是,摄像头传感器还可以以先输出第二图像后输出第一图像的交替顺序,输出4帧第一图像和4帧第二图像,对此不进行限定。
上述终端设备可以通过用户在终端上当前所选择的拍照模式对应的分辨率,确定第一 图像的分辨率和尺寸,进而根据第一图像的分辨率与第二图像的分辨率的倍数,确定第二图像的分辨率和尺寸。
上述终端设备可以根据摄像头传感器输出的预览图像,确定摄像头传感器当前的ISO,进而通过ISO与第一图像的帧数和第二图像的帧数的对应关系,确定第一图像的帧数和第二图像的帧数。需要说明的是,当前拍摄场景的光线越暗,ISO就会越高,摄像头传感器输出的图像的噪声也就越高。因此,终端设备需要使用越多帧数的图像进行图像处理,所以ISO对应的第一图像和第二图像的帧数也就越多。示例性的,ISO与第一图像的帧数和第二图像的帧数的对应关系例如可以为:在ISO为500时,对应2帧第一图像和2帧第二图像,在ISO为1000时,对应3帧第一图像和3帧第二图像等。虽然上述示例中以相同帧数的第一图像和第二图像为例进行了说明,但是本领域技术人员可以理解的是,上述第一图像和第二图像的帧数也可以不同。
上述终端设备可以采用现有的计算方式,根据摄像头传感器输出的预览图像的亮度,确定第一图像的曝光参数和第二图像的曝光参数,对此不再赘述。其中,这里所说的曝光参数可以包括:ISO、曝光时间、帧率等。
S202、根据拍照参数,指示摄像头传感器交替并连续输出至少一帧第一图像和至少一帧第二图像。
具体的,上述终端设备在获取到拍照参数之后,可以根据该拍照参数,指示摄像头传感器交替并连续输出至少一帧第一图像和至少一帧第二图像。具体实现时,终端设备可以根据交替顺序,在摄像头传感器输出每帧图像之前,向摄像头传感器发送该帧图像对应的曝光参数,以及,该帧图像的尺寸,从而使得摄像头传感器可以正确的交替并连续输出每帧第一图像和第二图像。
由于摄像头传感器在输出多帧第一图像和多帧第二图像时,时间持续较长,使得摄像头传感器在输出的图像的过程中,可能会存在局部运动。因此,为了避免多帧第一图像和多帧第二图像之间的亮度不同,从而导致后续对多帧第一图像和多帧第二图像的处理比较困难,终端设备通过指示摄像头传感器采用交替并连续输出的方式,输出多帧第一图像和第二图像方式,可以减少多帧第一图像之间的相对局部运动,以及,减少多帧第二图像之间的相对局部运动。同时,通过指示摄像头传感器连续输出的方式,能够减少拍照时间,提升拍照速度,进而提高用户体验。
需要说明的是,由于第一图像的分辨率和第二图像的分辨率不同,因此,为了保持用户在使用终端设备拍照时,在屏幕上看到的画面一致,可以采用在屏幕上向用户显示每帧第一图像,不显示第二图像,提高了用户体验。
本领域技术人员可以理解的是,上述终端设备在执行步骤S201-S202时,可以通过软件来实现,也可以通过硬件来实现,还可以通过软件和硬件结合的方式实现。其中,这里所说的硬件例如可以为图像信号处理器(Image Signal Processing,ISP)等,这里所说的软件例如可以为:自动曝光(Automatic Exposure,AE)模块等。
本申请提供的图像处理方法,用户在使用终端设备在低照度下拍照时,终端设备可以根据摄像头传感器输出的预览图像,确定摄像头传感器的拍照参数,进而通过该拍照参数,指示摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像,其中,第一图像主要用于提供当前拍摄场景的细节信息,第二图像主要用于提供当前拍摄场景的亮度 信息,从而使得终端设备可以根据该至少一帧第一图像和至少一帧第二图像,进行图像融合处理,使得终端设备所得到的融合后的图像的亮度和清晰度均得到了提升,从而使得终端设备在将该融合后的图像呈现给用户时,可以使用户观看到清晰度和亮度较高的图像,提升了终端设备在低照度下的拍照效果,进而提高了用户体验。
进一步地,在上述实施例的基础上,本实施例涉及的是上述终端设备根据所述至少一帧第一图像和所述至少一帧第二图像,进行图像融合,得到融合后的图像的过程,则上述S102可以包括如下两种情况:
第一种情况:上述终端设备获取的是摄像头传感器交替并连续输出的一帧第一图像和一帧第二图像。
具体的,若上述终端设备只获取了摄像头传感器交替并连续输出的一帧第一图像和一帧第二图像,则上述终端设备可以直接对该一帧第一图像和一帧第二图像进行图像融合,以得到融合后的图像。可选的,在本申请的另一实现方式中,上述终端设备还可以先对该一帧第一图像和一帧第二图像进行格式转换(即去马赛克操作),以将一帧第一图像从拜耳Bayer格式转换为YUV格式,得到一帧转换格式后的第一图像,并将该一帧第二图像从拜耳Bayer格式转换为YUV格式,得到一帧转换格式后的第二图像,进而对该一帧转换格式后的第一图像和一帧转换格式后的第二图像进行图像融合,得到融合后的图像等。
第二种情况:上述终端设备获取的是摄像头传感器交替并连续输出的多帧第一图像和多帧第二图像。
具体的,若上述终端设备获取的是摄像头传感器交替并连续输出的多帧第一图像和多帧第二图像,则上述终端设备可以直接对多帧第一图像和多帧第二图像进行图像融合。
可选的,终端设备还可以先对多帧第一图像进行时域降噪,得到第三图像,对多帧第二图像进行时域降噪,得到第四图像,进而对该第三图像和该第四图像进行图像融合,得到融合后的图像。由于摄像头传感器在低照度下的进光量较少,所以,摄像头传感器在低照度下输出的图像的噪声较大。因此,终端设备可以通过对摄像头传感器输出的多帧第一图像和多帧第一图像分别进行时域降噪的方式,即通过时域上不同帧间的像素平均操作的方式,可以降低图像的噪声,从而使得所得到的第三图像和第四图像的噪声较小,进而使得终端设备在对第三图像和第四图像进行图像融合所得到的融合后的图像的噪声较小。
在本申请的另一实现方式中,终端设备还可以先对该一帧第一图像和一帧第二图像进行格式转换,以将多帧第一图像从拜耳Bayer格式转换为YUV格式,得到多帧转换格式后的第一图像,并将多帧第二图像从Bayer格式转换为YUV格式,得到多帧转换格式后的第二图像。这样,终端设备可以通过对多帧转换格式后的第一图像进行时域降噪,得到第三图像,并对多帧转换格式后第二图像进行时域降噪,得到第四图像,进而对该第三图像和该第四图像进行图像融合,得到融合后的图像等。
本申请提供的图像处理方法,用户在使用终端设备在低照度下拍照时,终端设备可以获取摄像头传感器交替并连续输出的至少一帧第一图像和至少一帧第二图像,其中,第一图像主要用于提供当前拍摄场景的细节信息,第二图像主要用于提供当前拍摄场景的亮度信息,从而使得终端设备可以根据该至少一帧第一图像和至少一帧第二图像,进行图像融合处理,使得终端设备所得到的融合后的图像的亮度和清晰度均得到了提升,从而使得终端设备在将该融合后的图像呈现给用户时,可以使用户观看到清晰度和亮度较高的图像, 提升了终端设备在低照度下的拍照效果,进而提高了用户体验。
图6为本申请提供的又一种图像处理方法的流程示意图。本实施例以第三图像和第四图像为例,介绍终端设备进行图像融合的过程。本领域技术人员可以理解的是,若终端设备获取的是一帧第一图像和一帧第二图像,则终端设备也可以采用如下方式对该一帧第一图像和一帧第二图像进行图像融合,其实现方式和实现原理类似,对此不再赘述。
本实施例中,第三图像为对多帧第一图像进行时域降噪后的图像,第四图像为对多帧第二图像进行时域降噪后的图像,因此,在图像融合过程中,第三图像主要用于提供当前拍摄场景的细节信息(即第一图像的高频分量),第四图像主要用于提供当前拍摄场景的亮度信息(即第二图像的低频分量)。如图6所示,该方法包括:
S301、根据第四图像的尺寸,对第三图像进行下采样,得到下采样后的第三图像。
具体的,由于上述终端设备所获取到的第三图像为根据多帧第一图像进行时域降噪后所得到的图像,第四图像为根据多帧第二图像进行时域降噪后所得到的图像。因此,第三图像的尺寸与第一图像相同,第四图像的尺寸与第二图像相同,使得第三图像和第四图像的尺寸不同。为了能够将第三图像和第四图像进行曝光融合,终端设备在获取到上述第三图像和第四图像之后,可以根据第四图像的尺寸,对第三图像进行下采样,以缩小第三图像的尺寸,从而使得下采样后的第三图像的尺寸与第四图像的尺寸相同。
S302、将下采样后的第三图像和第四图像进行曝光融合,得到HDR图像。
具体的,终端设备在得到下采样后的第三图像之后,可以将具有相同尺寸的两帧图像(即下采样后的第三图像和第四图像)进行曝光融合。即,将清晰度较高的下采样后的第三图像与亮度较高的第四图像进行曝光融合。通过这种方式,可以将下采样后的第三图像的清晰度和第四图像的亮度融合在一帧图像上,从而使得终端设备在执行完曝光融合后所得到的高动态范围(High-Dynamic Range,HDR)图像整体的亮度得到提升。
其中,本实施例不限定上述终端设备将下采样后的第三图像和第四图像进行曝光融合的实现方式。例如:终端设备可以采用“以图像亮度作为权重计算参数”的曝光融合方式。作为一种可实施的方式,终端设备可以将图像亮度的中心值128作为基准,为下采样后的第三图像的每个像素点分配权重。其中,亮度低于128的像素点中,像素点的亮度越低,该像素点的权重越小;亮度高于128的像素点中,像素点的亮度越高,该像素点权重越小。同样的,终端设备可以采用上述方式,为第四图像中的每个像素点分配权重。然后,终端设备可以将下采样后的第三图像的每个像素点的像素值与该像素点的权重值相乘,得到处理后的第三图像。同样的,终端设备可以将第四图像的每个像素点的像素值与该像素点的权重值相乘,得到处理后的第四图像。最后,终端设备将处理后的第三图像和处理后的第四图像进行图像相加计算,即可得到HDR图像,至此就完成了曝光融合的过程。通过这种方式,可以通过亮度较高的第四图像中的像素点来提升下采样后的第三图像中较暗的像素点,并通过下采样后的第三图像的像素点弥补第四图像中过曝的像素点,从而使得终端设备所得到的HDR图像中既不会出现太暗的区域,也不会出现太亮的区域,进而使得HDR图像的亮度得到整体的提升。需要说明的是,上述权重值的取值范围例如可以在0至1之间,上述权重值与亮度的对应关系具体可以根据用户的需求确定。
S303、根据第三图像的尺寸,将HDR图像进行上采样,得到上采样后的HDR图像。
具体的,终端设备在将下采样后的第三图像和第四图像进行曝光融合,所得到HDR 图像的尺寸与第四图像的尺寸相同。因此,终端设备需要根据第三图像的尺寸,将HDR图像进行上采样,以放大HDR图像的尺寸,从而使得上采样后的HDR图像的尺寸与第三图像的尺寸相同。通过这种方式,可以使上采样后的HDR图像的尺寸,适配于,用户当前在终端设备上所选择的拍照模式对应的分辨率。
S304、将上采样后的HDR图像与第三图像的细节图像融合,得到融合后的图像。
具体的,上述终端设备在对第三图像进行下采样的过程中,由于下采样会丢失第三图像的高频分量(即当前拍摄场景的细节信息),所以,下采样后的第三图像与原本的第三图像相比,下采样后第三图像的分辨率会降低。这样,在通过对下采样后的第三图像与第四图像进行曝光融合所得到的HDR图像的分辨率也会低于第三图像的分辨率,使得上采样后的HDR图像的清晰度仍然较低。
因此,终端设备在获取到上采样后的HDR图像之后,可以将上采样后的HDR图像与“包括第三图像的高频分量”的第三图像的细节图像进行融合,以将拍摄场景整体的细节信息回填至上采样后的HDR图像中,提高上采样后的HDR图像的清晰度。通过这种方式,使得终端所得到的融合后的图像的亮度和清晰度均得到了提升。这样,终端设备在将该融合后的图像呈现给用户时,可以使用户观看到清晰度和亮度较高的图像,提升了用户体验。
其中,本实施例不限定终端设备将上采样后的HDR图像与第三图像的细节图像融合的实现方式。例如:上述终端设备可以直接对上采样后的HDR图像与第三图像的细节图像进行图像相加计算,以得到融合后的图像。作为一种可实施的方式,上述终端设备还可以先确定摄像头传感器当前在低照度下的感光度(ISO),进而根据该摄像头传感器的ISO,确定一个与该摄像头传感器ISO适配的增益系数。然后,终端设备可以将细节图像的每个像素点的像素值与增益系数相乘,以对细节图像进行增强,得到处理后的细节图像。最后,终端设备通过将处理后的细节图像与上采样后的HDR图像进行图像相加计算,得到融合后的图像。由于在进行融合之前,结合了摄像头传感器当前的ISO所对应的增益系数,对细节图像进行了增强,从而使得融合后的图像的锐度提升,进而提升了融合后的图像的清晰度。具体实现时,上述终端设备可以通过摄像头传感器当前预览的图像,确定摄像头传感器当前在低照度下的感光度。上述终端设备可以根据ISO与增益系数之间的映射关系,确定摄像头传感器的ISO对应的增益系数。其中,ISO与增益系数之间的映射关系具体可以根据实际情况设定。ISO与增益系数之间的映射关系例如可以为:在ISO小于或等于500时,增益系数可以为1.5;在ISO大于500且小于或等于1000时,增益系数可以为1.4;在ISO大于1000且小于或等于1500时,增益系数可以为1.3;在ISO大于1500且小于或等于2000时,增益系数可以为1.2,在ISO大于2000时,增益系数可以为1.1等。
其中,上述第三图像的细节图像可以由终端设备在执行S304之前,根据第三图像,获取的第三图像的细节图像。其中,本实施例不限定获取第三图像的细节图像的实现方式。例如,终端设备可以先对第三图像进行傅里叶变换,去除第三图像中的低频分量,保留第三图像的高频分量。然后,终端设备再对只保留高频分量的第三图像进行反傅里叶变换,就可以得到第三图像的细节图像。可选的,在本申请的一种实现方式中,终端设备还可以根据第三图像的尺寸,对下采样后的第三图像进行上采样,得到上采样后的第三图像。由于上采样后的第三图像比第三图像模糊,因此,终端设备通过对上采样后的第三图像与第三图像进行图像相减计算,即可得到第三图像的细节图像。
可选的,上述终端设备在将上采样后的HDR图像与第三图像的细节图像融合,得到融合后的图像之后,还可以对融合后的图像进行空域降噪,以进一步降低图像的噪声。可选的,终端设备可以通过非局部均值去噪算法对融合后的图像进行空域降噪,还可以采用现有技术中的方法对融合后的图像进行空域降噪,对此不再赘述。
With the image processing method provided in this application, when a user takes a photo with the terminal device under low illumination, the terminal device applies temporal noise reduction to the multiple frames of the first image and the second image output alternately and continuously by the camera sensor, obtaining a third image that mainly provides the detail information of the current shooting scene and a fourth image that mainly provides its brightness information, and then fuses the third and fourth images. Both the brightness and the sharpness of the resulting fused image are thereby improved, so that the user sees a sharper and brighter image when it is presented, improving the terminal device's low-light photographing performance and the user experience.
Fig. 7 is a schematic flowchart of yet another image processing method provided in this application. As shown in Fig. 7, before S302, the method may further include:
S401. Using the downsampled third image as a reference, perform image registration on the fourth image to obtain a registered fourth image.
Specifically, in this embodiment, before exposure-fusing the downsampled third image and the fourth image, the terminal device may first register the fourth image against the downsampled third image, aligning identical features in the two images. In this way, when the terminal device subsequently fuses the two images, identical features can be fused together accurately, improving the fusion result.
In a specific implementation, the terminal device may register the fourth image with a Speeded Up Robust Features (SURF) registration method. Of course, the terminal device may also use any image registration method in the prior art, which is not repeated here. A feature-based registration sketch follows.
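A hedged registration sketch. The patent names SURF; this sketch substitutes ORB, since SURF ships only with opencv-contrib builds, and estimates a homography with RANSAC. All parameters are illustrative:

```python
import cv2
import numpy as np

def register(moving, reference):
    """Warp `moving` onto `reference` using feature matches."""
    def to_gray(im):
        # Assumes a BGR frame; for YUV input the Y plane would serve instead.
        return im if im.ndim == 2 else cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(2000)
    k_ref, d_ref = orb.detectAndCompute(to_gray(reference), None)
    k_mov, d_mov = orb.detectAndCompute(to_gray(moving), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d_ref, d_mov), key=lambda m: m.distance)[:200]
    dst = np.float32([k_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    src = np.float32([k_mov[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```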
S402. Perform ghost correction on the registered fourth image according to the downsampled third image to obtain a corrected fourth image.
Specifically, after registering the fourth image against the downsampled third image, the terminal device may further perform ghost correction on the registered fourth image according to the downsampled third image. A ghost here is the double image that a moving object in the two frames would leave in the fusion result when the downsampled third image and the registered fourth image are exposure-fused. Under this implementation, the terminal device exposure-fuses the downsampled third image with the corrected fourth image to obtain the HDR image, so that object edges in the HDR image are clean and free of ghosting, further improving its sharpness.
This embodiment does not limit how the terminal device performs the ghost correction. Optionally, the terminal device may first lower the brightness of the registered fourth image to that of the downsampled third image, obtaining a brightness-reduced fourth image. It then computes the per-pixel difference between the downsampled third image and the brightness-reduced fourth image, obtaining an absolute difference value for each pixel. If a pixel's absolute difference exceeds a preset threshold, the corresponding position in the registered fourth image is a ghost; in this way all ghosts of the registered fourth image can be found. After locating the ghosts, the terminal device raises the brightness of the downsampled third image to that of the registered fourth image, obtaining a brightness-lifted third image, and replaces the ghost pixels of the registered fourth image with the corresponding pixels of the brightness-lifted third image to obtain the corrected fourth image. Because the ghosts are corrected with pixels of a third image whose brightness matches the registered fourth image, the corrected fourth image still retains its original brightness. A sketch of this difference-and-replace scheme follows.
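A hedged sketch of the difference-based ghost fix. Brightness matching by the ratio of mean values and the threshold value are simplifying assumptions; the patent leaves both unspecified:

```python
import numpy as np

def deghost(third_down, fourth_reg, thresh=25.0):
    """Replace ghost pixels of the registered fourth image with
    brightness-lifted pixels from the downsampled third image."""
    a = third_down.astype(np.float32)
    b = fourth_reg.astype(np.float32)
    scale = a.mean() / max(b.mean(), 1e-6)          # match fourth to third's level
    diff = np.abs(a - b * scale).max(axis=-1)       # per-pixel absolute difference
    ghost = diff > thresh
    lifted = np.clip(a / max(scale, 1e-6), 0, 255)  # lift third to fourth's level
    out = b.copy()
    out[ghost] = lifted[ghost]                      # swap in ghost-free pixels
    return out.astype(np.uint8)
```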
With the image processing method provided in this application, when a user takes a photo with the terminal device under low illumination, the terminal device applies temporal noise reduction to the multiple frames of the first image and the second image output alternately and continuously by the camera sensor to obtain the third image (mainly providing the scene's detail information) and the fourth image (mainly providing its brightness information), and, before fusing the downsampled third image with the fourth image, first registers and ghost-corrects the fourth image against the downsampled third image. Fusing the registered, ghost-corrected fourth image with the downsampled third image then gives a better fusion result, so the sharpness of the fused image is further improved.
The image processing method provided in this application is described below with two examples.
Example 1. Fig. 8 is a schematic flowchart of yet another image processing method provided in this application. In this embodiment, the terminal device acquires multiple frames of the first image and of the second image output alternately and continuously by the camera sensor. As shown in Fig. 8, the method may include:
S501. Determine the camera sensor's photographing parameters according to the preview image output by the camera sensor.
S502. Instruct the camera sensor, according to the photographing parameters, to alternately and continuously output multiple frames of the first image and multiple frames of the second image.
S503. Acquire the multiple frames of the first image and of the second image output alternately and continuously by the camera sensor.
S504. Convert the multiple frames of the first image from the Bayer format to the YUV format to obtain multiple frames of the format-converted first image.
Specifically, when existing terminal devices execute the photographing function, the images presented to the user are mostly in JPEG format. In the prior art, limited by the terminal device's chip technology (e.g., bandwidth and processing speed), the terminal device cannot convert images from the Bayer format directly into the user-presentable JPEG format fast enough to keep photographing fluid. Existing terminal devices therefore first convert images from the Bayer format to the YUV format and then from YUV to JPEG.
In this embodiment, the multiple frames of the first image and of the second image output by the camera sensor are all Bayer-format images, so the image format must be converted from Bayer to YUV during the method. This conversion may be done either after or before the fusion processing. If done after fusion, it is mostly executed by a software module of the terminal device; if done before fusion, it is mostly executed by the terminal device's ISP. Because the ISP runs faster than the software module, doing it before fusion improves photographing efficiency. In a specific implementation, the terminal device's ISP can convert the multiple frames of the first image from Bayer to YUV by demosaicing them, obtaining multiple frames of the format-converted first image. A demosaicing sketch follows.
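A minimal demosaicing sketch via OpenCV; the BG CFA layout and the 8-bit raw stand-in are assumptions, since real sensors differ in layout and bit depth:

```python
import cv2
import numpy as np

# Stand-in for one Bayer-mosaic frame, already scaled to 8 bits.
raw = np.random.randint(0, 256, (1080, 1920), np.uint8)

# Demosaic (Bayer -> BGR), then convert the result to YUV.
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
```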
A person skilled in the art will understand that, if future chip technology supports converting images from the Bayer format directly into the user-presentable JPEG format in real time, the Bayer-to-YUV conversion step in the embodiments of this application can be omitted.
S505. Convert the multiple frames of the second image from the Bayer format to the YUV format to obtain multiple frames of the format-converted second image.
It should be noted that steps S505 and S504 may be performed in any order.
S506. Perform temporal noise reduction on the multiple frames of the format-converted first image to obtain a third image.
Specifically, because little light reaches the camera sensor under low illumination, the images it outputs in low light are noisy. The terminal device can therefore reduce the noise of the first image by applying temporal noise reduction to the multiple frames of the first image output by the sensor, i.e., by averaging corresponding pixels across different frames in the time domain, so that the resulting image is less noisy. In a specific implementation, the terminal device may use an existing temporal noise reduction scheme, for example performing global image registration, local ghost detection, and temporal fusion on the multiple frames of the format-converted first image in sequence, which is not repeated here. A minimal averaging sketch follows.
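A minimal sketch of the inter-frame averaging at the core of temporal noise reduction; it assumes the frames are already globally registered and ghost-free, which the full scheme above would ensure first:

```python
import numpy as np

def temporal_denoise(frames):
    """Average corresponding pixels across aligned frames; for independent
    noise, variance drops roughly in proportion to 1/len(frames)."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)
```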
S507. Perform temporal noise reduction on the multiple frames of the format-converted second image to obtain a fourth image.
It should be noted that steps S506 and S507 may be performed in any order.
S508. Downsample the third image according to the size of the fourth image to obtain a downsampled third image.
S509. Using the downsampled third image as a reference, perform image registration on the fourth image to obtain a registered fourth image.
S510. Perform ghost correction on the registered fourth image according to the downsampled third image to obtain a corrected fourth image.
S511. Perform exposure fusion on the downsampled third image and the corrected fourth image to obtain an HDR image.
S512. Upsample the HDR image according to the size of the third image to obtain an upsampled HDR image.
S513. Obtain the detail image of the third image from the third image.
S514. Fuse the upsampled HDR image with the detail image of the third image to obtain a fused image.
S515. Perform spatial noise reduction on the fused image to obtain a spatially denoised image.
This completes the whole image processing procedure. In this way, when the user takes a photo with the terminal device under low illumination, the terminal device obtains, by executing the above procedure, an image of higher brightness, higher sharpness, and lower noise, so that when this image is presented to the user, the user sees a sharper and brighter image, improving the terminal device's low-light photographing performance and the user experience.
Fig. 9 is a schematic diagram of a first image shown in this application, Fig. 10 is a schematic diagram of a second image shown in this application, and Fig. 11 is a schematic diagram of a spatially denoised image shown in this application. Fig. 11 shows the image obtained after processing with steps S501-S515 above. Comparing Fig. 11 with Figs. 9 and 10 shows that the overall brightness and sharpness of the spatially denoised image in Fig. 11 are greatly improved, with no overexposed and/or overly dark regions anywhere in the image. Therefore, when the terminal device presents this brighter, sharper, less noisy image to the user, the user sees a sharper and brighter image, improving the terminal device's low-light photographing performance and the user experience. A person skilled in the art will understand that Figs. 9 to 11 merely illustrate the sharpness and brightness improvement achieved with the image processing method provided in this application and do not limit the color or content of the processed images.
Optionally, in another implementation of this application, to further increase brightness, the terminal device may, while instructing the camera sensor according to the photographing parameters to alternately and continuously output at least one frame of the first image and at least one frame of the second image, simultaneously instruct the terminal device's fill light to turn on, so that the fill light further raises the brightness of the first and second images. After the subsequent image processing based on these frames, both the foreground and background brightness of the resulting spatially denoised image are improved without local overexposure or overly dark regions, further improving the terminal device's low-light photographing performance. In this implementation, because more light reaches the camera sensor, the noise of the images it outputs is reduced, so the sensor can be instructed to output only one frame of the first image and one frame of the second image, omitting the temporal noise reduction of the example above. Example 2 below describes the image processing method with the terminal device acquiring one frame of the first image and one frame of the second image output alternately and continuously by the camera sensor.
Example 2. Fig. 12 is a schematic flowchart of yet another image processing method provided in this application. In this embodiment, the terminal device acquires one frame of the first image and one frame of the second image output alternately and continuously by the camera sensor. As shown in Fig. 12, the method may include:
S601. Determine the camera sensor's photographing parameters according to the preview image output by the camera sensor.
S602. Instruct the camera sensor, according to the photographing parameters, to alternately and continuously output one frame of the first image and one frame of the second image.
S603. Acquire the one frame of the first image and one frame of the second image output alternately and continuously by the camera sensor.
S604. Convert the first image from the Bayer format to the YUV format to obtain a format-converted first image.
S605. Convert the second image from the Bayer format to the YUV format to obtain a format-converted second image.
It should be noted that steps S605 and S604 may be performed in any order.
S606. Downsample the format-converted first image according to the size of the format-converted second image to obtain a downsampled first image.
S607. Using the downsampled first image as a reference, perform image registration on the format-converted second image to obtain a registered second image.
S608. Perform ghost correction on the registered second image according to the downsampled first image to obtain a corrected second image.
S609. Perform exposure fusion on the downsampled first image and the corrected second image to obtain an HDR image.
S610. Upsample the HDR image according to the size of the format-converted first image to obtain an upsampled HDR image.
S611. Obtain the detail image of the first image from the format-converted first image.
S612. Fuse the upsampled HDR image with the detail image of the first image to obtain a fused image.
S613. Perform spatial noise reduction on the fused image to obtain a spatially denoised image.
This completes the whole image processing procedure. A compressed end-to-end sketch of this single-frame path follows.
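For orientation, a hedged glue sketch of S604-S613 that reuses the helper functions sketched earlier (register, deghost, exposure_fuse, detail_image, fuse_detail). All names, the treatment of YUV frames as plain 3-channel arrays, and the denoising parameters are assumptions, not the patent's implementation:

```python
import cv2

def process_pair(first_yuv, second_yuv, iso):
    """Single-frame low-light path: downsample, register, deghost, fuse,
    upsample, backfill detail, then spatially denoise."""
    h1, w1 = first_yuv.shape[:2]
    h2, w2 = second_yuv.shape[:2]
    first_down = cv2.resize(first_yuv, (w2, h2), interpolation=cv2.INTER_AREA)  # S606
    second_reg = register(second_yuv, first_down)                               # S607
    second_fix = deghost(first_down, second_reg)                                # S608
    hdr = exposure_fuse(first_down, second_fix)                                 # S609
    hdr_up = cv2.resize(hdr, (w1, h1), interpolation=cv2.INTER_LINEAR)          # S610
    detail = detail_image(first_yuv, (w2, h2))                                  # S611
    fused = fuse_detail(hdr_up, detail, iso)                                    # S612
    return cv2.fastNlMeansDenoisingColored(fused, None, 10, 10, 7, 21)          # S613
```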
With the image processing method provided in this application, when a user takes a photo with the terminal device under low illumination, the terminal device can acquire at least one frame of the first image and at least one frame of the second image output alternately and continuously by the camera sensor, where the first image mainly provides the detail information of the current shooting scene and the second image mainly provides its brightness information, and can then fuse these frames so that both the brightness and the sharpness of the fused image are improved. When the fused image is presented to the user, the user sees a sharper and brighter image, improving the terminal device's low-light photographing performance and the user experience.
It should be noted that the image processing method provided in this application applies not only to the application scenario in which the terminal device shoots with a front camera sensor but also to that in which it shoots with a rear camera sensor. Likewise, the method applies to the scenario in which the terminal device shoots with dual camera sensors: the terminal device may process the images output by each camera sensor with steps S501-S515 and then fuse the two resulting spatially denoised images with an existing fusion scheme to obtain a sharper, brighter image. Alternatively, the terminal device may apply steps S501-S515 only to the images output by one of the two camera sensors while using the other camera sensor for special-effect processing of the image (e.g., bokeh), which is not repeated here.
Fig. 13 is a schematic structural diagram of a terminal device provided in this application. As shown in Fig. 13, the terminal device may include:
an acquiring module 11, configured to acquire at least one frame of a first image and at least one frame of a second image output alternately and continuously by a camera sensor, where the resolution of the first image is the same as the resolution corresponding to the current photo mode, the resolution of the first image is N times that of the second image, and N is an integer greater than 1; the camera sensor outputs each frame of the first image with a first exposure parameter and each frame of the second image with a second exposure parameter, the first exposure parameter being greater than the second exposure parameter; exemplarily, the first image may be a full-size image; and
a fusion module 12, configured to perform image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
The terminal device provided in this application can execute the foregoing method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Fig. 14 is a schematic structural diagram of another terminal device provided in this application. As shown in Fig. 14, on the basis of the block diagram shown in Fig. 13, the terminal device further includes:
a determining module 13, configured to determine, before the acquiring module 11 acquires the at least one frame of the first image and the at least one frame of the second image output alternately and continuously by the camera sensor, the camera sensor's photographing parameters according to the preview image output by the camera sensor; the photographing parameters include the size of the first image, the number of frames of the first image, the number of frames of the second image, the exposure parameter of the first image, the exposure parameter of the second image, and the alternating order of the first and second images; and
an instructing module 14, configured to instruct the camera sensor, according to the photographing parameters, to alternately and continuously output at least one frame of the first image and at least one frame of the second image.
If the at least one frame of the first image includes one frame of the first image and the at least one frame of the second image includes one frame of the second image, the fusion module 12 may be specifically configured to perform image fusion on the first image and the second image to obtain the fused image.
In one implementation of this application, Fig. 15 is a schematic structural diagram of yet another terminal device provided in this application. As shown in Fig. 15, on the basis of the block diagram shown in Fig. 13, the terminal device further includes:
a first format conversion module 15, configured to convert, before the fusion module 12 fuses the first image and the second image to obtain the fused image, the first image from the Bayer format to the YUV format to obtain a format-converted first image, and convert the second image from the Bayer format to the YUV format to obtain a format-converted second image.
In this implementation, the fusion module 12 is specifically configured to perform image fusion on the format-converted first image and the format-converted second image to obtain the fused image.
If the at least one frame of the first image includes multiple frames of the first image and the at least one frame of the second image includes multiple frames of the second image, the fusion module 12 may be specifically configured to perform temporal noise reduction on the multiple frames of the first image to obtain a third image, perform temporal noise reduction on the multiple frames of the second image to obtain a fourth image, and perform image fusion on the third and fourth images to obtain the fused image. In one implementation of this application, Fig. 16 is a schematic structural diagram of yet another terminal device provided in this application. As shown in Fig. 16, on the basis of the block diagram shown in Fig. 13, the terminal device further includes: a second format conversion module 16, configured to convert, before the fusion module 12 performs the temporal noise reductions that yield the third and fourth images, the multiple frames of the first image from the Bayer format to the YUV format to obtain multiple frames of the format-converted first image, and convert the multiple frames of the second image from the Bayer format to the YUV format to obtain multiple frames of the format-converted second image.
The fusion module 12 is specifically configured to perform temporal noise reduction on the multiple frames of the format-converted first image to obtain the third image, and on the multiple frames of the format-converted second image to obtain the fourth image.
Fig. 17 is a schematic structural diagram of yet another terminal device provided in this application. In this embodiment, the fusion module 12 may be specifically configured to perform temporal noise reduction on the multiple frames of the first image to obtain a third image, perform temporal noise reduction on the multiple frames of the second image to obtain a fourth image, and perform image fusion on the third and fourth images to obtain the fused image. As shown in Fig. 17, on the basis of the block diagram shown in Fig. 13, the fusion module 12 may include:
a downsampling unit 121, configured to downsample the third image according to the size of the fourth image to obtain a downsampled third image, the downsampled third image having the same size as the fourth image;
an exposure fusion unit 122, configured to perform exposure fusion on the downsampled third image and the fourth image to obtain a high-dynamic-range HDR image;
an upsampling unit 123, configured to upsample the HDR image according to the size of the third image to obtain an upsampled HDR image; and
a fusion unit 124, configured to fuse the upsampled HDR image with the detail image of the third image to obtain the fused image, where the detail image of the third image includes the third image's high-frequency components. Exemplarily, the fusion unit 124 may be specifically configured to determine the camera sensor's sensitivity ISO; determine a gain coefficient according to the sensor's ISO; multiply each pixel value of the detail image of the third image by the gain coefficient to obtain a processed detail image; and add the processed detail image and the upsampled HDR image pixel by pixel to obtain the fused image.
With continued reference to Fig. 17, optionally, the fusion module 12 may further include:
an obtaining unit 125, configured to obtain, before the fusion unit 124 fuses the upsampled HDR image with the detail image of the third image to obtain the fused image, the detail image of the third image from the third image. Exemplarily, the obtaining unit 125 may be specifically configured to upsample the downsampled third image according to the size of the third image to obtain an upsampled third image, and subtract the upsampled third image from the third image pixel by pixel to obtain the detail image of the third image.
With continued reference to Fig. 17, optionally, the fusion module 12 may further include:
an image registration unit 126, configured to perform, before the exposure fusion unit 122 exposure-fuses the downsampled third image and the fourth image to obtain the high-dynamic-range HDR image, image registration on the fourth image with the downsampled third image as a reference, obtaining a registered fourth image; and
a ghost correction unit 127, configured to perform ghost correction on the registered fourth image according to the downsampled third image to obtain a corrected fourth image. Exemplarily, the ghost correction unit 127 may be specifically configured to lower the brightness of the registered fourth image to that of the downsampled third image to obtain a brightness-reduced fourth image; compute the per-pixel difference between the downsampled third image and the brightness-reduced fourth image to obtain the absolute difference value of each pixel of the brightness-reduced fourth image; take the pixels whose absolute difference exceeds a preset threshold as the ghosts of the registered fourth image; raise the brightness of the downsampled third image to that of the registered fourth image to obtain a brightness-lifted third image; and replace the ghosts of the registered fourth image with the corresponding pixels of the brightness-lifted third image to obtain the corrected fourth image.
The exposure fusion unit 122 is then specifically configured to perform exposure fusion on the downsampled third image and the corrected fourth image to obtain the HDR image.
With continued reference to Fig. 17, optionally, the terminal device may further include:
a spatial noise reduction module 17, configured to perform, after the fusion unit 124 fuses the upsampled HDR image with the detail image of the third image to obtain the fused image, spatial noise reduction on the fused image to obtain a spatially denoised image.
The terminal device provided in this application can execute the foregoing method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Fig. 18 is a schematic structural diagram of yet another terminal device provided in this application. As shown in Fig. 18, the terminal device may include a processor 21 (e.g., a CPU) and a memory 22; the memory 22 may include a high-speed RAM and may further include a non-volatile memory (NVM), e.g., at least one disk memory, and may store various instructions for completing various processing functions and implementing the method steps of this application. Optionally, the terminal device involved in this application may further include a receiver 23, a transmitter 24, a power supply 25, a communication bus 26, and a communication port 27. The receiver 23 and the transmitter 24 may be integrated in the terminal device's transceiver or may be independent transmit/receive antennas on the terminal device. The communication bus 26 implements the communication connections between elements, and the communication port 27 implements connection and communication between the terminal device and other peripherals.
In this application, the memory 22 stores computer-executable program code comprising instructions; when the processor 21 executes the instructions, the instructions cause the terminal device to perform the foregoing method embodiments, whose implementation principles and technical effects are similar and are not repeated here.
As in the foregoing embodiments, the terminal device involved in this application may be a wireless terminal such as a mobile phone or a tablet computer. Taking a mobile phone as an example: Fig. 19 is a structural block diagram of the terminal device provided in this application when it is a mobile phone. Referring to Fig. 19, the phone may include components such as a radio frequency (RF) circuit 1110, a memory 1120, an input unit 1130, a display unit 1140, a sensor 1150, an audio circuit 1160, a wireless fidelity (WiFi) module 1170, a processor 1180, and a power supply 1190. A person skilled in the art will understand that the phone structure shown in Fig. 19 does not limit the phone, which may include more or fewer components than shown, combine certain components, or arrange components differently.
The components of the phone are described below with reference to Fig. 19:
The RF circuit 1110 may be used to receive and send signals during the receiving and sending of information or during a call, e.g., receive a base station's downlink information and pass it to the processor 1180 for processing, and send uplink data to the base station. Generally, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), and a duplexer. The RF circuit 1110 may also communicate with networks and other devices by wireless communication, which may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, and Short Messaging Service (SMS).
The memory 1120 may be used to store software programs and modules; the processor 1180 executes the phone's various function applications and data processing by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function and an image playing function), and the data storage area may store data created according to the phone's use (such as audio data and a phone book). In addition, the memory 1120 may include a high-speed random access memory and may also include a non-volatile memory, e.g., at least one disk storage device, flash memory device, or other solid-state storage device.
The input unit 1130 may be used to receive input digit or character information and generate key signal inputs related to the phone's user settings and function control. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. The touch panel 1131, also called a touchscreen, may collect the user's touch operations on or near it (e.g., operations performed on or near the touch panel 1131 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation, detects the signal brought by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1180, and can receive and execute commands from the processor 1180. In addition, the touch panel 1131 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1131, the input unit 1130 may include other input devices 1132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, and a joystick.
The display unit 1140 may be used to display information input by the user or provided to the user, and the phone's various menus. The display unit 1140 may include a display panel 1141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 1131 may cover the display panel 1141; when the touch panel 1131 detects a touch operation on or near it, it passes the operation to the processor 1180 to determine the type of touch event, and the processor 1180 then provides the corresponding visual output on the display panel 1141 according to the type of touch event. Although in Fig. 19 the touch panel 1131 and the display panel 1141 are two independent components implementing the phone's input and output functions, in some embodiments they may be integrated to implement the phone's input and output functions.
The phone may further include at least one sensor 1150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 1141 according to the ambient light, and the proximity sensor may turn off the display panel 1141 and/or the backlight when the phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the phone's posture (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tapping). Other sensors that the phone may be configured with, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 1160, a loudspeaker 1161, and a microphone 1162 may provide an audio interface between the user and the phone. The audio circuit 1160 may transmit the electrical signal converted from received audio data to the loudspeaker 1161, which converts it into a sound signal for output; on the other hand, the microphone 1162 converts collected sound signals into electrical signals, which the audio circuit 1160 receives and converts into audio data. The audio data is then output to the processor 1180 for processing and sent via the RF circuit 1110 to, for example, another phone, or output to the memory 1120 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1170 the phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 19 shows the WiFi module 1170, it is understood that it is not an essential part of the phone and may be omitted as needed without changing the essence of this application.
The processor 1180 is the phone's control center. It connects all parts of the phone using various interfaces and lines, and executes the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 1120 and invoking the data stored in the memory 1120, thereby monitoring the phone as a whole. Optionally, the processor 1180 may include one or more processing units; for example, the processor 1180 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 1180.
The phone further includes a power supply 1190 (such as a battery) that powers the components. Optionally, the power supply may be logically connected to the processor 1180 through a power management system, thereby implementing functions such as charging, discharging, and power-consumption management through the power management system.
The phone may further include a camera 1200, which may be a front camera or a rear camera. Although not shown, the phone may further include a Bluetooth module, a GPS module, and the like, which are not described here.
In this application, the processor 1180 included in the phone may be used to execute the foregoing image processing method embodiments; the implementation principles and technical effects are similar and are not repeated here.
The foregoing embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented wholly or partly in the form of a computer program product, which includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present invention are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).

Claims (29)

  1. An image processing method, wherein the method comprises:
    acquiring at least one frame of a first image and at least one frame of a second image output alternately and continuously by a camera sensor, wherein the resolution of the first image is the same as the resolution corresponding to a current photo mode, the resolution of the first image is N times the resolution of the second image, and N is an integer greater than 1; the camera sensor outputs each frame of the first image with a first exposure parameter and each frame of the second image with a second exposure parameter, the first exposure parameter being greater than the second exposure parameter; and
    performing image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
  2. The method according to claim 1, wherein before the acquiring of the at least one frame of the first image and the at least one frame of the second image alternately output by the camera sensor, the method further comprises:
    determining photographing parameters of the camera sensor according to a preview image output by the camera sensor, the photographing parameters comprising: the size of the first image, the number of frames of the first image, the number of frames of the second image, the exposure parameter of the first image, the exposure parameter of the second image, and the alternating order of the first and second images; and
    instructing the camera sensor, according to the photographing parameters, to alternately and continuously output at least one frame of the first image and at least one frame of the second image.
  3. The method according to claim 1, wherein the at least one frame of the first image comprises one frame of the first image, and the at least one frame of the second image comprises one frame of the second image;
    the performing of image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain the fused image comprises:
    performing image fusion on the first image and the second image to obtain the fused image.
  4. The method according to claim 3, wherein before the performing of image fusion on the first image and the second image to obtain the fused image, the method further comprises:
    converting the first image from the Bayer format to the YUV format to obtain a format-converted first image, and converting the second image from the Bayer format to the YUV format to obtain a format-converted second image;
    the performing of image fusion on the first image and the second image to obtain the fused image comprises:
    performing image fusion on the format-converted first image and the format-converted second image to obtain the fused image.
  5. The method according to claim 1, wherein the at least one frame of the first image comprises multiple frames of the first image, and the at least one frame of the second image comprises multiple frames of the second image;
    the performing of image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain the fused image comprises:
    performing temporal noise reduction on the multiple frames of the first image to obtain a third image, and performing temporal noise reduction on the multiple frames of the second image to obtain a fourth image; and
    performing image fusion on the third image and the fourth image to obtain the fused image.
  6. The method according to claim 5, wherein before the performing of temporal noise reduction on the multiple frames of the first image to obtain the third image and on the multiple frames of the second image to obtain the fourth image, the method further comprises:
    converting the multiple frames of the first image from the Bayer format to the YUV format to obtain multiple frames of the format-converted first image, and converting the multiple frames of the second image from the Bayer format to the YUV format to obtain multiple frames of the format-converted second image;
    the performing of temporal noise reduction on the multiple frames of the first image to obtain the third image and on the multiple frames of the second image to obtain the fourth image comprises:
    performing temporal noise reduction on the multiple frames of the format-converted first image to obtain the third image, and performing temporal noise reduction on the multiple frames of the format-converted second image to obtain the fourth image.
  7. The method according to claim 5, wherein the performing of image fusion on the third image and the fourth image to obtain the fused image comprises:
    downsampling the third image according to the size of the fourth image to obtain a downsampled third image, the downsampled third image having the same size as the fourth image;
    performing exposure fusion on the downsampled third image and the fourth image to obtain a high-dynamic-range HDR image;
    upsampling the HDR image according to the size of the third image to obtain an upsampled HDR image; and
    fusing the upsampled HDR image with a detail image of the third image to obtain the fused image, wherein the detail image of the third image comprises high-frequency components of the third image.
  8. The method according to claim 7, wherein before the fusing of the upsampled HDR image with the detail image of the third image to obtain the fused image, the method further comprises:
    obtaining the detail image of the third image according to the third image.
  9. The method according to claim 8, wherein the obtaining of the detail image of the third image according to the third image comprises:
    upsampling the downsampled third image according to the size of the third image to obtain an upsampled third image; and
    performing image subtraction on the upsampled third image and the third image to obtain the detail image of the third image.
  10. The method according to claim 7, wherein before the performing of exposure fusion on the downsampled third image and the fourth image to obtain the high-dynamic-range HDR image, the method further comprises:
    performing image registration on the fourth image with the downsampled third image as a reference to obtain a registered fourth image; and
    performing ghost correction on the registered fourth image according to the downsampled third image to obtain a corrected fourth image;
    the performing of exposure fusion on the downsampled third image and the fourth image to obtain the high-dynamic-range HDR image comprises:
    performing exposure fusion on the downsampled third image and the corrected fourth image to obtain the HDR image.
  11. The method according to claim 10, wherein the performing of ghost correction on the registered fourth image according to the downsampled third image to obtain the corrected fourth image comprises:
    lowering the brightness of the registered fourth image to the brightness of the downsampled third image to obtain a brightness-reduced fourth image;
    performing image differencing on the downsampled third image and the brightness-reduced fourth image to obtain an absolute difference value corresponding to each pixel of the brightness-reduced fourth image;
    taking the pixels whose absolute difference values are greater than a preset threshold as ghosts of the registered fourth image;
    raising the brightness of the downsampled third image according to the brightness of the registered fourth image to obtain a brightness-lifted third image; and
    replacing the ghosts of the registered fourth image with pixels of the brightness-lifted third image to obtain the corrected fourth image.
  12. The method according to claim 7, wherein the fusing of the upsampled HDR image with the detail image of the third image to obtain the fused image comprises:
    determining the sensitivity ISO of the camera sensor;
    determining a gain coefficient according to the ISO of the camera sensor;
    multiplying the pixel value of each pixel of the detail image of the third image by the gain coefficient to obtain a processed detail image; and
    performing image addition on the processed detail image and the upsampled HDR image to obtain the fused image.
  13. The method according to any one of claims 7-12, wherein after the fusing of the upsampled HDR image with the detail image of the third image to obtain the fused image, the method further comprises:
    performing spatial noise reduction on the fused image to obtain a spatially denoised image.
  14. The method according to any one of claims 1-13, wherein the first image is a full-size image.
  15. A terminal device, wherein the terminal device comprises:
    an acquiring module, configured to acquire at least one frame of a first image and at least one frame of a second image output alternately and continuously by a camera sensor, wherein the resolution of the first image is the same as the resolution corresponding to a current photo mode, the resolution of the first image is N times the resolution of the second image, and N is an integer greater than 1; the camera sensor outputs each frame of the first image with a first exposure parameter and each frame of the second image with a second exposure parameter, the first exposure parameter being greater than the second exposure parameter; and
    a fusion module, configured to perform image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
  16. The terminal device according to claim 15, wherein the terminal device further comprises:
    a determining module, configured to determine, before the acquiring module acquires the at least one frame of the first image and the at least one frame of the second image output alternately and continuously by the camera sensor, photographing parameters of the camera sensor according to a preview image output by the camera sensor, the photographing parameters comprising: the size of the first image, the number of frames of the first image, the number of frames of the second image, the exposure parameter of the first image, the exposure parameter of the second image, and the alternating order of the first and second images; and
    an instructing module, configured to instruct the camera sensor, according to the photographing parameters, to alternately and continuously output at least one frame of the first image and at least one frame of the second image.
  17. The terminal device according to claim 15, wherein the at least one frame of the first image comprises one frame of the first image, and the at least one frame of the second image comprises one frame of the second image;
    the fusion module is specifically configured to perform image fusion on the first image and the second image to obtain the fused image.
  18. The terminal device according to claim 17, wherein the terminal device further comprises:
    a first format conversion module, configured to convert, before the fusion module performs image fusion on the first image and the second image to obtain the fused image, the first image from the Bayer format to the YUV format to obtain a format-converted first image, and convert the second image from the Bayer format to the YUV format to obtain a format-converted second image;
    the fusion module is specifically configured to perform image fusion on the format-converted first image and the format-converted second image to obtain the fused image.
  19. The terminal device according to claim 15, wherein the at least one frame of the first image comprises multiple frames of the first image, and the at least one frame of the second image comprises multiple frames of the second image;
    the fusion module is specifically configured to perform temporal noise reduction on the multiple frames of the first image to obtain a third image, perform temporal noise reduction on the multiple frames of the second image to obtain a fourth image, and perform image fusion on the third image and the fourth image to obtain the fused image.
  20. The terminal device according to claim 19, wherein the terminal device further comprises:
    a second format conversion module, configured to convert, before the fusion module performs temporal noise reduction on the multiple frames of the first image to obtain the third image and on the multiple frames of the second image to obtain the fourth image, the multiple frames of the first image from the Bayer format to the YUV format to obtain multiple frames of the format-converted first image, and convert the multiple frames of the second image from the Bayer format to the YUV format to obtain multiple frames of the format-converted second image;
    the fusion module is specifically configured to perform temporal noise reduction on the multiple frames of the format-converted first image to obtain the third image, and perform temporal noise reduction on the multiple frames of the format-converted second image to obtain the fourth image.
  21. The terminal device according to claim 19, wherein the fusion module comprises:
    a downsampling unit, configured to downsample the third image according to the size of the fourth image to obtain a downsampled third image, the downsampled third image having the same size as the fourth image;
    an exposure fusion unit, configured to perform exposure fusion on the downsampled third image and the fourth image to obtain a high-dynamic-range HDR image;
    an upsampling unit, configured to upsample the HDR image according to the size of the third image to obtain an upsampled HDR image; and
    a fusion unit, configured to fuse the upsampled HDR image with a detail image of the third image to obtain the fused image, wherein the detail image of the third image comprises high-frequency components of the third image.
  22. The terminal device according to claim 21, wherein the fusion module further comprises:
    an obtaining unit, configured to obtain, before the fusion unit fuses the upsampled HDR image with the detail image of the third image to obtain the fused image, the detail image of the third image according to the third image.
  23. The terminal device according to claim 22, wherein
    the obtaining unit is specifically configured to upsample the downsampled third image according to the size of the third image to obtain an upsampled third image, and perform image subtraction on the upsampled third image and the third image to obtain the detail image of the third image.
  24. The terminal device according to claim 21, wherein the fusion module further comprises:
    an image registration unit, configured to perform, before the exposure fusion unit performs exposure fusion on the downsampled third image and the fourth image to obtain the high-dynamic-range HDR image, image registration on the fourth image with the downsampled third image as a reference to obtain a registered fourth image; and
    a ghost correction unit, configured to perform ghost correction on the registered fourth image according to the downsampled third image to obtain a corrected fourth image;
    the exposure fusion unit is specifically configured to perform exposure fusion on the downsampled third image and the corrected fourth image to obtain the HDR image.
  25. The terminal device according to claim 24, wherein
    the ghost correction unit is specifically configured to lower the brightness of the registered fourth image to the brightness of the downsampled third image to obtain a brightness-reduced fourth image; perform image differencing on the downsampled third image and the brightness-reduced fourth image to obtain an absolute difference value corresponding to each pixel of the brightness-reduced fourth image; take the pixels whose absolute difference values are greater than a preset threshold as ghosts of the registered fourth image; raise the brightness of the downsampled third image according to the brightness of the registered fourth image to obtain a brightness-lifted third image; and replace the ghosts of the registered fourth image with pixels of the brightness-lifted third image to obtain the corrected fourth image.
  26. The terminal device according to claim 21, wherein
    the fusion unit is specifically configured to determine the sensitivity ISO of the camera sensor; determine a gain coefficient according to the ISO of the camera sensor; multiply the pixel value of each pixel of the detail image of the third image by the gain coefficient to obtain a processed detail image; and perform image addition on the processed detail image and the upsampled HDR image to obtain the fused image.
  27. The terminal device according to any one of claims 21-26, wherein the terminal device further comprises:
    a spatial noise reduction module, configured to perform, after the fusion unit fuses the upsampled HDR image with the detail image of the third image to obtain the fused image, spatial noise reduction on the fused image to obtain a spatially denoised image.
  28. The terminal device according to any one of claims 15-27, wherein the first image is a full-size image.
  29. A terminal device, wherein the terminal device comprises a processor and a memory;
    wherein the memory is configured to store computer-executable program code, the program code comprising instructions; when the processor executes the instructions, the instructions cause the terminal device to perform the image processing method according to any one of claims 1-14.