CN109863742B - Image processing method and terminal device - Google Patents

Image processing method and terminal device

Info

Publication number
CN109863742B
CN109863742B (application CN201780065469.5A)
Authority
CN
China
Prior art keywords
image
terminal device
camera sensor
sampled
brightness
Prior art date
Legal status
Active
Application number
CN201780065469.5A
Other languages
Chinese (zh)
Other versions
CN109863742A (en)
Inventor
孙涛 (Sun Tao)
朱聪超 (Zhu Congchao)
杨永兴 (Yang Yongxing)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN109863742A
Application granted
Publication of CN109863742B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides an image processing method and a terminal device. The method includes: acquiring at least one frame of a first image and at least one frame of a second image that are alternately and continuously output by a camera sensor, where the resolution of the first image is the same as the resolution corresponding to the current photographing mode and is N times the resolution of the second image, N being an integer greater than 1; the camera sensor outputs each frame of the first image with a first exposure parameter and each frame of the second image with a second exposure parameter, the first exposure parameter being larger than the second exposure parameter; and performing image fusion on the at least one frame of the first image and the at least one frame of the second image to obtain a fused image. The image processing method and the terminal device improve the photographing effect of the terminal device under low illumination and thereby improve user experience.

Description

Image processing method and terminal device
Technical Field
The present application relates to communications technologies, and in particular, to an image processing method and a terminal device.
Background
As user requirements keep growing, terminal devices integrate more and more functions. Currently, most terminal devices on the market offer users functions such as making phone calls, sending short messages, browsing the internet, and taking photos.
A terminal device implements its photographing function through a camera sensor integrated on the terminal device. In the prior art, to avoid increasing the volume of the terminal device, the integrated camera sensor is usually small, so its photosensitive area is limited and its pixel size is small; as a result, the amount of light entering the camera sensor under low illumination is insufficient.
Therefore, when a user uses the terminal device to shoot a dimly lit scene, the captured image is poor (for example, it has high noise and low brightness), and user experience suffers.
Disclosure of Invention
The application provides an image processing method and a terminal device, to solve the technical problem in the prior art that when a user uses a terminal device to shoot a dimly lit scene, the image captured by the terminal device is poor and user experience is low.
In a first aspect, the present application provides an image processing method, including: acquiring at least one frame of a first image and at least one frame of a second image that are alternately and continuously output by a camera sensor, where the resolution of the first image is the same as the resolution corresponding to the current photographing mode and is N times the resolution of the second image, N being an integer greater than 1, the camera sensor outputs each frame of the first image with a first exposure parameter and each frame of the second image with a second exposure parameter, and the first exposure parameter is larger than the second exposure parameter; and performing image fusion on the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
With the image processing method provided by the first aspect, when the user takes a picture under low illumination using the terminal device, the terminal device may acquire at least one frame of the first image and at least one frame of the second image that are alternately and continuously output by the camera sensor. The first image mainly provides detail information of the current shooting scene, and the second image mainly provides brightness information of that scene. The terminal device can therefore perform image fusion on the at least one frame of the first image and the at least one frame of the second image, improving both the brightness and the sharpness of the fused image. When the fused image is presented to the user, the user sees an image with higher sharpness and brightness, the photographing effect of the terminal device under low illumination is improved, and user experience is improved accordingly.
Optionally, before acquiring the at least one frame of the first image and the at least one frame of the second image alternately and continuously output by the camera sensor, the method further includes:
determining photographing parameters of the camera sensor according to the preview image output by the camera sensor; the photographing parameters comprise: size of the first image, number of frames of the second image, exposure parameter of the first image, exposure parameter of the second image, alternating order of the first image and the second image;
and according to the photographing parameters, instructing the camera sensor to alternately and continuously output at least one frame of first image and at least one frame of second image.
With the image processing method provided by this possible embodiment, by instructing the camera sensor to output at least one frame of the first image and at least one frame of the second image alternately and continuously, the terminal device can reduce the relative local motion among the frames of the first image and among the frames of the second image. Meanwhile, instructing the camera sensor to output continuously shortens the shooting time, increases the shooting speed, and further improves user experience.
Optionally, the at least one frame of the first image includes one frame of the first image, and the at least one frame of the second image includes one frame of the second image; and performing image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain the fused image includes: performing image fusion on the first image and the second image to obtain the fused image.
Optionally, before fusing the first image and the second image to obtain the fused image, the method further includes: converting the first image from the Bayer format into the YUV format to obtain a format-converted first image, and converting the second image from the Bayer format into the YUV format to obtain a format-converted second image; and fusing the first image and the second image to obtain the fused image includes: fusing the format-converted first image with the format-converted second image to obtain the fused image.
Optionally, the at least one frame of the first image includes multiple frames of the first image, and the at least one frame of the second image includes multiple frames of the second image; and performing image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain the fused image includes: performing time-domain noise reduction on the multiple frames of the first image to obtain a third image, performing time-domain noise reduction on the multiple frames of the second image to obtain a fourth image, and performing image fusion on the third image and the fourth image to obtain the fused image.
Optionally, before performing time-domain noise reduction on the multiple frames of the first image to obtain the third image and on the multiple frames of the second image to obtain the fourth image, the method further includes: converting the multiple frames of the first image from the Bayer format into the YUV format to obtain multiple frames of format-converted first images, and converting the multiple frames of the second image from the Bayer format into the YUV format to obtain multiple frames of format-converted second images; and the time-domain noise reduction then includes: performing time-domain noise reduction on the format-converted first images to obtain the third image, and performing time-domain noise reduction on the format-converted second images to obtain the fourth image.
Optionally, fusing the third image and the fourth image to obtain the fused image includes: downsampling the third image according to the size of the fourth image to obtain a downsampled third image, where the size of the downsampled third image is the same as the size of the fourth image; performing exposure fusion on the downsampled third image and the fourth image to obtain a high dynamic range (HDR) image; upsampling the HDR image according to the size of the third image to obtain an upsampled HDR image; and fusing the upsampled HDR image with the detail image of the third image to obtain the fused image, where the detail image of the third image contains the high-frequency components of the third image.
With the image processing method provided by this possible embodiment, when the user takes a picture under low illumination using the terminal device, the terminal device obtains a third image, which mainly provides detail information of the current shooting scene, and a fourth image, which mainly provides brightness information of the scene, by performing time-domain noise reduction on the multiple frames of the first image and of the second image alternately and continuously output by the camera sensor. The third image and the fourth image can then be fused, improving both the brightness and the sharpness of the fused image. When the fused image is presented to the user, the user sees an image with higher sharpness and brightness, the photographing effect of the terminal device under low illumination is improved, and user experience is improved accordingly.
Optionally, before the upsampled HDR image is fused with the detail image of the third image to obtain the fused image, the method further includes: acquiring the detail image of the third image according to the third image.
With the image processing method provided by this possible embodiment, the terminal device can acquire the detail image of the third image, which contains the third image's high-frequency components, so that after fusing the upsampled HDR image with this detail image, the detail information of the whole shooting scene is backfilled into the upsampled HDR image, improving the sharpness of the upsampled HDR image.
Illustratively, acquiring the detail image of the third image according to the third image includes: upsampling the downsampled third image according to the size of the third image to obtain an upsampled third image; and performing image subtraction on the third image and the upsampled third image to obtain the detail image of the third image.
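A minimal sketch of this detail-extraction step is given below, assuming uint8 NumPy arrays and OpenCV for resizing; the function name and the interpolation choice are illustrative, not fixed by the patent.

```python
import cv2
import numpy as np

def detail_image(third: np.ndarray, third_ds: np.ndarray) -> np.ndarray:
    """Recover the high-frequency detail layer of the third image.

    Upsampling the downsampled copy back to full size yields a blurred,
    low-frequency version of the third image; subtracting it from the
    original leaves the high-frequency detail described above.
    """
    h, w = third.shape[:2]
    # Upsample the downsampled third image to the size of the third image.
    third_up = cv2.resize(third_ds, (w, h), interpolation=cv2.INTER_LINEAR)
    # Signed subtraction so that negative detail values are preserved.
    return third.astype(np.int16) - third_up.astype(np.int16)
```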
Optionally, before performing exposure fusion on the downsampled third image and the fourth image to obtain the HDR image, the method further includes: performing image registration on the fourth image with the downsampled third image as a reference to obtain a registered fourth image; and performing ghost correction on the registered fourth image according to the downsampled third image to obtain a corrected fourth image. Performing exposure fusion on the downsampled third image and the fourth image to obtain the HDR image then includes: performing exposure fusion on the downsampled third image and the corrected fourth image to obtain the HDR image.
With the image processing method provided by this possible embodiment, when the user takes a picture under low illumination using the terminal device, after the terminal device obtains the third image, which mainly provides detail information of the current shooting scene, and the fourth image, which mainly provides brightness information of the scene, by performing time-domain noise reduction on the multiple frames of the first image and of the second image alternately and continuously output by the camera sensor, it may perform image registration and ghost correction on the fourth image, with the downsampled third image as the reference, before fusing the two. When the terminal device then fuses the registered and ghost-corrected fourth image with the downsampled third image, the fusion effect is better, which further improves the sharpness of the fused image obtained by the terminal device.
Illustratively, performing ghost correction on the registered fourth image according to the downsampled third image to obtain a corrected fourth image includes: reducing the brightness of the registered fourth image to the brightness of the downsampled third image to obtain a brightness-reduced fourth image; performing image difference calculation on the downsampled third image and the brightness-reduced fourth image to obtain the absolute difference corresponding to each pixel of the brightness-reduced fourth image; taking the pixels whose absolute difference is larger than a preset threshold as the ghost of the registered fourth image; raising the brightness of the downsampled third image to the brightness of the registered fourth image to obtain a brightness-raised third image; and replacing the ghost of the registered fourth image with the corresponding pixels of the brightness-raised third image to obtain the corrected fourth image.
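The following sketch follows these steps on single-channel luma planes of equal size. The brightness matching uses a simple mean-ratio gain and the threshold value is illustrative; the patent fixes neither, so both are assumptions.

```python
import numpy as np

def ghost_correct(third_ds: np.ndarray, fourth_reg: np.ndarray,
                  threshold: float = 10.0) -> np.ndarray:
    t = third_ds.astype(np.float32)
    f = fourth_reg.astype(np.float32)
    # Lower the registered fourth image to the third image's brightness
    # (mean-ratio gain is an assumed matching method).
    gain = t.mean() / max(f.mean(), 1e-6)
    f_dark = f * gain
    # Pixels whose absolute difference exceeds the threshold are ghosts.
    ghost = np.abs(t - f_dark) > threshold
    # Raise the third image to the fourth image's brightness level.
    t_bright = np.clip(t / max(gain, 1e-6), 0, 255)
    corrected = f.copy()
    corrected[ghost] = t_bright[ghost]   # replace ghost pixels
    return corrected.astype(np.uint8)
```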
Illustratively, fusing the upsampled HDR image with the detail image of the third image to obtain the fused image includes: determining the sensitivity (ISO) of the camera sensor; determining a gain coefficient according to the ISO of the camera sensor; multiplying the pixel value of each pixel of the detail image of the third image by the gain coefficient to obtain a processed detail image; and performing image addition on the processed detail image and the upsampled HDR image to obtain the fused image.
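A sketch of this detail backfill follows, assuming NumPy arrays of equal size. The ISO-to-gain table is invented for illustration (the patent does not specify the mapping); it is chosen so that detail is damped at high ISO, where the detail layer is dominated by noise.

```python
import numpy as np

def gain_from_iso(iso: int) -> float:
    # Hypothetical mapping; the patent leaves the gain coefficient open.
    if iso <= 400:
        return 1.0
    if iso <= 1000:
        return 0.8
    return 0.6

def backfill_detail(hdr_up: np.ndarray, detail: np.ndarray, iso: int) -> np.ndarray:
    """Scale the detail layer by the ISO-dependent gain and add it
    back onto the upsampled HDR image."""
    fused = hdr_up.astype(np.float32) + gain_from_iso(iso) * detail.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```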
Optionally, after the upsampled HDR image is fused with the detail image of the third image to obtain the fused image, the method further includes: performing spatial-domain noise reduction on the fused image to obtain a spatially denoised image.
With the image processing method provided by this possible embodiment, the noise of the image can be further reduced by performing spatial noise reduction on the fused image.
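The patent does not name a particular spatial-domain denoiser; as one plausible choice, the sketch below applies OpenCV's non-local-means filter to the fused image (the file names are hypothetical).

```python
import cv2

fused = cv2.imread("fused.png")  # fused image from the previous step
# Non-local means: filter strengths 10/10, 7x7 template, 21x21 search window.
denoised = cv2.fastNlMeansDenoisingColored(fused, None, 10, 10, 7, 21)
cv2.imwrite("denoised.png", denoised)
```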
Illustratively, the first image is a full-size image.
In a second aspect, the present application provides a terminal device, including: an acquisition module, configured to acquire at least one frame of a first image and at least one frame of a second image that are alternately and continuously output by a camera sensor, where the resolution of the first image is the same as the resolution corresponding to the current photographing mode and is N times the resolution of the second image, N being an integer greater than 1, the camera sensor outputs each frame of the first image with a first exposure parameter and each frame of the second image with a second exposure parameter, and the first exposure parameter is larger than the second exposure parameter; and a fusion module, configured to perform image fusion on the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
Optionally, the terminal device further includes: a determining module, configured to determine photographing parameters of the camera sensor according to a preview image output by the camera sensor before the acquisition module acquires the at least one frame of the first image and the at least one frame of the second image alternately and continuously output by the camera sensor, where the photographing parameters include the size of the first image, the number of frames of the second image, the exposure parameter of the first image, the exposure parameter of the second image, and the alternating order of the first image and the second image; and an indication module, configured to instruct the camera sensor, according to the photographing parameters, to alternately and continuously output the at least one frame of the first image and the at least one frame of the second image.
Optionally, the at least one frame of the first image includes one frame of the first image, and the at least one frame of the second image includes one frame of the second image; and the fusion module is specifically configured to perform image fusion on the first image and the second image to obtain the fused image.
Optionally, the terminal device further includes: a first format conversion module, configured to, before the fusion module fuses the first image and the second image to obtain the fused image, convert the first image from the Bayer format into the YUV format to obtain a format-converted first image, and convert the second image from the Bayer format into the YUV format to obtain a format-converted second image; and the fusion module is specifically configured to fuse the format-converted first image and the format-converted second image to obtain the fused image.
Optionally, the at least one frame of the first image includes multiple frames of the first image, and the at least one frame of the second image includes multiple frames of the second image; and the fusion module is specifically configured to perform time-domain noise reduction on the multiple frames of the first image to obtain a third image, perform time-domain noise reduction on the multiple frames of the second image to obtain a fourth image, and perform image fusion on the third image and the fourth image to obtain the fused image.
Optionally, the terminal device further includes: a second format conversion module, configured to, before the fusion module performs time-domain noise reduction on the multiple frames of the first image to obtain the third image and on the multiple frames of the second image to obtain the fourth image, convert the multiple frames of the first image from the Bayer format into the YUV format to obtain multiple frames of format-converted first images, and convert the multiple frames of the second image from the Bayer format into the YUV format to obtain multiple frames of format-converted second images; and the fusion module is specifically configured to perform time-domain noise reduction on the format-converted first images to obtain the third image, and perform time-domain noise reduction on the format-converted second images to obtain the fourth image.
Optionally, the fusion module includes: a downsampling unit, configured to downsample the third image according to the size of the fourth image to obtain a downsampled third image, where the size of the downsampled third image is the same as the size of the fourth image; an exposure fusion unit, configured to perform exposure fusion on the downsampled third image and the fourth image to obtain a high dynamic range (HDR) image; an upsampling unit, configured to upsample the HDR image according to the size of the third image to obtain an upsampled HDR image; and a fusion unit, configured to fuse the upsampled HDR image with the detail image of the third image to obtain the fused image, where the detail image of the third image contains the high-frequency components of the third image.
Optionally, the fusion module further includes: an acquiring unit, configured to acquire the detail image of the third image according to the third image before the fusion unit fuses the upsampled HDR image with the detail image of the third image to obtain the fused image.
Illustratively, the acquiring unit is specifically configured to upsample the downsampled third image according to the size of the third image to obtain an upsampled third image, and perform image subtraction on the third image and the upsampled third image to obtain the detail image of the third image.
Optionally, the fusion module further includes: an image registration unit, configured to, before the exposure fusion unit performs exposure fusion on the downsampled third image and the fourth image to obtain the HDR image, perform image registration on the fourth image with the downsampled third image as a reference to obtain a registered fourth image; and a ghost correction unit, configured to perform ghost correction on the registered fourth image according to the downsampled third image to obtain a corrected fourth image; and the exposure fusion unit is specifically configured to perform exposure fusion on the downsampled third image and the corrected fourth image to obtain the HDR image.
Illustratively, the ghost correction unit is specifically configured to reduce the brightness of the registered fourth image to the brightness of the downsampled third image to obtain a brightness-reduced fourth image; perform image difference calculation on the downsampled third image and the brightness-reduced fourth image to obtain the absolute difference corresponding to each pixel of the brightness-reduced fourth image; take the pixels whose absolute difference is larger than a preset threshold as the ghost of the registered fourth image; raise the brightness of the downsampled third image to the brightness of the registered fourth image to obtain a brightness-raised third image; and replace the ghost of the registered fourth image with the corresponding pixels of the brightness-raised third image to obtain the corrected fourth image.
Exemplarily, the fusion unit is specifically configured to determine the sensitivity (ISO) of the camera sensor; determine a gain coefficient according to the ISO of the camera sensor; multiply the pixel value of each pixel of the detail image of the third image by the gain coefficient to obtain a processed detail image; and perform image addition on the processed detail image and the upsampled HDR image to obtain the fused image.
Optionally, the terminal device further includes:
a spatial-domain noise reduction module, configured to, after the fusion unit fuses the upsampled HDR image with the detail image of the third image to obtain the fused image, perform spatial-domain noise reduction on the fused image to obtain a spatially denoised image.
Illustratively, the first image is a full-size image.
For the beneficial effects of the terminal device provided by the second aspect and each of its possible implementations, refer to the beneficial effects of the first aspect and its possible implementations; details are not repeated here.
In a third aspect, the present application provides a terminal device, where the terminal device includes: a processor, a memory;
wherein the memory is configured to store computer-executable program code comprising instructions; when the processor executes the instructions, the instructions cause the terminal device to perform the image processing method according to any one of the first aspect and its possible embodiments.
For the beneficial effects of the terminal device provided by the third aspect, refer to the beneficial effects of the first aspect and its possible embodiments; details are not repeated here.
A fourth aspect of the present application provides a terminal device comprising at least one processing element (or chip) for performing the method of the first aspect above.
A fifth aspect of the present application provides a program for performing the method of the above first aspect when executed by a processor.
A sixth aspect of the application provides a program product, e.g. a computer readable storage medium, comprising the program of the fifth aspect.
A seventh aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the first aspect described above.
According to the image processing method and the terminal device provided by the application, when a user takes a picture under low illumination using the terminal device, the terminal device may acquire at least one frame of the first image and at least one frame of the second image that are alternately and continuously output by the camera sensor. The first image mainly provides detail information of the current shooting scene, and the second image mainly provides brightness information of that scene. The terminal device can therefore perform image fusion on the at least one frame of the first image and the at least one frame of the second image, improving both the brightness and the sharpness of the fused image. When the fused image is presented to the user, the user sees an image with higher sharpness and brightness, the photographing effect of the terminal device under low illumination is improved, and user experience is improved accordingly.
Drawings
Fig. 1 is a schematic diagram of a terminal device in the prior art;
FIG. 2 is a schematic diagram of the working principle of the binning mode of a camera sensor in the prior art;
fig. 3 is a schematic flowchart of an image processing method provided in the present application;
FIG. 4 is a schematic flow chart of another image processing method provided in the present application;
FIG. 5 is a schematic diagram of a camera sensor provided herein;
FIG. 6 is a schematic flow chart of another image processing method provided in the present application;
FIG. 7 is a schematic flow chart of another image processing method provided in the present application;
FIG. 8 is a schematic flow chart of another image processing method provided in the present application;
FIG. 9 is a schematic illustration of a first image shown in the present application;
FIG. 10 is a schematic illustration of a second image shown in the present application;
FIG. 11 is a schematic illustration of a spatially denoised image according to the present application;
FIG. 12 is a schematic flow chart diagram illustrating another image processing method provided herein;
fig. 13 is a schematic structural diagram of a terminal device provided in the present application;
fig. 14 is a schematic structural diagram of another terminal device provided in the present application;
fig. 15 is a schematic structural diagram of another terminal device provided in the present application;
fig. 16 is a schematic structural diagram of another terminal device provided in the present application;
fig. 17 is a schematic structural diagram of another terminal device provided in the present application;
fig. 18 is a schematic structural diagram of another terminal device provided in the present application;
fig. 19 is a structural block diagram of a terminal device provided in the present application, taking a mobile phone as an example.
Detailed Description
In the following, some terms in the present application are explained to facilitate understanding by those skilled in the art:
A terminal: it may be wireless or wireline, and may be a device that provides voice and/or other data connectivity to a user, a handheld device with wireless connection capability, or another processing device connected to a wireless modem. Wireless terminals may be mobile terminals, such as mobile telephones (or "cellular" telephones) and computers with mobile terminals, for example portable, pocket-sized, handheld, computer-embedded, or vehicle-mounted mobile devices, which communicate with one or more core networks via a Radio Access Network (RAN) and exchange voice and/or data with the RAN. Examples of such devices include Personal Communication Service (PCS) phones, cordless phones, Session Initiation Protocol (SIP) phones, Wireless Local Loop (WLL) stations, and Personal Digital Assistants (PDAs). A wireless terminal may also be referred to as a system, a Subscriber Unit, a Subscriber Station, a Mobile Station, a Remote Station, a Remote Terminal, an Access Terminal, a User Terminal, a User Agent, or a User Device or User Equipment, which is not limited herein.
In the present application, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Fig. 1 is a schematic diagram of a terminal device in the prior art. In the prior art, most terminal devices realize the photographing function through a camera sensor integrated on the terminal device. The camera sensor may be a front camera sensor or a rear camera sensor of the terminal device. Fig. 1 takes a mobile phone as an example of the terminal device.
As shown in fig. 1, because the camera sensor integrated on the terminal device is small, its photosensitive area is limited and its pixel size is small, so the amount of light entering the camera sensor under low illumination is insufficient and the images it outputs under low illumination are poor (for example, high noise and low brightness). Therefore, when a user uses the terminal device to shoot a dimly lit scene (for example, a night scene), the image output by the camera sensor of the terminal device is poor, and user experience is low when the terminal device presents the image to the user.
At present, several solutions exist, in particular:
the first scheme is as follows: the terminal equipment promotes the luminance of shooting the scene through the light filling lamp on the terminal equipment to increase the light inlet amount of camera sensor, and then improve the luminance of the image of camera sensor output.
Specifically, all be provided with the light filling lamp on most terminal equipment, for example: a rear flash, a front Light Emitting Diode (LED) lamp, and the like. Therefore, when the terminal device takes a picture by the camera sensor, the light supplementing lamp can be used for supplementing light to the shot scene and improving the brightness of the shot scene, so that the light inlet quantity of the camera sensor is improved, and the brightness of an image output by the camera sensor is improved.
However, the light supplement range of the light supplement lamp is limited, so that the light supplement lamp can only supplement light for a near scene and cannot supplement light for a far scene, a far scene part in an image output by the camera sensor is still dark, the effect of the image presented to a user by the terminal device is still poor, and user experience is low.
The second scheme is as follows: the terminal device improves the brightness of the image output by the camera sensor by operating the camera sensor in binning mode.
Specifically, fig. 2 is a schematic diagram of the working principle of the binning mode of a camera sensor in the prior art. As shown in fig. 2, in binning mode, a plurality of adjacent same-color pixels in the image captured by the camera sensor are merged and used as one pixel: several adjacent green (G) pixels are merged into one pixel, several adjacent red (R) pixels are merged into one pixel, and several adjacent blue (B) pixels are merged into one pixel. The adjacent pixels may be adjacent in the horizontal direction, in the vertical direction, or in both.
Fig. 2 shows merging 2 horizontally adjacent and 2 vertically adjacent pixels into one pixel; for ease of understanding, the 4 pixels merged into the same output pixel are marked with the same lines in fig. 2. Taking the image on the left of fig. 2 as the image captured by the camera sensor, when the camera sensor operates in binning mode it merges the pixels of every 2 horizontally adjacent and 2 vertically adjacent pixels to obtain the image shown on the right of fig. 2, which it then outputs; this output may be referred to as a binned image. Because the pixels of 4 identical pixels are merged in fig. 2, the size of the binned image is reduced to one quarter of the left image (i.e., the original), and its resolution is likewise reduced to one quarter.
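As a rough sketch of this 2×2 merge, the function below averages each 2×2 neighbourhood of a single colour plane, quartering the resolution. On a real Bayer sensor, binning combines same-colour pixels within the mosaic; operating on an already separated colour plane is a simplifying assumption.

```python
import numpy as np

def bin_2x2(plane: np.ndarray) -> np.ndarray:
    """Merge every 2x2 block of one colour plane into a single pixel."""
    h, w = plane.shape
    # Crop to even dimensions, then group pixels into 2x2 blocks.
    blocks = plane[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    # Average (binning may also sum) the 4 pixels of each block.
    return blocks.mean(axis=(1, 3)).astype(plane.dtype)
```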
Merging pixels in this way enlarges the effective photosensitive area per output pixel and increases sensitivity to light in dark regions, so it can raise the brightness of the image the camera sensor outputs under low illumination. However, while the binning mode increases image brightness, it also lowers the resolution of the merged image, so high-frequency information is lost (i.e., image details are lost) and sharpness drops. For example, merging 4 adjacent identical pixels as shown in fig. 2 reduces the resolution of the right image in fig. 2 to one quarter of the left image, so the sharpness of the image output by the camera sensor decreases, the image the terminal device presents to the user is still poor, and user experience is low.
As described above, with either of the above schemes, the image the terminal device presents to the user is still poor and user experience is low. In view of these problems, the present application provides an image processing method to solve the technical problem in the prior art that when a user uses a terminal device to shoot a dimly lit scene, the image the terminal device presents to the user is poor. The technical solution of the present application is explained below with some examples. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described again in some embodiments.
Fig. 3 is a schematic flowchart of an image processing method provided in the present application. As shown in fig. 3, the method may include:
s101, acquiring at least one frame of first image and at least one frame of second image which are alternately and continuously output by the camera sensor.
Specifically, in the present application, when a user takes a picture under low illumination using the terminal device, that is, when the terminal device is photographing under low illumination (the scene currently photographed by the terminal device is dark), the terminal device may acquire at least one frame of a first image and at least one frame of a second image that are alternately and continuously output by the camera sensor. That is, the first image and the second image acquired by the terminal device are output by the same camera sensor of the terminal device while shooting the same scene, so the first image and the second image contain the same current shooting scene. The camera sensor may be a front camera sensor or a rear camera sensor of the terminal device.
The resolution of the first image is the same as the resolution corresponding to the photographing mode currently selected by the user on the terminal device. The resolution of the first image is N times that of the second image, where N is an integer greater than 1; that is, the size of the first image is N times the size of the second image. The size of the first image matches the resolution corresponding to the current photographing mode, so the first image may also be called a full-size image at that resolution. The second image is a binned image relative to the first image, i.e., the second image is obtained by pixel merging. Therefore, the sharpness of the first image is higher than that of the second image, but its brightness is lower. The first image mainly provides detail information of the current shooting scene (i.e., the high-frequency components of the first image), and the second image mainly provides brightness information of the current shooting scene (i.e., the low-frequency components of the second image).
This embodiment does not limit the manner in which the camera sensor alternately and continuously outputs the at least one frame of the first image and the at least one frame of the second image. For example, the camera sensor may alternate by outputting a frame of the first image and then a frame of the second image, or by outputting a frame of the second image and then a frame of the first image. Because the second image is a binned image relative to the first image, i.e., an image obtained by pixel merging, its brightness is higher than that of the first image. To avoid overexposure of the at least one frame of the second image output by the camera sensor, the camera sensor may output the first image and the second image with different exposure parameters, for example outputting each frame of the first image with a first exposure parameter and each frame of the second image with a second exposure parameter smaller than the first. The specific values of the first and second exposure parameters may be determined according to the current sensitivity (ISO) of the camera sensor, and are not described further.
S102, carrying out image fusion according to the at least one frame of first image and the at least one frame of second image to obtain a fused image.
Specifically, after acquiring the at least one frame of the first image and the at least one frame of the second image, the terminal device may perform image fusion on them, that is, fuse the at least one frame of the sharper first image with the at least one frame of the brighter second image. In this way, the sharpness of the first image and the brightness of the second image can be combined in one frame, so both the brightness and the sharpness of the fused image are improved after the terminal device performs image fusion.
According to the image processing method provided by the application, when a user takes a picture under low illumination using the terminal device, the terminal device can acquire at least one frame of the first image and at least one frame of the second image that are alternately and continuously output by the camera sensor. The first image mainly provides detail information of the current shooting scene, and the second image mainly provides brightness information of that scene. The terminal device can therefore perform image fusion on the at least one frame of the first image and the at least one frame of the second image, improving both the brightness and the sharpness of the fused image. When the fused image is presented to the user, the user sees an image with higher sharpness and brightness, the photographing effect of the terminal device under low illumination is improved, and user experience is improved accordingly.
Fig. 4 is a schematic flowchart of another image processing method provided in the present application. The embodiment relates to how the terminal device instructs the camera sensor to alternately and continuously output at least one frame of the first image and at least one frame of the second image. As shown in fig. 4, before S101, the method may further include:
s201, determining photographing parameters of the camera sensor according to the preview image output by the camera sensor.
Specifically, in this embodiment, when a user takes a picture using the terminal device, if the terminal device determines, by analyzing the preview image currently output by the camera sensor, that the camera sensor is currently shooting under low illumination, the terminal device may determine the photographing parameters of the camera sensor. The photographing parameters are the parameters the terminal device needs in order to perform the photographing operation when the user takes a picture. They may include: the size of the first image, the number of frames of the second image, the exposure parameters of the first image, the exposure parameters of the second image, the alternating order of the first image and the second image, and so on.
The alternating order of the first image and the second image may be preset, or may be randomly assigned to the camera sensor by the terminal device. Fig. 5 is a schematic diagram of a camera sensor provided in the present application; it shows the camera sensor outputting 4 frames of the first image and 4 frames of the second image in the alternating order of first image then second image. Those skilled in the art will understand that the camera sensor may instead output the 4 frames of the first image and the 4 frames of the second image in the order of second image then first image; this is not limited.
The terminal device can determine the resolution and size of the first image from the resolution corresponding to the photographing mode currently selected by the user on the terminal, and then determine the resolution and size of the second image from the multiple N between the resolution of the first image and that of the second image.
The terminal device can determine the current ISO of the camera sensor from the preview image output by the camera sensor, and then determine the number of frames of the first image and of the second image from a correspondence between ISO and those frame numbers. Note that the darker the current shooting scene, the higher the ISO and the higher the noise of the image output by the camera sensor, so the terminal device needs more frames for image processing, and the frame numbers corresponding to a higher ISO increase. For example, the correspondence may be: at ISO 500, 2 frames of the first image and 2 frames of the second image; at ISO 1000, 3 frames of each; and so on. Although this example uses equal frame numbers for the first and second images, those skilled in the art will understand that the two frame numbers may differ.
The terminal device may determine the exposure parameters of the first image and of the second image according to the brightness of the preview image output by the camera sensor, using an existing calculation method that is not described again here. The exposure parameters may include ISO, exposure time, frame rate, and so on.
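The sketch below gathers these photographing parameters into one structure. All field names, sizes, exposure values, and the ISO-to-frame-count breakpoints are illustrative assumptions following the example above, not values fixed by the patent.

```python
from dataclasses import dataclass

@dataclass
class ShotParams:
    first_size: tuple[int, int]   # full size at the current mode's resolution
    num_first: int                # frames of the first (full-size) image
    num_second: int               # frames of the second (binned) image
    exposure_first: float         # longer exposure for the first image (s)
    exposure_second: float        # shorter exposure for the second image (s)
    first_image_first: bool       # alternating order

def frames_for_iso(iso: int) -> int:
    # Darker scene -> higher ISO -> more frames averaged to suppress noise.
    if iso <= 500:
        return 2
    if iso <= 1000:
        return 3
    return 4

iso = 800
params = ShotParams(first_size=(4000, 3000),
                    num_first=frames_for_iso(iso), num_second=frames_for_iso(iso),
                    exposure_first=1 / 15, exposure_second=1 / 60,
                    first_image_first=True)
```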
S202, according to the photographing parameters, the camera sensor is instructed to alternately and continuously output at least one frame of first image and at least one frame of second image.
Specifically, after the terminal device obtains the photographing parameters, the camera sensor may be instructed to alternately and continuously output at least one frame of the first image and at least one frame of the second image according to the photographing parameters. In specific implementation, the terminal device may send the exposure parameter corresponding to each frame of image and the size of the frame of image to the camera sensor according to the alternating sequence before the camera sensor outputs each frame of image, so that the camera sensor may correctly and alternately output each frame of the first image and the second image.
Because the camera sensor takes a relatively long time to output multiple frames of the first image and multiple frames of the second image, local motion may occur in the scene during output. Therefore, to prevent differences among the frames of the first image and among the frames of the second image from complicating their subsequent processing, the terminal device instructs the camera sensor to output the frames alternately and continuously, which reduces the relative local motion among the frames of the first image and among the frames of the second image. Meanwhile, instructing the camera sensor to output continuously shortens the shooting time, increases the shooting speed, and further improves user experience.
It should be noted that because the resolution of the first image differs from that of the second image, to keep the picture the user sees on the screen consistent while photographing, each frame of the first image may be displayed on the screen while the second image is not displayed, which improves user experience.
Those skilled in the art will understand that the terminal device may implement steps S201 to S202 by software, hardware, or a combination of the two. The hardware may be, for example, an image signal processor (ISP), and the software may be, for example, an automatic exposure (AE) module or the like.
According to the image processing method provided by the application, when a user takes a picture under low illumination using the terminal device, the terminal device can determine the photographing parameters of the camera sensor according to the preview image output by the camera sensor, and then, according to those parameters, instruct the camera sensor to alternately and continuously output at least one frame of the first image and at least one frame of the second image. The first image mainly provides detail information of the current shooting scene, and the second image mainly provides brightness information of that scene. The terminal device can therefore perform image fusion on the at least one frame of the first image and the at least one frame of the second image, improving both the brightness and the sharpness of the fused image. When the fused image is presented to the user, the user sees an image with higher sharpness and brightness, the photographing effect of the terminal device under low illumination is improved, and user experience is improved accordingly.
Further, on the basis of the foregoing embodiment, this embodiment describes how the terminal device performs image fusion on the at least one frame of the first image and the at least one frame of the second image to obtain the fused image. The foregoing S102 may cover the following two cases:
in the first case: the terminal device acquires a frame of first image and a frame of second image which are alternately and continuously output by the camera sensor.
Specifically, if the terminal device acquires only one frame of the first image and one frame of the second image alternately and continuously output by the camera sensor, it may directly fuse that frame of the first image with that frame of the second image to obtain the fused image. Optionally, in another implementation of the present application, the terminal device may first perform format conversion (that is, a demosaicing operation) on the frame of the first image and the frame of the second image: it converts the first image from the Bayer format into the YUV format to obtain a format-converted first image, converts the second image from the Bayer format into the YUV format to obtain a format-converted second image, and then fuses the format-converted first image with the format-converted second image to obtain the fused image.
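As a sketch of this format conversion with OpenCV, assuming a BGGR Bayer mosaic (the actual pattern depends on the sensor) and a hypothetical input file:

```python
import cv2

raw = cv2.imread("frame_bayer.png", cv2.IMREAD_GRAYSCALE)  # raw Bayer frame
# Demosaic (Bayer -> BGR), then convert to YUV.
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
```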
In the second case: the terminal equipment acquires a plurality of frames of first images and a plurality of frames of second images which are alternately and continuously output by the camera sensor.
Specifically, if the terminal device acquires multiple frames of first images and multiple frames of second images that are alternately and continuously output by the camera sensor, the terminal device may directly perform image fusion on the multiple frames of first images and the multiple frames of second images.
Optionally, the terminal device may instead perform time-domain noise reduction on the multiple frames of the first image to obtain a third image, perform time-domain noise reduction on the multiple frames of the second image to obtain a fourth image, and then fuse the third image with the fourth image to obtain the fused image. Because little light enters the camera sensor under low illumination, the images it outputs are noisy. By performing time-domain noise reduction on the multiple frames of the first image and on the multiple frames of the second image separately, that is, by averaging co-located pixels across frames in the time domain, the terminal device reduces the noise of the resulting third and fourth images, so the fused image obtained by fusing the third and fourth images also has low noise.
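A minimal sketch of such time-domain noise reduction follows, assuming the frames are already mutually aligned (a real pipeline would register them first):

```python
import numpy as np

def temporal_denoise(frames: list[np.ndarray]) -> np.ndarray:
    """Average co-located pixels across frames; independent noise then
    shrinks roughly by 1/sqrt(number of frames)."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)
```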
In another implementation of the present application, the terminal device may first perform format conversion on the multiple frames of the first image and the multiple frames of the second image, converting the multiple frames of the first image from the Bayer format into the YUV format to obtain multiple frames of format-converted first images, and converting the multiple frames of the second image from the Bayer format into the YUV format to obtain multiple frames of format-converted second images. The terminal device can then perform time-domain noise reduction on the format-converted first images to obtain the third image, perform time-domain noise reduction on the format-converted second images to obtain the fourth image, and fuse the third image with the fourth image to obtain the fused image.
According to the image processing method provided by the application, when a user takes a picture under low illumination using the terminal device, the terminal device can acquire at least one frame of the first image and at least one frame of the second image that are alternately and continuously output by the camera sensor. The first image mainly provides detail information of the current shooting scene, and the second image mainly provides brightness information of that scene. The terminal device can therefore perform image fusion on the at least one frame of the first image and the at least one frame of the second image, improving both the brightness and the sharpness of the fused image. When the fused image is presented to the user, the user sees an image with higher sharpness and brightness, the photographing effect of the terminal device under low illumination is improved, and user experience is improved accordingly.
Fig. 6 is a schematic flowchart of another image processing method provided in the present application. In this embodiment, the third image and the fourth image are taken as an example to describe a process of image fusion performed by the terminal device. It can be understood by those skilled in the art that if the terminal device acquires a frame of the first image and a frame of the second image, the terminal device may also perform image fusion on the frame of the first image and the frame of the second image in the following manner, and the implementation manner and the implementation principle thereof are similar, and thus are not described again.
In this embodiment, the third image is an image obtained by performing time-domain noise reduction on the multiple frames of first images, and the fourth image is an image obtained by performing time-domain noise reduction on the multiple frames of second images. Therefore, in the image fusion process, the third image is mainly used for providing detail information of the current shooting scene (i.e., the high-frequency component of the first image), and the fourth image is mainly used for providing brightness information of the current shooting scene (i.e., the low-frequency component of the second image). As shown in fig. 6, the method includes:
And S301, according to the size of the fourth image, down-sampling the third image to obtain a down-sampled third image.
Specifically, the third image acquired by the terminal device is an image obtained by performing time-domain noise reduction on the multiple frames of the first image, and the fourth image is an image obtained by performing time-domain noise reduction on the multiple frames of the second image. Therefore, the third image has the same size as the first image, and the fourth image has the same size as the second image, so that the third image and the fourth image have different sizes. In order to perform exposure fusion on the third image and the fourth image, after acquiring the third image and the fourth image, the terminal device may perform downsampling on the third image according to the size of the fourth image to reduce the size of the third image, so that the size of the downsampled third image is the same as the size of the fourth image.
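As a sketch, the size matching of S301 can be expressed with OpenCV's resize; the helper name is illustrative, and INTER_AREA is a common choice when shrinking because it averages source pixels:

```python
import cv2

def downsample_to(img, target):
    # Resize `img` so it has the same height and width as `target`;
    # note that cv2.resize takes the destination size as (width, height).
    h, w = target.shape[:2]
    return cv2.resize(img, (w, h), interpolation=cv2.INTER_AREA)
```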
S302, exposure fusion is carried out on the down-sampled third image and the fourth image, and an HDR image is obtained.
Specifically, after obtaining the down-sampled third image, the terminal device may perform exposure fusion on two frames of images having the same size (i.e., the down-sampled third image and the fourth image). In other words, the down-sampled third image with high definition and the fourth image with high brightness are subjected to exposure fusion. In this way, the definition of the down-sampled third image and the brightness of the fourth image can be fused into one frame of image, so that the overall brightness of the High-Dynamic Range (HDR) image obtained after the terminal device performs exposure fusion is improved.
The present embodiment does not limit the implementation manner in which the terminal device performs exposure fusion on the down-sampled third image and the fourth image. For example, the terminal device may adopt an exposure fusion mode of "calculating parameters with image brightness as weights". In an implementable manner, the terminal device may assign a weight to each pixel point of the down-sampled third image with reference to the central value 128 of the image brightness: among the pixels with brightness lower than 128, the lower the brightness of a pixel, the smaller its weight; among the pixels with brightness higher than 128, the higher the brightness of a pixel, the smaller its weight. Similarly, the terminal device may assign a weight to each pixel point in the fourth image in the above manner. Then, the terminal device may multiply the pixel value of each pixel point of the down-sampled third image by the weight of that pixel point to obtain a processed third image, and likewise multiply the pixel value of each pixel point of the fourth image by the weight of that pixel point to obtain a processed fourth image. Finally, the terminal device performs image addition calculation on the processed third image and the processed fourth image to obtain an HDR image, completing the exposure fusion process. In this way, darker pixel points in the down-sampled third image can be brightened by the corresponding higher-brightness pixel points in the fourth image, and overexposed pixel points in the fourth image are compensated by the corresponding pixel points in the down-sampled third image, so that neither excessively dark nor excessively bright areas appear in the HDR image obtained by the terminal device, and the brightness of the HDR image is improved as a whole. It should be noted that the value range of the weight may be, for example, between 0 and 1, and the correspondence between the weight and the brightness may be determined according to the requirement of the user.
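A minimal Python sketch of the brightness-weighted fusion described above follows; the triangular weight peaking at 128 and the per-pixel normalization of the two weights are assumptions about how the scheme could be realized, not the only reading, and all names are illustrative:

```python
import numpy as np

def brightness_weight(img):
    # Weight peaks at the brightness center 128 and decreases toward
    # both very dark and very bright pixels, matching the rule above.
    # Assumes a 3-channel 8-bit image; the channel mean is used as a
    # crude luminance proxy.
    y = img.astype(np.float32).mean(axis=2)
    return 1.0 - np.abs(y - 128.0) / 128.0  # values in [0, 1]

def exposure_fuse(img_a, img_b):
    wa, wb = brightness_weight(img_a), brightness_weight(img_b)
    norm = wa + wb + 1e-6  # normalize so the weighted sum stays in range
    fused = (img_a.astype(np.float32) * wa[..., None] +
             img_b.astype(np.float32) * wb[..., None]) / norm[..., None]
    return np.clip(fused, 0, 255).astype(np.uint8)
```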
And S303, according to the size of the third image, performing up-sampling on the HDR image to obtain an up-sampled HDR image.
Specifically, the terminal device performs exposure fusion on the down-sampled third image and the fourth image, and the size of the obtained HDR image is the same as that of the fourth image. Therefore, the terminal device needs to up-sample the HDR image according to the size of the third image to enlarge the HDR image, so that the size of the up-sampled HDR image is the same as the size of the third image. In this way, the size of the up-sampled HDR image matches the resolution corresponding to the photographing mode currently selected by the user on the terminal device.
S304, fusing the HDR image subjected to the up-sampling with the detail image of the third image to obtain a fused image.
Specifically, in the process of down-sampling the third image, the terminal device may lose the high-frequency component of the third image (i.e., the detail information of the current shooting scene), so that the resolution of the down-sampled third image is lower than that of the original third image. In this way, the resolution of the HDR image obtained by exposure-fusing the down-sampled third image and the fourth image is also lower than that of the third image, so that the resolution of the up-sampled HDR image is still low.
Therefore, after the terminal device acquires the up-sampled HDR image, the up-sampled HDR image and the detail image of the third image including the high-frequency component of the third image may be fused, so as to backfill the detail information of the whole shooting scene into the up-sampled HDR image, and improve the definition of the up-sampled HDR image. By the method, the brightness and the definition of the fused image obtained by the terminal are improved. Therefore, when the fused image is presented to a user, the terminal equipment can enable the user to watch the image with higher definition and brightness, and user experience is improved.
The embodiment does not limit the implementation manner in which the terminal device fuses the up-sampled HDR image and the detail image of the third image. For example, the terminal device may directly perform image addition calculation on the up-sampled HDR image and the detail image of the third image to obtain a fused image. In an implementable manner, the terminal device may further determine the current sensitivity (ISO) of the camera sensor under low illumination, and then determine a gain coefficient adapted to that ISO. The terminal device may then multiply the pixel value of each pixel point of the detail image by the gain coefficient to enhance the detail image, obtaining a processed detail image. Finally, the terminal device performs image addition calculation on the processed detail image and the up-sampled HDR image to obtain a fused image. Enhancing the detail image with the gain coefficient corresponding to the current ISO of the camera sensor before the fusion improves the sharpness, and thus the definition, of the fused image. In a specific implementation, the terminal device may determine the current sensitivity of the camera sensor under low illumination through the image currently previewed by the camera sensor, and may determine the gain coefficient corresponding to that ISO according to a mapping relationship between ISO and gain coefficient. The mapping relationship may be set according to the actual situation; for example: when ISO is less than or equal to 500, the gain coefficient may be 1.5; when ISO is greater than 500 and less than or equal to 1000, the gain coefficient may be 1.4; when ISO is greater than 1000 and less than or equal to 1500, the gain coefficient may be 1.3; when ISO is greater than 1500 and less than or equal to 2000, the gain coefficient may be 1.2; and when ISO is greater than 2000, the gain coefficient may be 1.1.
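The ISO-adapted detail enhancement might be sketched as follows, using the example mapping quoted above; the thresholds and gains would be tuned per device, and `fuse_detail` assumes the detail image is stored as a signed float array (function names are illustrative):

```python
import numpy as np

def gain_for_iso(iso):
    # Example ISO-to-gain mapping taken from the text above.
    if iso <= 500:
        return 1.5
    if iso <= 1000:
        return 1.4
    if iso <= 1500:
        return 1.3
    if iso <= 2000:
        return 1.2
    return 1.1

def fuse_detail(hdr_up, detail, iso):
    # Scale the detail layer by the ISO-adapted gain, then add it back
    # onto the up-sampled HDR image (the image addition calculation).
    out = hdr_up.astype(np.float32) + gain_for_iso(iso) * detail.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```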
The detail image of the third image may be obtained by the terminal device from the third image before S304 is executed. This embodiment does not limit the implementation manner of acquiring the detail image of the third image. For example, the terminal device may perform a Fourier transform on the third image, remove the low-frequency components, and retain the high-frequency components; the terminal device then performs an inverse Fourier transform on the result to obtain the detail image of the third image. Optionally, in an implementation manner of the present application, the terminal device may instead up-sample the down-sampled third image according to the size of the third image to obtain an up-sampled third image. Because the up-sampled third image is blurred compared with the third image, the terminal device may obtain the detail image of the third image by performing image subtraction calculation on the third image and the up-sampled third image.
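The down/up-sample variant of detail extraction could look like the following sketch, which returns a signed residual so that both positive and negative detail is preserved (names are illustrative):

```python
import cv2
import numpy as np

def detail_image(third, third_ds):
    # Up-sample the previously down-sampled third image back to the
    # original size; the result is blurred relative to `third`, so the
    # subtraction isolates the high-frequency (detail) component.
    h, w = third.shape[:2]
    up = cv2.resize(third_ds, (w, h), interpolation=cv2.INTER_LINEAR)
    return third.astype(np.float32) - up.astype(np.float32)
```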
Optionally, after the terminal device fuses the up-sampled HDR image and the detail image of the third image to obtain a fused image, the terminal device may further perform spatial noise reduction on the fused image to further reduce noise of the image. Optionally, the terminal device may perform spatial domain noise reduction on the fused image through a non-local mean denoising algorithm, and may also perform spatial domain noise reduction on the fused image by using a method in the prior art, which is not described herein again.
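With OpenCV, the non-local means variant mentioned above is available directly; a one-line sketch follows, where the filter strengths and window sizes are typical defaults rather than values from the source:

```python
import cv2

def spatial_denoise(img):
    # Non-local means denoising for a color image: h/hColor control the
    # filter strength; 7 and 21 are the template and search window sizes.
    return cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
```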
According to the image processing method provided by the present application, when a user takes a picture with the terminal device under low illumination, the terminal device performs time-domain noise reduction on the multiple frames of first images and second images alternately and continuously output by the camera sensor to obtain a third image mainly used for providing detail information of the current shooting scene and a fourth image mainly used for providing brightness information of the shooting scene, and then performs image fusion processing using the third image and the fourth image. This improves the brightness and definition of the fused image obtained by the terminal device, so that when the fused image is presented to the user, the user can view an image with higher definition and brightness; the photographing effect of the terminal device under low illumination is improved, and the user experience is further improved.
Fig. 7 is a schematic flowchart of another image processing method provided in the present application. As shown in fig. 7, before the above S302, the method may further include:
and S401, performing image registration on the fourth image by taking the down-sampled third image as a reference to obtain a fourth image after image registration.
Specifically, in this embodiment, before performing exposure fusion on the down-sampled third image and the fourth image, the terminal device may perform image registration on the fourth image by using the down-sampled third image as a reference, so as to align the same features in the down-sampled third image and the fourth image. In this way, when the terminal device performs image fusion on the down-sampled third image and the fourth image, the same features can be accurately fused together, improving the image fusion effect.
In specific implementation, the terminal device may perform image registration on the fourth image through a Speeded Up Robust Features (SURF) registration method. Certainly, the terminal device may also perform image registration on the fourth image by using an image registration method in the prior art, which is not described again.
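A feature-based registration sketch follows. Stock OpenCV ships ORB rather than SURF (SURF is only available in the opencv-contrib package), so ORB is substituted here; the overall detect-match-homography-warp flow is the same, and all names are illustrative:

```python
import cv2
import numpy as np

def register(reference, moving):
    # Detect and describe keypoints, match them, estimate a homography
    # with RANSAC, and warp `moving` into the frame of `reference`.
    g_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    g_mov = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g_ref, None)
    k2, d2 = orb.detectAndCompute(g_mov, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```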
S402, according to the down-sampled third image, carrying out ghost correction on the fourth image after image registration to obtain a corrected fourth image.
Specifically, after performing image registration on the fourth image with the down-sampled third image as a reference to obtain the image-registered fourth image, the terminal device may further perform ghost correction on the image-registered fourth image according to the down-sampled third image to obtain a corrected fourth image. Here, a ghost is the artifact that a moving object in the down-sampled third image and the image-registered fourth image leaves in the fused image when the two are exposure-fused. In this implementation manner, the terminal device may perform exposure fusion on the down-sampled third image and the corrected fourth image to obtain an HDR image, so that the edges of objects in the obtained HDR image are clear and no ghost phenomenon occurs, further improving the definition of the HDR image.
The present embodiment does not limit the way in which the terminal device performs ghost correction on the image-registered fourth image according to the down-sampled third image. Optionally, the terminal device may reduce the brightness of the image-registered fourth image to the brightness of the down-sampled third image to obtain a brightness-reduced fourth image. Then, the terminal device performs image difference calculation on the down-sampled third image and the brightness-reduced fourth image to obtain a difference absolute value corresponding to each pixel point of the brightness-reduced fourth image. If the difference absolute value corresponding to a certain pixel point is greater than a preset threshold, the terminal device may locate that pixel point in the image-registered fourth image; that position is a ghost of the image-registered fourth image. In this way, all ghosts of the image-registered fourth image can be acquired. After acquiring all the ghosts, the terminal device may increase the brightness of the down-sampled third image according to the brightness of the image-registered fourth image to obtain a brightness-increased third image, and then replace the ghosts of the image-registered fourth image with the corresponding pixel points in the brightness-increased third image to obtain the corrected fourth image. Because the terminal device corrects the ghosts using pixel points from a third image whose brightness matches that of the image-registered fourth image, the corrected fourth image retains its original brightness.
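A sketch of this correction follows. The brightness matching is reduced here to a single global gain on the mean brightness, and the threshold value is assumed; the source does not specify how brightness is raised or lowered, so both are labeled as assumptions:

```python
import cv2
import numpy as np

def match_brightness(src, ref):
    # Global gain so `src` has the same mean brightness as `ref`;
    # real pipelines may fit a full tone curve instead (assumption).
    gain = ref.astype(np.float32).mean() / (src.astype(np.float32).mean() + 1e-6)
    return np.clip(src.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def deghost(third_ds, fourth_reg, thresh=25):  # threshold is an assumed value
    # Dim the registered fourth image to the third image's brightness,
    # then mark pixels whose absolute difference exceeds the threshold.
    fourth_dim = match_brightness(fourth_reg, third_ds)
    diff = cv2.absdiff(third_ds, fourth_dim).max(axis=2)
    ghost = diff > thresh
    # Brighten the third image to the fourth image's level and use its
    # pixels to overwrite the ghost regions, preserving brightness.
    third_bright = match_brightness(third_ds, fourth_reg)
    out = fourth_reg.copy()
    out[ghost] = third_bright[ghost]
    return out
```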
According to the image processing method provided by the present application, when a user takes a picture with the terminal device under low illumination, the terminal device performs time-domain noise reduction on the multiple frames of first images and second images alternately and continuously output by the camera sensor to obtain a third image mainly used for providing detail information of the current shooting scene and a fourth image mainly used for providing brightness information of the shooting scene. Before performing image fusion on the down-sampled third image and the fourth image, the terminal device may perform image registration and ghost correction on the fourth image with the down-sampled third image as a reference. In this way, when the terminal device fuses the registered and ghost-corrected fourth image with the down-sampled third image, the image fusion effect is better, and the definition of the fused image obtained by the terminal device is further improved.
The following describes an image processing method provided by the present application with two examples.
Fig. 8 is a schematic flowchart of another image processing method provided by the present application. In this embodiment, the terminal device acquires a plurality of frames of the first image and the second image that are alternately and continuously output by the camera sensor. As shown in fig. 8, the method may include:
S501, determining photographing parameters of the camera sensor according to the preview image output by the camera sensor.
And S502, according to the photographing parameters, indicating the camera sensor to alternately and continuously output a plurality of frames of first images and a plurality of frames of second images.
S503, acquiring a plurality of frames of first images and a plurality of frames of second images which are alternately and continuously output by the camera sensor.
S504, converting the multi-frame first image from the Bayer format into the YUV format to obtain the multi-frame converted first image.
Specifically, when an existing terminal device executes the photographing function, the format of the photographed image presented to the user is mostly the JPEG format. In the prior art, due to limitations of the chip technology of the terminal device (such as bandwidth and processing speed limitations), the terminal device cannot quickly and directly convert an image from the Bayer format into the JPEG format that can be presented to the user, and therefore cannot meet the smoothness requirement of the photographing process. Therefore, the existing terminal device needs to convert the format of the image from the Bayer format to the YUV format first, and then convert the YUV format to the JPEG format.
In this embodiment, since the multiple frames of first images and multiple frames of second images output by the camera sensor are both images in the Bayer format, the format of the images needs to be converted from the Bayer format to the YUV format in the course of executing the image processing method of this embodiment. This operation may be performed either after or before the fusion processing of the images. If performed after the fusion processing, the operation is usually executed by a software module of the terminal device; if performed before the fusion processing, it is usually executed by the ISP of the terminal device. Since the execution rate of the ISP is faster than that of the software module of the terminal device, the latter arrangement can improve the photographing efficiency of the terminal device. In specific implementation, the ISP of the terminal device may convert the multiple frames of first images from the Bayer format to the YUV format by performing a demosaicing operation on them, obtaining the multiple frames of format-converted first images.
It will be understood by those skilled in the art that if the chip technology of the subsequent terminal device can support the real-time direct conversion of the image from the Bayer format to the JPEG format that can be presented to the user, the conversion of the image from the Bayer format to the YUV format may not be performed in the course of performing the embodiments illustrated in the present application.
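As a sketch, OpenCV exposes the demosaic and color-space steps directly; the BGGR pattern below is an assumption, since the actual Bayer pattern depends on the sensor, and the helper name is illustrative:

```python
import cv2

def bayer_to_yuv(raw):
    # Demosaic a single-channel Bayer frame into BGR, then convert the
    # result to YUV for the downstream fusion stages.
    bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
```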
And S505, converting the multi-frame second image from the Bayer format to the YUV format to obtain the multi-frame converted second image.
It should be noted that the execution of step S505 and step S504 may not be in a sequential order.
S506, performing time domain noise reduction on the first image after the multi-frame format conversion to obtain a third image.
Specifically, since the amount of light entering the camera sensor is small under low illumination, the noise of the image output by the camera sensor is large under low illumination. Therefore, the terminal device can reduce this noise by performing time-domain noise reduction on the multiple frames of first images output by the camera sensor, that is, by performing a pixel averaging operation between different frames in the time domain, so that the noise of the obtained third image is small. In specific implementation, the terminal device may perform time-domain noise reduction on the multiple frames of format-converted first images by using an existing time-domain noise reduction method, for example, by sequentially performing global image registration, local ghost detection, and time-domain fusion operations on the multiple frames of format-converted first images to obtain the third image, which is not described again.
And S507, performing time domain noise reduction on the second image after the multi-frame conversion format to obtain a fourth image.
It should be noted that, the execution of step S506 and step S507 may not be in a sequential order.
And S508, according to the size of the fourth image, down-sampling the third image to obtain a down-sampled third image.
And S509, taking the down-sampled third image as a reference, and carrying out image registration on the fourth image to obtain a fourth image after image registration.
And S510, carrying out ghost correction on the fourth image after image registration according to the down-sampled third image to obtain a corrected fourth image.
And S511, carrying out exposure fusion on the down-sampled third image and the corrected fourth image to obtain an HDR image.
And S512, according to the size of the third image, performing up-sampling on the HDR image to obtain an up-sampled HDR image.
And S513, acquiring a detail image of the third image according to the third image.
And S514, fusing the HDR image subjected to the up-sampling with the detail image of the third image to obtain a fused image.
And S515, performing spatial domain noise reduction on the fused image to obtain a spatial domain noise-reduced image.
At this point, the entire image processing process is completed. In this way, when a user takes a picture with the terminal device under low illumination, the terminal device obtains an image with higher brightness, higher definition, and less noise by executing the above image processing process, so that when this image is presented to the user, the user can view an image with higher definition and brightness; the photographing effect of the terminal device under low illumination is improved, and the user experience is further improved.
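Putting the pieces together, steps S504 to S515 could be sketched as the following composition of the illustrative helpers introduced earlier in this document (assumed to be in scope); for readability the sketch treats the demosaiced frames as generic 3-channel images, glossing over the exact Bayer/YUV handling of the actual method:

```python
import cv2

def low_light_pipeline(first_frames, second_frames, iso):
    firsts = [bayer_to_yuv(f) for f in first_frames]       # S504
    seconds = [bayer_to_yuv(f) for f in second_frames]     # S505
    third = temporal_denoise(firsts)                       # S506
    fourth = temporal_denoise(seconds)                     # S507
    third_ds = downsample_to(third, fourth)                # S508
    fourth_reg = register(third_ds, fourth)                # S509
    fourth_cor = deghost(third_ds, fourth_reg)             # S510
    hdr = exposure_fuse(third_ds, fourth_cor)              # S511
    h, w = third.shape[:2]
    hdr_up = cv2.resize(hdr, (w, h),
                        interpolation=cv2.INTER_LINEAR)    # S512
    detail = detail_image(third, third_ds)                 # S513
    fused = fuse_detail(hdr_up, detail, iso)               # S514
    return spatial_denoise(fused)                          # S515
```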
Fig. 9 is a schematic diagram of a first image shown in the present application, fig. 10 is a schematic diagram of a second image shown in the present application, and fig. 11 is a schematic diagram of a spatial-domain noise-reduced image shown in the present application. Fig. 11 shows the image obtained after the processing in steps S501 to S515. Comparing fig. 11 with fig. 9 and fig. 10, it can be seen that the brightness and the definition of the entire spatial-domain noise-reduced image shown in fig. 11 are both greatly improved, and no overexposed and/or excessively dark regions appear in the image. Therefore, when the terminal device presents this image with higher brightness, higher definition, and less noise to the user, the user can view an image with higher definition and brightness; the photographing effect of the terminal device under low illumination is improved, and the user experience is further improved. It can be understood by those skilled in the art that fig. 9 to fig. 11 are only used to illustrate the improvement in definition and brightness of an image processed by the image processing method provided in the present application, and do not limit the color or content of the processed image.
Optionally, in another implementation manner of the present application, in order to further improve the brightness, the terminal device may further turn on a light supplement lamp of the terminal device while instructing the camera sensor to alternately and continuously output at least one frame of first image and at least one frame of second image according to the photographing parameters, so as to further improve the brightness of the first image and the second image through supplementary lighting. In this way, after subsequent image processing is performed on the basis of the first image and the second image, both the foreground and background brightness of the obtained spatial-domain noise-reduced image are improved, the problem of local overexposure or excessive darkness is avoided, and the photographing effect of the terminal device under low illumination is further improved. In this implementation, since the amount of light entering the camera sensor increases and the noise of the image output by the camera sensor therefore decreases, the camera sensor may be instructed to output only one frame of the first image and one frame of the second image, so that the time-domain noise reduction process in the above example can be omitted. In the second example below, the image processing method is described by taking as an example the case in which the terminal device acquires one frame of first image and one frame of second image alternately and continuously output by the camera sensor.
Example two, fig. 12 is a schematic flowchart of another image processing method provided in the present application. In this embodiment, the terminal device acquires a first image and a second image that are alternately and continuously output by the camera sensor. As shown in fig. 12, the method may include:
S601, determining the photographing parameters of the camera sensor according to the preview image output by the camera sensor.
And S602, according to the photographing parameters, instructing the camera sensor to alternately and continuously output one frame of first image and one frame of second image.
And S603, acquiring a frame of first image and a frame of second image which are alternately and continuously output by the camera sensor.
S604, converting the first image from a Bayer format to a YUV format to obtain the first image after the format conversion.
And S605, converting the second image from a Bayer format to a YUV format to obtain the converted second image.
It should be noted that the execution of step S605 and step S604 may not be in a sequential order.
And S606, according to the size of the second image after format conversion, down-sampling the first image after format conversion to obtain the down-sampled first image.
And S607, taking the first image after the down-sampling as a reference, and carrying out image registration on the second image after the format conversion to obtain a second image after the image registration.
And S608, performing ghost correction on the second image after image registration according to the first image after down sampling to obtain a corrected second image.
And S609, carrying out exposure fusion on the downsampled first image and the corrected second image to obtain an HDR image.
S610, according to the size of the first image after the format conversion, the HDR image is subjected to up-sampling, and the HDR image after the up-sampling is obtained.
And S611, acquiring a detailed image of the first image according to the first image after the format conversion.
And S612, fusing the HDR image subjected to the up-sampling with the detail image of the first image to obtain a fused image.
And S613, performing spatial domain noise reduction on the fused image to obtain a spatial domain noise-reduced image.
At this point, the entire image processing process is completed.
According to the image processing method provided by the present application, when a user takes a picture with the terminal device under low illumination, the terminal device can acquire at least one frame of first image and at least one frame of second image that are alternately and continuously output by the camera sensor, where the first image is mainly used for providing detail information of the current shooting scene and the second image is mainly used for providing brightness information of the current shooting scene. The terminal device can therefore perform image fusion processing according to the at least one frame of first image and the at least one frame of second image, which improves the brightness and definition of the fused image. When the fused image is presented to the user, the user can view an image with higher definition and brightness, so the photographing effect of the terminal device under low illumination is improved and the user experience is further improved.
It should be noted that the image processing method provided by the present application is applicable not only to an application scenario in which the terminal device shoots with a front-facing camera sensor, but also to an application scenario in which the terminal device shoots with a rear-facing camera sensor. Similarly, the method is also applicable to an application scenario in which the terminal device shoots with dual camera sensors. In that scenario, the terminal device may process the images output by each camera sensor by using steps S501 to S515 above, and finally fuse the two spatial-domain noise-reduced images obtained from the two camera sensors in an existing fusion manner to obtain an image with higher definition and brightness. Alternatively, the terminal device may process the image output by only one of the two camera sensors by using steps S501 to S515, and use the other camera sensor for special-effect (e.g., blurring) processing of the image, which is not described herein again.
Fig. 13 is a schematic structural diagram of a terminal device provided in the present application. As shown in fig. 13, the terminal device may include:
the acquisition module 11 is configured to acquire at least one frame of first image and at least one frame of second image that are alternately and continuously output by the camera sensor; the resolution of the first image is the same as the resolution corresponding to the current photographing mode, the resolution of the first image is N times the resolution of the second image, and N is an integer greater than 1; the camera sensor outputs each frame of the first image by adopting a first exposure parameter, the camera sensor outputs each frame of the second image by adopting a second exposure parameter, and the first exposure parameter is larger than the second exposure parameter. Illustratively, the first image may be a full-size image.
And the fusion module 12 is configured to perform image fusion according to the at least one frame of the first image and the at least one frame of the second image to obtain a fused image.
The terminal device provided by the present application can execute the above method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 14 is a schematic structural diagram of another terminal device provided in the present application. As shown in fig. 14, on the basis of the block diagram shown in fig. 13, the terminal device further includes:
a determining module 13, configured to determine, before the obtaining module 11 obtains at least one frame of first image and at least one frame of second image alternately and continuously output by the camera sensor, a photographing parameter of the camera sensor according to a preview image output by the camera sensor; the photographing parameters comprise: size of the first image, number of frames of the second image, exposure parameter of the first image, exposure parameter of the second image, alternating order of the first image and the second image;
and the indicating module 14 is used for indicating the camera sensor to alternately and continuously output at least one frame of first image and at least one frame of second image according to the photographing parameters.
If the at least one frame of first image comprises one frame of first image, and the at least one frame of second image comprises one frame of second image, the fusion module 12 may be specifically configured to perform image fusion on the frame of first image and the frame of second image to obtain a fused image.
In an implementation manner of the present application, fig. 15 is a schematic structural diagram of another terminal device provided in the present application. As shown in fig. 15, the terminal device further includes, based on the block diagram shown in fig. 13:
the first format conversion module 15 is configured to convert the first image from a Bayer format to a YUV format to obtain a first image after the format conversion before the fusion module 12 performs image fusion on the first image and the second image to obtain a fused image, and convert the second image from the Bayer format to the YUV format to obtain a second image after the format conversion;
in this implementation manner, the fusion module 12 is specifically configured to perform image fusion on the first image after format conversion and the second image after format conversion to obtain a fused image.
If the at least one frame of first image comprises multiple frames of first images, and the at least one frame of second image comprises multiple frames of second images, the fusion module 12 may be specifically configured to perform time-domain noise reduction on the multiple frames of first images to obtain a third image, perform time-domain noise reduction on the multiple frames of second images to obtain a fourth image, and perform image fusion on the third image and the fourth image to obtain a fused image.

In an implementation manner of the present application, fig. 16 is a schematic structural diagram of another terminal device provided in the present application. As shown in fig. 16, on the basis of the block diagram shown in fig. 13, the terminal device further includes: a second format conversion module 16, configured to: before the fusion module 12 performs time-domain noise reduction on the multiple frames of first images to obtain the third image and performs time-domain noise reduction on the multiple frames of second images to obtain the fourth image, convert the multiple frames of first images from the Bayer format to the YUV format to obtain multiple frames of format-converted first images, and convert the multiple frames of second images from the Bayer format to the YUV format to obtain multiple frames of format-converted second images;
the fusion module 12 is specifically configured to perform time-domain noise reduction on the multiple frames of format-converted first images to obtain the third image, and perform time-domain noise reduction on the multiple frames of format-converted second images to obtain the fourth image.
Fig. 17 is a schematic structural diagram of another terminal device provided in the present application. In this embodiment, the fusion module 12 may be specifically configured to perform time-domain noise reduction on the multiple frames of first images to obtain a third image, perform time-domain noise reduction on the multiple frames of second images to obtain a fourth image, and perform image fusion on the third image and the fourth image to obtain a fused image. As shown in fig. 17, on the basis of the block diagram shown in fig. 13, the fusion module 12 may include:
a down-sampling unit 121, configured to perform down-sampling on the third image according to the size of the fourth image, so as to obtain a down-sampled third image; the size of the down-sampled third image is the same as the size of the fourth image;
An exposure fusion unit 122, configured to perform exposure fusion on the downsampled third image and the fourth image to obtain a high dynamic range HDR image;
an upsampling unit 123, configured to upsample the HDR image according to a size of the third image, to obtain an upsampled HDR image;
a fusion unit 124, configured to fuse the up-sampled HDR image with a detail image of the third image to obtain a fused image, where the detail image of the third image comprises the high-frequency components of the third image. For example, the fusion unit 124 may be specifically configured to: determine the sensitivity ISO of the camera sensor; determine a gain coefficient according to the ISO of the camera sensor; multiply the pixel value of each pixel point of the detail image of the third image by the gain coefficient to obtain a processed detail image; and perform image addition calculation on the processed detail image and the up-sampled HDR image to obtain the fused image.
With continued reference to fig. 17, optionally, the fusion module 12 may further include:
an obtaining unit 125, configured to obtain a detail image of the third image according to the third image before the fusion unit 124 fuses the up-sampled HDR image with the detail image of the third image to obtain a fused image. For example, the obtaining unit 125 may be specifically configured to: up-sample the down-sampled third image according to the size of the third image to obtain an up-sampled third image; and perform image subtraction calculation on the up-sampled third image and the third image to obtain the detail image of the third image.
With continued reference to fig. 17, optionally, the fusion module 12 may further include:
an image registration unit 126, configured to: before the exposure fusion unit 122 performs exposure fusion on the down-sampled third image and the fourth image to obtain the high dynamic range HDR image, perform image registration on the fourth image by using the down-sampled third image as a reference to obtain a fourth image after image registration;
a ghost correction unit 127, configured to perform ghost correction on the image-registered fourth image according to the down-sampled third image to obtain a corrected fourth image. For example, the ghost correction unit 127 may be specifically configured to: reduce the brightness of the image-registered fourth image to the brightness of the down-sampled third image to obtain a brightness-reduced fourth image; perform image difference calculation on the down-sampled third image and the brightness-reduced fourth image to obtain a difference absolute value corresponding to each pixel point of the brightness-reduced fourth image; take the pixel points whose difference absolute values are greater than a preset threshold as the ghosts of the image-registered fourth image; increase the brightness of the down-sampled third image according to the brightness of the image-registered fourth image to obtain a brightness-increased third image; and replace the ghosts of the image-registered fourth image with the pixel points in the brightness-increased third image to obtain the corrected fourth image.
The exposure fusion unit 122 is specifically configured to perform exposure fusion on the downsampled third image and the corrected fourth image to obtain an HDR image.
With reference to fig. 17, optionally, the terminal device may further include:
and the spatial domain noise reduction module 17, configured to: after the fusion unit 124 fuses the up-sampled HDR image with the detail image of the third image to obtain a fused image, perform spatial domain noise reduction on the fused image to obtain a spatial domain noise-reduced image.
The terminal device provided by the present application can execute the above method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 18 is a schematic structural diagram of another terminal device provided in the present application. As shown in fig. 18, the terminal device may include: a processor 21 (e.g., CPU) and a memory 22; the memory 22 may comprise a high-speed RAM memory, and may also include a non-volatile memory NVM, such as at least one disk memory, in which various instructions may be stored for performing various processing functions and implementing the method steps of the present application. Optionally, the terminal device related to the present application may further include: a receiver 23, a transmitter 24, a power supply 25, a communication bus 26, and a communication port 27. The receiver 23 and the transmitter 24 may be integrated in the transceiver of the terminal device or may be separate transceiving antennas on the terminal device. A communication bus 26 is used to enable communication connections between the elements. The communication port 27 is used for realizing connection and communication between the terminal device and other peripherals.
In the present application, the memory 22 is used for storing computer executable program code, which includes instructions; when the processor 21 executes the instruction, the instruction causes the terminal device to execute the above method embodiment, which has similar implementation principle and technical effect, and is not described herein again.
As in the foregoing embodiment, the terminal device related to the present application may be a wireless terminal such as a mobile phone or a tablet computer. Therefore, taking a mobile phone as an example of the terminal device, fig. 19 is a block diagram of a partial structure of a mobile phone related to the terminal device provided in the present application. Referring to fig. 19, the mobile phone may include: Radio Frequency (RF) circuitry 1110, memory 1120, input unit 1130, display unit 1140, sensors 1150, audio circuitry 1160, wireless fidelity (WiFi) module 1170, processor 1180, and power supply 1190. Those skilled in the art will appreciate that the handset configuration shown in fig. 19 is not intended to be limiting, and the handset may include more or fewer components than those shown, combine some components, or use a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 19:
RF circuit 1110 may be used for receiving and transmitting signals during a message transmission or call; for example, it receives downlink information from a base station and passes it to the processor 1180 for processing, and transmits uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 1110 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1120 may be used to store software programs and modules, and the processor 1180 may execute various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1130 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. Touch panel 1131, also referred to as a touch screen, can collect touch operations of a user on or near the touch panel 1131 (for example, operations of the user on or near touch panel 1131 by using any suitable object or accessory such as a finger or a stylus pen), and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1131 may include two parts, namely, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1180, and can receive and execute commands sent by the processor 1180. In addition, the touch panel 1131 can be implemented by using various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1130 may include other input devices 1132 in addition to the touch panel 1131. In particular, other input devices 1132 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1140 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 1140 may include a display panel 1141; optionally, the display panel 1141 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1131 can be overlaid on the display panel 1141; when a touch operation is detected on or near the touch panel 1131, the touch operation is transmitted to the processor 1180 to determine the type of touch event, and the processor 1180 then provides corresponding visual output on the display panel 1141 according to the type of touch event. Although in fig. 19 the touch panel 1131 and the display panel 1141 are shown as two independent components to implement the input and output functions of the mobile phone, in some embodiments the touch panel 1131 and the display panel 1141 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1150, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1141 according to the brightness of ambient light, and the light sensor may turn off the display panel 1141 and/or the backlight when the mobile phone moves to the ear. As one type of motion sensor, the acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
Audio circuitry 1160, speaker 1161, and microphone 1162 may provide an audio interface between the user and the mobile phone. The audio circuit 1160 may transmit the electrical signal converted from received audio data to the speaker 1161, which converts the electrical signal into a sound signal for output; on the other hand, the microphone 1162 converts collected sound signals into electrical signals, which are received by the audio circuit 1160 and converted into audio data. The audio data is then output to the processor 1180 for processing, and may subsequently be transmitted to, for example, another mobile phone via the RF circuit 1110, or output to the memory 1120 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the cell phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 1170, and provides wireless broadband internet access for the user. Although fig. 19 shows the WiFi module 1170, it is understood that it does not belong to the essential constitution of the handset, and may be omitted entirely as needed within the scope not changing the essence of the present application.
The processor 1180 is a control center of the mobile phone, and is connected to various parts of the whole mobile phone through various interfaces and lines, and executes various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1120 and calling data stored in the memory 1120, thereby performing overall monitoring of the mobile phone. Optionally, processor 1180 may include one or more processing units; for example, the processor 1180 may integrate an application processor, which handles primarily the operating system, user interfaces, and applications, among others, and a modem processor, which handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated within processor 1180.
The mobile phone further includes a power supply 1190 (e.g., a battery) for supplying power to each component, and optionally, the power supply may be logically connected to the processor 1180 through a power management system, so that functions of managing charging, discharging, power consumption management, and the like are implemented through the power management system.
The mobile phone may further include a camera 1200, which may be a front camera or a rear camera. Although not shown, the mobile phone may further include a bluetooth module, a GPS module, etc., which will not be described herein.
In this application, the processor 1180 included in the mobile phone may be configured to execute the above-described embodiment of the image processing method, and the implementation principle and the technical effect are similar, and are not described herein again.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the invention are brought about in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.

Claims (21)

1. An image processing method, characterized in that the method comprises:
acquiring a plurality of frames of first images and a plurality of frames of second images which are alternately and continuously output by a camera sensor; the resolution of the first image is the same as the resolution corresponding to the current photographing mode, the resolution of the first image is N times of the resolution of the second image, and N is an integer greater than 1; the camera sensor outputs a first image of each frame by adopting a first exposure parameter, the camera sensor outputs a second image of each frame by adopting a second exposure parameter, and the first exposure parameter is greater than the second exposure parameter;
performing time domain noise reduction on the multiple frames of first images to obtain a third image, and performing time domain noise reduction on the multiple frames of second images to obtain a fourth image;
according to the size of the fourth image, down-sampling the third image to obtain a down-sampled third image; the size of the down-sampled third image is the same as the size of the fourth image;
exposing and fusing the down-sampled third image and the fourth image to obtain a High Dynamic Range (HDR) image;
according to the size of the third image, the HDR image is up-sampled to obtain an up-sampled HDR image;
fusing the HDR image subjected to the up-sampling with the detail image of the third image to obtain a fused image; wherein the detail image of the third image comprises high frequency components of the third image.
2. The method according to claim 1, wherein before the acquiring of the plurality of frames of the first image and the plurality of frames of the second image alternately and continuously output by the camera sensor, the method further comprises:
determining photographing parameters of the camera sensor according to the preview image output by the camera sensor; the photographing parameters include: a size of the first image, a number of frames of the second image, an exposure parameter of the first image, an exposure parameter of the second image, an alternating order of the first image and the second image;
and according to the photographing parameters, instructing the camera sensor to alternately and continuously output a plurality of frames of first images and a plurality of frames of second images.
3. The method according to claim 1, wherein before the performing time-domain noise reduction on the plurality of frames of first images to obtain a third image and performing time-domain noise reduction on the plurality of frames of second images to obtain a fourth image, the method further comprises:
converting the plurality of frames of first images from a Bayer format into a YUV format to obtain a plurality of frames of converted first images, and converting the plurality of frames of second images from the Bayer format into the YUV format to obtain a plurality of frames of converted second images; and
the performing time-domain noise reduction on the plurality of frames of first images to obtain a third image and performing time-domain noise reduction on the plurality of frames of second images to obtain a fourth image comprises:
performing time-domain noise reduction on the plurality of frames of converted first images to obtain the third image, and performing time-domain noise reduction on the plurality of frames of converted second images to obtain the fourth image.
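One plausible realization of claim 3's format conversion, sketched with OpenCV; the Bayer pattern (BG below) depends on the actual sensor and is an assumption.

```python
import cv2

def bayer_to_yuv(raw_bayer):
    # Demosaic the Bayer mosaic to BGR, then convert to YUV.
    bgr = cv2.cvtColor(raw_bayer, cv2.COLOR_BayerBG2BGR)
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
```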
4. The method according to claim 1, wherein before the fusing the up-sampled HDR image with the detail image of the third image to obtain a fused image, the method further comprises:
acquiring the detail image of the third image according to the third image.
5. The method according to claim 4, wherein the acquiring the detail image of the third image according to the third image comprises:
up-sampling the down-sampled third image according to the size of the third image to obtain an up-sampled third image; and
performing image subtraction on the up-sampled third image and the third image to obtain the detail image of the third image.
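The detail image of claim 5 is effectively a Laplacian-pyramid-style residual: up-sample the down-sampled third image back to full size and subtract, leaving only the high-frequency components. A short sketch, with illustrative names:

```python
import cv2
import numpy as np

def detail_image(third, third_small):
    H, W = third.shape[:2]
    third_up = cv2.resize(third_small, (W, H), interpolation=cv2.INTER_LINEAR)
    # Subtracting the blurred (up-sampled) version removes the low
    # frequencies and keeps the fine detail.
    return third.astype(np.float32) - third_up.astype(np.float32)
```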
6. The method according to claim 1, wherein before the performing exposure fusion on the down-sampled third image and the fourth image to obtain a high dynamic range (HDR) image, the method further comprises:
performing image registration on the fourth image with the down-sampled third image as a reference to obtain an image-registered fourth image; and
performing ghost correction on the image-registered fourth image according to the down-sampled third image to obtain a corrected fourth image; and
the performing exposure fusion on the down-sampled third image and the fourth image to obtain the high dynamic range (HDR) image comprises:
performing exposure fusion on the down-sampled third image and the corrected fourth image to obtain the HDR image.
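The patent does not specify a registration algorithm for claim 6; as an assumption, the sketch below uses OpenCV's ECC alignment with a Euclidean motion model to register the fourth image against the down-sampled third image (the seven-argument call assumes the OpenCV 4.x binding).

```python
import cv2
import numpy as np

def register_fourth_to_third(third_small_gray, fourth, fourth_gray):
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    # Estimate the warp that aligns the fourth image to the reference.
    _, warp = cv2.findTransformECC(third_small_gray, fourth_gray, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria, None, 5)
    h, w = third_small_gray.shape
    return cv2.warpAffine(fourth, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```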
7. The method according to claim 6, wherein the performing ghost correction on the image-registered fourth image according to the down-sampled third image to obtain a corrected fourth image comprises:
reducing the brightness of the image-registered fourth image to the brightness of the down-sampled third image to obtain a brightness-reduced fourth image;
performing image difference calculation on the down-sampled third image and the brightness-reduced fourth image to obtain an absolute difference value for each pixel of the brightness-reduced fourth image;
taking the pixels whose absolute difference values are greater than a preset threshold as the ghost of the image-registered fourth image;
increasing the brightness of the down-sampled third image according to the brightness of the image-registered fourth image to obtain a brightness-increased third image; and
replacing the ghost of the image-registered fourth image with the pixels of the brightness-increased third image to obtain the corrected fourth image.
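A compact sketch of the ghost-correction logic in claims 7 and 17, assuming brightness matching by mean-luminance scaling and an invented threshold; the claims only require the five steps above, not these particular formulas.

```python
import numpy as np

def correct_ghost(third_small, fourth_reg, threshold=25.0):
    third = third_small.astype(np.float32)
    fourth = fourth_reg.astype(np.float32)

    # Match the fourth image's brightness to the down-sampled third
    # image's level (approximated by the mean-luminance ratio).
    ratio = third.mean() / max(fourth.mean(), 1e-6)
    fourth_matched = fourth * ratio

    # Per-pixel absolute difference (max over the color channels).
    diff = np.abs(third - fourth_matched).max(axis=-1)
    ghost = diff > threshold  # pixels flagged as ghost

    # Match the third image's brightness to the fourth image's level and
    # use its pixels to replace the ghost region.
    third_matched = np.clip(third / max(ratio, 1e-6), 0, 255)
    corrected = fourth.copy()
    corrected[ghost] = third_matched[ghost]
    return np.clip(corrected, 0, 255).astype(np.uint8)
```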
8. The method according to claim 1, wherein the fusing the up-sampled HDR image with the detail image of the third image to obtain a fused image comprises:
determining a sensitivity (ISO) of the camera sensor;
determining a gain coefficient according to the ISO of the camera sensor;
multiplying the pixel value of each pixel of the detail image of the third image by the gain coefficient to obtain a processed detail image; and
performing image addition on the processed detail image and the up-sampled HDR image to obtain the fused image.
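Claims 8 and 18 gate the re-added detail with a gain derived from the sensor's ISO. How the gain is derived is not specified; the interpolated curve below is purely an assumed example (less detail is added back at high ISO, where the residual is dominated by noise).

```python
import numpy as np

def fuse_detail(hdr_up, detail, iso):
    # Assumed ISO-to-gain mapping; the claim only requires that the gain
    # coefficient be determined from the ISO.
    gain = float(np.interp(iso, [100, 800, 3200], [1.0, 0.6, 0.2]))
    processed_detail = gain * detail
    fused = hdr_up.astype(np.float32) + processed_detail
    return np.clip(fused, 0, 255).astype(np.uint8)
```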
9. The method according to any one of claims 1 and 4-8, wherein after the fusing the up-sampled HDR image with the detail image of the third image to obtain a fused image, the method further comprises:
performing spatial-domain noise reduction on the fused image to obtain a spatial-domain noise-reduced image.
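Claim 9's spatial-domain noise reduction could be any spatial filter; a sketch using OpenCV's non-local-means denoiser, with illustrative filter strengths:

```python
import cv2

def spatial_denoise(fused_bgr):
    # h / hColor control luminance and color filtering strength;
    # 7 and 21 are the template and search window sizes.
    return cv2.fastNlMeansDenoisingColored(fused_bgr, None, 5, 5, 7, 21)
```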
10. The method according to any one of claims 1-8, wherein the first image is a full-size image.
11. A terminal device, characterized in that the terminal device comprises:
an acquiring module, configured to acquire a plurality of frames of first images and a plurality of frames of second images that are alternately and continuously output by a camera sensor, wherein the resolution of the first image is the same as the resolution corresponding to the current photographing mode, the resolution of the first image is N times the resolution of the second image, and N is an integer greater than 1; the camera sensor outputs each frame of the first image using a first exposure parameter and outputs each frame of the second image using a second exposure parameter, and the first exposure parameter is greater than the second exposure parameter; and
a fusion module, configured to perform image fusion according to the plurality of frames of first images and the plurality of frames of second images to obtain a fused image,
wherein the fusion module is specifically configured to perform time-domain noise reduction on the plurality of frames of first images to obtain a third image, perform time-domain noise reduction on the plurality of frames of second images to obtain a fourth image, and perform image fusion on the third image and the fourth image to obtain the fused image; and
the fusion module comprises:
a down-sampling unit, configured to down-sample the third image according to the size of the fourth image to obtain a down-sampled third image, wherein the size of the down-sampled third image is the same as the size of the fourth image;
an exposure fusion unit, configured to perform exposure fusion on the down-sampled third image and the fourth image to obtain a high dynamic range (HDR) image;
an up-sampling unit, configured to up-sample the HDR image according to the size of the third image to obtain an up-sampled HDR image; and
a fusion unit, configured to fuse the up-sampled HDR image with a detail image of the third image to obtain the fused image, wherein the detail image of the third image comprises high-frequency components of the third image.
12. The terminal device according to claim 11, wherein the terminal device further comprises:
a determining module, configured to determine photographing parameters of the camera sensor according to a preview image output by the camera sensor before the acquiring module acquires the plurality of frames of first images and the plurality of frames of second images that are alternately and continuously output by the camera sensor, wherein the photographing parameters include: a size of the first image, a number of frames of the second image, an exposure parameter of the first image, an exposure parameter of the second image, and an alternating order of the first image and the second image; and
an indicating module, configured to instruct, according to the photographing parameters, the camera sensor to alternately and continuously output the plurality of frames of first images and the plurality of frames of second images.
13. The terminal device according to claim 11, wherein the terminal device further comprises:
the second format conversion module is used for converting the multiframe first images from a Bayer format into a YUV format to obtain a multiframe converted first image and converting the multiframe second images from the Bayer format into the YUV format to obtain a multiframe converted second image before the fusion module performs time domain noise reduction on the multiframe first images to obtain a third image and performs time domain noise reduction on the multiframe second images to obtain a fourth image;
the fusion module is specifically configured to perform time-domain noise reduction on the first image after the multi-frame format conversion to obtain the third image, and perform time-domain noise reduction on the second image after the multi-frame format conversion to obtain the fourth image.
14. The terminal device according to claim 11, wherein the fusion module further comprises:
an obtaining unit, configured to obtain the detail image of the third image according to the third image before the fusion unit fuses the up-sampled HDR image with the detail image of the third image to obtain the fused image.
15. The terminal device according to claim 14,
wherein the obtaining unit is specifically configured to up-sample the down-sampled third image according to the size of the third image to obtain an up-sampled third image, and perform image subtraction on the up-sampled third image and the third image to obtain the detail image of the third image.
16. The terminal device according to claim 11, wherein the fusion module further comprises:
an image registration unit, configured to: before the exposure fusion unit performs exposure fusion on the down-sampled third image and the fourth image to obtain the high dynamic range (HDR) image, perform image registration on the fourth image with the down-sampled third image as a reference to obtain an image-registered fourth image; and
a ghost correction unit, configured to perform ghost correction on the image-registered fourth image according to the down-sampled third image to obtain a corrected fourth image,
wherein the exposure fusion unit is specifically configured to perform exposure fusion on the down-sampled third image and the corrected fourth image to obtain the HDR image.
17. The terminal device according to claim 16,
wherein the ghost correction unit is specifically configured to: reduce the brightness of the image-registered fourth image to the brightness of the down-sampled third image to obtain a brightness-reduced fourth image; perform image difference calculation on the down-sampled third image and the brightness-reduced fourth image to obtain an absolute difference value for each pixel of the brightness-reduced fourth image; take the pixels whose absolute difference values are greater than a preset threshold as the ghost of the image-registered fourth image; increase the brightness of the down-sampled third image according to the brightness of the image-registered fourth image to obtain a brightness-increased third image; and replace the ghost of the image-registered fourth image with the pixels of the brightness-increased third image to obtain the corrected fourth image.
18. The terminal device according to claim 11,
wherein the fusion unit is specifically configured to: determine a sensitivity (ISO) of the camera sensor; determine a gain coefficient according to the ISO of the camera sensor; multiply the pixel value of each pixel of the detail image of the third image by the gain coefficient to obtain a processed detail image; and perform image addition on the processed detail image and the up-sampled HDR image to obtain the fused image.
19. The terminal device according to any one of claims 11 and 14-18, wherein the terminal device further comprises:
a spatial-domain noise reduction module, configured to perform spatial-domain noise reduction on the fused image after the fusion unit fuses the up-sampled HDR image with the detail image of the third image to obtain the fused image, so as to obtain a spatial-domain noise-reduced image.
20. The terminal device according to any one of claims 11-18, wherein the first image is a full-size image.
21. A terminal device, characterized in that the terminal device comprises a processor and a memory,
wherein the memory is configured to store a computer-executable program, and the program, when executed by the processor, causes the terminal device to perform the image processing method according to any one of claims 1 to 10.
CN201780065469.5A 2017-01-25 2017-02-24 Image processing method and terminal device Active CN109863742B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201710061387 2017-01-25
CN2017100613875 2017-01-25
PCT/CN2017/074827 WO2018137267A1 (en) 2017-01-25 2017-02-24 Image processing method and terminal apparatus

Publications (2)

Publication Number Publication Date
CN109863742A CN109863742A (en) 2019-06-07
CN109863742B true CN109863742B (en) 2021-01-29

Family

ID=62978946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780065469.5A Active CN109863742B (en) 2017-01-25 2017-02-24 Image processing method and terminal device

Country Status (2)

Country Link
CN (1) CN109863742B (en)
WO (1) WO2018137267A1 (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874829B (en) * 2018-08-31 2022-10-14 北京小米移动软件有限公司 Image processing method and device, electronic device and storage medium
CN110876014B (en) * 2018-08-31 2022-04-08 北京小米移动软件有限公司 Image processing method and device, electronic device and storage medium
CN112308771A (en) * 2019-07-31 2021-02-02 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112419161B (en) * 2019-08-20 2022-07-05 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN110944160B (en) * 2019-11-06 2022-11-04 维沃移动通信有限公司 Image processing method and electronic equipment
CN111091506A (en) * 2019-12-02 2020-05-01 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN111028192B (en) * 2019-12-18 2023-08-08 维沃移动通信(杭州)有限公司 Image synthesis method and electronic equipment
CN111294905B (en) * 2020-02-03 2023-04-25 RealMe重庆移动通信有限公司 Image processing method, image processing device, storage medium and electronic apparatus
CN113572980B (en) * 2020-04-28 2022-10-11 华为技术有限公司 Photographing method and device, terminal equipment and storage medium
CN111641806A (en) * 2020-05-11 2020-09-08 浙江大华技术股份有限公司 Method, apparatus, computer apparatus and readable storage medium for halo suppression
CN111986129B (en) * 2020-06-30 2024-03-19 普联技术有限公司 HDR image generation method, equipment and storage medium based on multi-shot image fusion
CN112288642A (en) * 2020-09-21 2021-01-29 北京迈格威科技有限公司 Ghost detection method, image fusion method and corresponding device
CN112367459B (en) * 2020-10-23 2022-05-13 深圳市锐尔觅移动通信有限公司 Image processing method, electronic device, and non-volatile computer-readable storage medium
CN112351172B (en) * 2020-10-26 2021-09-17 Oppo广东移动通信有限公司 Image processing method, camera assembly and mobile terminal
CN112887639A (en) * 2021-01-18 2021-06-01 Oppo广东移动通信有限公司 Image processing method, device, system, electronic device and storage medium
CN115314628B (en) * 2021-05-08 2024-03-01 杭州海康威视数字技术股份有限公司 Imaging method, imaging system and camera
CN113596341B (en) * 2021-06-11 2024-04-05 北京迈格威科技有限公司 Image shooting method, image processing device and electronic equipment
CN115482143B (en) * 2021-06-15 2023-12-19 荣耀终端有限公司 Image data calling method and system for application, electronic equipment and storage medium
CN115514876B (en) * 2021-06-23 2023-09-01 荣耀终端有限公司 Image fusion method, electronic device, storage medium and computer program product
CN113344793A (en) * 2021-08-04 2021-09-03 深圳市安软科技股份有限公司 Image super-resolution reconstruction method, device, equipment and storage medium
CN114466134A (en) * 2021-08-17 2022-05-10 荣耀终端有限公司 Method and electronic device for generating HDR image
CN115988311A (en) * 2021-10-14 2023-04-18 荣耀终端有限公司 Image processing method and electronic equipment
CN117710265A (en) * 2022-01-25 2024-03-15 荣耀终端有限公司 Image processing method and related device
CN114723637B (en) * 2022-04-27 2024-06-18 上海复瞰科技有限公司 Color difference adjusting method and system
CN116095517B (en) * 2022-08-31 2024-04-09 荣耀终端有限公司 Blurring method, terminal device and readable storage medium
CN117808688A (en) * 2022-09-26 2024-04-02 华为技术有限公司 High-resolution high-frame-rate image pickup method and image processing apparatus
CN116301363B (en) * 2023-02-27 2024-02-27 荣耀终端有限公司 Space gesture recognition method, electronic equipment and storage medium
CN117710264A (en) * 2023-07-31 2024-03-15 荣耀终端有限公司 Dynamic range calibration method of image and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080024618A1 (en) * 2006-07-31 2008-01-31 Suk Hwan Lim Adaptive binning method and apparatus
US8130278B2 (en) * 2008-08-01 2012-03-06 Omnivision Technologies, Inc. Method for forming an improved image using images with different resolutions
US9077910B2 (en) * 2011-04-06 2015-07-07 Dolby Laboratories Licensing Corporation Multi-field CCD capture for HDR imaging
CN103888689B (en) * 2014-03-13 2017-10-31 北京智谷睿拓技术服务有限公司 Image-pickup method and image collecting device
CN105704363B (en) * 2014-11-28 2020-12-29 广东中星微电子有限公司 Image data processing method and device

Also Published As

Publication number Publication date
CN109863742A (en) 2019-06-07
WO2018137267A1 (en) 2018-08-02

Similar Documents

Publication Publication Date Title
CN109863742B (en) Image processing method and terminal device
US10497097B2 (en) Image processing method and device, computer readable storage medium and electronic device
US10827140B2 (en) Photographing method for terminal and terminal
US10810720B2 (en) Optical imaging method and apparatus
US20220086360A1 (en) Big aperture blurring method based on dual cameras and tof
CN112308806B (en) Image processing method, device, electronic equipment and readable storage medium
CN112449120A (en) High dynamic range video generation method and device
CN106993136B (en) Mobile terminal and multi-camera-based image noise reduction method and device thereof
CN113179374A (en) Image processing method, mobile terminal and storage medium
US20240119566A1 (en) Image processing method and apparatus, and electronic device
EP3416130B1 (en) Method, device and nonvolatile computer-readable medium for image composition
JP2018098801A (en) Imaging control device, imaging control method, imaging system, and program
CN111447371A (en) Automatic exposure control method, terminal and computer readable storage medium
WO2022267506A1 (en) Image fusion method, electronic device, storage medium, and computer program product
CN113179369A (en) Shot picture display method, mobile terminal and storage medium
CN113132644B (en) Method and equipment for generating high dynamic range image
US20200090309A1 (en) Method and device for denoising processing, storage medium, and terminal
CN107743199B (en) Image processing method, mobile terminal and computer readable storage medium
CN111028192B (en) Image synthesis method and electronic equipment
CN113572980B (en) Photographing method and device, terminal equipment and storage medium
CN107835336B (en) Dual-camera frame synchronization method and device, user terminal and storage medium
CN110971822A (en) Picture processing method and device, terminal equipment and computer readable storage medium
CN113472980B (en) Image processing method, device, equipment, medium and chip
CN111372001B (en) Image fusion method and device, storage medium and mobile terminal
CN108259765B (en) Shooting method, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant