CN114972137A - Image processing method, terminal and storage medium - Google Patents

Image processing method, terminal and storage medium

Info

Publication number
CN114972137A
CN114972137A (application CN202110221036.2A)
Authority
CN
China
Prior art keywords
image
frames
LDR
pixel
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110221036.2A
Other languages
Chinese (zh)
Inventor
林桥洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110221036.2A
Publication of CN114972137A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, a terminal and a storage medium, wherein the method comprises the following steps: determining a binary image corresponding to each two frames of LDR images based on a pixel difference value between every two frames of LDR images in at least two frames of LDR images used for synthesizing an HDR image, wherein the pixels in the binary image represent either motion pixels or non-motion pixels; and performing fusion processing on the at least two frames of LDR images, based on the binary images corresponding to every two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image, to obtain the corresponding HDR image.

Description

Image processing method, terminal and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method, a terminal, and a storage medium.
Background
In the related art, a plurality of images with different exposure values are generally fused to obtain a High Dynamic Range (HDR) image; however, when the photographing apparatus shakes or a moving object exists in the photographing scene, ghosting (ghost images) appears in the resulting HDR image.
Disclosure of Invention
In view of the above, embodiments of the present application are directed to providing an image processing method, a terminal and a storage medium to solve the technical problem of the presence of ghosting in an HDR image generated in the related art.
To achieve the above objective, the technical solutions of the present application are implemented as follows:
the embodiment of the application provides an image processing method, which comprises the following steps:
determining a binary image corresponding to each two frames of Low Dynamic Range (LDR) images based on a pixel difference value between every two frames of LDR images in at least two frames of LDR images used for synthesizing an HDR image; wherein the pixels in the binary image represent either motion pixels or non-motion pixels;
and performing fusion processing on the at least two frames of LDR images based on binary images corresponding to every two frames of LDR images and fusion weights corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain corresponding HDR images.
In the above scheme, the determining a binary image corresponding to each two frames of LDR based on a pixel difference between every two frames of LDR images in at least two frames of LDR images used for synthesizing the HDR image includes at least one of:
determining a corresponding first binary image based on a pixel difference value between each group of corresponding pixel points in each two frames of LDR images and a corresponding first set threshold;
determining a corresponding second binary image based on the inner product of the vectors corresponding to each group of corresponding image blocks in each two frames of LDR images and the corresponding second set threshold; wherein the vector corresponding to an image block represents the difference between the pixel value of each pixel point in the image block and the pixel mean value of the image block.
In the foregoing solution, when determining a first binary image corresponding to every two frames of LDR and a second binary image corresponding to every two frames of LDR, the determining a binary image corresponding to every two frames of LDR based on a pixel difference between every two frames of LDR images in at least two frames of LDR images used for synthesizing an HDR image further includes:
and carrying out bitwise OR operation on the corresponding first binary image and the corresponding second binary image to obtain a third binary image corresponding to each two frames of LDR.
In the foregoing solution, when the corresponding first binary image is determined, the method further includes:
determining a corresponding first set threshold based on the pixel value of a pixel point in the LDR image and a first setting data table; wherein the first setting data table stores the correspondence between pixel values and first set thresholds, and the first set threshold represents the smallest change from the corresponding pixel value that is perceptible to the human eye;
when the corresponding second binary image is determined, the method further includes:
determining a corresponding second set threshold based on the vector of the image block corresponding to the pixel point in the LDR image and a second setting data table; wherein the second setting data table stores the correspondence between vectors and second set thresholds, and the second set threshold represents the inner product between the corresponding vector and the vector obtained when a perceptible change occurs in the pixel values of the corresponding image block.
In the above scheme, the method further comprises:
in a case where the determined binary image indicates that pixels in a set non-edge area are motion pixels, determining the moving distance of the terminal based on data from a sensor built into the terminal;
and when the moving distance is smaller than a set distance threshold, performing error correction processing on the motion pixels in the set non-edge area of the determined binary image.
In the above scheme, the method further comprises:
determining the fusion weight of each pixel point of each frame of LDR image in the at least two frames of LDR images based on a set curve; wherein the set curve represents the correspondence between the pixel values of the pixel points and the corresponding fusion weights.
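The set curve mapping pixel values to fusion weights (cf. Fig. 2) can be sketched as follows. The Gaussian shape centered at mid-gray and the `mu`/`sigma` parameters are illustrative assumptions; the patent only states that a set curve relates pixel values to fusion weights, without disclosing its exact form:

```python
import numpy as np

def fusion_weight(pixel_values, mu=127.5, sigma=50.0):
    """Map 8-bit pixel values to fusion weights via a Gaussian-shaped
    curve centered at mid-gray, so well-exposed pixels receive high
    weight and under-/over-exposed pixels receive low weight.
    The curve shape and parameters are illustrative assumptions."""
    v = np.asarray(pixel_values, dtype=np.float64)
    return np.exp(-((v - mu) ** 2) / (2.0 * sigma ** 2))
```

Under this curve, a mid-gray pixel gets the maximum weight of 1, and the weight decays symmetrically toward the dark and bright ends of the range.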
In the above scheme, the performing fusion processing on the at least two frames of LDR images based on the binary image corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image includes:
in the pixel value corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images, adjusting the pixel value of the pixel point which is characterized as a motion pixel in the corresponding binary image;
and performing fusion processing on the adjusted at least two frames of LDR images based on the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR images.
In the above scheme, the performing fusion processing on the at least two frames of LDR images based on the binary image corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image includes:
in the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images, adjusting the fusion weight corresponding to the pixel point which is characterized as a motion pixel in the corresponding binary image;
and performing fusion processing on the at least two frames of LDR images based on the adjusted fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image.
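A minimal sketch of the weight-adjustment variant above, under two stated assumptions: the adjustment sets the fusion weight of motion pixels to zero (the patent leaves the exact adjustment open), and a binary value of 1 marks a motion pixel:

```python
import numpy as np

def fuse_ldr_frames(frames, weights, motion_maps):
    """Weighted fusion of registered LDR frames into an HDR-like result.

    The fusion weight of any pixel flagged as a motion pixel (value 1)
    in a frame's binary image is set to zero, so its image information
    is masked during fusion. Zeroing the weight is one possible
    adjustment; the patent does not fix the adjustment rule."""
    frames = [f.astype(np.float64) for f in frames]
    adjusted = [w * (1.0 - m.astype(np.float64))
                for w, m in zip(weights, motion_maps)]
    total = sum(adjusted)
    total = np.where(total == 0, 1.0, total)  # avoid division by zero
    return sum(f * w for f, w in zip(frames, adjusted)) / total
```

A pixel masked in one frame is then reconstructed entirely from the remaining frames in which it is a non-motion pixel.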
In the above scheme, the determining a binary image corresponding to each two frames of LDR based on a pixel difference between every two frames of LDR images in at least two frames of LDR images used for synthesizing the HDR image includes:
and determining a binary image corresponding to each two frames of LDR images based on the pixel difference value between each two frames of LDR images in the at least two frames of LDR images after size reduction.
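The size reduction before computing pixel differences can be sketched as block-averaging; the scale factor of 2 is an illustrative choice, as the patent does not fix the scale:

```python
import numpy as np

def downscale(img, factor=2):
    """Reduce image size by block-averaging before computing pixel
    differences, cutting the cost of binary-image computation.
    The factor is an illustrative assumption."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w].astype(np.float64)
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```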
In the above solution, before determining a binary image corresponding to each two frames of LDR based on a pixel difference between every two frames of LDR images in at least two frames of LDR images used for synthesizing the HDR image, the method further includes:
performing luminance registration on a second image of the at least two frames of LDR images based on a first image of the at least two frames of LDR images; wherein the first image is the LDR image with the longest exposure time in the at least two frames of LDR images;
and carrying out image registration on the first image and the second image after brightness registration by using an image signal processor of the terminal.
An embodiment of the present application further provides a terminal, including:
a first determining unit, configured to determine a binary image corresponding to each two frames of LDR images based on a pixel difference between each two frames of LDR images in at least two frames of low dynamic range LDR images used for synthesizing the high dynamic range HDR image; wherein the pixels in the binary image represent either moving pixels or non-moving pixels;
and the fusion unit is used for performing fusion processing on the at least two frames of LDR images based on the binary images corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR images.
An embodiment of the present application further provides a terminal, including: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor is configured to execute the steps of the image processing method when the computer program is executed.
The embodiment of the application also provides a storage medium, on which a computer program is stored, and the computer program realizes the steps of the image processing method when being executed by a processor.
In the embodiment of the application, a binary image corresponding to two frames of LDR images is determined based on the pixel difference value between every two frames of LDR images, and fusion processing is performed on at least two frames of LDR images, based on the determined binary images and the fusion weight corresponding to each pixel point of each frame of LDR image, to obtain the corresponding HDR image. Because each pixel in a binary image is marked as either a motion pixel or a non-motion pixel, the pixel points characterized as motion pixels can be quickly located in the corresponding two frames of LDR images and then processed: for example, their pixel values may be adjusted, or their fusion weights may be adjusted. In this way, when fusion processing is performed on the LDR images, the image information of the pixel points characterized as motion pixels is masked, and the finally fused HDR image is free of ghosting.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a setting curve provided in an embodiment of the present application;
fig. 3 is a schematic flow chart of an implementation of an image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal according to another embodiment of the present application.
Detailed Description
The technical solution of the present application is further described in detail with reference to the drawings and specific embodiments of the specification.
Fig. 1 is a schematic view of an implementation process of an image processing method according to an embodiment of the present application; the method is executed by a terminal such as a mobile phone, a tablet computer, or a digital camera. As shown in fig. 1, the image processing method includes:
step 101: determining a binary image corresponding to each two frames of LDR images based on a pixel difference value between each two frames of LDR images in at least two frames of LDR images for synthesizing the HDR images; wherein the pixels in the binary image represent either moving pixels or non-moving pixels.
Wherein the at least two frames of LDR images are LDR images photographed in one photographing operation with different exposure times. In practical applications, the terminal may obtain at least two LDR images from a video stream output from an internal Image Signal Processor (ISP).
Here, the terminal may obtain at least two frames of LDR images used for synthesizing the HDR image, detect whether a color space of the obtained LDR image is an RGB color space, and determine a binary image corresponding to each two frames of LDR images based on a pixel difference between each two frames of LDR images when the detection result indicates that the color space of the obtained LDR image is the RGB color space. And under the condition that the detection result represents that the color space of the obtained LDR image is a YUV color space, converting the color space of the obtained LDR image from the YUV color space to an RGB color space, and determining a binary image corresponding to each two frames of LDR images based on the pixel difference value between each two frames of LDR images.
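The YUV-to-RGB conversion mentioned above might look as follows; the full-range BT.601 coefficients are an assumption, since the patent does not specify the conversion matrix:

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Convert full-range YUV planes to an RGB image.
    BT.601-style coefficients are assumed; the patent only states
    that a YUV-to-RGB conversion is performed."""
    y = np.asarray(y, dtype=np.float64)
    u = np.asarray(u, dtype=np.float64)
    v = np.asarray(v, dtype=np.float64)
    r = y + 1.402 * (v - 128.0)
    g = y - 0.344136 * (u - 128.0) - 0.714136 * (v - 128.0)
    b = y + 1.772 * (u - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)
```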
The first numerical value and the second numerical value in the binary image mark whether the corresponding pixel points are motion pixels or non-motion pixels. For example, motion pixels and non-motion pixels may be identified by 1 and 0, respectively.
When determining the binary image corresponding to each two frames of LDR images, the terminal may compare pixel values between corresponding pixel points of the two frames of LDR images in units of pixel points or in units of image blocks, so as to determine whether the pixel points at the same spatial position in the two frames of LDR images are characterized as motion pixels. For example, the terminal compares, pixel by pixel, the pixel difference between each group of corresponding pixel points in the two frames of LDR images, so as to determine, based on the pixel difference, whether the corresponding pixel points are characterized as motion pixels; or the terminal calculates, in units of image blocks, the pixel mean value of each image block in each frame of LDR image, and determines whether the corresponding pixel point is characterized as a motion pixel based on the difference between the pixel mean values of the image blocks centered on that pixel point in the two frames of LDR images.
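A minimal sketch of the pixel-wise comparison, assuming 8-bit grayscale frames that have already been registered. Here 1 marks a motion pixel and 0 a non-motion pixel, and a single global threshold stands in for the per-pixel-value thresholds from the first setting data table described below:

```python
import numpy as np

def motion_binary_map(img_a, img_b, threshold=25):
    """Mark each pixel as a motion pixel (1) or non-motion pixel (0)
    based on the absolute pixel difference between two registered LDR
    frames. A global threshold is used for illustration only; the
    patent looks thresholds up per pixel value from a data table."""
    diff = np.abs(img_a.astype(np.int32) - img_b.astype(np.int32))
    return (diff >= threshold).astype(np.uint8)
```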
In practical application, the terminal can determine one frame of reference image from the obtained at least two frames of LDR images, compare the reference image with each frame of non-reference image in the at least two frames of LDR images one by one, and determine a binary image corresponding to each two frames of LDR images.
In some embodiments, to facilitate more accurate comparison in determining whether a pixel point in the LDR image is a motion pixel, before determining the corresponding binary image based on the acquired LDR image, luminance registration and image registration are performed on the acquired LDR image, and before determining the corresponding binary image for each two frames of the LDR image based on a pixel difference between each two frames of the LDR image used to synthesize the HDR image, the method further comprises:
performing luminance registration on a second image of the at least two frames of LDR images based on a first image of the at least two frames of LDR images; wherein the first image is the LDR image with the longest exposure time in the at least two frames of LDR images;
and carrying out image registration on the first image and the luminance-registered second image by using the image signal processor (ISP) of the terminal.
Here, the terminal determines the first image with the longest exposure time from the at least two frames of LDR images, and performs luminance registration on the images other than the first image based on the determined first image; image registration is then carried out on the first image and the luminance-registered images by using the ISP built into the terminal, so that each frame of LDR image in the at least two frames of LDR images is aligned to the same shooting spatial position.
In practical application, the terminal may perform multiplication operation based on the pixel value of the second image and the set exposure ratio to obtain a calculation result, and adjust the pixel value of the second image based on the calculation result so that the pixel value of the adjusted second image is the same as the calculation result, thereby making the brightness of the second image the same as the brightness of the first image. The terminal can also determine a pixel value mapping curve based on the histogram of the first image, and adjust the pixel value of the second image based on the pixel value mapping curve and the histogram of the second image, so as to realize brightness registration of the second image.
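The exposure-ratio variant of luminance registration can be sketched as follows (the histogram-based mapping variant is omitted); clipping to the 8-bit range is an added practical detail, not stated in the patent:

```python
import numpy as np

def luminance_register(second_img, exposure_ratio):
    """Scale the shorter-exposure frame by the set exposure ratio so
    its brightness matches the longest-exposure reference frame,
    clipping to the valid 8-bit range. Follows the multiplicative
    model described in the patent."""
    scaled = second_img.astype(np.float64) * exposure_ratio
    return np.clip(scaled, 0, 255).astype(np.uint8)
```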
In this embodiment, after the luminance registration and the image registration are performed on at least two frames of LDR images, the corresponding binary image is determined, and the pixel difference between each group of corresponding pixel points in each two frames of LDR images can be calculated more quickly and accurately. In addition, because the ISP built in the terminal has an image registration function, the built-in ISP is used for carrying out image registration on at least two frames of LDR images, so that the hardware resources of the terminal can be fully utilized, the time consumed by image registration is reduced, and the image processing efficiency is improved.
In some embodiments, in order to accurately identify the moving pixels, based on a pixel difference value between each two frames of LDR images in the at least two frames of LDR images used for synthesizing the HDR image, a binary image corresponding to each two frames of LDR is determined, including at least one of:
determining a corresponding first binary image based on a pixel difference value between each group of corresponding pixel points in each two frames of LDR images and a corresponding first set threshold;
determining a corresponding second binary image based on the inner product of the vectors corresponding to the image blocks where each pixel point is located in every two frames of LDR images and the corresponding second set threshold; wherein the vector corresponding to an image block represents the difference between the pixel value of each pixel point in the image block and the pixel mean value of the image block.
Here, the terminal may compare pixel values of each group of corresponding pixel points in the two frames of LDR images pixel by pixel with each other, thereby obtaining a pixel difference between each group of corresponding pixel points in the two frames of LDR images, and determine the corresponding first binary image based on the pixel difference between each group of corresponding pixel points and the corresponding first set threshold. Or, the terminal may also calculate a pixel average value of each group of corresponding image blocks in the two frames of LDR images by using the image block as a unit, and determine the corresponding first binary image based on a pixel difference value between each group of corresponding image blocks and the corresponding first set threshold.
Under the condition that the pixel difference value corresponding to a group of pixel points in the two frames of LDR images is greater than or equal to the corresponding first set threshold value, the group of pixel points are characterized as motion pixels in the corresponding first binary image; under the condition that the pixel difference value corresponding to one group of pixel points is smaller than the corresponding first set threshold, the pixel point is characterized as a non-motion pixel in the corresponding first binary image, so that errors caused by image noise can be reduced as much as possible in the process of identifying the motion pixel, and the motion pixel is accurately identified. In this embodiment, different pixel difference values correspond to different first setting thresholds.
The terminal can also determine vectors respectively corresponding to each group of corresponding image blocks in the two frames of LDR images by taking the image blocks as units, calculate an inner product between the two vectors, and determine a corresponding second binary image based on the inner product between the two vectors and a corresponding second set threshold. Under the condition that the inner product of the two vectors is greater than or equal to the corresponding second set threshold, the central pixel point of the corresponding image block is characterized as a moving pixel in the corresponding second binary image; and under the condition that the inner product of the two vectors is smaller than the corresponding second set threshold, the central pixel point of the corresponding image block is characterized as a non-motion pixel in the corresponding second binary image. When calculating the inner product of the two vectors, the terminal can also normalize the vectors corresponding to the image blocks, and calculate the inner product based on the normalized vectors. In this embodiment, the inner products of different vectors correspond to different second setting thresholds. Whether the central pixel point corresponding to the image block is characterized as a moving pixel in the corresponding second binary image is determined based on the inner product of the vectors between each group of corresponding image blocks and the corresponding second set threshold, and in the process, the continuity difference of the pixel values of the pixel points in the LDR image is comprehensively considered, so that the accuracy of the identified moving pixel is improved.
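Following the rule stated above (an inner product greater than or equal to the second set threshold marks the central pixel point as a motion pixel), the block-vector comparison can be sketched as follows. Normalization is described as optional and is applied here; the helper names are illustrative:

```python
import numpy as np

def block_vector(block):
    """Vector of per-pixel deviations from the block's pixel mean."""
    b = block.astype(np.float64).ravel()
    return b - b.mean()

def is_motion_center(block_a, block_b, threshold):
    """Classify the central pixel point of a pair of corresponding
    image blocks by the normalized inner product of their mean-removed
    vectors, per the patent's stated rule: an inner product >= the
    second set threshold marks the center as a motion pixel."""
    va, vb = block_vector(block_a), block_vector(block_b)
    na, nb = np.linalg.norm(va), np.linalg.norm(vb)
    if na == 0 or nb == 0:  # flat blocks carry no structure to compare
        return False
    inner = float(np.dot(va / na, vb / nb))
    return inner >= threshold
```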
In some embodiments, in determining the corresponding first binary image, the method further comprises: determining a corresponding first set threshold based on the pixel value of a pixel point in the LDR image and the first setting data table; wherein the first setting data table stores the correspondence between pixel values and first set thresholds, and the first set threshold represents the smallest change from the corresponding pixel value that is perceptible to the human eye.
Here, when the terminal compares the pixel difference between each group of corresponding pixel points in the LDR image of two frames pixel by pixel, the terminal may look up the first setting threshold corresponding to the pixel value from the first setting data table based on the pixel value of any one of the group of corresponding pixel points. When the terminal compares the difference of the pixel mean values between each group of corresponding image blocks pixel by pixel with the image block as a unit, the terminal may search the first setting threshold corresponding to the pixel mean value from the first setting data table based on the pixel mean value of any image block in each group of corresponding image blocks.
A first setting data table is stored in the terminal. The table stores different first set thresholds, which reflect, for different pixel values, how large a change in pixel value is perceptible to the human eye. In practical application, the same pixel value may correspond to two first set thresholds: one reflects how much the pixel value must increase, on the basis of that pixel value, to be perceived by the human eye; the other reflects how much the pixel value must decrease to be perceived by the human eye.
In practical application, the terminal may query the first setting data table based on the pixel values of the pixel points in the LDR image with the short exposure time in the two frames of LDR images, and determine the first setting threshold corresponding to the pixel value of the corresponding pixel point based on the first setting threshold corresponding to the different pixel values in the first setting data table.
Of course, the terminal may also query the first setting data table based on the pixel values of the pixel points in the LDR image with the long exposure time in the two frames of LDR images to obtain the corresponding first setting threshold, and at this time, the corresponding first setting threshold may reflect how much the pixel value is reduced to be perceived by human eyes on the basis of the corresponding pixel value.
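The table lookup can be sketched as follows. The sample pixel values and thresholds below are purely illustrative stand-ins for the first setting data table, whose actual contents the patent does not disclose; linear interpolation between table entries is likewise an assumption:

```python
import numpy as np

# Hypothetical first setting data table: pixel value -> smallest change
# the human eye can perceive at that value (a JND-style curve). The
# sample points are illustrative only.
_PIXEL_VALUES = np.array([0, 32, 64, 128, 192, 255], dtype=np.float64)
_JND_THRESHOLDS = np.array([18, 12, 8, 6, 8, 12], dtype=np.float64)

def first_set_threshold(pixel_value):
    """Look up (with linear interpolation, an assumed detail) the
    first set threshold for a given pixel value."""
    return float(np.interp(pixel_value, _PIXEL_VALUES, _JND_THRESHOLDS))
```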
In some embodiments, in determining the corresponding second binary image, the method further comprises: determining a corresponding second set threshold based on the vector of the image block corresponding to the pixel point in the LDR image and a second setting data table; wherein the second setting data table stores the correspondence between vectors and second set thresholds, and the second set threshold represents the inner product between the corresponding vector and the vector obtained when a perceptible change occurs in the pixel values of the corresponding image block.
Here, the terminal may look up a second setting threshold corresponding to a vector from the second setting data table based on the vector corresponding to any image block in each set of corresponding image blocks of the two-frame LDR image.
A second setting data table is stored in the terminal. The table stores second set thresholds corresponding to different vectors, which reflect, for image blocks centered on different pixel points, how large a change in the pixel value of the block's central pixel point is perceptible to the human eye. The different vectors stored in the second setting data table represent, for image blocks centered on different pixel points, the difference between each pixel point in the block and the block's pixel mean value. In practical application, each vector in the second setting data table may correspond to two second set thresholds: one reflects how much the pixel value of the central pixel point of the corresponding image block must increase to be perceived by the human eye; the other reflects how much it must decrease to be perceived by the human eye.
In practical application, the terminal may query the second setting data table based on the vectors of the image blocks in the LDR image with a short exposure time in the two frames of LDR images, and determine the second setting threshold corresponding to the corresponding vector based on the second setting thresholds corresponding to different vectors in the second setting data table, where at this time, the corresponding second setting threshold may reflect how much the pixel value of the central pixel point of the corresponding image block is increased and can be perceived by human eyes.
Certainly, the terminal may also query the second setting data table based on the vectors of the image blocks in the LDR image with the long exposure time in the two frames of LDR images to obtain the corresponding second setting threshold, and at this time, the corresponding second setting threshold may reflect how much the pixel value of the central pixel point of the corresponding image block is reduced and can be perceived by human eyes.
The size of the image block in the two-frame LDR image is the same as the size of the image block corresponding to the vector stored in the second setting data table.
In some embodiments, in order to identify a motion pixel more accurately, in a case where a first binary image corresponding to each two frames of LDR and a second binary image corresponding to each two frames of LDR are determined, the determining a binary image corresponding to each two frames of LDR based on a pixel difference value between each two frames of LDR images in at least two frames of LDR images used for synthesizing an HDR image further includes:
and carrying out bitwise OR operation on the corresponding first binary image and the corresponding second binary image to obtain a third binary image corresponding to each two frames of LDR.
Here, when the first binary image and the second binary image corresponding to two frames of LDR images are obtained, a bitwise OR operation is performed on the first binary image and the second binary image pixel by pixel, yielding the third binary image of the corresponding two frames of LDR images.

It should be noted that, in the foregoing embodiments, either the first binary image or the second binary image corresponding to each two frames of LDR images is used; in this embodiment, a third binary image corresponding to the two frames of images is determined based on both the first binary image and the second binary image corresponding to each two frames of LDR images, so that motion pixels can be identified more accurately from the third binary image.
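The bitwise OR step can be written in a couple of lines. The sketch below uses NumPy with illustrative array names; the patent does not prescribe a data layout:

```python
import numpy as np

def combine_binary_maps(first_map, second_map):
    # A pixel is a motion pixel (1) in the third binary image if either the
    # pixel-difference test or the block-vector test flagged it.
    return np.bitwise_or(first_map.astype(np.uint8), second_map.astype(np.uint8))

first_map = np.array([[0, 1], [0, 0]], dtype=np.uint8)   # pixel-difference test
second_map = np.array([[0, 0], [1, 0]], dtype=np.uint8)  # block-vector test
third_map = combine_binary_maps(first_map, second_map)
```

Because the union keeps every pixel flagged by either test, the third binary image can only mark more motion pixels than either input alone, which is why it identifies motion more reliably.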
In some embodiments, in a scenario of image fusion of an LDR image in a video stream, in order to improve accuracy of a binary map, when the binary map is determined, the method further includes:
under the condition that the determined binary image indicates that pixels in the set non-edge area are motion pixels, determining the moving distance of the terminal based on data from a sensor built into the terminal;
and when the moving distance is smaller than a set distance threshold value, performing error correction processing on the motion pixels in the set non-edge area in the determined binary image.
Here, the determined binary image includes the first binary image, the second binary image, or the third binary image. Considering that during video shooting a moving object usually appears first in an edge area of the image, when a moving object is detected in a non-edge area of the current image, error correction is performed on the pixel points characterized as motion pixels in the current image by referring to images captured before the moving object appeared. In this embodiment, a corresponding binary image sequence is determined based on the LDR image sequence, and error correction processing is performed on the pixels in the non-edge region of the latest binary image in that sequence based on the earlier binary images in the sequence.
In actual application, once the corresponding binary image sequence has been determined from the LDR image sequence, the terminal judges whether pixels in the set non-edge area of the latest binary image are characterized as motion pixels. If they are, the terminal determines its own moving distance based on data from a built-in sensor, such as a gyroscope or an acceleration sensor. When the determined moving distance is smaller than the set distance threshold, indicating that the terminal has not moved or that its movement amplitude is within a set error range, error correction processing is performed on the motion pixels in the set non-edge area of the latest binary image in the sequence. The error correction process is implemented as follows:
under the condition that a first pixel point located in the non-edge region of the latest binary image 1 is characterized as a motion pixel, the terminal detects whether the first pixel point in the non-edge region of each binary image 2 in the binary image sequence other than the latest binary image 1 is characterized as a motion pixel. If the first pixel point in the non-edge region of every binary image 2 in the sequence is characterized as a non-motion pixel, the value corresponding to the first pixel point in the latest binary image 1 is modified from 1 to 0; otherwise, if the first pixel point is characterized as a motion pixel in at least one binary image 2, the value corresponding to the first pixel point in the latest binary image 1 is left unmodified.
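The error-correction rule above can be sketched as follows. The sequence handling, margin-based definition of the non-edge region, and variable names are assumptions for illustration; the patent does not specify how the non-edge region is delimited:

```python
import numpy as np

def correct_non_edge_motion(binary_seq, edge_margin, move_dist, dist_threshold):
    # Clear a motion flag in the non-edge region of the latest binary image
    # when the terminal barely moved and no earlier binary image in the
    # sequence flagged the same pixel.
    latest = binary_seq[-1].copy()
    if move_dist >= dist_threshold or len(binary_seq) < 2:
        return latest
    h, w = latest.shape
    inner = (slice(edge_margin, h - edge_margin),
             slice(edge_margin, w - edge_margin))
    earlier_any = np.stack([m[inner] for m in binary_seq[:-1]]).any(axis=0)
    region = latest[inner]                     # view into `latest`
    region[(region == 1) & ~earlier_any] = 0   # flip unconfirmed 1s back to 0
    return latest
```

Edge pixels are deliberately left untouched, matching the observation that genuine motion tends to enter the frame through the edge region first.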
In some embodiments, in order to improve image processing efficiency, the determining a binary image corresponding to each two frames of LDR images in the at least two frames of LDR images used for synthesizing the HDR image based on a pixel difference between every two frames of LDR images includes:
and determining a binary image corresponding to each two frames of LDR images based on the pixel difference value between each two frames of LDR images in the at least two frames of LDR images after size reduction.
After image registration is performed on the at least two frames of LDR images, size reduction processing is performed on the at least two frames of LDR images, and a binary image corresponding to every two frames of LDR images is determined based on the pixel difference value between every two frames of the size-reduced LDR images.

In this embodiment, the LDR images after image registration are subjected to size reduction processing, so that the binary images are determined based on the size-reduced LDR images and the fusion weight of each pixel point in the LDR images is determined based on the size-reduced LDR images, which saves time and resources and improves image processing efficiency.
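A toy version of the size-reduction shortcut, using stride-based decimation in place of a proper resize (an assumption; the patent does not name a downscaling method):

```python
import numpy as np

def binary_map_downscaled(img_a, img_b, threshold, factor=2):
    # Compute the motion map on size-reduced frames, then expand it back to
    # full resolution by nearest-neighbour repetition.
    small_a = img_a[::factor, ::factor].astype(np.int32)
    small_b = img_b[::factor, ::factor].astype(np.int32)
    small_map = (np.abs(small_a - small_b) > threshold).astype(np.uint8)
    return np.repeat(np.repeat(small_map, factor, axis=0), factor, axis=1)
```

Working at 1/factor² of the pixel count is where the time saving comes from; the repeat step is only needed when the map must match the full-size frames again.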
Step 102: and performing fusion processing on the at least two frames of LDR images based on binary images corresponding to every two frames of LDR images and fusion weights corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain corresponding HDR images.
And the terminal determines a fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images, and performs fusion processing on the at least two frames of LDR images based on the binary image corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image.
And in actual application, the terminal performs fusion processing on the LDR images of at least two frames after brightness registration and image registration.
It should be noted that, when the binary image corresponding to each two frames of LDR images is determined based on the pixel difference between every two frames of the size-reduced LDR images, after determining the fusion weight corresponding to each pixel point of each frame of LDR image in the size-reduced at least two frames of LDR images, the terminal restores the at least two frames of LDR images to their size before reduction, so as to perform fusion processing on the restored-size LDR images. Determining the fusion weights of the pixel points based on the size-reduced LDR images saves resources and time and improves image processing efficiency.
It should be noted that the binary map is the first binary map or the second binary map determined in the foregoing; in the case where a corresponding third binary image is determined based on the corresponding first binary image and the corresponding second binary image, the binary image referred to herein is the third binary image.
The method for determining the fusion weight corresponding to each pixel point of each frame of LDR image comprises the following steps:
the terminal can determine a frame of reference image from at least two frames of LDR images, determine a first fusion weight of each pixel point in the reference image based on the pixel value of each pixel point in the reference image, and determine a second fusion weight of a corresponding pixel point of each frame of non-reference image in the at least two frames of LDR images based on the first fusion weight of each pixel point in the reference image. The reference image may be any one of at least two frames of LDR images, and the sum of the first fusion weight and the second fusion weight of each group of corresponding pixel points is equal to 1. And when the method is actually applied, determining the LDR image with the longest exposure time or the shortest exposure time in at least two frames of LDR images as a reference image.
For example, in an application scene with two frames of LDR images: when the pixel value of a pixel point in the first LDR image (the one with the longest exposure time) is less than or equal to a third set threshold, the terminal determines the fusion weight of the corresponding pixel point in the second LDR image as W0 and the fusion weight of the corresponding pixel point in the first LDR image as (1-W0); when the pixel value of the pixel point in the first LDR image is greater than the third set threshold and smaller than a fourth set threshold, the fusion weights of the corresponding pixel points in the first and second LDR images are determined based on a set proportion; and when the pixel value of the pixel point in the first LDR image is greater than or equal to the fourth set threshold, the terminal determines the fusion weight of the corresponding pixel point in the second LDR image as W1 and the fusion weight of the corresponding pixel point in the first LDR image as (1-W1). Here, W1 is greater than W0.
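For the two-frame example, the three-branch weight rule might look like this. Linear interpolation stands in for the unspecified "set proportion", and the W0/W1 defaults are placeholders:

```python
def fusion_weights_two_frames(p_long, t3, t4, w0=0.2, w1=0.8):
    # p_long: pixel value in the first (longest-exposure) LDR image.
    # t3, t4: the third and fourth set thresholds (t3 < t4); w0 < w1.
    # Returns (weight_long, weight_short); the pair always sums to 1.
    if p_long <= t3:
        w_short = w0
    elif p_long >= t4:
        w_short = w1
    else:
        # Middle interval: interpolate between w0 and w1 (stand-in for the
        # unspecified "set proportion").
        w_short = w0 + (w1 - w0) * (p_long - t3) / (t4 - t3)
    return 1.0 - w_short, w_short
```

The rule favours the short-exposure frame where the long-exposure frame is bright (likely clipped), and favours the long-exposure frame where it is dark (better signal-to-noise).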
In some embodiments, in order to determine the fusion weight of the pixel point more quickly, the method further comprises: determining the fusion weight of each pixel point of each frame of LDR image in the at least two frames of LDR images based on a set curve; and the set curve represents the corresponding relation between the pixel value of the pixel point and the corresponding fusion weight.
Here, the terminal determines a reference image from the at least two frames of LDR images, determines a corresponding set curve based on an exposure time of the reference image, determines a first fusion weight of each pixel point in the reference image based on the determined set curve, and determines a second fusion weight of a corresponding pixel point of a non-reference image from the at least two frames of LDR images based on the first fusion weight of each pixel point in the reference image.
In practical application, the set curve represents the correspondence between normalized pixel values and the corresponding fusion weights.
In some examples, in the set curve, the pixel values in the set interval and the corresponding fusion weights are linearly changed, and the pixel values in the non-set interval and the corresponding fusion weights are fixed. Referring to fig. 2, fig. 2 shows a corresponding setting curve when the LDR image with the shortest exposure time is used as the reference image.
As shown in fig. 2, when the pixel value of a pixel point of the reference image is less than or equal to 0.2 and greater than 0, the first fusion weight of the pixel point in the reference image is 0.2, and at this time, the longer the exposure time of the non-reference image in the LDR images of at least two frames is, the larger the fusion weight of the corresponding pixel point in the non-reference image is; when the pixel value of a pixel point of the reference image is greater than or equal to 0.8 and less than or equal to 1, the first fusion weight of the pixel point in the reference image is 0.8, and at this time, the shorter the exposure time of the non-reference image in the at least two frames of LDR images is, the larger the fusion weight of the corresponding pixel point in the non-reference image is.
It should be noted that, when the pixel value of the pixel point of the reference image is greater than 0.2 and less than 0.8, the corresponding fusion weight in each frame of the LDR image in the at least two frames of LDR images can be determined according to the set proportion.
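Reading the flat sections of Fig. 2 literally, the set curve for the shortest-exposure reference frame can be modelled as a clamped identity. The flat values 0.2 and 0.8 come from the description above; the linear middle segment joining the two fixed ends is an inference:

```python
def set_curve_weight(p):
    # p: normalized pixel value in [0, 1] of the reference
    # (shortest-exposure) image.
    if p <= 0.2:
        return 0.2  # fixed weight in the low non-set interval
    if p >= 0.8:
        return 0.8  # fixed weight in the high non-set interval
    return p        # linear segment joining (0.2, 0.2) and (0.8, 0.8)
```

Dark reference pixels thus receive a small weight (the longer exposures dominate there), while bright reference pixels receive a large weight (the short exposure is trusted where the others are likely clipped).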
After the terminal determines the fusion weight corresponding to each pixel point of each frame of LDR image in at least two frames of LDR images, the terminal can perform fusion processing on the at least two frames of LDR images by adopting the following two modes:
Mode 1: when the LDR images are subjected to fusion processing, the pixel values of the pixel points characterized as motion pixels in the LDR images are adjusted so as to mask the image information of those pixel points. In some embodiments, the performing a fusion process on the at least two frames of LDR images based on the binary image corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain a corresponding HDR image includes:
in the pixel value corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images, adjusting the pixel value of the pixel point which is characterized as a motion pixel in the corresponding binary image;
and performing fusion processing on the adjusted at least two frames of LDR images based on the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR images.
The terminal determines, based on the binary image corresponding to each two frames of LDR images, the pixel points characterized as motion pixels in the corresponding two frames of LDR images, obtaining at least one group of corresponding pixel points characterized as motion pixels in the two frames of LDR images. The pixel value of one pixel point in each group of corresponding pixel points is then replaced with the pixel value of the other pixel point. This process is equivalent to a matting operation: within each group of corresponding pixel points characterized as motion pixels, the pixel value of one pixel point is kept unchanged and the pixel value of the other is modified, so that the pixel values within each group become the same. In practical application, a frame of reference image is determined from the LDR images used to synthesize the HDR image; for each group of corresponding pixel points characterized as motion pixels in the reference image and a non-reference image, the pixel values of the corresponding pixel points in the reference image are kept unchanged and the pixel values of the corresponding pixel points in the non-reference image are modified.
And performing fusion processing on the at least two frames of LDR images based on the adjusted pixel value of each pixel point of each frame of LDR image in the at least two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image. In practical application, before the LDR image is subjected to fusion processing, the LDR image with the pixel value adjusted can be subjected to image filtering processing, so that the pixel values of the pixel points of the LDR image are in smooth transition.
Before the LDR images are subjected to fusion processing, weight filtering processing can be further performed on the fusion weight corresponding to each pixel point of each frame of LDR image in the two frames of LDR images, so that the fusion weight of the pixel points in each frame of LDR image is in smooth transition.
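Mode 1 for a two-frame case might be sketched like this: motion pixels in the non-reference frame are overwritten with the reference values before the ordinary weighted blend. This is a simplified stand-in for the described matting step; the image and weight filtering mentioned above are omitted:

```python
import numpy as np

def fuse_mode1(ref, non_ref, w_ref, motion_map):
    # Copy reference pixel values over motion pixels in the non-reference
    # frame so both frames agree there, then blend per pixel.
    ref = ref.astype(np.float64)
    adj = non_ref.astype(np.float64).copy()
    adj[motion_map == 1] = ref[motion_map == 1]
    return w_ref * ref + (1.0 - w_ref) * adj
```

At a motion pixel the blend then mixes two identical values, so the output equals the reference value regardless of the weights, which is exactly how the ghost is suppressed.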
Mode 2: when the LDR image is subjected to fusion processing, the fusion weight of the pixel point representing the motion pixel in the LDR image is adjusted to cover the image information of the pixel point representing the motion pixel. In some embodiments, the performing a fusion process on the at least two frames of LDR images based on the binary image corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain a corresponding HDR image includes:
in the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images, adjusting the fusion weight corresponding to the pixel point which is characterized as a motion pixel in the corresponding binary image;
and performing fusion processing on the at least two frames of LDR images based on the adjusted fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image.
Here, based on the binary image corresponding to each two frames of LDR images, the fusion weights corresponding to the pixel points characterized as motion pixels in the corresponding binary image are adjusted among the determined fusion weights, so that the image information of those pixel points can be masked when the LDR images are fused. For example, the terminal determines a frame of reference image from the at least two frames of LDR images, determines at least one group of pixel points characterized as motion pixels in the reference image and a non-reference image based on the binary image corresponding to those two images, sets the fusion weight of the corresponding pixel points in the reference image to 1, and sets the fusion weight of the corresponding pixel points in the non-reference image to 0. Alternatively, in the case where at least one group of pixel points characterized as motion pixels in the reference image and the non-reference image has been determined, when the fusion weight of the corresponding pixel points in the reference image is smaller than or equal to a set weight threshold, the fusion weight of the corresponding pixel points in the reference image is kept unchanged and the fusion weight of the corresponding pixel points in the non-reference image is adjusted accordingly, so that the sum of the fusion weights of each group of corresponding pixel points at the same position in the adjusted at least two frames of LDR images is less than or equal to 1. In actual application, the LDR image with the shortest exposure time among the at least two frames of LDR images is determined as the reference image.
And performing weighted fusion on the at least two frames of LDR images based on the fusion weight and the corresponding pixel value of each pixel point of each frame of LDR image in the at least two frames of LDR images after adjustment to obtain the HDR image. When at least two frames of LDR images are subjected to fusion processing, at least two frames of LDR images can be subjected to weighted fusion based on the fusion weight of the pixel points and the corresponding pixel values to obtain HDR images; or based on the fusion weight of each pixel point of each frame of LDR image in at least two frames of LDR images, performing weighted fusion on at least two frames of LDR images by using the Laplacian pyramid.
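Mode 2 achieves the same masking by editing weights instead of pixel values. The sketch below implements only the simplest variant described (reference weight forced to 1, non-reference weight to 0 at motion pixels), for a two-frame case:

```python
import numpy as np

def fuse_mode2(ref, non_ref, w_ref, motion_map):
    # At motion pixels, trust the reference frame entirely.
    w = w_ref.astype(np.float64).copy()
    w[motion_map == 1] = 1.0
    return w * ref + (1.0 - w) * non_ref
```

The Laplacian-pyramid alternative mentioned above would feed these per-pixel weights into a multi-band blend instead of this direct weighted sum, trading extra computation for smoother seams.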
After the corresponding HDR image is obtained, the tone mapping may be performed on the corresponding HDR image to adjust the brightness of the corresponding HDR image, and the tone-mapped HDR image may be input to the ISP to perform processing such as noise reduction and color correction. Here, the terminal may perform local tone mapping or global tone mapping on the HDR image.
In this embodiment, a binary image corresponding to each two frames of LDR images is determined based on the pixel difference between every two frames of LDR images in the at least two frames of low dynamic range (LDR) images used for synthesizing an HDR image, and the at least two frames of LDR images are subjected to fusion processing based on the binary image corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image, to obtain the corresponding HDR image. Because each pixel in the binary image represents either a motion pixel or a non-motion pixel, the pixel points characterized as motion pixels can be quickly determined from the corresponding two frames of LDR images based on the binary image, so that either the pixel values or the fusion weights of those pixel points can be adjusted. Consequently, when fusion processing is performed on the LDR images, the image information of the pixel points characterized as motion pixels can be masked, and the HDR image obtained by fusion is free of ghosting.

In addition, in the embodiments of the present application, image registration is performed on the obtained LDR images by the ISP in the terminal, which improves image registration efficiency. The binary images are used to quickly determine the pixel points characterized as motion pixels in the LDR images, and the fusion weight of each pixel point of the LDR images is quickly determined based on the set curve, so that the obtained LDR images can be quickly fused into a corresponding HDR image. This improves the fusion efficiency of the LDR images, so the scheme can be applied to image processing scenarios with high real-time requirements, for example, fusion processing of the LDR images in a video stream.
Fig. 3 is a schematic diagram illustrating an implementation flow of an image processing method provided by an application embodiment of the present application, and as shown in fig. 3, an implementation process of the image processing method is as follows:
at least two frames of LDR images output by the ISP for synthesizing HDR images are acquired.
And converting the color space of the at least two frames of LDR images output by the ISP from the YUV color space to the RGB color space to obtain at least two frames of LDR images of the RGB color space.
And under the condition that the acquired LDR images of at least two frames are subjected to Gamma correction, performing inverse Gamma correction on the LDR images of at least two frames of the RGB color space.
And performing brightness registration on the at least two frames of LDR images after the inverse Gamma correction.
And carrying out image registration on the LDR images of at least two frames after brightness registration.
And determining a binary image corresponding to each two frames of LDR images based on the pixel difference value between every two frames of LDR images in the at least two frames of LDR images after image registration. When the binary images are determined, a reference image is selected from the at least two frames of LDR images, and the corresponding binary images are determined based on the pixel difference values between the reference image and each non-reference image.
And determining the fusion weight of each pixel point of each frame of LDR image in at least two frames of LDR images after image registration. The fusion weight of each pixel point of the reference image can be determined, a weight map of the reference image is generated based on the fusion weight of each pixel point of the reference image, and a weight map of the non-reference image is generated based on the weight map of the reference image.
And filtering the fusion weight of each pixel point in each frame of LDR image.
And based on the determined binary image and the filtered fusion weight, performing fusion processing on at least two frames of LDR images after image registration to obtain a corresponding HDR image.
And performing Gamma correction on the HDR image.
And tone mapping is carried out on the HDR image after Gamma correction.
The color space of the tone mapped HDR image is converted from the RGB color space to the YUV color space.
And inputting the HDR image in the YUV color space into the image signal processor (ISP) for processing such as noise reduction and color correction.
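The Fig. 3 flow, reduced to a skeleton: only the gamma round-trip is spelled out, the registration, binary-map, and weighted-fusion stages are stubbed with a plain average, and the gamma value of 2.2 is an assumption (the actual encoding is device-dependent):

```python
import numpy as np

GAMMA = 2.2  # assumed encoding gamma

def inverse_gamma(img):
    # Undo gamma encoding: back to linear light before fusing.
    return np.power(img, GAMMA)

def gamma_correct(img):
    # Re-encode the fused result for display.
    return np.power(img, 1.0 / GAMMA)

def hdr_pipeline(ldr_frames):
    linear = [inverse_gamma(f) for f in ldr_frames]
    # Placeholder for luminance/image registration, binary maps,
    # and weighted fusion:
    fused = sum(linear) / len(linear)
    return gamma_correct(fused)
```

Fusing in linear light and only re-applying gamma afterwards is what the inverse-Gamma and Gamma steps of the flow accomplish; the YUV↔RGB conversions and tone mapping would wrap around this core.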
In order to implement the method of the embodiment of the present application, an embodiment of the present application further provides a terminal, as shown in fig. 4, where the terminal includes:
a first determining unit 41, configured to determine a binary image corresponding to each two frames of LDR images based on a pixel difference between each two frames of LDR images in at least two frames of LDR images used for synthesizing the HDR image; wherein the pixels in the binary image represent either moving pixels or non-moving pixels;
and a fusion unit 42, configured to perform fusion processing on the at least two frames of LDR images based on the binary images corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images, so as to obtain corresponding HDR images.
In some embodiments, the first determination unit 41 is configured to perform at least one of:
determining a binary image corresponding to each two frames of LDR images based on a pixel difference value between every two frames of LDR images in at least two frames of LDR images used for synthesizing the HDR image, including at least one of the following:
determining a corresponding first binary image based on a pixel difference value between each group of corresponding pixel points in each two frames of LDR images and a corresponding first set threshold;
determining a corresponding second binary image based on the inner product of the vectors corresponding to each group of corresponding image blocks in each two frames of LDR images and the corresponding second set threshold; wherein

the vector corresponding to an image block represents the difference between the pixel value of each pixel point in the image block and the pixel mean of the image block.
In some embodiments, in the case that the first binary image corresponding to each two frames of LDR images and the second binary image corresponding to each two frames of LDR images are determined, the first determining unit 41 is further configured to:

perform a bitwise OR operation on the corresponding first binary image and the corresponding second binary image to obtain a third binary image corresponding to each two frames of LDR images.
In some embodiments, the first determining unit 41, when determining the corresponding first binary map, is further configured to:
determining a corresponding first set threshold value based on the pixel value of a pixel point in the LDR image and the first set data table; the first setting data table stores the corresponding relation between the pixel value and a first setting threshold value; the first set threshold represents a difference value between a corresponding pixel value and a pixel value when the pixel value is changed and can be sensed by human eyes;
when determining the corresponding second value map, the first determining unit 41 is further configured to:
determining a corresponding second set threshold based on the vector of the image block corresponding to a pixel point in the LDR image and a second setting data table; the second setting data table stores the correspondence between vectors and second set thresholds; the second set threshold represents the inner product between the corresponding vector and the vector obtained when the change in the pixel value of the corresponding image block is just perceptible to human eyes.
In some embodiments, the terminal further comprises:
a second determining unit, configured to determine a moving distance of the terminal based on sensor data built in the terminal, when the determined binary image feature indicates that a pixel in the set non-edge area is a moving pixel;
and the error correction unit is used for performing error correction processing on the motion pixels in the set non-edge area in the determined binary image under the condition that the moving distance is smaller than a set distance threshold value.
In some embodiments, the terminal further comprises:
a third determining unit, configured to determine, based on a set curve, a fusion weight of each pixel point of each frame of LDR image in the at least two frames of LDR images; wherein

the set curve represents the correspondence between the pixel values of pixel points and the corresponding fusion weights.
In some embodiments, the fusion unit 42 is configured to:
in the pixel value corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images, adjusting the pixel value of the pixel point which is characterized as a motion pixel in the corresponding binary image;
and performing fusion processing on the adjusted at least two frames of LDR images based on the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR images.
In some embodiments, the fusion unit 42 is configured to:
in the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images, adjusting the fusion weight corresponding to the pixel point which is characterized as a motion pixel in the corresponding binary image;
and performing fusion processing on the at least two frames of LDR images based on the adjusted fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image.
In some embodiments, the first determining unit 41 is configured to:
and determining a binary image corresponding to each two frames of LDR images based on the pixel difference value between each two frames of LDR images in the at least two frames of LDR images after size reduction.
In some embodiments, the terminal further comprises:
a luminance registration unit for performing luminance registration on a second image of the at least two frames of LDR images based on a first image of the at least two frames of LDR images; wherein, the first image is the LDR image with the longest exposure time in the at least two frames of LDR images;
and the image registration unit is used for carrying out image registration on the first image and the second image after brightness registration by using an image signal processor of a terminal.
In practical applications, each unit included in the terminal may be implemented by a processor in the terminal, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Micro Control Unit (MCU), or a Field-Programmable Gate Array (FPGA).
It should be noted that: the terminal provided in the above embodiment is only exemplified by the division of each program module when performing image processing, and in practical applications, the above processing may be distributed to different program modules according to needs, that is, the internal structure of the device may be divided into different program modules to complete all or part of the above-described processing. In addition, the terminal provided by the above embodiment and the image processing method embodiment belong to the same concept, and the specific implementation process thereof is described in the method embodiment, which is not described herein again.
Based on the hardware implementation of the program module, in order to implement the method of the embodiment of the present application, the embodiment of the present application further provides a terminal. Fig. 5 is a schematic diagram of a hardware composition structure of a terminal according to an embodiment of the present application, and as shown in fig. 5, the terminal 5 includes:
a communication interface 51 capable of information interaction with other devices such as network devices and the like;
and the processor 52, connected to the communication interface 51 to enable information interaction with other devices, and configured, when running a computer program, to execute the method provided by one or more of the technical solutions on the terminal side. The computer program is stored in the memory 53.
Of course, in practice, the various components in the terminal 5 are coupled together by a bus system 54. It will be appreciated that the bus system 54 enables communication among these components. In addition to a data bus, the bus system 54 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 54 in Fig. 5.
The memory 53 in the embodiment of the present application is used to store various types of data to support the operation of the terminal 5. Examples of such data include: any computer program for operating on the terminal 5.
It will be appreciated that the memory 53 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 53 described in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the above embodiments of the present application may be applied to the processor 52 or implemented by the processor 52. The processor 52 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated hardware logic circuits in the processor 52 or by instructions in the form of software. The processor 52 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 52 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present application may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules within a decoding processor. The software module may be located in a storage medium in the memory 53; the processor 52 reads the program in the memory 53 and, in combination with its hardware, performs the steps of the foregoing method.
Optionally, when the processor 52 executes the program, the corresponding process implemented by the terminal in each method of the embodiment of the present application is implemented, and for brevity, no further description is given here.
In an exemplary embodiment, the present application further provides a storage medium, i.e. a computer storage medium, in particular a computer-readable storage medium, for example a memory 53 storing a computer program, where the computer program can be executed by the processor 52 of the terminal to perform the steps of the foregoing method. The computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative; for example, the division of the units is only a logical functional division, and other divisions are possible in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings, direct couplings, or communication connections between the components shown or discussed may be through some interfaces, and the indirect couplings or communication connections between the devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve alone as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; and the aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The technical means described in the embodiments of the present application may be arbitrarily combined without conflict.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. An image processing method, comprising:
determining a binary image corresponding to each two frames of LDR images based on a pixel difference value between each two frames of LDR images in at least two frames of low dynamic range LDR images for synthesizing the high dynamic range HDR images; wherein the pixels in the binary image represent either moving pixels or non-moving pixels;
and performing fusion processing on the at least two frames of LDR images based on binary images corresponding to every two frames of LDR images and fusion weights corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain corresponding HDR images.
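As a rough illustration of the two steps of claim 1, the sketch below builds a binary motion map from the absolute pixel difference of two frames and then fuses a stack of frames with per-pixel weights. The fixed difference threshold and the function names are illustrative assumptions; the patent's actual thresholding is detailed in the dependent claims.

```python
import numpy as np

def motion_mask(a, b, thresh=25):
    # The binary image: 1 marks a motion pixel, 0 a non-motion pixel.
    diff = np.abs(a.astype(np.int16) - b.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def fuse(images, weights):
    # Per-pixel weighted average of the LDR stack -> fused HDR result.
    imgs = np.stack([i.astype(np.float32) for i in images])
    w = np.stack([x.astype(np.float32) for x in weights])
    return (imgs * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-6)
```

In the claimed method, the binary images additionally steer the fusion (see claims 7 and 8) so that motion pixels do not produce ghosting artifacts.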
2. The method of claim 1, wherein the determining a binary image corresponding to each two frames of LDR images based on a pixel difference between every two frames of LDR images in the at least two frames of LDR images used for synthesizing the HDR image comprises at least one of:
determining a corresponding first binary image based on a pixel difference value between each group of corresponding pixel points in each two frames of LDR images and a corresponding first set threshold;
determining a corresponding second binary image based on an inner product of vectors corresponding to each group of corresponding image blocks in each two frames of LDR images and a corresponding second set threshold; wherein
and the vector corresponding to the image block represents the difference between the pixel value of each pixel point in the image block and the pixel mean value of the image block.
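The second detector of claim 2 can be sketched as follows: for each pair of corresponding blocks, form the vector of deviations from the block mean, and compare the inner product of the two vectors against a threshold. The block size and the zero threshold are illustrative assumptions; the intuition is that a small or negative inner product means the local structure disagrees between the two frames, indicating motion.

```python
import numpy as np

def block_vector(block):
    # The vector of each pixel's deviation from the block mean.
    v = block.astype(np.float32).ravel()
    return v - v.mean()

def second_binary(img_a, img_b, bs=8, thresh=0.0):
    # One motion/non-motion decision per bs x bs block.
    h, w = img_a.shape
    out = np.zeros((h // bs, w // bs), np.uint8)
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            va = block_vector(img_a[i:i + bs, j:j + bs])
            vb = block_vector(img_b[i:i + bs, j:j + bs])
            # Low inner product -> local structure disagrees -> motion.
            out[i // bs, j // bs] = 1 if np.dot(va, vb) <= thresh else 0
    return out
```

Because the block vectors are mean-subtracted, this comparison is insensitive to a uniform brightness offset between the two exposures, which complements the plain pixel-difference test of the first binary image.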
3. The method of claim 2, wherein, in a case that a first binary image corresponding to each two frames of LDR images and a second binary image corresponding to each two frames of LDR images are determined, the determining a binary image corresponding to each two frames of LDR images based on a pixel difference between each two frames of LDR images in the at least two frames of LDR images used for synthesizing the HDR image further comprises:
and performing a bitwise OR operation on the corresponding first binary image and the corresponding second binary image to obtain a third binary image corresponding to each two frames of LDR images.
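The combination in claim 3 is a plain element-wise OR: a pixel counts as motion if either detector flags it. For example:

```python
import numpy as np

# Hypothetical 2x2 outputs of the two detectors (1 = motion pixel).
first = np.array([[0, 1], [0, 0]], np.uint8)   # pixel-difference map
second = np.array([[0, 0], [1, 0]], np.uint8)  # block-structure map
third = np.bitwise_or(first, second)           # combined binary image
```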
4. The method of claim 2, wherein when said determining the corresponding first binary image, the method further comprises:
determining a corresponding first set threshold based on the pixel value of a pixel point in the LDR image and a first set data table; wherein the first set data table stores the correspondence between pixel values and first set thresholds, and the first set threshold represents the smallest change from the corresponding pixel value that is perceptible to the human eye;
when the corresponding second binary image is determined, the method further includes:
determining a corresponding second set threshold based on the vector of the image block corresponding to the pixel point in the LDR image and a second set data table; wherein the second set data table stores the correspondence between vectors and second set thresholds, and the second set threshold represents the inner product between the corresponding vector and that vector after the smallest change in the image block's pixel values that is perceptible to the human eye.
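A first set data table of this kind can be sketched as a just-noticeable-difference (JND) curve indexed by intensity, with larger thresholds in dark regions where the eye is less sensitive to small differences. The curve below follows the common Chou–Li style luminance JND and is an illustrative assumption, not the table used by the patent.

```python
import numpy as np

# Hypothetical first set data table: for each 8-bit intensity, the
# smallest pixel-value change the human eye can perceive.
LEVELS = np.arange(256, dtype=np.float64)
JND_TABLE = np.where(LEVELS < 127,
                     17.0 * (1.0 - np.sqrt(LEVELS / 127.0)) + 3.0,
                     3.0 * (LEVELS - 127.0) / 128.0 + 3.0)

def first_threshold(pixel_value):
    # Look up the first set threshold for a pixel's intensity.
    return float(JND_TABLE[int(pixel_value)])
```

With such a table, the motion test of claim 2 adapts per pixel: a difference of, say, 10 counts as motion in mid-tones but not in deep shadows.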
5. The method of claim 2 or 3, wherein the method further comprises:
in a case that the determined binary image characterizes the pixels in a set non-edge area as motion pixels, determining a moving distance of the terminal based on data from a sensor built into the terminal;
and when the moving distance is smaller than a set distance threshold value, performing error correction processing on the motion pixels in the set non-edge area in the determined binary image.
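The error correction of claim 5 can be sketched as follows: if the device's motion sensors report that the terminal barely moved, motion flags deep inside the frame are likely false positives (e.g. noise) and are cleared. The margin width, threshold, and function name are illustrative assumptions.

```python
import numpy as np

def correct_false_motion(mask, device_shift_px, dist_thresh=2.0, margin=4):
    # If the terminal barely moved per its built-in sensors, clear
    # motion flags in the set non-edge (interior) area; only a border
    # of `margin` pixels keeps its flags.
    if device_shift_px < dist_thresh:
        mask = mask.copy()
        mask[margin:-margin, margin:-margin] = 0
    return mask
```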
6. The method of claim 1, further comprising:
determining a fusion weight of each pixel point of each frame of LDR image in the at least two frames of LDR images based on a set curve; wherein
and the set curve represents the corresponding relation between the pixel values of the pixel points and the corresponding fusion weights.
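A set curve of this kind is commonly a "well-exposedness" weighting that favors mid-tone pixels over under- or over-exposed ones; the Gaussian shape and its parameters below are illustrative assumptions, not the patent's specific curve.

```python
import numpy as np

def fusion_weight(pixel, mu=127.5, sigma=50.0):
    # Hypothetical set curve: Gaussian weight peaking at mid-gray, so
    # well-exposed pixels dominate the fusion.
    p = np.asarray(pixel, dtype=np.float32)
    return np.exp(-((p - mu) ** 2) / (2.0 * sigma ** 2))
```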
7. The method of claim 1, wherein the performing fusion processing on the at least two frames of LDR images based on the binary image corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image comprises:
among the pixel values corresponding to the pixel points of each frame of LDR image in the at least two frames of LDR images, adjusting the pixel values of the pixel points characterized as motion pixels in the corresponding binary image;
and performing fusion processing on the adjusted at least two frames of LDR images based on the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image.
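The adjustment of claim 7 can be sketched by substituting, at motion pixels, values taken from a chosen reference frame so that moving objects do not ghost in the fused result. Using the reference frame's values directly, without exposure compensation, is an illustrative simplification.

```python
import numpy as np

def deghost_pixels(frame, reference, mask):
    # Replace motion pixels (mask == 1) in this frame with the
    # reference frame's values before the weighted fusion.
    out = frame.copy()
    out[mask == 1] = reference[mask == 1]
    return out
```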
8. The method as claimed in claim 1, wherein the performing the fusion processing on the at least two frames of LDR images based on the binary image corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image comprises:
among the fusion weights corresponding to the pixel points of each frame of LDR image in the at least two frames of LDR images, adjusting the fusion weights corresponding to the pixel points characterized as motion pixels in the corresponding binary image;
and performing fusion processing on the at least two frames of LDR images based on the adjusted fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR image.
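Claim 8's alternative leaves pixel values untouched and instead drives the fusion weight of motion pixels to zero so they contribute nothing to the fused result. A minimal sketch, with the function name assumed:

```python
import numpy as np

def suppress_motion_weights(weights, mask):
    # Zero out the fusion weight wherever the binary image flags motion.
    w = weights.astype(np.float32).copy()
    w[mask == 1] = 0.0
    return w
```

Compared with the pixel-value adjustment of claim 7, this variant keeps the original frame data intact and lets the normalized weighted average exclude the ghosted samples.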
9. The method of claim 1, wherein the determining a binary image corresponding to each two frames of LDR images based on a pixel difference between every two frames of LDR images in the at least two frames of LDR images used for synthesizing the HDR image comprises:
and determining a binary image corresponding to each two frames of LDR images based on the pixel difference value between each two frames of LDR images in the at least two frames of LDR images after size reduction.
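The size reduction of claim 9 trades resolution for speed: the pixel difference is computed on downscaled copies of the frames, and the resulting binary image can be upsampled if full resolution is needed. A 2x block-average downscale might look like the following (the factor of 2 is an assumption):

```python
import numpy as np

def downscale2(img):
    # Reduce each dimension by 2x via 2x2 block averaging; motion
    # detection on the smaller image costs roughly a quarter as much.
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```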
10. The method of any one of claims 1 to 4 and 6 to 9, wherein, before the determining a binary image corresponding to each two frames of LDR images based on a pixel difference between each two frames of LDR images in the at least two frames of LDR images used for synthesizing the HDR image, the method further comprises:
performing luminance registration on a second image of the at least two frames of LDR images based on a first image of the at least two frames of LDR images; wherein the first image is the LDR image with the longest exposure time in the at least two frames of LDR images;
and carrying out image registration on the first image and the second image after brightness registration by using an image signal processor of the terminal.
11. A terminal, comprising:
a first determining unit, configured to determine a binary image corresponding to each two frames of LDR images based on a pixel difference between each two frames of LDR images in at least two frames of low dynamic range LDR images used for synthesizing the high dynamic range HDR image; wherein the pixels in the binary image represent either moving pixels or non-moving pixels;
and the fusion unit is used for performing fusion processing on the at least two frames of LDR images based on the binary images corresponding to each two frames of LDR images and the fusion weight corresponding to each pixel point of each frame of LDR image in the at least two frames of LDR images to obtain the corresponding HDR images.
12. A terminal, comprising: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor is adapted to perform the steps of the method of any one of claims 1 to 10 when running the computer program.
13. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
CN202110221036.2A 2021-02-26 2021-02-26 Image processing method, terminal and storage medium Pending CN114972137A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110221036.2A CN114972137A (en) 2021-02-26 2021-02-26 Image processing method, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110221036.2A CN114972137A (en) 2021-02-26 2021-02-26 Image processing method, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN114972137A true CN114972137A (en) 2022-08-30

Family

ID=82973151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110221036.2A Pending CN114972137A (en) 2021-02-26 2021-02-26 Image processing method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114972137A (en)

Similar Documents

Publication Publication Date Title
EP3624439B1 (en) Imaging processing method for camera module in night scene, electronic device and storage medium
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN108335279B (en) Image fusion and HDR imaging
US8988529B2 (en) Target tracking apparatus, image tracking apparatus, methods of controlling operation of same, and digital camera
KR101699919B1 (en) High dynamic range image creation apparatus of removaling ghost blur by using multi exposure fusion and method of the same
KR101023946B1 (en) Apparatus for digital image stabilizing using object tracking and Method thereof
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
US20120250937A1 (en) Scene enhancements in off-center peripheral regions for nonlinear lens geometries
KR20050022748A (en) Device and method for reforming quality of picture having illumination difference
US8085986B2 (en) Image processing apparatus and method for processing images more naturally and sharply
JP2012515493A (en) Method and system for extended dynamic range images and video from multiple exposures
US9131172B2 (en) Image processing apparatus and method for detecting motion using long exposures images and then performing infinite impulse response filtering on short exposure image
US9424632B2 (en) System and method for generating high dynamic range images
CN113992850B (en) ISP-based image processing method and device, storage medium and image pickup apparatus
US8705896B2 (en) Processing a super-resolution target image
CN111601044B (en) Image exposure time ratio determining method and device
CN109286758B (en) High dynamic range image generation method, mobile terminal and storage medium
CN110213498B (en) Image generation method and device, electronic equipment and computer readable storage medium
JP2003134385A (en) Image synthesizing device
CN108513068B (en) Image selection method and device, storage medium and electronic equipment
CN113793257A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113259594A (en) Image processing method and device, computer readable storage medium and terminal
CN110930440A (en) Image alignment method and device, storage medium and electronic equipment
CN114972137A (en) Image processing method, terminal and storage medium
JP4052348B2 (en) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination