CN111953893B - High dynamic range image generation method, terminal device and storage medium - Google Patents


Info

Publication number
CN111953893B
CN111953893B (application CN202010608736.2A)
Authority
CN
China
Prior art keywords
area
region
image
exposure
fused
Prior art date
Legal status
Active
Application number
CN202010608736.2A
Other languages
Chinese (zh)
Other versions
CN111953893A (en)
Inventor
许楚萍
符顺
牛永岭
Current Assignee
Shenzhen Pulian Intelligent Software Co ltd
Original Assignee
TP Link Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by TP Link Technologies Co Ltd filed Critical TP Link Technologies Co Ltd
Priority to CN202010608736.2A
Publication of CN111953893A
Application granted
Publication of CN111953893B

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
              • H04N 23/951 Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
            • H04N 23/70 Circuitry for compensating brightness variation in the scene
              • H04N 23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
              • H04N 23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00 Image enhancement or restoration
            • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10016 Video; Image sequence
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20212 Image combination
                • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a high dynamic range image generation method, a terminal device and a storage medium. The method comprises: acquiring a group of images, where the group consists of at least two frames of the same scene captured with different exposure degrees within one shooting period, including one frame of normally exposed image and at least one frame of abnormally exposed image; extracting the abnormal exposure region from the normally exposed image, performing motion detection on that region, and dividing it into a moving area and a static area; performing HDR fusion on the moving areas and static areas of all the images to obtain fused sub-regions; and synthesizing the fused sub-regions with the abnormal exposure region of the normally exposed image to obtain a high dynamic range image. By implementing the embodiments of the invention, the time complexity of generating a high dynamic range image can be reduced.

Description

High dynamic range image generation method, terminal device and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a high dynamic range image generation method, a terminal device, and a storage medium.
Background
Due to hardware and technical limitations, the brightness range that current photographic and display equipment can capture and present is far smaller than that of a real scene. When shooting a real scene with a large light ratio, highlight areas lit by strong light sources (sunlight, lamps, reflected light, etc.) are rendered white due to overexposure, while dark areas such as shadows and backlit regions are rendered black due to underexposure; texture information is lost and imaging quality is severely degraded.
High dynamic range photographic technologies and devices aim to prevent the textureless white and black areas caused by overexposure or underexposure when an ordinary camera shoots a scene with a large light ratio, which would otherwise lose picture information.
In the prior art, Chinese patent CN110611750A generates a night-time high dynamic range image by fusing multiple underexposed and normally exposed images of the same scene taken from the same angle. However, most shooting scenes contain moving vehicles or people, and because the images to be fused are captured by a single camera at different times, the fused image may exhibit ghosting or afterimages. Chinese patent CN108492262A uses a Poisson algorithm to detect and remove moving regions in the image and then performs static-scene multi-exposure fusion of the whole image. Although this method solves the ghosting problem, the whole-image moving-region removal algorithm and the whole-image fusion algorithm increase the time complexity, and the method cannot meet the requirements of capturing high dynamic range images.
Disclosure of Invention
Embodiments of the present invention provide a high dynamic range image generation method, a terminal device, and a storage medium, which can reduce time complexity when generating a high dynamic range image.
An embodiment of the present invention provides a method for generating a high dynamic range image, including:
acquiring a group of images; the group of images are at least two frames of images with different exposure degrees of the same scene in a shooting period, and the images with different exposure degrees comprise one frame of normal exposure image and at least one frame of abnormal exposure image;
extracting an exposure abnormal area in the normal exposure image, detecting a motion area of the exposure abnormal area, and dividing the exposure abnormal area into a motion area and a static area;
respectively fusing the moving area and the static area of the normal exposure image with the corresponding areas in the abnormal exposure image to obtain fused subareas;
and synthesizing the fusion subarea and the abnormal exposure area in the normal exposure image to obtain a high dynamic range image.
Further, the extracting an exposure abnormal area in the normal exposure image specifically includes:
detecting overexposure pixel points and underexposure pixel points of the normal exposure image according to preset brightness threshold values, and generating an exposure binary image;
and removing abnormal values from the exposure binary image, calculating the boundaries of all connected domains of the exposure binary image after the abnormal values are removed, and then extracting the exposure abnormal regions according to the boundaries of all the connected domains.
Further, the detecting a moving area of the abnormal exposure area, and dividing the abnormal exposure area into a moving area and a static area specifically includes:
calculating the median of pixel values of all pixel points in the normal exposure image to obtain a first median, and performing binarization operation on the normal exposure image according to the first median to generate a first binary image;
calculating the median of pixel values of all pixel points in the abnormal exposure image to obtain a second median, and performing binarization operation on the abnormal exposure image according to the second median to generate a second binary image;
and extracting all moving areas of the normal exposure image according to the difference between the first binary image and the second binary image, and dividing the moving areas and the static areas in the exposure abnormal areas according to all the moving areas of the normal exposure image.
Further, the fusing the moving area and the static area of the normal exposure image with the corresponding areas in the abnormal exposure image respectively specifically includes:
when the abnormal exposure image only comprises a first long exposure image and the abnormal exposure region only comprises a first underexposure region, respectively fusing a static region of the first underexposure region and a first to-be-fused static region as well as a motion region of the first underexposure region and a first to-be-fused motion region according to preset pixel value weights to obtain a first underexposure fusion sub-region; the first still region to be fused is an image region corresponding to the still region position of the first underexposed region in the first long-exposure image; the first to-be-fused motion region is an image region corresponding to the motion region position of the first under-exposed region in the first long-exposure image; when the motion area of the first underexposed area is fused with the first motion area to be fused, a motion area is randomly selected as the fused motion area;
when the abnormal exposure image only comprises a first short exposure image and the abnormal exposure area only comprises a first overexposure area, respectively fusing a static area of the first overexposure area and a second static area to be fused and a motion area of the first overexposure area and a second motion area to be fused according to preset pixel value weight to obtain a first overexposure fusion sub-area; the second to-be-fused static area is an image area corresponding to the static area position of the first overexposure area in the first short-exposure image; the second motion region to be fused is an image region corresponding to the motion region position of the first overexposure region in the first short-exposure image; when the motion area of the first overexposure area is fused with the second motion area to be fused, randomly selecting a motion area as the fused motion area;
when the abnormal exposure image simultaneously comprises a second long exposure image and a second short exposure image, and the abnormal exposure area simultaneously comprises a second underexposure area and a second overexposure area, respectively fusing a static area of the second underexposure area and a third static area to be fused, and a motion area of the second underexposure area and a third motion area to be fused according to preset pixel value weights to obtain a second underexposure fusion sub-area; fusing the static area of the second overexposure area and a fourth static area to be fused, and the moving area of the second overexposure area and a fourth moving area to be fused according to preset pixel value weights to obtain a second overexposure fusion sub-area; the third still region to be fused is an image region corresponding to the still region position of the second underexposed region in the second long-exposure image; the third motion region to be fused is an image region corresponding to the motion region position of the second underexposed region in the second long-exposure image; the fourth still region to be fused is an image region corresponding to the still region of the second overexposed region in the second short-exposure image; the fourth motion region to be fused is an image region corresponding to the motion region of the second overexposure region in the second short-exposure image; when the motion area of the second underexposed area is fused with the third motion area to be fused, a motion area is randomly selected as the fused motion area; and fusing the motion area of the second overexposure area with the fourth motion area to be fused, and randomly selecting a motion area as the fused motion area.
Further, the synthesizing the fusion sub-region with the exposure abnormal region in the normal exposure image to obtain a high dynamic range image specifically includes:
when the fusion sub-region is the first underexposure fusion sub-region, performing progressive fusion on the first underexposure fusion sub-region and a first underexposure region in the normal exposure image to obtain the high dynamic range image;
when the fusion sub-region is the first overexposure fusion sub-region, performing progressive fusion on the first overexposure fusion sub-region and a first overexposure region in the normal exposure image to obtain the high dynamic range image;
when the fusion sub-region comprises a second underexposure fusion sub-region and a second overexposure fusion sub-region, carrying out progressive fusion on the second underexposure fusion sub-region and a second underexposure region in the normal exposure image, and carrying out progressive fusion on the second overexposure fusion sub-region and a second overexposure region in the normal exposure image.
Further, the method also comprises the following steps: and setting the exposure parameters of each image in the next shooting period according to the current exposure parameters of each image in the current shooting period and the brightness information of the exposure abnormal areas in each image.
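The feedback from one shooting period to the next can be sketched as follows. This excerpt does not give the patent's actual control law, so the function `next_exposure`, its mid-grey `target`, the dead band `[lo, hi]`, and the proportional update rule are all illustrative assumptions:

```python
def next_exposure(current_exposure, region_mean, target=128.0, lo=118.0, hi=138.0):
    # Hypothetical control rule: leave the exposure parameter unchanged while
    # the abnormal region's mean brightness sits inside a dead band around a
    # mid-grey target; otherwise scale it proportionally toward the target.
    if lo <= region_mean <= hi:
        return current_exposure
    return current_exposure * (target / max(region_mean, 1.0))

# An underexposed region (mean 64) asks for twice the exposure next period.
print(next_exposure(10.0, 64.0))   # → 20.0
```

A region that is still too dark thus pushes the next period's long exposure up, and a region that is too bright pulls the short exposure down, without touching exposures whose regions already look acceptable.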
On the basis of the above embodiment, another embodiment of the present invention provides a high dynamic range video generation method, including: acquiring a plurality of groups of images of a plurality of shooting periods, and generating a high dynamic range image corresponding to each group of images according to the high dynamic range image generation method of any one of the embodiments to obtain a plurality of high dynamic range images; and generating a high dynamic range video according to the plurality of high dynamic range images.
On the basis of the foregoing embodiment, another embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, and the processor executes the computer program to implement the high dynamic range image generation method according to any one of the foregoing embodiments of the present invention.
On the basis of the foregoing embodiment, another embodiment of the present invention provides a storage medium, where the storage medium includes a stored computer program, and when the computer program runs, a device in which the storage medium is located is controlled to execute the high dynamic range image generation method according to any one of the foregoing embodiments of the present invention.
The embodiment of the invention has the following beneficial effects:
The embodiments of the invention provide a high dynamic range image generation method, a terminal device and a storage medium. The method first acquires a normally exposed image and an abnormally exposed image; the abnormal exposure region in the normally exposed image is then divided into a moving area and a static area; the moving areas and static areas of all the images are fused to obtain fused sub-regions; finally, the fused sub-regions are synthesized with the abnormal exposure region of the normally exposed image to generate the high dynamic range image. Because only the abnormal exposure region of the normally exposed image needs to be fused, rather than the entire image, the time complexity of image fusion is greatly reduced and the generation efficiency of the high dynamic range image is improved.
Drawings
Fig. 1 is a schematic flow chart of a high dynamic range image generation method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a binarized image denoising filter according to an embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating an abnormal exposure region fusion according to an embodiment of the present invention.
Fig. 4 is another schematic diagram of performing abnormal exposure region fusion according to an embodiment of the present invention.
Fig. 5 is yet another schematic diagram of performing abnormal exposure region fusion according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the progressive synthesis provided by an embodiment of the present invention.
Fig. 7 is a flowchart illustrating a high dynamic range image generating method according to another embodiment of the present invention.
Fig. 8 is a schematic view illustrating exposure parameter control according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, a method for generating a high dynamic range image according to an embodiment of the present invention includes:
step S1, acquiring a group of images; the group of images are at least two frames of images with different exposure degrees of the same scene in a shooting period, and the images with different exposure degrees comprise one frame of normal exposure image and at least one frame of abnormal exposure image;
step S2, extracting an exposure abnormal area in the normal exposure image, detecting a motion area of the exposure abnormal area, and dividing the exposure abnormal area into a motion area and a static area;
step S3, carrying out HDR fusion on the motion areas and the static areas of all the images to obtain a fusion sub-area;
and step S4, synthesizing the fusion subarea and the abnormal exposure area in the normal exposure image to obtain a high dynamic range image.
For step S1, within one shooting period, the camera device is adjusted to shoot the same scene under different exposure parameters, acquiring one frame of normally exposed image and at least one frame of abnormally exposed image. In a preferred embodiment, the abnormally exposed image includes any one or combination of the following: a long exposure image and a short exposure image. In the normally exposed image, the texture and light-dark contrast of the image match the scene as seen by the human eye; for the long exposure image, the exposure is appropriately increased during shooting so that the light-dark contrast of underexposed areas is optimal, and for the short exposure image, the exposure is appropriately decreased so that the light-dark contrast of overexposed areas is optimal. Normal exposure images, long exposure images and short exposure images are well known to those skilled in the art.
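As a toy illustration of such a bracketed exposure set (not the capture mechanism itself, which on a real camera varies shutter time and gain), different exposure levels can be simulated by scaling one linear scene and clipping to the 8-bit range the sensor records; `simulate_bracket` and its gain values are illustrative assumptions:

```python
import numpy as np

def simulate_bracket(radiance, gains=(0.5, 1.0, 2.0)):
    # Short / normal / long "exposures" of one linear radiance scene,
    # clipped into the 8-bit range an ordinary sensor can record.
    return [np.clip(radiance * g, 0, 255).astype(np.uint8) for g in gains]

scene = np.linspace(0.0, 400.0, 8)          # scene range exceeds 8 bits
short, normal, long_ = simulate_bracket(scene)
# Highlights survive only in the short exposure; shadows open up in the long one.
```

Note how the normal exposure clips the brightest scene values to 255, which is exactly the abnormal exposure region the following steps detect and repair.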
For step S2, in a preferred embodiment, the abnormal exposure region in the normally exposed image is extracted by:
detecting overexposure and underexposure pixel points of the normal exposure image according to a preset brightness threshold value to generate an exposure binary image; removing abnormal values of the exposed binary image through morphological corrosion operation; and calculating the boundary of each connected domain of the exposure binarization image after the abnormal value is eliminated, and then extracting the exposure abnormal region according to the boundary of each connected domain.
Specifically, overexposed and underexposed pixels of the normal exposure image are first detected to generate the exposure binary image: if the brightness value y(i, j) at position (i, j) of the normal exposure image is less than a threshold T1 (assumed to be 25) or greater than a threshold T2 (assumed to be 230), the pixel at that position is an abnormally exposed pixel; otherwise it is a normally exposed pixel.
Then a morphological erosion operation is applied to the computed exposure binary image to remove abnormal values. The erosion can be performed by filtering the image with the 3 × 3 template shown in Fig. 2: when the filtering result at a pixel position is not equal to 8, that position is judged to be an abnormal value and is converted to a normally exposed pixel.
Finally, the boundary of each connected domain of the binary image after outlier removal is calculated; the position information of each abnormal exposure region in the normal exposure image is stored as a bounding box, and the abnormal exposure regions are extracted according to this position information.
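The three sub-steps above — thresholding into a binary map, removing outliers with a 3 × 3 erosion template, and boxing each connected domain — can be sketched in NumPy as follows. T1 = 25 and T2 = 230 are the example thresholds from the text; the simple 4-connected flood fill stands in for whatever connected-domain routine a real implementation would use:

```python
import numpy as np

T1, T2 = 25, 230  # example brightness thresholds from the text

def exposure_mask(gray):
    # 1 where a pixel is abnormally exposed (too dark or too bright).
    return ((gray < T1) | (gray > T2)).astype(np.uint8)

def erode3x3(mask):
    # Erosion with the 3x3 all-ones template: a pixel survives only if
    # all 8 neighbours are also abnormal (filter result == 8), matching
    # the outlier-removal rule described above.
    h, w = mask.shape
    out = np.zeros_like(mask)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = mask[i - 1:i + 2, j - 1:j + 2]
            if window.sum() - mask[i, j] == 8:   # all 8 neighbours set
                out[i, j] = mask[i, j]
    return out

def bounding_boxes(mask):
    # Bounding box (r0, c0, r1, c1) of each 4-connected component.
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, (r0, c0, r1, c1) = [(i, j)], (i, j, i, j)
                seen[i, j] = True
                while stack:
                    r, c = stack.pop()
                    r0, c0 = min(r0, r), min(c0, c)
                    r1, c1 = max(r1, r), max(c1, c)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            stack.append((nr, nc))
                boxes.append((r0, c0, r1, c1))
    return boxes

gray = np.full((8, 8), 128, dtype=np.uint8)
gray[2:6, 2:6] = 10                      # a 4x4 underexposed block
boxes = bounding_boxes(erode3x3(exposure_mask(gray)))
print(boxes)   # → [(3, 3, 4, 4)] -- only the block's stable core survives erosion
```

In practice the erosion and connected-component steps would use a vectorized library routine; the loops here just make the per-pixel rules explicit.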
In a preferred embodiment, the detecting a moving area of the abnormal exposure area, and dividing the abnormal exposure area into a moving area and a static area includes:
calculating the median of pixel values of all pixel points in the normal exposure image to obtain a first median, and performing binarization operation on the normal exposure image according to the first median to generate a first binary image; calculating the median of pixel values of all pixel points in the abnormal exposure image to obtain a second median, and performing binarization operation on the abnormal exposure image according to the second median to generate a second binary image; and extracting all moving areas of the normal exposure image according to the difference between the first binary image and the second binary image, and dividing the moving areas and the static areas in the abnormal exposure areas according to all the moving areas of the normal exposure image.
Specifically, the median of the pixel values of all pixels in the normally exposed image is calculated to obtain the first median m1; then, taking m1 as the boundary, pixels whose value is greater than or equal to m1 are set to 255 and pixels whose value is less than m1 are set to 0, generating the first binary image:

B1(i, j) = 255 if I_normal(i, j) ≥ m1, otherwise 0

In the same way, the second binary image of the abnormal exposure image is generated from the second median m2:

B2(i, j) = 255 if I_abnormal(i, j) ≥ m2, otherwise 0

The difference between B1 and B2 then forms a differential binary image. After abnormal values in the differential binary image are removed, the result is all moving areas in the normal exposure image; the image areas falling within the abnormal exposure region are then extracted from these moving areas, which realizes the division of the abnormal exposure region into moving and static areas.
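A minimal NumPy sketch of this median-binarize-and-difference motion test (the outlier removal on the differential image is omitted for brevity, and the test scene below is an illustrative assumption):

```python
import numpy as np

def median_binarize(img):
    # Binarize an image about the median of its own pixel values:
    # >= median -> 255, otherwise 0, as described above.
    m = np.median(img)
    return np.where(img >= m, 255, 0).astype(np.uint8)

def motion_mask(normal, abnormal):
    # Pixels whose median-binarized values differ between the two
    # exposures are treated as moving; everything else is static.
    # Binarizing each frame about its own median makes the comparison
    # robust to the global brightness difference between exposures.
    return (median_binarize(normal) != median_binarize(abnormal)).astype(np.uint8)

# A bright object sits at (0, 0) in the normal frame and has moved to
# (0, 1) in the (brighter) long-exposure frame.
normal = np.full((4, 4), 40)
normal[2:] = 200
normal[0, 0] = 255
abnormal = np.full((4, 4), 80)
abnormal[2:] = 255
abnormal[0, 1] = 255

mm = motion_mask(normal, abnormal)
print(int(mm.sum()))   # → 2: only the object's old and new positions differ
```

The static background, although twice as bright in the second frame, binarizes to the same pattern in both frames and is correctly left out of the motion mask.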
For step S3, in a preferred embodiment, the HDR fusion of the motion region and the still region of all the images to obtain a fusion sub-region specifically includes:
when the abnormal exposure image only comprises a first long exposure image and the abnormal exposure region only comprises a first underexposure region, respectively fusing a static region of the first underexposure region and a first to-be-fused static region as well as a motion region of the first underexposure region and a first to-be-fused motion region according to preset pixel value weights to obtain a first underexposure fusion sub-region; the first still region to be fused is an image region corresponding to the still region position of the first underexposed region in the first long-exposure image; the first to-be-fused motion region is an image region corresponding to the motion region position of the first under-exposed region in the first long-exposure image; when the motion area of the first underexposed area is fused with the first motion area to be fused, a motion area is randomly selected as the fused motion area;
when the abnormal exposure image only comprises a first short exposure image and the abnormal exposure area only comprises a first overexposure area, respectively fusing a static area of the first overexposure area and a second static area to be fused and a motion area of the first overexposure area and a second motion area to be fused according to preset pixel value weight to obtain a first overexposure fusion sub-area; the second to-be-fused static area is an image area corresponding to the static area position of the first overexposure area in the first short-exposure image; the second motion region to be fused is an image region corresponding to the motion region position of the first overexposure region in the first short-exposure image; when the motion area of the first overexposure area is fused with the second motion area to be fused, randomly selecting a motion area as the fused motion area;
when the abnormal exposure image simultaneously comprises a second long exposure image and a second short exposure image, and the abnormal exposure area simultaneously comprises a second underexposure area and a second overexposure area, respectively fusing a static area of the second underexposure area and a third static area to be fused, and a motion area of the second underexposure area and a third motion area to be fused according to preset pixel value weights to obtain a second underexposure fusion sub-area; fusing the static area of the second overexposure area and a fourth static area to be fused, and the moving area of the second overexposure area and a fourth moving area to be fused according to preset pixel value weights to obtain a second overexposure fusion sub-area; the third still region to be fused is an image region corresponding to the still region position of the second underexposed region in the second long-exposure image; the third motion region to be fused is an image region corresponding to the motion region position of the second underexposed region in the second long-exposure image; the fourth still region to be fused is an image region corresponding to the still region of the second overexposed region in the second short-exposure image; the fourth motion region to be fused is an image region corresponding to the motion region of the second overexposure region in the second short-exposure image; when the motion area of the second underexposed area is fused with the third motion area to be fused, a motion area is randomly selected as the fused motion area; and fusing the motion area of the second overexposure area with the fourth motion area to be fused, and randomly selecting a motion area as the fused motion area.
The specific steps of the above fusion are further explained below:
The first case, as shown in fig. 3: the abnormal exposure image is a long exposure image (the first long exposure image), and the abnormal exposure region in the normal exposure image is an underexposed region (the first underexposed region). The underexposed region is first divided into a static area and a moving area according to the previous step. Then, according to the relative position of the static area of the underexposed region in the normal exposure image, the image area at the same position in the long exposure image is taken as the static area to be fused, i.e. the first to-be-fused static area; the first to-be-fused motion area is obtained in the same way. Finally, the static area of the underexposed region is fused with the first to-be-fused static area of the long exposure image, and the motion area of the underexposed region is fused with the first to-be-fused motion area of the long exposure image.
It should be emphasized that when the motion region of the underexposed region is fused with the first to-be-fused motion region of the long-exposure image, one of the two motion regions is randomly selected as the fused motion region. That is, during fusion, either the pixel value weights of all pixels in the motion region of the underexposed region are set to 0, or the pixel value weights of all pixels in the first to-be-fused motion region are set to 0. For example, suppose a pixel A in the motion region of the underexposed region of the normal exposure image has value 122, and the corresponding pixel B in the first to-be-fused motion region has value 100; if the weight of point A is set to 0 during fusion, the weight of point B is set to 1. The pixel value weight of every pixel of the two regions to be fused is obtained in this manner, yielding a weight map; during fusion, the two regions to be fused and their corresponding weight maps are down-sampled, filtered and combined using the Laplacian pyramid method, which finally realizes the image fusion.
After all pixels in the motion region of the underexposed area and all pixels in the first to-be-fused motion region have been fused in this way, the final effect is that only one of the two motion regions is retained: either the motion region of the underexposed area or the first to-be-fused motion region. Because only one of the two motion regions is kept, no ghosting or afterimage problem occurs.
For the fusion of static areas, the above restriction does not apply because there is no ghosting problem, so the pixel value weight of one area need not be set to 0. For example, for the fusion of the static area of the underexposed region with the first to-be-fused static area of the long-exposure image, the pixel value weights of all pixels in the static area of the underexposed region may be set to 0.6 and the weights of all pixels in the first to-be-fused static area to 0.4, after which the fusion is performed.
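The weight rule just described — hard 0/1 weights inside motion regions so only one source survives, soft weights (e.g. 0.6/0.4) in static regions — can be sketched as below. The full method feeds such weight maps into a Laplacian-pyramid blend; that multiscale step is omitted here for brevity, and the function name `fuse_region` is an illustrative assumption:

```python
import numpy as np

def fuse_region(normal_region, abnormal_region, motion_mask,
                w_static=0.6, keep_normal_motion=True):
    # Per-pixel weight map for the normal-exposure region: static pixels
    # blend with weights (w_static, 1 - w_static); motion pixels take
    # exactly one source (weight 1 vs 0), so no ghost can appear.
    w = np.full(normal_region.shape, w_static, dtype=float)
    w[motion_mask == 1] = 1.0 if keep_normal_motion else 0.0
    return w * normal_region + (1.0 - w) * abnormal_region

# Pixel A (normal, value 122) vs pixel B (long exposure, value 100) from
# the text: giving A weight 0 keeps B's value in the motion pixel, while
# the static pixel blends 0.6 / 0.4.
normal = np.array([[122.0, 60.0]])
long_exp = np.array([[100.0, 90.0]])
mask = np.array([[1, 0]])
fused = fuse_region(normal, long_exp, mask, keep_normal_motion=False)
print(fused[0, 0])   # → 100.0 (motion pixel: long exposure wins outright)
```

Replacing this direct weighted sum with a pyramid blend changes only how the transition between weights is smoothed across scales, not the weights themselves.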
The second case, as shown in fig. 4: the abnormal exposure image is a short exposure image (the first short exposure image), and the abnormal exposure region is an overexposed region (the first overexposed region). The overexposed region is first divided into a static area and a moving area. Then, according to the relative position of the static area of the overexposed region in the normal exposure image, the image area at the same position in the short exposure image is taken as the static area to be fused, i.e. the second to-be-fused static area defined above; the second to-be-fused motion area is obtained in the same way. The static area of the overexposed region is then fused with the second to-be-fused static area of the short exposure image, and the motion area of the overexposed region is fused with the second to-be-fused motion area of the short exposure image. The fusion principle of the static area of the overexposed region and the second to-be-fused static area of the short-exposure image is the same as that of the static area of the underexposed region and the first to-be-fused static area of the long-exposure image, and is not repeated; the fusion principle of the motion region of the overexposed region and the second to-be-fused motion region of the short-exposure image is the same as that of the motion region of the underexposed region and the first to-be-fused motion region of the long-exposure image, and is likewise not repeated here.
The third case, as shown in fig. 5: if the abnormal exposure image includes both a long exposure image (a second long exposure image) and a short exposure image (a second short exposure image), and the exposure abnormal area includes both an underexposed area (a second underexposed area) and an overexposed area (a second overexposed area), then during fusion, firstly dividing the second underexposed area and the second overexposed area each into a static area and a dynamic area; then, according to the relative position of the static area of the second underexposed area in the normal exposure image, taking the image area with the same position in the long exposure image as the static area needing to be fused, namely the third to-be-fused static area, and obtaining the third to-be-fused motion area in the same way; then, according to the relative position of the static area of the second overexposed area in the normal exposure image, taking the image area with the same position in the short exposure image as the static area needing to be fused, namely the fourth to-be-fused static area, and obtaining the fourth to-be-fused motion area in the same way; and finally, fusing the static area of the second underexposed area with the third to-be-fused static area of the long-exposure image, fusing the motion area of the second underexposed area with the third to-be-fused motion area of the long-exposure image, fusing the static area of the second overexposed area with the fourth to-be-fused static area of the short-exposure image, and fusing the motion area of the second overexposed area with the fourth to-be-fused motion area of the short-exposure image.
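Two operations are common to all three cases: extracting the image area at the same relative position in another exposure frame, and fusing motion areas without ghosting by copying from a single source image rather than blending. A minimal sketch, where the bounding-box representation, the function names and the `pick` policy are illustrative assumptions:

```python
import numpy as np

def region_at(image, bbox):
    """Extract the image area at the same relative position in another
    exposure frame; bbox = (y0, y1, x0, x1) is an assumed rectangular
    representation of a region (the patent does not fix a representation)."""
    y0, y1, x0, x1 = bbox
    return image[y0:y1, x0:x1]

def fuse_motion_region(region_a, region_b, pick="a"):
    """Ghost-free motion fusion: the fused motion region is copied entirely
    from one of the two source images instead of blending them, since
    blending moving content from two moments would produce ghosting."""
    return region_a.copy() if pick == "a" else region_b.copy()
```

Usage: given the motion area's bounding box in the normal exposure image, `region_at(long_image, bbox)` yields the first to-be-fused motion region, and `fuse_motion_region` selects one of the two as the fused result.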
For step S4, in a preferred embodiment, the synthesizing the fusion sub-region with the exposure abnormal region in the normal exposure image to obtain a high dynamic range image specifically includes:
when the fusion sub-region is the first underexposure fusion sub-region, performing progressive fusion on the first underexposure fusion sub-region and a first underexposure region in the normal exposure image to obtain the high dynamic range image; when the fusion sub-region is the first overexposure fusion sub-region, performing progressive fusion on the first overexposure fusion sub-region and a first overexposure region in the normal exposure image to obtain the high dynamic range image; when the fusion sub-region comprises a second underexposure fusion sub-region and a second overexposure fusion sub-region, carrying out progressive fusion on the second underexposure fusion sub-region and a second underexposure region in the normal exposure image, and carrying out progressive fusion on the second overexposure fusion sub-region and a second overexposure region in the normal exposure image.
Specifically, after the fusion sub-region is generated in step S3, the exposure abnormal region in the normal exposure image may be directly replaced with the fusion sub-region, and the high dynamic range image may be generated.
In a more preferred embodiment, the fusion sub-region and the abnormal exposure region in the normal exposure image may be progressively synthesized. When they are progressively synthesized, the edge of the region is blended gradually, that is, weighted fusion is performed according to the distance from the edge, so that the region boundary transitions naturally. The specific principle is shown in fig. 6: graph A shows the pixel value weight coefficient of each pixel point of the normal exposure image during fusion, and graph B shows the pixel value weight coefficient of each pixel point in the fusion sub-region during fusion. As can be seen from the figure, at the edge of the blended region, the weight of each pixel point of the input normal exposure image gradually decreases from the outside toward the middle, while the weight of each pixel point belonging to the fusion sub-region gradually increases from the outside toward the middle (for convenience of illustration, the total weight in the figure is set to 10 instead of 1). This produces a natural transition at the region boundaries.
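The distance-based progressive fusion can be sketched as below, assuming a rectangular region and a linear weight ramp of width `band` pixels; the ramp shape and width are assumptions, since the patent only specifies weighting by distance from the edge:

```python
import numpy as np

def progressive_blend(normal_region, fused_region, band=5):
    """Blend the fusion sub-region back into the abnormal-exposure region of
    the normal exposure image.  The fusion sub-region's weight is 0 at the
    region edge and ramps linearly to 1 at depth >= band, so the boundary
    transitions smoothly instead of showing a hard seam."""
    h, w = normal_region.shape[:2]
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    # distance of each pixel from the nearest edge of the rectangular region
    dist = np.minimum(np.minimum(ys, h - 1 - ys), np.minimum(xs, w - 1 - xs))
    # fusion sub-region weight: 0 at the edge, 1 at depth >= band
    w_fused = np.clip(dist / band, 0.0, 1.0)
    if normal_region.ndim == 3:
        w_fused = w_fused[..., None]
    out = (1.0 - w_fused) * normal_region.astype(np.float32) \
        + w_fused * fused_region.astype(np.float32)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

At the very edge the output equals the normal exposure image; deep inside the region it equals the fusion sub-region, matching the complementary weight ramps of graphs A and B.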
As shown in fig. 7, in another embodiment of the present invention, the method further includes a step S5 of setting the exposure parameters of each image in the next shooting period according to the current exposure parameters of each image in the current shooting period and the brightness information of the exposure abnormal region in each image. The brightness information of the exposure abnormal region can be the average brightness of the exposure abnormal region, or the number of pixel points exceeding a certain brightness threshold.
the specific principle is shown in fig. 8: assume that all image frames in the nth photographing period are recorded as
Figure BDA0002561625380000131
Next circle of ith image frameExposure parameters of phase
Figure BDA0002561625380000132
Will combine the current frame
Figure BDA0002561625380000133
The luminance statistical information of the exposure abnormal region (which may be the luminance mean of the exposure abnormal region, or the number of pixel points exceeding a certain luminance threshold), the exposure parameter adopted by the current frame
Figure BDA0002561625380000134
And calculating the preset Target brightness Target _ i, and then enabling the exposure parameter to take effect in the next shooting period so that the brightness statistical information of the ith video frame approaches the preset Target brightness in the next limited shooting period.
Specifically, let the preset target brightness value be target, the allowed floating range above and below it be range, and the step by which the exposure parameter is increased or decreased be step.
First, the exposure abnormal area is divided into 8 × 8 blocks, and the average brightness of each block is calculated and returned. The average brightness of the 64 blocks is then averaged to obtain the picture average brightness avg of the exposure abnormal area. If avg < target - range, the exposure parameter is increased by step; if avg > target + range, the exposure parameter is decreased by step; if target - range < avg < target + range, the exposure parameter remains unchanged.
The preset target brightness is an empirical value; for example, if the desired average picture brightness is about 100, target is set to 100. For the targets of the normal exposure frame, the short exposure frame and the long exposure frame, it is only required that short exposure < normal exposure < long exposure.
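The brightness-feedback exposure control described above can be sketched as follows. The numeric values of `target`, the dead band and `step` are illustrative; `rng` stands in for the text's `range`, which is a Python builtin:

```python
import numpy as np

def block_average_brightness(region, blocks=8):
    """Mean of the per-block mean brightness after splitting the exposure
    abnormal area into blocks x blocks tiles (8 x 8 in the text; edge
    remainders are ignored here for simplicity)."""
    h, w = region.shape[:2]
    bh, bw = h // blocks, w // blocks
    means = [region[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
             for i in range(blocks) for j in range(blocks)]
    return float(np.mean(means))

def next_exposure(exposure, avg, target=100, rng=10, step=2):
    """One feedback step: raise the exposure parameter by `step` when the
    region's average brightness avg falls below target - rng, lower it when
    avg exceeds target + rng, and leave it unchanged inside the dead band."""
    if avg < target - rng:
        return exposure + step
    if avg > target + rng:
        return exposure - step
    return exposure
```

Applied once per shooting period and per frame (normal, short, long, each with its own target), the average brightness of each exposure abnormal area converges toward its target over a limited number of periods.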
On the basis of the above embodiments, another embodiment of the present invention provides a high dynamic range video generation method, including: acquiring a plurality of groups of images over a plurality of shooting periods, and generating a high dynamic range image corresponding to each group of images according to the high dynamic range image generation method of any one of the above embodiments, to obtain a plurality of high dynamic range images; and generating a high dynamic range video according to the plurality of high dynamic range images. The method for generating the high dynamic range video from the high dynamic range images is prior art and is not described herein again.
On the basis of the foregoing embodiment, another embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, and the processor executes the computer program to implement the high dynamic range image generation method according to any one of the foregoing embodiments of the present invention.
The terminal device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The terminal device may include, but is not limited to, a processor, a memory.
The Processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like; the processor is the control center of the terminal device and connects the various parts of the whole terminal device using various interfaces and lines.
The memory may be used to store the computer program, and the processor may implement various functions of the terminal device by running or executing the computer program stored in the memory and calling data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the device, and the like. In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid state storage device.
If the integrated modules/units of the terminal device are implemented in the form of software functional units and sold or used as stand-alone products, they can be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the above method embodiments may be implemented.
Therefore, another embodiment of the present invention provides a storage medium, where the storage medium includes a stored computer program, and when the computer program runs, the apparatus in which the storage medium is located is controlled to execute the high dynamic range image generation method according to any one of the above embodiments of the present invention.
The storage medium is a computer-readable storage medium, and the computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
Compared with the fusion methods in the prior art, the present application does not need to fuse all image regions of the normally exposed image during fusion; only the exposure abnormal region in the normally exposed image needs to be fused, which greatly reduces the time complexity of image fusion and improves the generation efficiency of the high dynamic range image. In addition, when the motion regions are fused, one of the two motion regions is selected as the fused motion region, so ghosting and ghost images are removed.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (7)

1. A high dynamic range image generation method, comprising:
acquiring a group of images; the group of images are at least two frames of images with different exposure degrees of the same scene in a shooting period, and the images with different exposure degrees comprise one frame of normal exposure image and at least one frame of abnormal exposure image;
extracting an exposure abnormal area in the normal exposure image, detecting a motion area of the exposure abnormal area, and dividing the exposure abnormal area into a motion area and a static area;
respectively fusing the moving area and the static area of the normal exposure image with the corresponding areas in the abnormal exposure image to obtain fused subareas;
synthesizing the fusion subarea and an exposure abnormal area in the normal exposure image to obtain a high dynamic range image;
the fusing the moving area and the static area of the normal exposure image with the corresponding areas in the abnormal exposure image respectively to obtain a fused subarea specifically comprises:
when the abnormal exposure image only comprises a first long exposure image and the abnormal exposure region only comprises a first underexposure region, respectively fusing a static region of the first underexposure region and a first to-be-fused static region as well as a motion region of the first underexposure region and a first to-be-fused motion region according to preset pixel value weights to obtain a first underexposure fusion sub-region; the first still region to be fused is an image region corresponding to the still region position of the first underexposed region in the first long-exposure image; the first to-be-fused motion region is an image region corresponding to the motion region position of the first under-exposed region in the first long-exposure image; when the motion area of the first underexposed area is fused with the first motion area to be fused, a motion area is randomly selected as the fused motion area;
when the abnormal exposure image only comprises a first short exposure image and the abnormal exposure area only comprises a first overexposure area, respectively fusing a static area of the first overexposure area and a second static area to be fused and a motion area of the first overexposure area and a second motion area to be fused according to preset pixel value weight to obtain a first overexposure fusion sub-area; the second to-be-fused static area is an image area corresponding to the static area position of the first overexposure area in the first short-exposure image; the second motion region to be fused is an image region corresponding to the motion region position of the first overexposure region in the first short-exposure image; when the motion area of the first overexposure area is fused with the second motion area to be fused, randomly selecting a motion area as the fused motion area;
when the abnormal exposure image simultaneously comprises a second long exposure image and a second short exposure image, and the abnormal exposure area simultaneously comprises a second underexposure area and a second overexposure area, respectively fusing a static area of the second underexposure area and a third static area to be fused, and a motion area of the second underexposure area and a third motion area to be fused according to preset pixel value weights to obtain a second underexposure fusion sub-area; fusing the static area of the second overexposure area and a fourth static area to be fused, and the moving area of the second overexposure area and a fourth moving area to be fused according to preset pixel value weights to obtain a second overexposure fusion sub-area; the third still region to be fused is an image region corresponding to the still region position of the second underexposed region in the second long-exposure image; the third motion region to be fused is an image region corresponding to the motion region position of the second underexposed region in the second long-exposure image; the fourth still region to be fused is an image region corresponding to the still region of the second overexposed region in the second short-exposure image; the fourth motion region to be fused is an image region corresponding to the motion region of the second overexposure region in the second short-exposure image; when the motion area of the second underexposed area is fused with the third motion area to be fused, a motion area is randomly selected as the fused motion area; fusing the motion area of the second overexposure area with the fourth motion area to be fused, and randomly selecting a motion area as the fused motion area;
the synthesizing the fusion subregion with the exposure abnormal region in the normal exposure image to obtain a high dynamic range image specifically includes:
when the fusion sub-region is the first underexposure fusion sub-region, performing progressive fusion on the first underexposure fusion sub-region and a first underexposure region in the normal exposure image to obtain the high dynamic range image;
when the fusion sub-region is the first overexposure fusion sub-region, performing progressive fusion on the first overexposure fusion sub-region and a first overexposure region in the normal exposure image to obtain the high dynamic range image;
when the fusion sub-region comprises a second underexposure fusion sub-region and a second overexposure fusion sub-region, carrying out progressive fusion on the second underexposure fusion sub-region and a second underexposure region in the normal exposure image, and carrying out progressive fusion on the second overexposure fusion sub-region and a second overexposure region in the normal exposure image to obtain the high dynamic range image.
2. The method for generating a high dynamic range image according to claim 1, wherein the extracting the exposure abnormal region in the normal exposure image specifically includes:
detecting overexposure and underexposure pixel points of the normal exposure image according to a preset brightness threshold value to generate an exposure binary image;
and removing abnormal values from the exposure binary image, calculating the boundaries of all connected domains of the exposure binary image after removing the abnormal values, and then extracting the exposure abnormal regions according to the boundaries of all the connected domains.
3. The method for generating a high dynamic range image according to claim 1, wherein the detecting a moving area of the abnormal exposure area and dividing the abnormal exposure area into a moving area and a static area specifically comprises:
calculating the median of pixel values of all pixel points in the normal exposure image to obtain a first median, and performing binarization operation on the normal exposure image according to the first median to generate a first binary image;
calculating the median of pixel values of all pixel points in the abnormal exposure image to obtain a second median, and performing binarization operation on the abnormal exposure image according to the second median to generate a second binary image;
and extracting all moving areas of the normal exposure image according to the difference between the first binary image and the second binary image, and dividing the moving areas and the static areas in the exposure abnormal areas according to all the moving areas of the normal exposure image.
4. The high dynamic range image generation method of any one of claims 1 to 3, further comprising:
and setting the exposure parameters of each image in the next shooting period according to the current exposure parameters of each image in the current shooting period and the brightness information of the exposure abnormal areas in each image.
5. A method for generating high dynamic range video, comprising: acquiring a plurality of groups of images of a plurality of shooting periods, and generating a high dynamic range image corresponding to each group of images according to the high dynamic range image generation method of any one of claims 1 to 4 to obtain a plurality of high dynamic range images; and generating a high dynamic range video according to the plurality of high dynamic range images.
6. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the high dynamic range image generation method according to any one of claims 1 to 4 when executing the computer program.
7. A storage medium comprising a stored computer program, wherein the apparatus on which the storage medium is located is controlled to perform the high dynamic range image generation method according to any one of claims 1 to 4 when the computer program is run.
CN202010608736.2A 2020-06-30 2020-06-30 High dynamic range image generation method, terminal device and storage medium Active CN111953893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010608736.2A CN111953893B (en) 2020-06-30 2020-06-30 High dynamic range image generation method, terminal device and storage medium

Publications (2)

Publication Number Publication Date
CN111953893A CN111953893A (en) 2020-11-17
CN111953893B true CN111953893B (en) 2022-04-01

Family

ID=73337250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010608736.2A Active CN111953893B (en) 2020-06-30 2020-06-30 High dynamic range image generation method, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN111953893B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598609A (en) * 2020-12-09 2021-04-02 普联技术有限公司 Dynamic image processing method and device
CN112837254B (en) * 2021-02-25 2024-06-11 普联技术有限公司 Image fusion method and device, terminal equipment and storage medium
CN116437222B (en) * 2021-12-29 2024-04-19 荣耀终端有限公司 Image processing method and electronic equipment

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104978722A (en) * 2015-07-06 2015-10-14 天津大学 Multi-exposure image fusion ghosting removing method based on background modeling
CN107613218A (en) * 2017-09-15 2018-01-19 维沃移动通信有限公司 The image pickup method and mobile terminal of a kind of high dynamic range images

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
TWI501639B (en) * 2013-07-29 2015-09-21 Quanta Comp Inc Method of filming high dynamic range video


Also Published As

Publication number Publication date
CN111953893A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN108335279B (en) Image fusion and HDR imaging
Galdran Image dehazing by artificial multiple-exposure image fusion
CN108898567B (en) Image noise reduction method, device and system
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN110121882B (en) Image processing method and device
CN110428366B (en) Image processing method and device, electronic equipment and computer readable storage medium
US11055827B2 (en) Image processing apparatus and method
CN111986129B (en) HDR image generation method, equipment and storage medium based on multi-shot image fusion
CN111953893B (en) High dynamic range image generation method, terminal device and storage medium
CN105812675B (en) Method for generating HDR images of a scene based on a compromise between luminance distribution and motion
US9311901B2 (en) Variable blend width compositing
CN110443766B (en) Image processing method and device, electronic equipment and readable storage medium
US10992845B1 (en) Highlight recovery techniques for shallow depth of field rendering
CN109413335B (en) Method and device for synthesizing HDR image by double exposure
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110602467A (en) Image noise reduction method and device, storage medium and electronic equipment
Mangiat et al. Spatially adaptive filtering for registration artifact removal in HDR video
Messikommer et al. Multi-bracket high dynamic range imaging with event cameras
WO2020029679A1 (en) Control method and apparatus, imaging device, electronic device and readable storage medium
US11977319B2 (en) Saliency based capture or image processing
CN113344821B (en) Image noise reduction method, device, terminal and storage medium
CN110796041A (en) Subject recognition method and device, electronic equipment and computer-readable storage medium
CN110942427A (en) Image noise reduction method and device, equipment and storage medium
CN110866486A (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN112085686A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240515

Address after: 518000, Building 1, 402, Pulian Science and Technology Park (Phase II), No.8 Road, West Area of Tianliao Community High tech Park, Yutang Street, Guangming District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen Pulian Intelligent Software Co.,Ltd.

Country or region after: China

Address before: 518000 the 1st and 3rd floors of the south section of building 24 and the 1st-4th floor of the north section of building 28, Shennan Road Science and Technology Park, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: TP-LINK TECHNOLOGIES Co.,Ltd.

Country or region before: China
