CN115272155A - Image synthesis method, image synthesis device, computer equipment and storage medium - Google Patents
Info
- Publication number
- CN115272155A CN115272155A CN202211021685.9A CN202211021685A CN115272155A CN 115272155 A CN115272155 A CN 115272155A CN 202211021685 A CN202211021685 A CN 202211021685A CN 115272155 A CN115272155 A CN 115272155A
- Authority
- CN
- China
- Prior art keywords
- image
- synthesized
- images
- area
- exposure
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration › G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/00—Image enhancement or restoration › G06T5/70—Denoising; Smoothing
- G06T5/00—Image enhancement or restoration › G06T5/90—Dynamic range modification of images or parts thereof
- G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/20—Special algorithmic details › G06T2207/20172—Image enhancement details › G06T2207/20208—High dynamic range [HDR] image processing
- G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/20—Special algorithmic details › G06T2207/20212—Image combination › G06T2207/20221—Image fusion; Image merging
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The application relates to an image synthesis method, an image synthesis device, a computer device, a storage medium and a computer program product. The method comprises: obtaining a group of images to be synthesized that includes one frame of a first image to be synthesized at a low exposure level and at least two frames of second images to be synthesized at a high exposure level; fusing the at least two frames of second images to be synthesized and determining an overexposed region in the resulting high-brightness image; comparing the at least two frames of second images to be synthesized to determine a non-moving region within that overexposed region; determining a region to be synthesized in the first image to be synthesized based on the non-moving region; and replacing the non-moving region of the overexposed region in the high-brightness image with the image of the region to be synthesized to obtain the synthesized image. Compared with the traditional approach of synthesizing multiple frames with the high-brightness frame as the reference, this scheme replaces only the non-moving part of the overexposed region with the low-exposure-level image and leaves the moving part unreplaced, which prevents ghosting in the synthesized image and improves image definition.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image synthesis method, an image synthesis apparatus, a computer device, a storage medium, and a computer program product.
Background
With the development of computer technology, people can capture images with mobile devices such as mobile phones and cameras, and capturing HDR (high dynamic range) images, which are obtained by combining multiple frames, has become mainstream. Conventionally, the multiple frames are synthesized with a single high-brightness frame as the reference. However, when a moving object is present across the frames and its image differs in brightness from the other frames, direct synthesis can produce ghost images, reducing the definition of the synthesized result.
Therefore, current image synthesis methods suffer from low synthesis definition.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an image synthesis method, an apparatus, a computer device, a computer-readable storage medium, and a computer program product capable of improving the definition of image synthesis.
In a first aspect, the present application provides a method for synthesizing an image, the method comprising:
acquiring a group of images to be synthesized of the same object; the group of images to be synthesized comprises one frame of a first image to be synthesized at a first exposure level and at least two frames of second images to be synthesized at a second exposure level; the first exposure level is less than the second exposure level; the at least two frames of second images to be synthesized include images with different exposure durations, and one frame among them has the same image structure as the first image to be synthesized;
fusing the at least two frames of second images to be synthesized to obtain a high-brightness image, and determining an overexposure area in the high-brightness image;
determining a non-moving area in an overexposure area of the high-brightness image according to a comparison result of the at least two frames of second images to be synthesized; the non-moving area is free of moving objects;
determining a region to be synthesized in the first image to be synthesized according to a matching result of the first image to be synthesized and a non-moving region in the overexposure region;
and replacing the non-moving area in the overexposure area of the high-brightness image with the image of the area to be synthesized, and obtaining a synthesized image of the same object according to the replaced high-brightness image.
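As an illustration only, the claimed flow can be sketched with grayscale frames as numpy arrays. The simple mean fusion, the threshold values, and the per-pixel motion test below are assumptions made for the sketch, not details from the patent:

```python
import numpy as np

def synthesize_hdr(first_low, second_a, second_b,
                   over_thresh=0.9, move_thresh=0.05):
    # Fuse the two ev0 frames into a high-brightness image (a plain
    # mean stands in for the patent's fusion step).
    fused = (second_a + second_b) / 2.0
    # Overexposed region: brightness above a preset threshold.
    over = fused > over_thresh
    # Non-moving pixels: the two ev0 frames agree within a tolerance.
    non_moving = np.abs(second_a - second_b) <= move_thresh
    # Replace only the non-moving part of the overexposed region with
    # the low-exposure (ev-) frame; moving pixels keep the fused value.
    out = fused.copy()
    mask = over & non_moving
    out[mask] = first_low[mask]
    return out
```

Because the moving part of the overexposed region is left untouched, a moving object never mixes pixels from frames of different brightness, which is the mechanism that avoids ghosting.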
In one embodiment, the acquiring a set of images to be synthesized of the same object includes:
acquiring a first original image of a first exposure duration for a target object, and processing the first original image based on a first exposure level and a second exposure level respectively, to obtain a first image to be synthesized of a first sensitivity and the first exposure duration at the first exposure level, and a short-exposure image of the first sensitivity and the first exposure duration at the second exposure level;
acquiring at least one frame of a second original image of a second exposure duration for the target object, and processing the at least one frame of the second original image based on the second exposure level to obtain at least one frame of a long-exposure image of a second sensitivity and the second exposure duration at the second exposure level; the first sensitivity is greater than the second sensitivity; the short-exposure image and the long-exposure image have the same brightness;
and obtaining the group of images to be synthesized according to the first image to be synthesized, the short exposure image and the at least one frame long exposure image.
In one embodiment, the fusing the at least two frames of second images to be synthesized to obtain a high-brightness image includes:
determining a non-overexposure area in the at least two frames of second images to be synthesized, and determining a moving area in the non-overexposure area according to a comparison result of the at least two frames of second images to be synthesized; moving objects exist in the moving area;
replacing the image of the moving area in the non-overexposure area with the image of the corresponding area in the short-exposure image;
replacing the image of the non-moving area in the non-overexposure area with the image of the corresponding area in the long-exposure image; a non-moving region in the non-overexposure region is free of moving objects;
and obtaining a fused high-brightness image according to the fusion result of the replaced image of the moving area in the non-overexposure area and the image of the non-moving area in the non-overexposure area.
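The fusion of the non-overexposed region described above can be sketched as a mask-based replacement; the function name `fuse_ev0` and the boolean-mask representation are illustrative assumptions:

```python
import numpy as np

def fuse_ev0(short_img, long_img, non_over, moving):
    # Inside the non-overexposed region, take moving pixels from the
    # short-exposure frame (less motion blur) and non-moving pixels
    # from the long-exposure frame (cleaner image quality).
    # non_over and moving are boolean masks of the same shape.
    out = long_img.copy()
    sel = non_over & moving
    out[sel] = short_img[sel]
    return out
```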
In one embodiment, the determining a non-moving region in an overexposed region of the high-brightness image according to a comparison result of the at least two frames of second images to be synthesized includes:
acquiring the pixel coincidence degree of the images of the overexposure areas in the at least two frames of second images to be synthesized;
and determining the non-moving area in the overexposure area of the high-brightness image according to the area in which the pixel coincidence degree between the image of the overexposure area of the short-exposure image and the image of the overexposure area of the long-exposure image is greater than a preset coincidence degree threshold.
In one embodiment, the determining, according to a matching result of the first image to be synthesized and a non-moving area in the overexposed area, an area to be synthesized in the first image to be synthesized includes:
matching the first image to be synthesized with a non-moving area in the overexposure area to obtain the non-moving area in the first image to be synthesized;
and obtaining a region to be synthesized according to the non-moving region in the first image to be synthesized.
In one embodiment, the determining the overexposed region in the high brightness image comprises:
and acquiring a region with the brightness larger than a preset brightness threshold value in the fused high-brightness image to obtain an overexposed region in the high-brightness image.
In one embodiment, the obtaining a synthesized image of the same object according to the replaced high-brightness image includes:
and performing at least one of sharpening, brightness adjustment and smooth denoising on the replaced high-brightness image to obtain a synthesized image of the same object with the image definition larger than a preset definition threshold.
In a second aspect, the present application provides an image composing apparatus comprising:
the acquisition module is used for acquiring a group of images to be synthesized of the same object; the group of images to be synthesized comprises one frame of a first image to be synthesized at a first exposure level and at least two frames of second images to be synthesized at a second exposure level; the first exposure level is less than the second exposure level; the at least two frames of second images to be synthesized include images with different exposure durations, and one frame among them has the same image structure as the first image to be synthesized;
the fusion module is used for fusing the at least two frames of second images to be synthesized to obtain a high-brightness image and determining an overexposed area in the high-brightness image;
the first determining module is used for determining a non-moving area in an overexposure area of the high-brightness image according to a comparison result of the at least two frames of second images to be synthesized; the non-moving area is free of moving objects;
a second determining module, configured to determine, according to a matching result between the first image to be synthesized and a non-moving region in the overexposure region, a region to be synthesized in the first image to be synthesized;
and the synthesis module is used for replacing the non-moving area in the overexposure area of the high-brightness image with the image of the area to be synthesized, and obtaining a synthesized image of the same object according to the replaced high-brightness image.
In a third aspect, the present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method described above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method described above.
The image synthesis method, apparatus, computer device, storage medium and computer program product obtain, for the same object, a group of images to be synthesized that includes one frame of a first image to be synthesized at a low exposure level and at least two frames of second images to be synthesized at a high exposure level; fuse the at least two frames of second images to be synthesized and determine the overexposed region in the fused high-brightness image; determine the non-moving region within that overexposed region according to the comparison result of the at least two frames of second images to be synthesized; determine the region to be synthesized in the first image to be synthesized based on the non-moving region; replace the non-moving region of the overexposed region in the high-brightness image with the image of the region to be synthesized; and obtain the synthesized image of the same object from the replaced high-brightness image. Compared with the traditional approach of synthesizing multiple frames with a single high-brightness frame as the reference, this scheme replaces only the non-moving part of the overexposed region with the low-exposure-level image and leaves the moving part unreplaced, preventing ghosting in the synthesized image and improving image definition.
Drawings
FIG. 1 is a schematic flow chart diagram of an image synthesis method according to an embodiment;
FIG. 2 is a schematic flow chart of the image acquisition step to be synthesized in one embodiment;
FIG. 3 is a schematic flow chart of the fusion step in one embodiment;
FIG. 4 is a block diagram showing the configuration of an image synthesizing apparatus according to an embodiment;
FIG. 5 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In an embodiment, as shown in FIG. 1, an image synthesis method is provided. The method is described here as applied to a terminal by way of example; it is understood that the method may also be applied to a server, or to a system comprising the terminal and the server and implemented through interaction between them. The method includes the following steps:
step S202, a group of images to be synthesized of the same object are obtained; the group of images to be synthesized comprises a frame of first image to be synthesized with a first exposure level and at least two frames of second image to be synthesized with a second exposure level; the first exposure level is less than the second exposure level; the at least two frames of second images to be synthesized contain images with different exposure time lengths, and one frame of image is consistent with the image structure of the first image to be synthesized.
The images to be synthesized are the images that need to be combined, and the synthesis process may be an HDR image synthesis process; that is, the terminal may obtain a group of images to be synthesized and combine them into one HDR image. The group of images to be synthesized may be images of the same object; for example, the terminal may shoot the same object once with an image acquisition device built into the terminal and take the multiple images captured in that shot as the group of images to be synthesized. The group contains at least three frames, among which there are images at different exposure levels. For example, the group may include one frame of a first image to be synthesized at a first exposure level and at least two frames of second images to be synthesized at a second exposure level. The first exposure level is less than the second exposure level; that is, the exposure level of the first image to be synthesized is lower than that of the second images to be synthesized, so the brightness of the first image to be synthesized is lower as well.
Specifically, the first exposure level may be denoted ev- and the second exposure level ev0. The ev value, called the exposure value, reflects the combination of aperture size and shutter speed. To obtain an exposure that renders the subject correctly, the exposure value is combined with an exposure compensation amount, expressed as +3, +2, +1, 0, -1, -2, -3, and so on: "+" means increasing the exposure relative to the exposure determined by photometry, "-" means decreasing it, and the number is the number of stops of compensation. For example, ev- may denote a negative exposure level and ev0 the zeroth exposure level.
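For reference, the classical photographic exposure-value formula, EV = log2(N²/t) for f-number N and shutter time t at ISO 100, can be computed directly. Note this is the standard definition rather than anything stated in the patent; the patent's ev0/ev- labels denote compensation levels relative to the metered exposure, not this absolute value:

```python
import math

def exposure_value(f_number, shutter_s):
    # Standard definition at ISO 100: EV = log2(N^2 / t).
    # Larger EV means less light reaches the sensor.
    return math.log2(f_number ** 2 / shutter_s)
```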
The group of images to be synthesized may further include images with different exposure durations; for example, it may simultaneously contain an image with a short exposure duration and an image with a long exposure duration. Specifically, the first image to be synthesized at the first exposure level may be a short-exposure image, and the second images to be synthesized at the second exposure level may include both a short-exposure image and a long-exposure image. One of the at least two frames of second images to be synthesized has the same image structure as the first image to be synthesized. Specifically, the first image to be synthesized and that second image to be synthesized are generated by the terminal from the same source, i.e. from the same original image, which the terminal processes through different image processing pipelines; consequently, the image content and image structure of the first image to be synthesized and the short-exposure second image to be synthesized are consistent.
And step S204, fusing at least two frames of second images to be synthesized to obtain a high-brightness image, and determining an overexposure area in the high-brightness image.
After the terminal acquires the group of images to be synthesized, it may first fuse the at least two frames of second images to be synthesized at the ev0 exposure level. Because the brightness represented by ev0 is greater than that of the ev- exposure level, the result of fusing the at least two frames of second images to be synthesized is a high-brightness image. The brightness of each of the at least two frames of second images to be synthesized may be uniform. The fusion may operate on the non-overexposed region of the second images to be synthesized: for example, the terminal uses the image of the moving region of the non-overexposed region from the short-exposure second image to be synthesized, so that the moving object is rendered as still as possible, and uses the image of the non-moving region of the non-overexposed region from the long-exposure second image to be synthesized, so that the image quality is as clear as possible.
After fusing the at least two frames of second images to be synthesized for the non-overexposed region, the terminal can process the overexposed region in the second images to be synthesized accordingly. The terminal may first determine the overexposed region in the high-brightness image. For example, in one embodiment, determining the overexposed region in the high-brightness image includes: acquiring the region whose brightness is greater than a preset brightness threshold in the fused high-brightness image, to obtain the overexposed region in the high-brightness image. In this embodiment, the terminal may judge whether each region is overexposed according to its brightness in the high-brightness image. For example, the terminal may compare the brightness of each pixel in the fused high-brightness image with the preset brightness threshold, obtain the regions whose brightness exceeds the threshold, and use those regions as the overexposed region in the high-brightness image.
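A minimal sketch of this per-pixel brightness test, assuming an 8-bit image; the example threshold of 235 is an assumed value, not one given in the patent:

```python
import numpy as np

def overexposed_mask(image, brightness_thresh=235):
    # Boolean mask marking pixels whose brightness exceeds a preset
    # threshold, as in step S204; 235 on an 8-bit scale is an
    # illustrative choice close to sensor saturation.
    return image > brightness_thresh
```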
Step S206, determining a non-moving area in the overexposure area of the high-brightness image according to the comparison result of at least two frames of second images to be synthesized; the non-moving area has no moving object.
The second images to be synthesized may be images of higher brightness than the first image to be synthesized, and the brightness of the at least two frames of second images to be synthesized may be uniform. The terminal can therefore compare the at least two frames of second images to be synthesized and determine the non-moving region within the overexposed region of the high-brightness image based on the comparison result. A non-moving region is one in which no moving object is present, i.e. no motion blur occurs. The terminal can compare the short-exposure image with the long-exposure image: where it detects a region in which the two images are inconsistent, it can consider that a moving object exists there and treat it as a moving region; a region in which the two images are consistent is treated as a non-moving region, indicating that no moving object is present. The non-moving region within the overexposed region of the high-brightness image is then obtained by mapping the comparison result back to the high-brightness image.
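This comparison can be sketched as a block-wise coincidence test between the short- and long-exposure frames, in the spirit of the "pixel coincidence degree" of the claims; the tolerance, block size and threshold values below are assumptions:

```python
import numpy as np

def non_moving_mask(short_img, long_img, tol=4,
                    coincidence_thresh=0.95, block=8):
    # A pixel "coincides" when the two ev0 frames agree within tol;
    # a block counts as non-moving when its coincidence ratio is at
    # least the threshold. All parameter values are illustrative.
    agree = np.abs(short_img.astype(int) - long_img.astype(int)) <= tol
    mask = np.zeros(agree.shape, dtype=bool)
    h, w = agree.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = agree[y:y + block, x:x + block]
            mask[y:y + block, x:x + block] = blk.mean() >= coincidence_thresh
    return mask
```

Working in blocks rather than single pixels makes the decision robust to sensor noise, at the cost of coarser region boundaries.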
In step S208, the region to be synthesized in the first image to be synthesized is determined according to the matching result between the first image to be synthesized and the non-moving region in the overexposed region.
The non-moving region in the overexposed region may be the one the terminal obtained by comparing the short-exposure image and the long-exposure image among the at least two frames of second images to be synthesized. Since the first image to be synthesized and the short-exposure image are obtained by different processing of the same original image, their image content and structure are consistent. Therefore, the non-moving region determined from the short- and long-exposure images corresponds to a non-moving region in the first image to be synthesized. The terminal may match the non-moving region in the overexposed region of the high-brightness image against the first image to be synthesized, for example based on the position of that region in the high-brightness image, and thereby determine the region to be synthesized in the first image to be synthesized. The image in the region to be synthesized is the image used for replacement.
Step S210, replacing the non-moving area in the overexposed area of the high-brightness image with the image of the area to be synthesized, and obtaining a synthesized image of the same object according to the replaced high-brightness image.
After the terminal determines the region to be synthesized in the first image to be synthesized, it can replace the non-moving region in the overexposed region of the high-brightness image with the image of that region. Because the first image to be synthesized is an image of lower brightness, i.e. one free of overexposure, replacing the image of the non-moving region in the overexposed region with the image of the region to be synthesized removes the overexposed state of the replaced area and improves the definition of the image.
After replacing the non-moving region in the overexposed region of the high-brightness image with the image of the region to be synthesized, the terminal can obtain the synthesized image of the same object from the replaced high-brightness image. The terminal can post-process the replaced high-brightness image to improve its clarity and definition. For example, in one embodiment, obtaining the synthesized image of the same object from the replaced high-brightness image includes: performing at least one of sharpening, brightness adjustment and smoothing denoising on the replaced high-brightness image to obtain a synthesized image of the same object whose definition is greater than a preset definition threshold. In this embodiment, the terminal adjusts the replaced high-brightness image accordingly, performing at least one of sharpening, brightness adjustment and smoothing denoising to improve its clarity and brightness. After this adjustment, the synthesized image based on the group of images to be synthesized is obtained. Because the image of the corresponding region in the first image to be synthesized at the ev- exposure level replaces the non-moving region of the overexposed region, while the moving region of the overexposed region remains overexposed, the synthesized image avoids the ghosting that arises from overlapping a moving object with a static one.
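As one concrete, assumed form of the sharpening step, an unsharp mask built on a 3x3 box blur; the kernel and the amount parameter are choices made for this sketch:

```python
import numpy as np

def unsharp(image, amount=1.0):
    # Unsharp masking: subtract a blurred copy to isolate detail,
    # then add the detail back, scaled by `amount`. The 3x3 box blur
    # is implemented by summing nine shifted views of an edge-padded
    # copy of the image.
    h, w = image.shape
    pad = np.pad(image, 1, mode='edge')
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    return image + amount * (image - blur)
```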
In the image synthesis method, a group of images of the same object is obtained, comprising one frame of a first image to be synthesized at a low exposure level and at least two frames of second images to be synthesized at a high exposure level. The at least two high-exposure-level frames are fused and the overexposed region in the fused high-brightness image is determined; the non-moving region within that overexposed region is determined from the comparison result of the at least two frames; the region to be synthesized in the first image to be synthesized is determined based on the non-moving region; the non-moving region of the overexposed region in the high-brightness image is replaced with the image of the region to be synthesized; and the synthesized image of the same object is obtained from the replaced high-brightness image. Compared with the traditional approach of synthesizing multiple frames with a single high-brightness frame as the reference, this scheme replaces only the non-moving part of the overexposed region with the low-exposure-level image and leaves the moving part unreplaced, preventing ghosting in the synthesized image and improving the definition of the image.
In one embodiment, as shown in FIG. 2, FIG. 2 is a schematic flow chart of the step of acquiring the images to be synthesized. Acquiring a group of images to be synthesized of the same object includes: Step S302: acquiring a first original image of a first exposure duration for a target object, and processing the first original image based on a first exposure level and a second exposure level respectively, to obtain a first image to be synthesized of a first sensitivity and the first exposure duration at the first exposure level, and a short-exposure image of the first sensitivity and the first exposure duration at the second exposure level; Step S304: acquiring at least one frame of a second original image of a second exposure duration for the target object, and processing it based on the second exposure level to obtain at least one frame of a long-exposure image of a second sensitivity and the second exposure duration at the second exposure level; the first sensitivity is greater than the second sensitivity, and the brightness of the short-exposure image is consistent with that of the long-exposure image; Step S306: obtaining the group of images to be synthesized from the first image to be synthesized, the short-exposure image and the at least one frame of the long-exposure image.
In this embodiment, the terminal may acquire a group of images to be synthesized of the same object through an image acquisition device. For example, the terminal captures multiple frames of the same object through a camera to obtain a group of images to be synthesized, and the frames in the group may be captured with different shooting parameters. The same object may be a target object. The terminal may acquire a first original image for the target object at a first exposure duration, and acquire at least one frame of a second original image at a second exposure duration. The first exposure duration is less than the second exposure duration, and both the first original image and the second original image may be raw-format images containing the camera sensor data needed to create a viewable image. The terminal may then apply the corresponding processing flow to the original images to obtain the first image to be synthesized and the second images to be synthesized. Specifically, the processing may be ISP (Image Signal Processor) processing of the raw-format image, whose main function is to post-process the signal output by the front-end image sensor; through ISP processing, the terminal can better restore scene details under different optical conditions.
For the first original image, the terminal may perform different ISP processing on it to obtain a plurality of images to be synthesized at different exposure levels. The first original image has a corresponding first exposure duration and a corresponding first sensitivity (ISO). The terminal may process the first original image based on the first exposure level to obtain a first image to be synthesized with the first sensitivity and the first exposure duration at the first exposure level, and may also process it based on the second exposure level to obtain a short-exposure image with the first sensitivity and the first exposure duration at the second exposure level. That is, from one first original image the terminal may obtain, through different processing, images to be synthesized at different exposure levels. Specifically, the first image to be synthesized may be an image at the ev- exposure level, the short-exposure image may be an image at the ev0 exposure level, and the exposure duration and sensitivity of the two may be the same. That is, the first image to be synthesized and the short-exposure image may be generated from the same source, so that their image content and image structure are consistent, which facilitates the subsequent determination of the area to be synthesized.
For the second original image, the terminal may process at least one frame of the second original image based on the second exposure level to obtain at least one frame of a long-exposure image with the second sensitivity and the second exposure duration at the second exposure level. Since the exposure duration of the second original image is long, there may be a moving area caused by the movement of the object. In order to make the brightness of the short-exposure image consistent with that of the long-exposure image, the first sensitivity is made greater than the second sensitivity, so that the terminal obtains the short-exposure image with a shorter exposure duration and a higher first sensitivity, obtains the long-exposure image with a longer exposure duration and a lower second sensitivity, and can then compare the two images of consistent brightness to determine the moving area. The at least one frame of long-exposure image may have the same exposure level as the short-exposure image, for example both at the ev0 exposure level, so that the terminal obtains a combination of short-exposure and long-exposure images of identical brightness. Specifically, when performing image synthesis, the terminal may acquire one ev- first image to be synthesized and N ev0 second images to be synthesized, where N is greater than or equal to 2. The N ev0 images to be synthesized may include images with a short exposure duration and a high ISO as well as images with a long exposure duration and a low ISO, so that the N ev0 images have uniform brightness; and the ev- first image to be synthesized is consistent in image content and image structure with the short-exposure-duration ev0 second image to be synthesized.
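The brightness-matching constraint (shorter exposure paired with higher ISO so that the two ev0 frames come out equally bright) can be illustrated with a first-order exposure model; this model and the specific numbers below are assumptions for illustration, not values from the disclosure:

```python
# First-order exposure model: recorded brightness scales with
# exposure_time * ISO. To give the short- and long-exposure ev0 frames
# the same brightness, the shorter frame uses proportionally higher ISO.
def relative_brightness(exposure_s, iso):
    return exposure_s * iso

short = relative_brightness(exposure_s=1 / 100, iso=800)  # short time, high ISO
long_ = relative_brightness(exposure_s=1 / 25, iso=200)   # long time, low ISO
```

With these hypothetical settings the two products are equal, so the two frames can be compared pixel-by-pixel without a brightness offset masquerading as motion.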
After the terminal acquires the first image to be synthesized, the short-exposure image and the long-exposure image, a group of images to be synthesized of the same object can be obtained based on the images, and the group of images to be synthesized at least comprises three frames of images.
Through this embodiment, the terminal can obtain, from the same original image, a plurality of images to be synthesized at different exposure levels with consistent image content and structure, so that a moving-area judgment made on the high-brightness images to be synthesized can be mapped onto the corresponding area of the low-brightness image to be synthesized, improving the accuracy of image synthesis. Moreover, by generating a short-exposure image and a long-exposure image of consistent brightness and comparing them, the terminal determines the moving region in the image, so that image synthesis can be carried out with respect to the moving region and the definition of the synthesized image is improved.
In one embodiment, as shown in fig. 3, which is a schematic flow diagram of the fusion step, the above-mentioned fusing of the at least two frames of second images to be synthesized to obtain a high-brightness image includes: step S402: determining a non-overexposure area in the at least two frames of second images to be synthesized, and determining a moving area in the non-overexposure area according to a comparison result of the at least two frames of second images to be synthesized, a moving object existing in the moving area; step S404: replacing the image of the moving area in the non-overexposure area with the image of the corresponding area in the short-exposure image; step S406: replacing the image of the non-moving area in the non-overexposure area with the image of the corresponding area in the long-exposure image, the non-moving area in the non-overexposure area being free of moving objects; step S408: obtaining a fused high-brightness image according to the fusion result of the replaced image of the moving area in the non-overexposure area and the image of the non-moving area in the non-overexposure area.
In this embodiment, the terminal may fuse the at least two frames of second images to be synthesized at the second exposure level to obtain a high-brightness image. For example, the terminal may first determine the non-overexposure area in the at least two frames of second images to be synthesized; specifically, the terminal may detect the brightness of each pixel in each second image to be synthesized and take the area whose brightness is lower than a preset brightness threshold as the non-overexposure area. The terminal may also determine a moving area in the non-overexposure area according to the comparison result of the at least two frames of second images to be synthesized, the moving area indicating a region in which a moving object exists and motion blur is generated. Specifically, the terminal may compare the at least two frames of second images to be synthesized to obtain a pixel coincidence degree, and take the area in the non-overexposure area whose pixel coincidence degree is lower than a preset coincidence degree threshold as the moving area in the non-overexposure area. After the terminal determines the moving area in the non-overexposure area, the other areas in the non-overexposure area may be determined as non-moving areas.
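A hedged sketch of this region split on two ev0 frames, using a brightness threshold for the non-overexposure area and a pixel-difference test as a stand-in for the "pixel coincidence degree" (both thresholds are assumed values, not taken from the disclosure):

```python
import numpy as np

LUMA_T = 230    # assumed brightness threshold: below it, not overexposed
COINC_T = 15    # assumed coincidence threshold: larger difference = moving

def regions(frame_a, frame_b):
    """Split two equally bright ev0 frames into the masks the fusion step uses."""
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int))
    non_over = np.maximum(frame_a, frame_b) < LUMA_T   # non-overexposure area
    moving = non_over & (diff > COINC_T)               # low coincidence -> moving
    static = non_over & ~moving                        # remaining pixels: non-moving
    return non_over, moving, static

# toy frames: [0,1] is overexposed in both; [1,1] differs between frames (moving)
a = np.array([[50, 240], [90, 90]], dtype=np.uint8)
b = np.array([[50, 240], [90, 160]], dtype=np.uint8)
non_over, moving, static = regions(a, b)
```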
The at least two frames of second images to be synthesized comprise one short-exposure image and at least one frame of long-exposure image. Taking the short-exposure image as the reference frame, the terminal may replace the image of the moving area in the non-overexposure area with the image of the corresponding area in the short-exposure image, and replace the image of the non-moving area in the non-overexposure area with the image of the corresponding area in the long-exposure image. Specifically, when the short-exposure image at the ev0 exposure level is taken as the reference frame, for the non-overexposure area of the image, if a moving area exists, the terminal may use the image of the corresponding area in the short-exposure image at that position, ensuring that the moving object is frozen and clearly presented; for the remaining non-moving areas in the non-overexposure area, the image of the corresponding area in the long-exposure image may be used to ensure clear image quality. Having determined the replacement images of the areas in the second images to be synthesized, the terminal may obtain the fused high-brightness image according to the fusion result of the image of the moving area in the non-overexposure area and the image of the non-moving area in the non-overexposure area.
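The replacement rule for the non-overexposure area might look like the following sketch, where the moving and static masks are assumed to come from a comparison step like the one described above:

```python
import numpy as np

def fuse_non_overexposed(short_img, long_img, moving, static):
    """Inside the non-overexposure area: moving pixels come from the
    short-exposure frame (motion frozen), static pixels from the
    long-exposure frame (cleaner detail)."""
    fused = short_img.copy()            # short-exposure frame is the reference
    fused[static] = long_img[static]    # non-moving area <- long-exposure image
    fused[moving] = short_img[moving]   # moving area <- short-exposure image
    return fused

short_img = np.array([[10, 20], [30, 40]], dtype=np.uint8)
long_img = np.array([[11, 21], [31, 41]], dtype=np.uint8)
moving = np.array([[False, True], [False, False]])   # one moving pixel
static = np.array([[True, False], [True, True]])
fused = fuse_non_overexposed(short_img, long_img, moving, static)
```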
Through the embodiment, the terminal can use the image of the corresponding area in the short exposure image in the moving area in the non-overexposure area in the second image to be synthesized, and use the image of the corresponding area in the long exposure image in the non-moving area in the non-overexposure area, so that the definition of the fused high-brightness image is improved, and the definition of the finally synthesized image is improved.
In one embodiment, determining a non-moving area in an overexposed area of the high-brightness image according to a comparison result of at least two frames of second images to be synthesized comprises: acquiring the pixel coincidence degree of the images of the overexposure areas in at least two frames of second images to be synthesized; and determining a non-moving area in the overexposure area of the high-brightness image according to an area in which the pixel overlapping degree of the image of the overexposure area of the short-exposure image and the image of the overexposure area of the long-exposure image is greater than a preset overlapping degree threshold value.
In this embodiment, the terminal may determine the non-moving area in the overexposed area of the at least two frames of second images to be synthesized through comparison. For example, the terminal may perform a pixel comparison on the at least two frames of second images to be synthesized to obtain the pixel coincidence degree of the images of the overexposed areas, obtain the areas where the pixel coincidence degree of the image of the overexposed area of the short-exposure image and the image of the overexposed area of the long-exposure image is greater than a preset coincidence degree threshold, and determine the non-moving area in the overexposed area of the high-brightness image according to those areas. Specifically, the at least two frames of second images to be synthesized include the short-exposure image and the long-exposure image, and the brightness of the short-exposure image is consistent with that of the long-exposure image, so the overexposed area determined from the short-exposure image and the long-exposure image is consistent with the overexposed area of the fused high-brightness image. Therefore, the terminal can determine the non-moving area in the overexposed area by comparing the short-exposure image with the long-exposure image, and determine the other areas in the overexposed area as moving areas. A moving area indicates that a moving object exists in that area, and a non-moving area indicates that no moving object exists in it.
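As an illustrative sketch, one can treat a small pixel difference between the two equally bright ev0 frames as a high "coincidence degree" (the threshold below is an assumption, not a value from the disclosure):

```python
import numpy as np

COINC_T = 10   # assumed threshold: a small pixel difference counts as "coincident"

def split_overexposed(short_img, long_img, overexposed):
    """Inside the overexposed area, pixels where the two equally bright ev0
    frames coincide are treated as non-moving; the rest are moving."""
    diff = np.abs(short_img.astype(int) - long_img.astype(int))
    non_moving = overexposed & (diff <= COINC_T)
    moving = overexposed & ~non_moving
    return non_moving, moving

# toy data: top row overexposed; [0,1] differs strongly between frames (moving)
short_img = np.array([[250, 250], [60, 60]], dtype=np.uint8)
long_img = np.array([[252, 200], [60, 60]], dtype=np.uint8)
overexposed = np.array([[True, True], [False, False]])
non_moving, moving = split_overexposed(short_img, long_img, overexposed)
```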
With the present embodiment, the terminal can determine the non-moving region of the overexposed region in the high-luminance image based on the degree of pixel overlap of the image. Therefore, the terminal can replace the image based on the non-moving area, and the definition of image synthesis is improved.
In one embodiment, determining the region to be synthesized in the first image to be synthesized according to the matching result of the first image to be synthesized and the non-moving region in the overexposed region includes: matching the first image to be synthesized with the non-moving area in the overexposure area to obtain the non-moving area in the first image to be synthesized; and obtaining a region to be synthesized according to the non-moving region in the first image to be synthesized.
In this embodiment, the terminal may determine the area to be synthesized in the first image to be synthesized, which is used to replace the non-moving area in the overexposed area, by mapping the determined non-moving area in the overexposed area onto the first image to be synthesized. Since the first image to be synthesized and the short-exposure image are derived from the same original image and have the same image structure and image content, the terminal may determine the non-moving area in the overexposed area based on the short-exposure image and map it onto the first image to be synthesized. The terminal may match the first image to be synthesized with the determined non-moving area in the overexposed area to obtain the non-moving area in the first image to be synthesized, and thus obtain the area to be synthesized. Specifically, because the moving area and the non-moving area in the overexposed area have been identified, the terminal can decide, for each position in the overexposed area of the high-brightness image, whether to use the ev- frame image according to whether that position belongs to the moving area, so that the ev- frame content is restored into the synthesized image accurately and ghosting is avoided.
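A minimal sketch of this final replacement, showing that only the non-moving part of the overexposed region takes pixels from the ev- frame while a moving overexposed pixel keeps its fused value (masks and values are illustrative assumptions):

```python
import numpy as np

def replace_static_overexposed(high, ev_minus, over_non_moving):
    """Replace only the non-moving part of the overexposed region with the
    ev- frame; moving overexposed pixels keep the fused high-brightness
    values, which is what avoids ghosting."""
    out = high.copy()
    out[over_non_moving] = ev_minus[over_non_moving]
    return out

high = np.array([[255, 255], [80, 80]], dtype=np.uint8)        # fused image
ev_minus = np.array([[140, 150], [40, 40]], dtype=np.uint8)    # low-exposure frame
over_non_moving = np.array([[True, False], [False, False]])    # [0,1] is moving
out = replace_static_overexposed(high, ev_minus, over_non_moving)
```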
Through the embodiment, the terminal can identify the moving area and the non-moving area of the overexposure area in the high-brightness image, and determine the area to be synthesized corresponding to the position of the non-moving area in the overexposure area in the first image to be synthesized based on the corresponding relation between the short-exposure image and the first image to be synthesized, so that the terminal can realize image synthesis based on the image in the area to be synthesized, and the definition of image synthesis is improved.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different times, and whose execution order is not necessarily sequential; they may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides an image synthesis apparatus for implementing the image synthesis method. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the image synthesis apparatus provided below can be referred to the limitations of the image synthesis method in the foregoing, and details are not described here.
In one embodiment, as shown in fig. 4, there is provided an image synthesizing apparatus including: an obtaining module 500, a fusing module 502, a first determining module 504, a second determining module 506, and a synthesizing module 508, wherein:
an obtaining module 500, configured to obtain a group of images to be synthesized of the same object; the group of images to be synthesized comprises a frame of first image to be synthesized with a first exposure level and at least two frames of second image to be synthesized with a second exposure level; the first exposure level is less than the second exposure level; the at least two frames of second images to be synthesized contain images with different exposure time lengths, and one frame of image is consistent with the image structure of the first image to be synthesized.
And a fusion module 502, configured to fuse at least two frames of second images to be synthesized to obtain a high-brightness image, and determine an overexposed region in the high-brightness image.
A first determining module 504, configured to determine a non-moving area in an overexposed area of the high-brightness image according to a comparison result of at least two frames of second images to be synthesized; the non-moving area has no moving object.
A second determining module 506, configured to determine a region to be synthesized in the first image to be synthesized according to a matching result between the first image to be synthesized and the non-moving region in the overexposed region.
And a synthesizing module 508, configured to replace the non-moving area in the overexposed area of the high-brightness image with an image of the area to be synthesized, and obtain a synthesized image of the same object according to the replaced high-brightness image.
In an embodiment, the obtaining module 500 is specifically configured to obtain a first original image of a first exposure duration for a target object, and process the first original image based on a first exposure level and a second exposure level respectively to obtain a first to-be-synthesized image of a first sensitivity and the first exposure duration at the first exposure level, and a short-exposure image of the first sensitivity and the first exposure duration at the second exposure level; acquiring at least one frame of second original image of a second exposure duration aiming at the target object, and processing the at least one frame of second original image based on a second exposure level to obtain at least one frame of long exposure image of a second sensitivity and the second exposure duration under the second exposure level; the first sensitivity is greater than the second sensitivity; the brightness of the short exposure image is consistent with that of the long exposure image; and obtaining a group of images to be synthesized according to the first image to be synthesized, the short exposure image and the long exposure image of at least one frame.
In an embodiment, the fusion module 502 is specifically configured to determine a non-overexposed region in at least two frames of second images to be synthesized, and determine a moving region in the non-overexposed region according to a comparison result of the at least two frames of second images to be synthesized; a moving object exists in the moving area; replacing the image of the moving area in the non-overexposure area with the image of the corresponding area in the short-exposure image; replacing the image of the non-moving area in the non-overexposure area with the image of the corresponding area in the long-exposure image; the non-moving area in the non-overexposure area has no moving object; and obtaining a fused high-brightness image according to the fusion result of the image of the moving area in the non-overexposure area and the image of the non-moving area in the non-overexposure area after replacement.
In an embodiment, the first determining module 504 is specifically configured to obtain a pixel overlapping ratio of an image of an overexposed region in at least two frames of second images to be synthesized; and determining a non-moving area in the overexposure area of the high-brightness image according to an area in which the pixel overlapping degree of the image of the overexposure area of the short-exposure image and the image of the overexposure area of the long-exposure image is greater than a preset overlapping degree threshold value.
In an embodiment, the second determining module 506 is specifically configured to match the first image to be synthesized with a non-moving region in the overexposed region, so as to obtain the non-moving region in the first image to be synthesized; and obtaining a region to be synthesized according to the non-moving region in the first image to be synthesized.
In an embodiment, the fusing module 502 is specifically configured to obtain an area of the fused high-luminance image, where luminance is greater than a preset luminance threshold, to obtain an overexposed area in the high-luminance image.
In an embodiment, the synthesizing module 508 is specifically configured to perform at least one of sharpening, brightness adjustment, and smooth denoising on the replaced high-brightness image, so as to obtain a synthesized image of the same object whose image definition is greater than a preset definition threshold.
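Of the named post-processing operations, sharpening could for instance be an unsharp mask; the 3x3 box-blur variant below is one common choice and purely illustrative, since the disclosure names sharpening only generically:

```python
import numpy as np

def unsharp(img, amount=1.0):
    """Unsharp mask built on a 3x3 box blur: add back the difference
    between the image and its blurred copy to boost local contrast."""
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode='edge')
    # 3x3 box blur: average of the nine shifted views of the padded image
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0, 255)

# a uniform image has no local contrast to boost, so it passes through unchanged
flat = np.full((4, 4), 100.0)
sharpened = unsharp(flat)
```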
The respective modules in the image synthesizing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 5. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for communicating with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an image synthesis method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in fig. 5 is a block diagram of only a portion of the configuration associated with the present application and is not intended to limit the computer device to which the present application may be applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory in which a computer program is stored and a processor which, when executing the computer program, implements the image synthesis method described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when executed by a processor, implements the image synthesis method described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the image composition method described above.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include a Read-Only Memory (ROM), a magnetic tape, a floppy disk, a flash memory, an optical memory, a high-density embedded non-volatile memory, a Resistive Random Access Memory (ReRAM), a Magnetic Random Access Memory (MRAM), a Ferroelectric Random Access Memory (FRAM), a Phase Change Memory (PCM), a graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum-computing-based data processing logic devices, etc., without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (11)
1. An image synthesis method, characterized in that the method comprises:
acquiring a group of images to be synthesized of the same object; the group of images to be synthesized comprises one frame of a first image to be synthesized at a first exposure level and at least two frames of a second image to be synthesized at a second exposure level; the first exposure level is less than the second exposure level; the at least two frames of second images to be synthesized comprise images with different exposure durations, and one frame thereof is consistent with the image structure of the first image to be synthesized;
fusing the at least two frames of second images to be synthesized to obtain a high-brightness image, and determining an overexposure area in the high-brightness image;
determining a non-moving area in an overexposed area of the high-brightness image according to a comparison result of the at least two frames of second images to be synthesized; the non-moving area is free of moving objects;
determining a region to be synthesized in the first image to be synthesized according to a matching result of the first image to be synthesized and a non-moving region in the overexposure region;
and replacing the non-moving area in the overexposure area of the high-brightness image with the image of the area to be synthesized, and obtaining a synthesized image of the same object according to the replaced high-brightness image.
2. The method according to claim 1, wherein the acquiring a set of images to be synthesized of the same object comprises:
acquiring a first original image of a first exposure time for a target object, and processing the first original image respectively based on a first exposure level and a second exposure level to obtain a first to-be-synthesized image of a first sensitivity and the first exposure time at the first exposure level and a short-exposure image of the first sensitivity and the first exposure time at the second exposure level;
acquiring at least one frame of second original image of a second exposure duration aiming at the target object, and processing the at least one frame of second original image based on a second exposure level to obtain at least one frame of long exposure image of a second sensitivity and the second exposure duration under the second exposure level; the first sensitivity is greater than the second sensitivity; the short exposure image and the long exposure image have the same brightness;
and obtaining the group of images to be synthesized according to the first image to be synthesized, the short exposure image and the at least one frame long exposure image.
3. The method according to claim 2, wherein the fusing the at least two frames of the second image to be synthesized to obtain a high-brightness image comprises:
determining a non-overexposed area in the at least two frames of second images to be synthesized;
determining a moving area in the non-overexposure area according to a comparison result of the at least two frames of second images to be synthesized; moving objects exist in the moving area;
replacing the image of the moving area in the non-overexposure area with the image of the corresponding area in the short-exposure image;
replacing the image of the non-moving area in the non-overexposure area with the image of the corresponding area in the long-exposure image; a non-moving region in the non-overexposure region is free of moving objects;
and obtaining a fused high-brightness image according to the fusion result of the replaced image of the moving area in the non-overexposure area and the image of the non-moving area in the non-overexposure area.
4. The method according to claim 2, wherein the determining a non-moving region in the overexposed region of the high brightness image according to the comparison result of the at least two frames of second images to be synthesized comprises:
acquiring the pixel coincidence degree of the images of the overexposure areas in the at least two frames of second images to be synthesized;
and determining a non-moving area in the overexposure area of the high-brightness image according to an area in which the pixel overlapping degree of the image of the overexposure area of the short-exposure image and the image of the overexposure area of the long-exposure image is greater than a preset overlapping degree threshold value.
5. The method according to claim 1, wherein determining the region to be synthesized in the first image to be synthesized according to the matching result of the first image to be synthesized and the non-moving region in the overexposed region comprises:
matching the first image to be synthesized with the non-moving area in the overexposed area to obtain the non-moving area in the first image to be synthesized;
and obtaining the area to be synthesized according to the non-moving area in the first image to be synthesized.
6. The method of claim 1, wherein determining an overexposed region in a high brightness image comprises:
and determining the area with brightness greater than a preset brightness threshold in the fused high-brightness image as the overexposed area in the high-brightness image.
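Claim 6 reduces to a single brightness threshold per pixel. A minimal sketch, assuming an 8-bit luminance image and a threshold of 250 (the patent only says "a preset brightness threshold"):

```python
import numpy as np

def overexposed_mask(high_brightness_img, brightness_thresh=250):
    """Overexposed area of claim 6: every pixel brighter than a preset
    threshold. Operates on an 8-bit luminance image, returns a boolean mask.
    """
    return high_brightness_img > brightness_thresh
```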
7. The method according to any one of claims 1 to 6, wherein obtaining a synthesized image of the same object from the replaced high-brightness image comprises:
and performing at least one of sharpening, brightness adjustment, and smoothing denoising on the replaced high-brightness image to obtain a synthesized image of the same object with an image definition greater than a preset definition threshold.
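Of the claim-7 post-processing options, the sharpening step could be a plain 3×3 sharpening convolution; the kernel choice and the border handling below are illustrative, not taken from the patent.

```python
import numpy as np

def sharpen(img):
    """Sharpen an 8-bit grayscale image with a 3x3 kernel (borders left as-is).

    The kernel sums to 1, so flat regions keep their brightness while
    edges and details are amplified.
    """
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            out[y, x] = np.clip((kernel * window).sum(), 0, 255)
    return out
```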
8. An image synthesizing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a group of images to be synthesized of the same object; the group of images to be synthesized comprises one frame of a first image to be synthesized at a first exposure level and at least two frames of second images to be synthesized at a second exposure level; the first exposure level is less than the second exposure level; the at least two frames of second images to be synthesized comprise images with different exposure durations, and one of those frames is consistent in image structure with the first image to be synthesized;
the fusion module is used for fusing the at least two frames of second images to be synthesized to obtain a high-brightness image and determining an overexposed area in the high-brightness image;
a first determining module, configured to determine a non-moving region in an overexposed region of the high-brightness image according to a comparison result of the at least two frames of second images to be synthesized; the non-moving area is free of moving objects;
a second determining module, configured to determine a region to be synthesized in the first image to be synthesized according to a matching result between the first image to be synthesized and a non-moving region in the overexposed region;
and the synthesis module is used for replacing the non-moving area in the overexposure area of the high-brightness image with the image of the area to be synthesized, and obtaining a synthesized image of the same object according to the replaced high-brightness image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
11. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 7 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211021685.9A CN115272155A (en) | 2022-08-24 | 2022-08-24 | Image synthesis method, image synthesis device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115272155A (en) | 2022-11-01 |
Family
ID=83754111
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211021685.9A Pending CN115272155A (en) | 2022-08-24 | 2022-08-24 | Image synthesis method, image synthesis device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115272155A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117278865A (en) * | 2023-11-16 | 2023-12-22 | 荣耀终端有限公司 | Image processing method and related device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6742732B2 (en) | Method for generating HDR image of scene based on trade-off between luminance distribution and motion | |
US20200045219A1 (en) | Control method, control apparatus, imaging device, and electronic device | |
US20210168275A1 (en) | Method for imaging controlling, electronic device, and non-transitory computer-readable storage medium | |
CN104349066B (en) | A kind of method, apparatus for generating high dynamic range images | |
CN106060249B (en) | Photographing anti-shake method and mobile terminal | |
CN109068058B (en) | Shooting control method and device in super night scene mode and electronic equipment | |
CN115037884A (en) | Unified bracketing method for imaging | |
CN108683861A (en) | Shoot exposal control method, device, imaging device and electronic equipment | |
WO2020034701A1 (en) | Imaging control method and apparatus, electronic device, and readable storage medium | |
CN111489320A (en) | Image processing method and device | |
US11601600B2 (en) | Control method and electronic device | |
CN105391940A (en) | Image recommendation method and apparatus | |
CN110956679A (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN108513062B (en) | Terminal control method and device, readable storage medium and computer equipment | |
CN111080571A (en) | Camera shielding state detection method and device, terminal and storage medium | |
CN115272155A (en) | Image synthesis method, image synthesis device, computer equipment and storage medium | |
CN114092562A (en) | Noise model calibration method, image denoising method, device, equipment and medium | |
CN113781358A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN109523456A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN113438411A (en) | Image shooting method, image shooting device, computer equipment and computer readable storage medium | |
CN115049572A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN116320714A (en) | Image acquisition method, apparatus, device, storage medium, and program product | |
CN115550558A (en) | Automatic exposure method and device for shooting equipment, electronic equipment and storage medium | |
CN112565595B (en) | Image jitter eliminating method, device, electronic equipment and storage medium | |
CN114422721A (en) | Imaging method, imaging device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2023-08-07
Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province
Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.
Address before: Room F, 11/F, Beihai Center, 338 Hennessy Road, Wan Chai District, Hong Kong 810100, China
Applicant before: Sonar sky Information Consulting Co.,Ltd.
|