CN117408896A - Image generation method, device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN117408896A
Authority
CN
China
Prior art keywords
image
camera
image frame
scene
shooting scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210783763.2A
Other languages
Chinese (zh)
Inventor
黄杰文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210783763.2A
Publication of CN117408896A
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10144 Varying exposure
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10152 Varying illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20208 High dynamic range [HDR] image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Abstract

The application relates to an image generation method and apparatus, an electronic device, a storage medium, and a computer program product. The method comprises the following steps: acquiring at least one of a scene dynamic range of a preview picture, a motion state of the preview picture, and an ambient illuminance of the environment in which a camera is located; determining, based on at least one of the scene dynamic range of the preview picture, the motion state of the preview picture, and the ambient illuminance of the environment in which the camera is located, a target shooting scene in which the camera is currently located; determining a plurality of groups of image pairs to be fused based on the target shooting scene; and fusing the groups of image pairs to be fused to obtain a target image. By adopting the method, the accuracy of the generated image can be improved.

Description

Image generation method, device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image generating method, an image generating device, an electronic device, and a computer readable storage medium.
Background
Automatic exposure control (Auto Exposure Control, AEC) is an important unit in an ISP (Image Signal Processor) imaging system, and it determines the brightness rendition of the imaged picture. A conventional automatic exposure control algorithm generally refers to the RAW-domain statistics output by hardware, such as the histogram and block means, to determine the difference between the luminance of the region of interest (Region of Interest, ROI) of the current picture and a target luminance, and applies different strategies to approach the target luminance, thereby generating an image.
However, the conventional image generation method has a problem in that the generated image is not accurate enough.
Disclosure of Invention
Embodiments of the present application provide an image generating method, apparatus, electronic device, computer-readable storage medium, and computer program product, which can improve the accuracy of the generated image.
In a first aspect, the present application provides an image generation method. The method comprises the following steps:
acquiring at least one of a scene dynamic range of a preview picture, a motion state of the preview picture and ambient illuminance of an environment in which a camera is located;
determining a target shooting scene where the camera is currently located based on at least one of a scene dynamic range of a preview picture, a motion state of the preview picture and ambient illuminance of an environment where the camera is located;
determining a plurality of groups of image pairs to be fused based on the target shooting scene;
and fusing the image pairs to be fused to obtain a target image.
In a second aspect, the present application provides an image generation apparatus. The device comprises:
the acquisition module is used for acquiring at least one of the scene dynamic range of the preview picture, the motion state of the preview picture and the ambient illuminance of the environment where the camera is located;
The determining module is used for determining a target shooting scene where the camera is currently located based on at least one of a scene dynamic range of a preview picture, a motion state of the preview picture and ambient illuminance of an environment where the camera is located;
the determining module is also used for determining a plurality of groups of image pairs to be fused based on the target shooting scene;
and the image generation module is used for fusing the groups of the image pairs to be fused to obtain a target image.
In a third aspect, the present application provides an electronic device. The electronic device comprises a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the image generation method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of the first aspect as described above.
In a fifth aspect, the present application provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the method of the first aspect described above.
According to the image generation method, the image generation device, the electronic equipment, the computer readable storage medium and the computer program product, the target shooting scene where the camera is currently located can be accurately determined based on at least one of the acquired scene dynamic range of the preview picture, the motion state of the preview picture and the ambient illuminance of the environment where the camera is located. Then, based on the target shooting scene, a plurality of groups of image pairs to be fused corresponding to the target shooting scene are determined, so that each group of image pairs to be fused are fused, and a more accurate target image is obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of an image generation method in one embodiment;
FIG. 2 is a diagram comparing the exposure output timing of the digital overlap wide dynamic range mode and the normal mode in one embodiment;
FIG. 3 is a flow chart of an image generation method in another embodiment;
FIG. 4 is a block diagram showing the structure of an image generating apparatus in one embodiment;
fig. 5 is an internal structural diagram of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, an image generation method is provided, which may be applied to an electronic device, or to a system including a terminal and a server and implemented through interaction between the terminal and the server. The electronic device may be a terminal or a server. The terminal may be, but is not limited to, any of various personal computers, notebook computers, smart phones, tablet computers, Internet-of-things devices, and portable wearable devices; the Internet-of-things device may be a smart speaker, a smart television, a smart air conditioner, a smart vehicle-mounted device, or the like. The portable wearable device may be a smart watch, a smart bracelet, a headset, or the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In this embodiment, the method includes the steps of:
step 102, at least one of a scene dynamic range of the preview picture, a motion state of the preview picture and an ambient illuminance of an environment in which the camera is located is acquired.
Optionally, the electronic device acquires at least one of the scene dynamic range of the preview picture, the motion state of the preview picture, and the ambient illuminance of the environment in which the camera is located when the camera enters the digital overlap wide dynamic range mode.
FIG. 2 compares the exposure output timing of the digital overlap wide dynamic range mode and the normal mode in one embodiment. In fig. 2, the camera can support outputting long exposure data and short exposure data within one frame in the Digital OverLap High Dynamic Range (DOL-HDR) mode; that is, the time interval between the exposures of the long exposure data and the short exposure data is very short, so the motion ghosting problem can be mitigated. In the normal mode, by contrast, the long exposure data and the short exposure data are output in different frames.
As shown in fig. 2, in the digital overlap wide dynamic range mode of the camera, the ratio between the exposure duration of the long exposure data and that of the short exposure data is 2:1, and the long and short exposure data are output at a frame rate of 30 fps; in the normal mode of the camera, the long and short exposure data are output at a frame rate of 60 fps.
It should be noted that, in the digital overlap wide dynamic range mode, the camera may support outputting long exposure data and short exposure data within one frame, and may also support outputting three, four, or more different exposures within one frame; this is not limited herein.
The scene dynamic range is the range of the light-dark difference of the scene photographed by the camera. It will be appreciated that the larger the scene dynamic range, the greater the difference between the bright and dark areas in the scene captured by the camera.
In one embodiment, when the camera enters a digital overlapping wide dynamic range mode, the electronic device obtains each bright area and each dark area in the preview image, generates a luminance histogram of each area in the preview image, and can determine the scene dynamic range of the preview image based on the distribution of the bright area and the dark area in the luminance histogram.
In another embodiment, when the camera enters the digital overlap wide dynamic range mode, the electronic device divides the preview picture into blocks, obtains the luminance average value of each block, and determines the scene dynamic range of the preview picture based on the distribution of the block luminance averages. Specifically, the electronic device may determine the block with the highest brightness and the block with the lowest brightness from this distribution, and then determine the scene dynamic range of the preview picture based on those two blocks.
In other embodiments, the electronic device may determine the scene dynamic range of the preview screen in other manners, which is not limited herein.
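The block-based embodiment above can be sketched as follows. The 16x16 grid and the log2 measure of the spread between the brightest and darkest blocks are illustrative assumptions, not the application's exact procedure:

```python
import numpy as np

def scene_dynamic_range(preview_luma: np.ndarray, grid=(16, 16)) -> float:
    """Estimate the scene dynamic range (in stops) of a preview frame.

    Splits the luminance plane into a grid of blocks, averages each block,
    and measures the spread between the brightest and darkest block mean,
    as in the block-based embodiment described above.
    """
    h, w = preview_luma.shape
    bh, bw = h // grid[0], w // grid[1]
    # Crop so the frame divides evenly, then average each block.
    cropped = preview_luma[:bh * grid[0], :bw * grid[1]].astype(np.float64)
    means = cropped.reshape(grid[0], bh, grid[1], bw).mean(axis=(1, 3))
    bright = means.max()
    dark = max(means.min(), 1e-3)  # guard against log of zero in black blocks
    return float(np.log2(bright / dark))
```

A frame whose brightest block is 8 times its darkest block would report a dynamic range of 3 stops under this measure.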
The motion state of the preview screen includes a stable state and an unstable state.
The ambient illuminance is a physical quantity reflecting the brightness of the environment in which the target is located, and is numerically equal to the luminous flux passing vertically through a unit area.
Step 104, determining the target shooting scene where the camera is currently located based on at least one of the scene dynamic range of the preview picture, the motion state of the preview picture and the ambient illuminance of the environment where the camera is located.
The target shooting scene is the shooting scene in which the camera is currently located. For example, the target shooting scene may be a scene in which the interval length of the scene dynamic range is less than or equal to a preset length threshold, or a scene in which the interval length of the scene dynamic range is greater than the preset length threshold and the motion state of the preview picture is a steady state, and the like; the present application is not limited thereto.
In an alternative embodiment, the electronic device sequentially determines the scene dynamic range of the preview screen, the motion state of the preview screen, and the ambient illuminance of the environment in which the camera is located, and determines the target shooting scene in which the camera is currently located.
In another alternative embodiment, the electronic device determines the target shooting scene in which the camera is currently located based on the scene dynamic range of the preview screen.
In another optional implementation manner, the electronic device sequentially judges the scene dynamic range of the preview screen and the motion state of the preview screen, and determines the current target shooting scene of the camera.
In other embodiments, the electronic device may also determine the current target shooting scene of the camera in other manners, which is not limited herein.
And 106, determining a plurality of groups of image pairs to be fused based on the target shooting scene.
The image pair to be fused is an image pair used for fusing to obtain a target image. The image pair to be fused comprises at least two image frames. For example, the image pair to be fused includes a first image frame and a second image frame. The multiple groups of image pairs to be fused can comprise the same type of image pairs or can comprise different types of image pairs.
The electronic device may acquire a plurality of groups of image pairs to be fused corresponding to the target shooting scene from the overlapping image set before the shooting operation, may acquire a plurality of groups of image pairs to be fused corresponding to the target shooting scene from the overlapping image set after the shooting operation, and may also acquire a plurality of groups of image pairs to be fused corresponding to the target shooting scene from the overlapping image set before the shooting operation and after the shooting operation, which is not limited herein.
It should be noted that, after a shooting operation is triggered, new exposure parameters are reconfigured, and the new exposure parameters generally take effect only after a specified duration. Therefore, after adjusting the exposure parameters, the electronic device needs to wait for the specified duration before acquiring image frames exposed with the new exposure parameters, so as to prevent ghosting in the fused target image. The specified duration may be, for example, 2 frames.
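The waiting step in the note above can be sketched as a simple frame skip. The `frame_stream` iterable is a hypothetical interface; the default of 2 settling frames follows the text:

```python
def frames_after_exposure_change(frame_stream, settle_frames=2):
    """Yield only frames exposed with the new exposure parameters.

    After a shooting operation reconfigures exposure, the sensor keeps
    delivering a few frames exposed with the old settings; the text above
    gives 2 frames as a typical settling time.
    """
    it = iter(frame_stream)
    for _ in range(settle_frames):
        next(it, None)  # discard frames still using the old exposure
    yield from it
```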
And step 108, fusing the image pairs to be fused to obtain a target image.
In an alternative embodiment, the electronic device uses high dynamic range imaging (HDR) to fuse the groups of image pairs to be fused and generate a high-dynamic-range target image. A high-dynamic-range target image has a wider exposure dynamic range, that is, a larger brightness difference, and can display more brightness information.
In another alternative embodiment, the electronic device performs an average process on pixels at the same position in the images of each group of image pairs to be fused, so as to obtain a target image.
In other embodiments, the electronic device may fuse the image pairs to be fused in other manners, which is not limited herein.
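The per-pixel averaging embodiment above can be sketched as follows. This is a minimal illustration; production HDR fusion would weight each frame by its exposure rather than average uniformly:

```python
import numpy as np

def fuse_by_average(image_pairs):
    """Average co-located pixels across all frames of all pairs to be fused.

    Each element of `image_pairs` is a tuple of same-shaped frames
    (e.g. a first image frame and a second image frame).
    """
    frames = [np.asarray(f, dtype=np.float64)
              for pair in image_pairs for f in pair]
    # Stack all frames and average along the stacking axis,
    # i.e. average pixels at the same position.
    return np.mean(frames, axis=0)
```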
According to the image generation method, the target shooting scene where the camera is currently located can be accurately determined based on at least one of the acquired scene dynamic range of the preview picture, the motion state of the preview picture and the ambient illuminance of the environment where the camera is located. Then, based on the target shooting scene, a plurality of groups of image pairs to be fused corresponding to the target shooting scene are determined, so that each group of image pairs to be fused are fused, and a more accurate target image is obtained.
In one embodiment, sequentially determining a scene dynamic range of a preview picture, a motion state of the preview picture and an ambient illuminance of an environment in which a camera is located, determining a target shooting scene in which the camera is currently located includes: determining an exposure duration ratio between a first image frame and a second image frame in each set of overlapping image sets exposed by the camera based on the scene dynamic range; the exposure time length of the first image frame is longer than that of the second image frame, and the exposure time length ratio and the scene dynamic range form positive correlation; and determining the current target shooting scene of the camera based on the exposure time length ratio.
It will be appreciated that, in the digital overlap wide dynamic range mode, the exposure output of the former image frame has not yet ended when the exposure output of the latter image frame begins, so the exposure times of the image frames within each exposure period overlap; the multiple image frames obtained within each exposure period constitute one overlapping image set exposed by the camera.
The exposure duration ratio refers to the ratio between the exposure duration of the first image frame (EV0) and that of the second image frame (EV-). The exposure duration ratio indicates how large the actually existing dynamic range of the current scene is; for example, a relatively small exposure duration ratio indicates that the actually existing dynamic range of the current scene is also relatively small. EV (Exposure Value) is a quantity reflecting the amount of exposure: with a sensitivity of ISO 100, an aperture of f/1, and an exposure time of 1 second, the exposure is defined as EV 0. Reducing the exposure by one stop (halving the shutter time or closing the aperture by one stop) gives EV+1; increasing the exposure by one stop (doubling the shutter time or opening the aperture by one stop) gives EV-1.
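The EV convention above, and the relation between an exposure duration ratio and a difference in stops, can be expressed numerically. This is a sketch of the standard formulas, not code from the application:

```python
import math

def exposure_value(aperture_f: float, shutter_s: float) -> float:
    """EV per the convention above: EV 0 at f/1 and 1 s.

    Halving the shutter time (or stopping the aperture down one stop)
    raises EV by 1; doubling it lowers EV by 1.
    """
    return math.log2(aperture_f ** 2 / shutter_s)

def stops_between(long_exposure_s: float, short_exposure_s: float) -> float:
    """EV difference implied by the exposure duration ratio (EV0 vs EV-)."""
    return math.log2(long_exposure_s / short_exposure_s)
```

For instance, an exposure duration ratio of 16 (e.g. 1/30 s against 1/480 s) corresponds to a 4-stop difference between the long and short frames.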
It will be appreciated that the larger the scene dynamic range, the larger the exposure duration ratio between the first image frame and the second image frame needs to be in order to present both the highlight information and the dark-region information of the picture. Since the exposure duration of the first image frame is longer than that of the second image frame, the first image frame can present the dark-region information of the picture, and the second image frame can present the highlight information of the picture. That is, the exposure duration ratio is positively correlated with the scene dynamic range. It is understood that the first image frame may be a long exposure image frame and the second image frame may be a short exposure image frame.
Specifically, the electronic device obtains a preset corresponding relation between a scene dynamic range and an exposure time ratio, and determines an exposure time ratio between a first image frame and a second image frame in each set of overlapping image sets exposed by the camera from the corresponding relation based on the scene dynamic range. The corresponding relation enables the exposure time length ratio and the scene dynamic range to be in positive correlation.
In one embodiment, the correspondence between the scene dynamic range and the exposure time ratio may be a proportional relationship. In another embodiment, the electronic device sets the exposure time ratio corresponding to each scene dynamic range in turn, and ensures that the scene dynamic range and the exposure time ratio are positively correlated. In other embodiments, the electronic device may also determine the correspondence between the scene dynamic range and the exposure time ratio in other manners, which is not limited herein.
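One way to realize such a correspondence is a small monotone lookup table. The specific dynamic-range breakpoints and ratios below are placeholders chosen only to satisfy the positive-correlation requirement stated above:

```python
# Hypothetical table: upper bound of the scene dynamic range interval
# (in stops) -> exposure duration ratio EV0:EV-.  Any table works as long
# as the ratio grows monotonically with the scene dynamic range.
RATIO_TABLE = [(2.0, 2), (4.0, 4), (6.0, 8), (8.0, 16), (10.0, 32)]

def exposure_ratio_for(scene_dr: float) -> int:
    """Look up the exposure duration ratio for a measured dynamic range."""
    for max_dr, ratio in RATIO_TABLE:
        if scene_dr <= max_dr:
            return ratio
    return RATIO_TABLE[-1][1]  # clamp very wide scenes to the largest ratio
```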
It can be understood that, because the exposure duration ratio is positively correlated with the scene dynamic range, a relatively large exposure duration ratio implies that the scene dynamic range objectively existing in the environment where the camera is located is relatively large, and a relatively small exposure duration ratio implies that the objectively existing scene dynamic range is also relatively small. Therefore, the target shooting scene in which the camera is currently located can be accurately determined based on the exposure duration ratio.
In this embodiment, the electronic device determines, based on the scene dynamic range, the exposure duration ratio between the first image frame and the second image frame in each overlapping image set exposed by the camera; the exposure duration of the first image frame is longer than that of the second image frame, and the exposure duration ratio is positively correlated with the scene dynamic range. The first image frame and the second image frame can then better present the current scene dynamic range: the first image frame, with its longer exposure, can present the dark-region information of the picture, and the second image frame, with its shorter exposure, can present the highlight information. The target shooting scene in which the camera is currently located can therefore be determined more accurately based on the exposure duration ratio.
In one embodiment, determining a target shooting scene in which the camera is currently located based on the exposure time length ratio includes: if the exposure time length ratio is smaller than or equal to a preset time length ratio threshold value, determining that the current target shooting scene of the camera is a first shooting scene; the interval length of the scene dynamic range of the first shooting scene is smaller than a preset length threshold value; if the exposure time length ratio is greater than the preset time length ratio threshold value, determining a target shooting scene where the camera is currently located according to the motion state of the preview picture or the ambient illuminance where the camera is currently located.
The preset duration ratio threshold can be set as required. For example, the preset duration ratio threshold may be 16.
If the exposure duration ratio is less than or equal to the preset duration ratio threshold, the scene dynamic range of the preview picture is small, and the camera is currently in the first shooting scene. If the exposure duration ratio is greater than the preset duration ratio threshold, the scene dynamic range of the preview picture is large; the motion state of the preview picture is then determined, and the target shooting scene in which the camera is currently located is judged according to the motion state of the preview picture or the ambient illuminance of the environment in which the camera is currently located.
In one embodiment, the determining manner of the motion state of the preview screen includes: acquiring the angular speed of a gyroscope and detecting the motion amplitude of a preview picture; if the angular velocity is smaller than or equal to a preset angular velocity threshold value and the motion amplitude is smaller than or equal to a preset motion threshold value, determining that the motion state of the preview picture is a stable state; if the angular velocity is greater than a preset angular velocity threshold value or the motion amplitude is greater than a preset motion threshold value, determining that the motion state of the preview picture is an unstable state.
The preset angular velocity threshold and the preset motion threshold can be set as required.
It can be understood that the angular velocity of the gyroscope represents the shake degree of the electronic device, the motion amplitude of the preview image represents the motion degree of the preview image, and if the angular velocity is less than or equal to a preset angular velocity threshold value and the motion amplitude is less than or equal to a preset motion threshold value, the electronic device is more stable, the preview image is also more stable, and the motion state of the preview image is a stable state. If the angular velocity is greater than the preset angular velocity threshold value or the motion amplitude is greater than the preset motion threshold value, the motion state of the preview picture can be determined to be an unstable state.
In this embodiment, the electronic device may accurately determine the motion state of the preview screen based on the angular velocity of the gyroscope and the motion amplitude of the preview screen, so as to more accurately determine the multiple groups of image pairs to be fused based on the motion state.
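The two-threshold check above can be sketched as follows; the threshold values are placeholders, since the text leaves them to be set as required:

```python
def motion_state(gyro_angular_velocity: float, motion_amplitude: float,
                 th_gyro: float = 0.05, th_motion: float = 0.02) -> str:
    """Classify the preview picture as 'stable' or 'unstable'.

    Stable only when BOTH the gyroscope angular velocity and the detected
    preview motion amplitude are at or below their thresholds; either one
    exceeding its threshold marks the preview unstable.
    """
    if gyro_angular_velocity <= th_gyro and motion_amplitude <= th_motion:
        return "stable"
    return "unstable"
```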
In one embodiment, determining a target shooting scene where the camera is currently located according to a motion state of the preview screen includes: if the motion state is a stable state, determining that the current target shooting scene of the camera is a second shooting scene; if the motion state is an unstable state, determining a target shooting scene where the camera is currently located according to the ambient illuminance of the environment where the camera is currently located.
The second shooting scene is a scene with the exposure time length ratio being larger than a preset time length ratio threshold value and the motion state being in a stable state.
In one embodiment, determining a target shooting scene in which the camera is currently located according to an ambient illuminance of an environment in which the camera is currently located includes: if the ambient illuminance is greater than or equal to a preset ambient threshold, determining that the current target shooting scene of the camera is a third shooting scene; if the ambient illuminance is smaller than the preset ambient threshold, determining that the current target shooting scene of the camera is a fourth shooting scene.
The third shooting scene is a scene with the exposure time length ratio being larger than a preset time length ratio threshold value, the motion state being an unstable state and the ambient illuminance being larger than or equal to a preset ambient threshold value.
The fourth shooting scene is a scene with the exposure time length ratio being larger than a preset time length ratio threshold value, the motion state being an unstable state and the environment illuminance being smaller than a preset environment threshold value.
In one embodiment, as shown in fig. 3, the electronic device performs step 302 of determining the exposure duration ratio between the first image frame and the second image frame according to the scene dynamic range of the preview picture, and continues with step 304 of judging whether the exposure duration ratio is greater than th1. If not, step 306 is performed: the target shooting scene in which the camera is currently located is the first shooting scene. If yes, step 308 is performed.
Step 306: take N groups of (EV0, EV-) before the shooting operation as the image pairs to be fused, and fuse the image pairs to be fused to obtain the target image. Here EV0 is the first image frame, EV- is the second image frame, and N is greater than or equal to 2.
Step 308: acquire the angular velocity of the gyroscope and detect the motion amplitude of the preview picture; continue with step 310, judging whether the angular velocity is greater than th2 or the motion amplitude of the preview picture is greater than th3. If not, step 312 is performed: the target shooting scene in which the camera is currently located is the second shooting scene. If yes, step 314 is performed.
In step 312, N groups of (EV0, EV-) before the shooting operation and one group of (EV--, EV0L) after the shooting operation are taken as the image pairs to be fused, and the image pairs to be fused are fused to obtain the target image. Here EV-- is the third image frame, and EV0L is a fourth image frame with the same brightness as the first image frame.
Step 314: determine the ambient illuminance of the environment in which the camera is located; continue with step 316, judging whether the ambient illuminance is less than th4. If not, step 318 is performed: the target shooting scene in which the camera is currently located is the third shooting scene. If yes, the target shooting scene in which the camera is currently located is the fourth shooting scene, and step 320 is performed.
In step 318, N groups of (EV0, EV-) before the shooting operation and one group of (EV0S, EV--) after the shooting operation are taken as the image pairs to be fused, and the image pairs to be fused are fused to obtain the target image. Here EV0S is a fifth image frame with the same brightness as the first image frame, and EV-- is the third image frame.
In step 320, N groups (EV 0S, EV-) before the photographing operation and M groups (ev—, EV 0L) after the photographing operation are taken out as image pairs to be fused. And fusing the image pairs to be fused to obtain the target image. Wherein M is greater than or equal to 2.
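The branching in steps 302 to 320 can be sketched as a single selection function. This is an illustrative reading of the flowchart, not code from the disclosure; the thresholds th1 to th4 come from the flowchart, and any concrete values supplied to them are assumptions.

```python
# Illustrative sketch of the scene classification in steps 302-320.
# Threshold names th1-th4 are the flowchart's placeholders; the returned
# string summarizes which image pairs are fused for that scene.

def select_image_pairs(exposure_ratio, angular_velocity, motion_amplitude,
                       ambient_illuminance, th1, th2, th3, th4):
    """Return (scene, frames-to-fuse description) per steps 302-320."""
    if exposure_ratio <= th1:                                  # step 304 -> 306
        return "first", "N x (EV0, EV-) before shooting"
    if angular_velocity <= th2 and motion_amplitude <= th3:    # step 310 -> 312
        return "second", "N x (EV0, EV-) before + 1 x (EV- -, EV0L) after"
    if ambient_illuminance >= th4:                             # step 316 -> 318
        return "third", "N x (EV0, EV-) before + 1 x (EV0S, EV- -) after"
    return "fourth", "N x (EV0S, EV-) before + M x (EV- -, EV0L) after"
```

For example, a shake with angular velocity above th2 in a wide dynamic range scene routes the capture to the third or fourth branch depending on the ambient illuminance.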
In one embodiment, determining a plurality of image pairs to be fused based on a target shooting scene includes: if the current target shooting scene of the camera is a first shooting scene, responding to shooting operation, and acquiring a plurality of groups of initial image pairs from an overlapped image set exposed by the camera before the shooting operation as a plurality of groups of image pairs to be fused corresponding to the first shooting scene; the interval length of the scene dynamic range of the first shooting scene is smaller than or equal to a preset length threshold value, each group of initial image pairs comprises a first image frame and a second image frame, and the brightness of the first image frame is larger than that of the second image frame.
The preset length threshold may be set as desired.
If the target shooting scene in which the camera is currently located is the first shooting scene, that is, the interval length of the scene dynamic range of the preview screen is smaller than or equal to the preset length threshold, the scene dynamic range of the preview screen is small. In this case, multiple groups of initial image pairs can be obtained directly from the overlapped image set exposed by the camera before the shooting operation as the multiple groups of image pairs to be fused corresponding to the first shooting scene, which avoids recalculating the exposure parameters and the waiting time caused by taking image frames after the shooting operation as images to be fused, thereby improving the efficiency of generating the image.
In this embodiment, in the first shooting scene, the electronic device can ensure zero delay and a clear motion snapshot of the generated target image based on the second image frames in the multiple groups of image pairs to be fused, can ensure a high dynamic range based on the exposure duration ratio between the first and second image frames, and can ensure low noise and high image quality based on the first image frames, thereby generating a more accurate target image.
Wherein, zero delay refers to a time interval of less than 100 ms (milliseconds) between the motion state of the target image and the motion state of the preview screen at the moment the shooting operation is triggered.
High dynamic range: the maximum dynamic range of the preview screen is greater than 84 dB (decibels), and the maximum dynamic range of the target image is greater than 120 dB.
Clear motion snapshot: the exposure duration of the imaging reference frame is shorter than a specified duration. For example, the specified duration may be 1/4 of the exposure duration of a normally exposed frame, which reduces motion blur. The imaging reference frame is the image frame that the imaging system uses to align the other image frames during processing.
Low noise and high image quality: multiple input frames with higher signal-to-noise ratios are used for multi-frame noise reduction to further improve the signal-to-noise ratio of the final image.
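As a rough consistency check on the figures above, fusing a normal frame with a frame exposed shorter by a ratio r extends the usable dynamic range by about 20·log10(r) dB over a single frame. This 6 dB per stop relation is standard photographic practice, not stated in the disclosure; the sketch only illustrates how an 84 dB preview can reach the >120 dB target.

```python
import math

# Dynamic range gained by fusing frames whose exposures differ by a
# ratio r: roughly 20*log10(r) dB on top of the single-frame range.
def fused_dynamic_range_db(single_frame_db, exposure_ratio):
    return single_frame_db + 20 * math.log10(exposure_ratio)

# An 84 dB single frame fused with a frame exposed 64x (6 stops) shorter:
print(round(fused_dynamic_range_db(84, 64), 1))  # 120.1
```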
In one embodiment, determining a plurality of image pairs to be fused based on the target shooting scene includes: if the target shooting scene in which the camera is currently located is the second shooting scene, responding to the shooting operation, adjusting the camera to a first exposure parameter, and exposing the camera with the first exposure parameter to obtain a fourth image frame; the interval length of the scene dynamic range of the second shooting scene is larger than the preset length threshold, and the motion state of the preview screen is a stable state; the exposure duration of the fourth image frame is longer than a preset duration threshold; determining a plurality of groups of initial image pairs from the overlapped image set exposed by the camera before the shooting operation as image pairs to be fused, and acquiring a first image pair as an image pair to be fused; each group of initial image pairs includes a first image frame and a second image frame, the brightness of the first image frame being greater than that of the second image frame; the first image pair includes a third image frame and the fourth image frame, the fourth image frame has the same brightness as the first image frame, and the brightness of the second image frame is greater than that of the third image frame.
The first exposure parameter is the exposure parameter to which the camera is correspondingly adjusted in the second shooting scene. The first exposure parameter is used to expose at least the fourth image frame, and the exposure duration of the fourth image frame is longer than the preset duration threshold. It can be understood that since the exposure duration of the fourth image frame is longer than the preset duration threshold, the fourth image frame is a long-exposure image frame (EV0L), which can present the highlight information of the picture.
Optionally, after the shooting operation, the camera exposes with the first exposure parameter only after a delay period, and the groups of initial image pairs obtained by exposure within the delay period are also used as image pairs to be fused.
It can be understood that after the electronic device detects the shooting operation, it needs to perform corresponding processing, for example acquiring the operation information of the shooting operation and readjusting the exposure parameters, which requires a certain delay period. During this delay period the camera therefore continues to expose with the original exposure parameters, yielding multiple groups of initial image pairs. The length of the delay period is determined by the camera hardware parameters and the imaging algorithm.
It will be appreciated that the exposure duration of the first image frame is longer than that of the second image frame; that is, the first image frame is a long-exposure frame with a larger exposure, and the second image frame is a short-exposure frame with a smaller exposure. The exposure of the third image frame is less than that of the second image frame, so the third image frame is a more underexposed frame (EV- -). For example, the exposure of the third image frame may be -6 EV or -7 EV.
Further, the brightness gain of the fourth image frame is realized by converting ISO (sensitivity) into exposure duration. For example, if the first image frame is exposed for 20 ms at ISO 200, the fourth image frame maintains the same brightness as the first image frame: the exposure (exposure duration × ISO) remains unchanged, and the signal-to-noise ratio of the output image is improved by extending the exposure duration and decreasing the ISO (for example, to a 40 ms exposure duration at ISO 100).
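The duration-for-ISO trade described above (and its inverse, used later for the fifth image frame EV0S) keeps the product of exposure duration and ISO constant, so brightness is preserved. A minimal sketch, with a hypothetical helper name:

```python
# Brightness-preserving exposure rescaling: duration * ISO stays constant.
def rescale_iso(duration_ms, iso, new_duration_ms):
    """Return the ISO that keeps brightness unchanged at the new duration."""
    return duration_ms * iso / new_duration_ms

# EV0 at 20 ms / ISO 200 converted to the long frame EV0L (40 ms):
print(rescale_iso(20, 200, 40))  # 100.0 -> lower ISO, better SNR
# ... or to the short frame EV0S (5 ms):
print(rescale_iso(20, 200, 5))   # 800.0 -> higher ISO, less motion blur
```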
It can be understood that since the interval length of the scene dynamic range of the second shooting scene is greater than the preset length threshold and the motion state of the preview screen is stable, the exposure parameters need to be reset, and a less-exposed third image frame is taken to present the information of the highlight region. For example, the second image frame may be defined as -a/2 EV and the third image frame as -a EV. In addition, since the dynamic range of the second shooting scene is large and the motion state of the preview screen is stable, a fourth image frame is acquired after the shooting operation; despite its longer exposure duration, its global picture is unlikely to suffer motion blur, so the fourth image frame can provide an image with a better signal-to-noise ratio, which helps improve the image quality of dark regions.
Illustratively, the image pairs to be fused include (EV0, EV-), …, (EV0, EV-) (shooting operation), (EV0, EV-), (EV0, EV-), (EV- -, EV0L). In each initial image pair (EV0, EV-), EV0 is the first image frame and EV- is the second image frame. The 2 initial image pairs (EV0, EV-) after the shooting operation are obtained by exposure within the delay period of the shooting operation, and the first image pair (EV- -, EV0L) is obtained by exposure with the first exposure parameter after the delay period, where EV- - is the third image frame and EV0L is the fourth image frame. A plurality of candidate first image frames are then determined from the initial image pairs taken before the shooting operation and the 2 initial image pairs taken within the delay period after the shooting operation.
In this embodiment, in the second shooting scene, the electronic device can ensure zero delay and a clear motion snapshot of the obtained target image based on the second image frames in the multiple groups of image pairs to be fused; the exposure duration ratios among the first, second, and third image frames can ensure a high dynamic range of the target image; and the first image frames exposed before the shooting operation, together with the fourth image frame obtained after the shooting operation with the same brightness as the first image frame, can ensure low noise and high image quality of the target image, thereby generating a more accurate target image.
In one embodiment, determining a plurality of image pairs to be fused based on a target shooting scene includes: if the current target shooting scene of the camera is a third shooting scene, responding to shooting operation, and adjusting the camera to a second exposure parameter; the interval length of the scene dynamic range of the third shooting scene is larger than a preset length threshold, the motion state of the preview picture is an unstable state, and the ambient illuminance of the environment where the camera is located is larger than or equal to the preset ambient threshold; determining a plurality of groups of initial image pairs from the overlapped image sets exposed by the camera before shooting operation as image pairs to be fused, and acquiring a second image pair as the image pairs to be fused; each initial image pair comprises a first image frame and a second image frame, the brightness of the first image frame is larger than that of the second image frame, the second image pair comprises a third image frame and a fifth image frame with the same brightness as that of the first image frame, the brightness of the second image frame is larger than that of the third image frame, at least one image frame in the second image pair is obtained by exposing a camera with a second exposure parameter after shooting operation, and the exposure duration of the fifth image frame is smaller than or equal to a preset duration threshold value.
After the shooting operation, the camera exposes with a second exposure parameter after the delay time, and sets of initial image pairs obtained by exposure in the delay time are used as image pairs to be fused.
It can be understood that if the ambient illuminance of the environment in which the camera is located decreases, the auto-exposure controller automatically drops frames and extends the exposure duration to improve image quality. However, if the output frame rate falls below the minimum frame rate, the preview screen stutters, so the frame rate cannot be reduced without limit; the minimum frame rate is usually set to 16 fps (frames per second). That is, the frame rate automatically adjusted based on the ambient illuminance is greater than or equal to the minimum frame rate.
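The 16 fps floor above bounds how far the auto-exposure controller may extend the exposure duration: at 16 fps one frame period is 1000/16 = 62.5 ms. A sketch of the clamp (the helper name is an assumption):

```python
MIN_FPS = 16  # lowest frame rate before the preview screen stutters

def clamp_exposure_ms(desired_exposure_ms):
    """Cap the exposure duration so the output frame rate stays >= MIN_FPS."""
    frame_period_ms = 1000.0 / MIN_FPS  # 62.5 ms per frame at 16 fps
    return min(desired_exposure_ms, frame_period_ms)

print(clamp_exposure_ms(40.0))  # 40.0 (within the floor)
print(clamp_exposure_ms(80.0))  # 62.5 (clamped to keep 16 fps)
```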
The preset ambient threshold may be set according to the actual implementation. If the ambient illuminance is greater than or equal to the preset ambient threshold, the frame rate automatically adjusted by the camera based on the ambient illuminance is greater than or equal to the minimum frame rate, and the camera can be adjusted to the second exposure parameter in response to the shooting operation.
The second exposure parameter is an exposure parameter correspondingly adjusted by the camera under the third shooting scene. The second exposure parameter is used for exposing to obtain at least one image frame in a second image pair, wherein the second image pair comprises a third image frame and a fifth image frame with the same brightness as the first image frame.
The exposure of the third image frame is less than that of the second image frame, and the exposure of the second image frame is less than that of the first image frame, so the third image frame is a more underexposed frame. The exposure duration of the fifth image frame is less than or equal to the preset duration threshold; that is, the fifth image frame is a short-exposure image frame. For example, the exposure duration of the fifth image frame may be 1/2 to 1/4 of that of the first image frame.
Further, the brightness gain of the fifth image frame is realized by converting exposure duration into ISO (sensitivity). Assuming the exposure parameters of the first image frame are a 20 ms exposure duration at ISO 200, the fifth image frame maintains the same brightness as the first image frame: the exposure (exposure duration × ISO) remains unchanged, and motion blur is reduced by increasing the ISO to shorten the exposure duration (for example, to a 5 ms exposure duration at ISO 800), which leaves room for subsequent motion-region processing.
For example, in the third shooting scene, the image pairs to be fused include (EV0, EV-), …, (EV0, EV-) (shooting operation), (EV0, EV-), (EV0, EV-), (EV0S, EV- -). In each group of initial image pairs (EV0, EV-), EV0 is the first image frame and EV- is the second image frame; 2 groups of initial image pairs (EV0, EV-) are exposed within the delay period of the shooting operation, and after the delay period exposure is performed with the second exposure parameter to obtain the second image pair (EV0S, EV- -), where EV- - is the third image frame and EV0S is the fifth image frame. The fifth image frame in the second image pair is then determined as the reference frame.
In this embodiment, in the third shooting scene, the electronic device can ensure zero delay and a clear motion snapshot of the obtained target image based on the fifth image frame in the multiple groups of image pairs to be fused; the exposure duration ratios among the first, second, and third image frames can ensure a high dynamic range of the target image; and the first image frames exposed before the shooting operation can ensure low noise and high image quality of the target image, thereby improving the accuracy of the generated target image.
In another embodiment, based on the target shooting scene, obtaining the plurality of image pairs to be fused includes: if the current target shooting scene of the camera is a third shooting scene, responding to shooting operation, and determining a second image pair and a plurality of groups of initial image pairs from an overlapped image set exposed by the camera before the shooting operation as image pairs to be fused; the interval length of the scene dynamic range of the third shooting scene is larger than a preset length threshold, the motion state of the preview picture is an unstable state, and the ambient illuminance of the environment where the camera is located is larger than or equal to the preset ambient threshold; each initial image pair comprises a first image frame and a second image frame, the brightness of the first image frame is larger than that of the second image frame, the second image pair comprises a third image frame and a fifth image frame with the same brightness as that of the first image frame, the brightness of the second image frame is larger than that of the third image frame, and the exposure time length of the fifth image frame is smaller than or equal to a preset time length threshold value.
It can be understood that the exposure of the third image frame is smaller than that of the second image frame, so the exposure duration of the third image frame is short. Since both the third and fifth image frames have short exposure durations, they can be obtained from the multiple groups of overlapped images exposed by the camera before the shooting operation, which shortens the frame-taking time and thereby improves the efficiency of image generation.
Wherein, in the digital overlapping wide dynamic range mode, each group of overlapping image sets exposed by the camera comprises at least 3 image frames, and the first image frame and the second image frame can be acquired from each group of overlapping image sets.
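Taking the first and second image frames from each overlapped image set can be sketched as below. The disclosure only states that each set holds at least 3 frames and that both frames can be taken from it; treating the two longest-exposure frames in the set as the first and second image frames is an assumption for illustration.

```python
# Pick the (first, second) image frame pair from one DOL wide-dynamic-range
# overlapped set of (label, exposure_ms) frames; the longest exposure is
# treated as the first image frame, the next-longest as the second
# (an illustrative assumption, not specified by the disclosure).
def pick_pair(overlapped_set):
    frames = sorted(overlapped_set, key=lambda f: f[1], reverse=True)
    return frames[0], frames[1]

first, second = pick_pair([("long", 30.0), ("mid", 10.0), ("short", 2.5)])
print(first[0], second[0])  # long mid
```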
In this embodiment, in the third shooting scene, the electronic device can ensure zero delay and a clear motion snapshot of the obtained target image based on the fifth image frame in the multiple groups of image pairs to be fused; the exposure duration ratios among the first, second, and third image frames can ensure a high dynamic range of the target image; and the first image frames exposed before the shooting operation can ensure low noise and high image quality of the target image, thereby improving the accuracy of the generated target image. In addition, in this embodiment, the multiple groups of image pairs to be fused acquired by the electronic device are all images acquired before the shooting operation, which avoids the waiting time consumed by exposing and taking frames again, shortens the frame-taking time, and thereby improves the efficiency of image generation.
In one embodiment, determining a plurality of image pairs to be fused based on a target shooting scene includes: if the current target shooting scene of the camera is a fourth shooting scene, the camera is adjusted to a third exposure parameter; the interval length of the scene dynamic range of the fourth shooting scene is larger than a preset length threshold, the motion state of the preview picture is an unstable state, and the ambient illuminance of the environment where the camera is located is smaller than the preset ambient threshold; in response to the photographing operation, adjusting the camera to a fourth exposure parameter; determining a plurality of groups of third image pairs as image pairs to be fused from the overlapped image set exposed by the camera with the third exposure parameters before shooting operation, and acquiring a plurality of groups of fourth image pairs as image pairs to be fused; each group of third image pairs comprises a second image frame and a fifth image frame with the same brightness as the first image frame, the brightness of the first image frame is larger than the brightness of the second image frame, the exposure time length of the fifth image frame is smaller than or equal to a preset time length threshold value, each group of fourth image pairs comprises a third image frame and a fourth image frame with the same brightness as the first image frame, the brightness of the second image frame is larger than the brightness of the third image frame, the fourth image frame is obtained by exposing the camera with a fourth exposure parameter after the shooting operation, and the exposure time length of the fourth image frame is longer than the preset time length threshold value.
After the shooting operation, the camera exposes with a fourth exposure parameter after the delay time, and sets of third image pairs obtained by exposure in the delay time are used as image pairs to be fused.
If the ambient illuminance is smaller than the preset ambient threshold, the environment in which the camera is located is dark, for example a night scene. In this case the camera is adjusted to the third exposure parameter before the shooting operation, and the multiple groups of overlapped image sets obtained by exposure with the third exposure parameter can be displayed as the preview screen after being processed by the ISP (image signal processor) imaging system.
It can be understood that the third exposure parameter and the fourth exposure parameter are different. The third exposure parameter is used to expose the second image frame and the fifth image frame; the fifth image frame has the same brightness as the first image frame, and its exposure duration is less than or equal to the preset duration threshold, which ensures zero delay and a clear motion snapshot of the target image. The fourth exposure parameter is used to expose the third image frame and the fourth image frame; the fourth image frame has the same brightness as the first image frame, the brightness of the second image frame is greater than that of the third image frame, and the exposure duration of the fourth image frame is longer than the preset duration threshold, so the fourth image frame ensures low noise and high image quality of the target image.
The exposure duration of the fifth image frame may be 1/2 to 1/4 of that of the first image frame. The brightness gain of the fifth image frame is realized by converting exposure duration into ISO.
It can be understood that if a long-exposure image frame were taken from the multiple groups of overlapped images exposed before the shooting operation, the exposure duration of each group of overlapped images would exceed 33 ms, reducing the frame rate and causing the preview screen to stutter.
Therefore, the exposure duration of the fourth image frame is longer than the preset duration threshold, and the fourth image frame is obtained by exposure with the fourth exposure parameter after the shooting operation, which avoids preview stutter and improves the accuracy of the generated target image.
The exposure duration of the third image frame is short, so the third image frame may be obtained by exposure either after the shooting operation or before it.
Illustratively, the image pairs to be fused include (EV0S, EV-), …, (EV0S, EV-) (shooting operation), (EV0S, EV-), (EV0S, EV-), (EV- -, EV0L), …, (EV- -, EV0L). In each third image pair (EV0S, EV-), EV- is the second image frame and EV0S is the fifth image frame; 2 groups of third image pairs (EV0S, EV-) are exposed within the delay period of the shooting operation, and after the delay period exposure is performed with the fourth exposure parameter to obtain the fourth image pairs (EV- -, EV0L), where EV- - is the third image frame, EV0L is the fourth image frame, and the specified number M is 2. A plurality of candidate fifth image frames are then determined from the third image pairs taken before the shooting operation and the 2 groups of third image pairs taken within the delay period after the shooting operation.
In this embodiment, in the fourth shooting scene, the electronic device can ensure zero delay and a clear motion snapshot of the obtained target image based on the fifth image frame in the multiple groups of image pairs to be fused; the exposure duration ratios among the fifth, first, and third image frames, or among the fourth, first, and third image frames, can ensure a high dynamic range of the target image; and the fourth image frame can ensure low noise and high image quality of the target image, thereby improving the accuracy of the generated target image.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, an embodiment of the present application also provides an image generating apparatus for implementing the image generation method described above. The implementation of the solution provided by the apparatus is similar to that described for the method, so for the specific limitations in the embodiments of the image generating apparatus provided below, reference may be made to the limitations of the image generation method above; details are not repeated here.
In one embodiment, as shown in fig. 4, there is provided an image generating apparatus including: an acquisition module 402, a determination module 404, and an image generation module 406, wherein:
the obtaining module 402 is configured to obtain at least one of a scene dynamic range of the preview screen, a motion state of the preview screen, and an ambient illuminance of an environment in which the camera is located.
The determining module 404 is configured to determine a target shooting scene in which the camera is currently located based on at least one of a scene dynamic range of the preview screen, a motion state of the preview screen, and an ambient illuminance of an environment in which the camera is located.
The determining module 404 is further configured to determine a plurality of groups of image pairs to be fused based on the target shooting scene.
The image generating module 406 is configured to fuse each group of image pairs to be fused to obtain a target image.
According to the image generating apparatus, the target shooting scene in which the camera is currently located can be accurately determined based on at least one of the acquired scene dynamic range of the preview screen, the motion state of the preview screen, and the ambient illuminance of the environment in which the camera is located. Then, based on the target shooting scene, multiple groups of image pairs to be fused corresponding to the target shooting scene are determined, and each group of image pairs to be fused is fused to obtain a more accurate target image.
In one embodiment, the determining module 404 is further configured to determine, in order, a scene dynamic range of the preview screen, a motion state of the preview screen, and an ambient illuminance of an environment in which the camera is located, and determine a target shooting scene in which the camera is currently located.
In one embodiment, the determining module 404 is further configured to determine, based on the scene dynamic range, an exposure duration ratio between the first image frame and the second image frame in each set of overlapping image sets exposed by the camera; the exposure time length of the first image frame is longer than that of the second image frame, and the exposure time length ratio and the scene dynamic range form positive correlation; and determining the current target shooting scene of the camera based on the exposure time length ratio.
In an embodiment, the determining module 404 is further configured to determine that the target shooting scene where the camera is currently located is the first shooting scene if the exposure duration ratio is less than or equal to the preset duration ratio threshold; the interval length of the scene dynamic range of the first shooting scene is smaller than a preset length threshold value; if the exposure time length ratio is greater than the preset time length ratio threshold value, determining a target shooting scene where the camera is currently located according to the motion state of the preview picture or the ambient illuminance where the camera is currently located.
In one embodiment, the determining module 404 is further configured to obtain an angular velocity of the gyroscope and detect a motion amplitude of the preview screen; if the angular velocity is smaller than or equal to a preset angular velocity threshold value and the motion amplitude is smaller than or equal to a preset motion threshold value, determining that the motion state of the preview picture is a stable state; if the angular velocity is greater than a preset angular velocity threshold value or the motion amplitude is greater than a preset motion threshold value, determining that the motion state of the preview picture is an unstable state.
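The stability test in this embodiment requires both conditions to hold at once; either excess alone marks the preview as unstable. A sketch with placeholder thresholds:

```python
# Preview motion state per the embodiment above: "stable" only when the
# gyroscope angular velocity AND the picture motion amplitude are both
# at or below their preset thresholds (threshold values are placeholders).
def motion_state(angular_velocity, motion_amplitude,
                 angular_threshold, motion_threshold):
    if angular_velocity <= angular_threshold and motion_amplitude <= motion_threshold:
        return "stable"
    return "unstable"

print(motion_state(0.01, 2.0, 0.05, 5.0))  # stable
print(motion_state(0.10, 2.0, 0.05, 5.0))  # unstable (camera shake)
```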
In an embodiment, the determining module 404 is further configured to determine that the target shooting scene in which the camera is currently located is the second shooting scene if the motion state is a steady state; if the motion state is an unstable state, determining a target shooting scene where the camera is currently located according to the ambient illuminance of the environment where the camera is currently located.
In an embodiment, the determining module 404 is further configured to determine that the current target shooting scene of the camera is the third shooting scene if the ambient illuminance is greater than or equal to the preset ambient threshold; if the ambient illuminance is smaller than the preset ambient threshold, determining that the current target shooting scene of the camera is a fourth shooting scene.
In one embodiment, the determining module 404 is further configured to, in response to the photographing operation, obtain, from the set of overlapping images exposed by the camera before the photographing operation, a plurality of sets of initial image pairs as a plurality of sets of image pairs to be fused corresponding to the first photographing scene, if the target photographing scene in which the camera is currently located is the first photographing scene; the interval length of the scene dynamic range of the first shooting scene is smaller than or equal to a preset length threshold value, each group of initial image pairs comprises a first image frame and a second image frame, and the brightness of the first image frame is larger than that of the second image frame.
In one embodiment, the determining module 404 is further configured to: if the target shooting scene in which the camera is currently located is the second shooting scene, in response to the shooting operation, adjust the camera to the first exposure parameter and expose the camera with the first exposure parameter to obtain a fourth image frame; the interval length of the scene dynamic range of the second shooting scene is larger than the preset length threshold, and the motion state of the preview screen is a stable state; the exposure duration of the fourth image frame is longer than the preset duration threshold; determine a plurality of groups of initial image pairs from the overlapped image set exposed by the camera before the shooting operation as image pairs to be fused, and acquire a first image pair as an image pair to be fused; each group of initial image pairs includes a first image frame and a second image frame, the brightness of the first image frame being greater than that of the second image frame; the first image pair includes a third image frame and the fourth image frame, the fourth image frame has the same brightness as the first image frame, and the brightness of the second image frame is greater than that of the third image frame.
In one embodiment, after the shooting operation, the camera exposes with a first exposure parameter after a delay time, and sets of initial image pairs obtained by exposure in the delay time are used as image pairs to be fused.
In one embodiment, the determining module 404 is further configured to, if the target shooting scene in which the camera is currently located is the third shooting scene, adjust the camera to a second exposure parameter in response to the shooting operation; the interval length of the scene dynamic range of the third shooting scene is greater than the preset length threshold, the motion state of the preview picture is an unstable state, and the ambient illuminance of the environment where the camera is located is greater than or equal to a preset ambient threshold; determine a plurality of sets of initial image pairs from the set of overlapping images exposed by the camera before the shooting operation as image pairs to be fused, and acquire a second image pair as an image pair to be fused; each initial image pair includes a first image frame and a second image frame, the brightness of the first image frame being greater than that of the second image frame; the second image pair includes a third image frame and a fifth image frame, the fifth image frame having the same brightness as the first image frame, and the brightness of the second image frame being greater than that of the third image frame; at least one image frame in the second image pair is obtained by exposing the camera with the second exposure parameter after the shooting operation, and the exposure duration of the fifth image frame is less than or equal to the preset duration threshold.
In one embodiment, after the shooting operation, the camera exposes with the second exposure parameter after a delay period, and the sets of initial image pairs obtained by exposure within the delay period are used as image pairs to be fused.
In one embodiment, the determining module 404 is further configured to, if the target shooting scene in which the camera is currently located is the fourth shooting scene, adjust the camera to a third exposure parameter; the interval length of the scene dynamic range of the fourth shooting scene is greater than the preset length threshold, the motion state of the preview picture is an unstable state, and the ambient illuminance of the environment where the camera is located is less than the preset ambient threshold; adjust the camera to a fourth exposure parameter in response to the shooting operation; determine a plurality of sets of third image pairs from the set of overlapping images exposed by the camera with the third exposure parameter before the shooting operation as image pairs to be fused, and acquire a plurality of sets of fourth image pairs as image pairs to be fused; each set of third image pairs includes a second image frame and a fifth image frame, the fifth image frame having the same brightness as the first image frame, the brightness of the first image frame being greater than that of the second image frame, and the exposure duration of the fifth image frame being less than or equal to the preset duration threshold; each set of fourth image pairs includes a third image frame and a fourth image frame, the fourth image frame having the same brightness as the first image frame, the brightness of the second image frame being greater than that of the third image frame; the fourth image frame is obtained by exposing the camera with the fourth exposure parameter after the shooting operation, and the exposure duration of the fourth image frame is greater than the preset duration threshold.
In one embodiment, after the shooting operation, the camera exposes with the fourth exposure parameter after a delay period, and the sets of third image pairs obtained by exposure within the delay period are used as image pairs to be fused.
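The four shooting scenes described in these embodiments amount to a simple decision tree over the dynamic-range interval length, the preview motion state, and the ambient illuminance. A hedged Python sketch follows; the threshold constants are placeholders, since the patent only calls them "preset" thresholds:

```python
# Illustrative placeholder thresholds; the patent does not specify values.
DR_LENGTH_THRESHOLD = 4.0     # preset length threshold for the dynamic-range interval
ILLUMINANCE_THRESHOLD = 50.0  # preset ambient threshold (e.g. in lux)

def classify_scene(dr_interval_length, preview_is_stable, ambient_lux):
    """Return the target shooting scene index (1..4) per the embodiments above."""
    if dr_interval_length <= DR_LENGTH_THRESHOLD:
        return 1  # low dynamic range: fuse pre-shutter pairs only
    if preview_is_stable:
        return 2  # high dynamic range, stable preview: add a long-exposure frame
    if ambient_lux >= ILLUMINANCE_THRESHOLD:
        return 3  # high DR, unstable, bright: add a short-exposure frame
    return 4      # high DR, unstable, dim: use third and fourth exposure parameters
```

Note the ordering: the dynamic range is checked first, then the motion state, and the illuminance only decides between the third and fourth scenes, matching the sequential judgment described in claims 2 to 7.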
The respective modules in the above image generation apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded, in hardware form, in or independent of a processor in the electronic device, or stored, in software form, in a memory in the electronic device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, an electronic device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 5. The electronic device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the electronic device is used to exchange information between the processor and external devices. The communication interface of the electronic device is used for wired or wireless communication with an external terminal; the wireless communication may be implemented through Wi-Fi, a mobile cellular network, NFC (near-field communication), or other technologies. The computer program, when executed by the processor, implements an image generation method. The display unit of the electronic device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the electronic device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the electronic device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of a portion of the structure related to the present application and does not limit the electronic device to which the present application is applied; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image generation method.
Embodiments of the present application also provide a computer program product containing instructions that, when run on a computer, cause the computer to perform the image generation method.
It should be noted that the user information (including, but not limited to, user equipment information and user personal information) and data (including, but not limited to, data for analysis, stored data, and presented data) referred to in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium; when executed, the computer program may include the flows of the embodiments of the methods described above. Any reference to the memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. The non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, and quantum-computing-based data processing logic devices.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, any combination of these technical features that involves no contradiction should be considered within the scope of this specification.
The above embodiments represent only a few implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that those skilled in the art could make various modifications and improvements without departing from the spirit of the present application, and these would all fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (17)

1. An image generation method, comprising:
acquiring at least one of a scene dynamic range of a preview picture, a motion state of the preview picture and ambient illuminance of an environment in which a camera is located;
determining a target shooting scene where the camera is currently located based on at least one of a scene dynamic range of a preview picture, a motion state of the preview picture and ambient illuminance of an environment where the camera is located;
determining a plurality of groups of image pairs to be fused based on the target shooting scene;
and fusing the image pairs to be fused to obtain a target image.
2. The method of claim 1, wherein determining the target shooting scene in which the camera is currently located based on at least one of the scene dynamic range of the preview picture, the motion state of the preview picture, and the ambient illuminance of the environment in which the camera is located comprises:
and sequentially judging the scene dynamic range of the preview picture, the motion state of the preview picture, and the ambient illuminance of the environment where the camera is located, and determining the target shooting scene where the camera is currently located.
3. The method according to claim 2, wherein determining the target shooting scene in which the camera is currently located by sequentially judging the scene dynamic range of the preview picture, the motion state of the preview picture, and the ambient illuminance of the environment in which the camera is located comprises:
determining an exposure duration ratio between a first image frame and a second image frame in each set of overlapping images exposed by the camera, based on the scene dynamic range; the exposure duration of the first image frame is longer than that of the second image frame, and the exposure duration ratio is positively correlated with the scene dynamic range;
and determining a target shooting scene where the camera is currently located based on the exposure duration ratio.
4. The method of claim 3, wherein the determining, based on the exposure duration ratio, a target shooting scene in which the camera is currently located comprises:
if the exposure duration ratio is smaller than or equal to a preset duration ratio threshold value, determining that a target shooting scene where the camera is currently located is a first shooting scene; the interval length of the scene dynamic range of the first shooting scene is smaller than a preset length threshold value;
and if the exposure time length ratio is greater than a preset time length ratio threshold value, determining a target shooting scene where the camera is currently located according to the motion state of the preview picture or the ambient illuminance where the camera is currently located.
5. The method according to claim 4, wherein determining the motion state of the preview picture includes:
acquiring an angular velocity of a gyroscope and detecting a motion amplitude of the preview picture;
if the angular velocity is smaller than or equal to a preset angular velocity threshold value and the motion amplitude is smaller than or equal to a preset motion threshold value, determining that the motion state of the preview picture is a stable state;
and if the angular velocity is greater than the preset angular velocity threshold value or the motion amplitude is greater than the preset motion threshold value, determining that the motion state of the preview picture is an unstable state.
6. The method of claim 4, wherein determining the target shooting scene in which the camera is currently located based on the motion state of the preview picture comprises:
if the motion state is a stable state, determining that a target shooting scene where the camera is currently located is a second shooting scene;
and if the motion state is an unstable state, determining a target shooting scene where the camera is currently located according to the ambient illuminance of the environment where the camera is currently located.
7. The method of claim 4 or 6, wherein determining the target shooting scene in which the camera is currently located based on the ambient illuminance of the environment in which the camera is currently located, comprises:
if the ambient illuminance is greater than or equal to a preset ambient threshold, determining that the current target shooting scene of the camera is a third shooting scene;
and if the ambient illuminance is smaller than a preset ambient threshold, determining that the current target shooting scene of the camera is a fourth shooting scene.
8. The method of claim 1, wherein the determining a plurality of image pairs to be fused based on the target shooting scene comprises:
if the current target shooting scene of the camera is a first shooting scene, responding to shooting operation, and acquiring a plurality of groups of initial image pairs from an overlapped image set exposed by the camera before the shooting operation as a plurality of groups of image pairs to be fused corresponding to the first shooting scene; the interval length of the scene dynamic range of the first shooting scene is smaller than or equal to a preset length threshold value, each group of initial image pairs comprises the first image frame and the second image frame, and the brightness of the first image frame is larger than that of the second image frame.
9. The method of claim 1, wherein the determining a plurality of image pairs to be fused based on the target shooting scene comprises:
if the current target shooting scene of the camera is a second shooting scene, responding to a shooting operation, adjusting the camera to a first exposure parameter, and exposing the camera with the first exposure parameter to obtain a fourth image frame; the interval length of the scene dynamic range of the second shooting scene is larger than a preset length threshold value, and the motion state of the preview picture is a stable state; the exposure duration of the fourth image frame is longer than a preset duration threshold value;
determining a plurality of groups of initial image pairs from the overlapped image sets exposed by the camera before the shooting operation to serve as image pairs to be fused, and acquiring a first image pair to serve as the image pairs to be fused; each set of initial image pairs includes a first image frame and a second image frame, the first image frame having a brightness greater than a brightness of the second image frame, the first image pair including a third image frame and a fourth image frame having the same brightness as the first image frame, the second image frame having a brightness greater than a brightness of the third image frame.
10. The method of claim 9, wherein after the shooting operation, the camera exposes with the first exposure parameter after a delay period, and sets of initial image pairs obtained by exposure within the delay period are used as image pairs to be fused.
11. The method of claim 1, wherein the determining a plurality of image pairs to be fused based on the target shooting scene comprises:
if the current target shooting scene of the camera is a third shooting scene, responding to shooting operation, and adjusting the camera to a second exposure parameter; the interval length of the scene dynamic range of the third shooting scene is larger than a preset length threshold, the motion state of the preview picture is in an unstable state, and the ambient illuminance of the environment where the camera is located is larger than or equal to a preset ambient threshold;
determining a plurality of groups of initial image pairs from the overlapped image sets exposed by the camera before the shooting operation to serve as image pairs to be fused, and acquiring a second image pair to serve as the image pairs to be fused; each initial image pair comprises a first image frame and a second image frame, the brightness of the first image frame is larger than that of the second image frame, the second image pair comprises a third image frame and a fifth image frame with the same brightness as that of the first image frame, the brightness of the second image frame is larger than that of the third image frame, at least one image frame in the second image pair is obtained by exposing the camera with the second exposure parameter after the shooting operation, and the exposure duration of the fifth image frame is smaller than or equal to a preset duration threshold.
12. The method of claim 11, wherein after the shooting operation, the camera exposes with the second exposure parameter after a delay period, and sets of initial image pairs obtained by exposure within the delay period are used as image pairs to be fused.
13. The method of claim 1, wherein the determining a plurality of image pairs to be fused based on the target shooting scene comprises:
if the current target shooting scene of the camera is a fourth shooting scene, adjusting the camera to a third exposure parameter; the interval length of the scene dynamic range of the fourth shooting scene is larger than a preset length threshold, the motion state of the preview picture is in an unstable state, and the ambient illuminance of the environment where the camera is located is smaller than a preset ambient threshold;
adjusting the camera to a fourth exposure parameter in response to a photographing operation;
determining a plurality of groups of third image pairs from the overlapped image set exposed by the camera with the third exposure parameter before the shooting operation as image pairs to be fused, and acquiring a plurality of groups of fourth image pairs as image pairs to be fused; each group of third image pairs comprises a second image frame and a fifth image frame with the same brightness as the first image frame, the brightness of the first image frame is larger than the brightness of the second image frame, the exposure duration of the fifth image frame is smaller than or equal to a preset duration threshold value, each group of fourth image pairs comprises a third image frame and a fourth image frame with the same brightness as the first image frame, the brightness of the second image frame is larger than the brightness of the third image frame, the fourth image frame is obtained by exposing the camera with the fourth exposure parameter after the shooting operation, and the exposure duration of the fourth image frame is larger than the preset duration threshold value.
14. The method of claim 13, wherein after the shooting operation, the camera exposes with the fourth exposure parameter after a delay period, and sets of third image pairs obtained by exposure within the delay period are used as image pairs to be fused.
15. An image generating apparatus, comprising:
the acquisition module is used for acquiring at least one of the scene dynamic range of the preview picture, the motion state of the preview picture and the ambient illuminance of the environment where the camera is located;
the determining module is used for determining a target shooting scene where the camera is currently located based on at least one of a scene dynamic range of a preview picture, a motion state of the preview picture and ambient illuminance of an environment where the camera is located;
the determining module is also used for determining a plurality of groups of image pairs to be fused based on the target shooting scene;
and the image generation module is used for fusing the plurality of groups of image pairs to be fused to obtain a target image.
16. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the image generation method of any of claims 1 to 14.
17. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 14.
CN202210783763.2A 2022-07-05 2022-07-05 Image generation method, device, electronic equipment and computer readable storage medium Pending CN117408896A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210783763.2A CN117408896A (en) 2022-07-05 2022-07-05 Image generation method, device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN117408896A true CN117408896A (en) 2024-01-16

Family

ID=89491250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210783763.2A Pending CN117408896A (en) 2022-07-05 2022-07-05 Image generation method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117408896A (en)

Similar Documents

Publication Publication Date Title
CN110121882B (en) Image processing method and device
CN109218628B (en) Image processing method, image processing device, electronic equipment and storage medium
CN108989700B (en) Imaging control method, imaging control device, electronic device, and computer-readable storage medium
US9451173B2 (en) Electronic device and control method of the same
JP6742732B2 (en) Method for generating HDR image of scene based on trade-off between luminance distribution and motion
CN106060249B (en) Photographing anti-shake method and mobile terminal
CN111684788A (en) Image processing method and device
CN115037884A (en) Unified bracketing method for imaging
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
US20180109711A1 (en) Method and device for overexposed photography
CN108683863B (en) Imaging control method, imaging control device, electronic equipment and readable storage medium
US10939049B2 (en) Sensor auto-configuration
CN112822412B (en) Exposure method, exposure device, electronic equipment and storage medium
CN111405185B (en) Zoom control method and device for camera, electronic equipment and storage medium
CN113439286A (en) Processing image data in a composite image
CN107147851B (en) Photo processing method and device, computer readable storage medium and electronic equipment
Choi et al. A method for fast multi-exposure image fusion
CN109523456B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113438411A (en) Image shooting method, image shooting device, computer equipment and computer readable storage medium
CN108881731B (en) Panoramic shooting method and device and imaging equipment
CN117408896A (en) Image generation method, device, electronic equipment and computer readable storage medium
US20150254856A1 (en) Smart moving object capture methods, devices and digital imaging systems including the same
CN113259594A (en) Image processing method and device, computer readable storage medium and terminal
US20160323490A1 (en) Extensible, automatically-selected computational photography scenarios
CN111630839B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination