CN117710264B - Dynamic range calibration method of image and electronic equipment - Google Patents

Dynamic range calibration method of image and electronic equipment

Info

Publication number
CN117710264B
CN117710264B
Authority
CN
China
Prior art keywords
image
pyramid
fusion
weight map
dynamic range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310953742.5A
Other languages
Chinese (zh)
Other versions
CN117710264A (en
Inventor
孙琪
黄庭刚
冯天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310953742.5A
Publication of CN117710264A
Application granted
Publication of CN117710264B
Legal status: Active

Classifications

    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20024 Filtering details
    • G06T2207/20208 High dynamic range [HDR] image processing
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application provides a dynamic range calibration method of an image and an electronic device. The method includes: obtaining a first image and a second image, wherein the first image is a high dynamic range gain image, the second image is a YUV format image, the second image is generated according to a standard raw image, and the standard raw image is a standard raw image among the raw images used for generating the first image; and performing dynamic range calibration on the first image using the second image to obtain a target image, wherein the target image is a high dynamic range image. The embodiment of the application can reduce the visual effect difference between the HDR image shot by the electronic device and the real scene, and improve the user experience.

Description

Dynamic range calibration method of image and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a dynamic range calibration method for an image and an electronic device.
Background
The dynamic range of an image refers to the gray scale ratio of the brightest portion to the darkest portion of the image. In other words, the dynamic range of an image is a measure that describes the range of brightness levels that can be presented in the image. In general, the higher the dynamic range of an image, the more detail the image can present and the closer the image is to the real environment. To this end, electronic devices (e.g., cell phones) provide the user with the ability to capture high dynamic range (HDR) images, so that the images presented to the user can include more image details and better reflect the visual effect of the real scene.
When a user captures an image, the electronic device typically displays a preview video in an interface so that the user can complete image capture based on the preview images. However, when the electronic device captures an image, there is a large difference in visual effect between the HDR image provided to the user and the images in the preview video (hereinafter also referred to simply as preview images) viewed by the user, which affects the user experience.
Disclosure of Invention
The application provides an image dynamic range calibration method and an electronic device, which can reduce the visual effect difference between an HDR image shot by the electronic device and the preview image viewed by the user, and improve the user experience.
In a first aspect, an embodiment of the present application provides a method for calibrating the dynamic range of an image, including: obtaining a first image and a second image, wherein the first image is a high dynamic range gain image, the second image is a YUV format image, the second image is generated according to a standard raw image, and the standard raw image is a standard raw image among the raw images used for generating the first image; and performing dynamic range calibration on the first image using the second image to obtain a target image, wherein the target image is a high dynamic range image. In the method, the dynamic range of the first image is calibrated using the second image, which is in YUV format and generated from the standard raw image, and the standard raw image is a standard raw image among the multiple frames of raw images used for generating the first image; therefore, the image quality and visual effect of the generated HDR image are improved, the visual effect difference between the HDR image and the preview image viewed by the user is reduced, and the user experience is improved.
In one possible implementation manner, the performing dynamic range calibration on the first image using the second image to obtain a target image includes: generating a first weight map of the first image and a second weight map of the second image; constructing a first image pyramid of the first image, a second image pyramid of the second image, a third image pyramid of the first weight map and a fourth image pyramid of the second weight map; generating a target image according to the first image pyramid, the second image pyramid, the third image pyramid and the fourth image pyramid.
In one possible implementation manner, the generating the first weight map of the first image and the second weight map of the second image includes: determining Gaussian filter parameters of the first image and Gaussian filter parameters of the second image according to the exposure; updating a first lookup table of the first image and a second lookup table of the second image according to the Gaussian filtering parameters of the first image and the Gaussian filtering parameters of the second image; generating a first weight map of the first image according to a first lookup table of the first image, and generating a second weight map of the second image according to a second lookup table of the second image.
In one possible implementation manner, before constructing the third image pyramid of the first weight map and the fourth image pyramid of the second weight map, the method further includes: and performing Gaussian filtering on the first weight map and the second weight map respectively.
In one possible implementation, the gaussian filtering of the first weight map includes: filling the first weight map to obtain a filled first weight map; performing Gaussian filtering on the filled first weight graph according to the Gaussian filtering parameters of the first image; and/or, performing gaussian filtering on the second weight map, including: filling the second weight map to obtain a filled second weight map; and carrying out Gaussian filtering on the filled second weight graph according to the Gaussian filtering parameters of the second image.
In one possible implementation, generating the target image from the first image pyramid, the second image pyramid, the third image pyramid, and the fourth image pyramid includes: performing fusion processing on the first image pyramid, the second image pyramid, the third image pyramid and the fourth image pyramid to obtain a first fusion pyramid; performing fusion processing on the third image pyramid and the fourth image pyramid to obtain a second fusion pyramid; and reconstructing an image according to the first fusion pyramid and the second fusion pyramid to obtain the target image.
In one possible implementation manner, the fusing of the first image pyramid, the second image pyramid, the third image pyramid, and the fourth image pyramid to obtain a first fusion pyramid includes: performing fusion processing on the first image pyramid and the third image pyramid to obtain a third fusion pyramid; performing fusion processing on the second image pyramid and the fourth image pyramid to obtain a fourth fusion pyramid; and performing fusion processing on the third fusion pyramid and the fourth fusion pyramid to obtain the first fusion pyramid.
In one possible implementation manner, the fusion processing of the first image pyramid and the third image pyramid to obtain a third fusion pyramid includes: multiplying the pixel values of the corresponding points of the first image pyramid and the third image pyramid, and using the product as the pixel value of the corresponding point of the third fusion pyramid.
In one possible implementation manner, the fusion processing of the third fusion pyramid and the fourth fusion pyramid to obtain the first fusion pyramid includes: adding the pixel values of the corresponding points of the third fusion pyramid and the fourth fusion pyramid, and using the obtained sum as the pixel value of the corresponding point of the first fusion pyramid.
In one possible implementation, the obtaining the first image and the second image includes: receiving a first operation of a user; responding to the first operation, and driving a camera to shoot a plurality of frames of raw images; the exposure amounts of the multi-frame raw images are different; generating the first image according to the multi-frame raw graph; and generating the second image according to a standard raw image in the multi-frame raw image.
In one possible implementation manner, the generating the first image according to the multi-frame raw graph includes: generating an LDR image according to the multi-frame raw image; and performing dynamic range expansion on the LDR image to obtain the first image.
In a second aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory; wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the processor, cause the electronic device to perform the method of any of the first aspects.
In a third aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, which when run on a computer, causes the computer to perform the method of any of the first aspects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a schematic software structure of an electronic device according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for calibrating dynamic range of an image according to an embodiment of the present application;
FIG. 4 is an interface diagram of a method for calibrating dynamic range of an image according to an embodiment of the present application;
FIG. 5 is a flowchart of another method for calibrating dynamic range of an image according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 7 is a flowchart of a dynamic range calibration method for an image based on the structure shown in FIG. 6 according to an embodiment of the present application;
fig. 8A is a schematic image diagram of a preview video according to an embodiment of the present application;
FIG. 8B is a schematic diagram of an image without dynamic range calibration according to an embodiment of the present application;
Fig. 8C is a schematic image diagram of dynamic range calibration according to an embodiment of the present application.
Detailed Description
The terminology used in the description of the embodiments of the application herein is for the purpose of describing particular embodiments of the application only and is not intended to be limiting of the application.
The dynamic range of an image refers to the gray scale ratio of the brightest portion to the darkest portion of the image. In other words, the dynamic range of an image is a measure that describes the range of brightness levels that can be presented in the image. In general, the higher the dynamic range of an image, the more detail the image can present and the closer the image is to the real environment. To this end, electronic devices (e.g., cell phones) provide the user with the ability to capture high dynamic range (HDR) images, so that the images presented to the user can include more image details and better reflect the visual effect of the real scene.
In one embodiment, when the electronic device captures an HDR image, the electronic device drives the camera to capture multiple frames of raw images with different exposure degrees, generates a low dynamic range (LDR) image according to the multiple frames of raw images, performs processing such as dynamic range expansion on the LDR image to obtain the HDR image, and provides the HDR image to the user as the captured image. However, there is a large difference in visual effect between the HDR image generated by the electronic device in this way and the images in the preview video (hereinafter also referred to simply as preview images) watched by the user, and user feedback such as "the brightness of the actual image differs greatly from the preview", "what you see is not what you get", and "the image changes noticeably after capture" may occur, which affects the user experience.
Therefore, the embodiment of the application provides a dynamic range calibration method for an image, which can calibrate the dynamic range of the image in the process of generating the HDR image, reduce the visual effect difference between the HDR image and the preview image, and improve the user experience.
First, terms appearing in the embodiments of the present application will be described by way of example.
Dynamic range of image: refers to the gray scale ratio of the brightest portion to the darkest portion of the image. In other words, the dynamic range of an image is a measure that describes the level of brightness that can be present in the image.
YUV: is a color encoding format in which Y represents luminance (Luma), i.e., the gray value, and U and V represent chrominance (Chroma); the chrominance components describe the color and saturation of a given pixel.
Image pyramid: an image pyramid of an image is a series of images derived from that image, whose resolutions decrease progressively from bottom to top in the shape of a pyramid. An image pyramid can be obtained by successive downsampling; the higher the level (the closer to the top of the pyramid), the smaller the image and the lower the resolution.
Gaussian pyramid: for an image, the image pyramid of the image generated by gaussian filtering and downsampling is referred to as a gaussian pyramid. In some examples, the bottom layer of the gaussian pyramid of an image is the image, and each layer up reduces the resolution of the image by gaussian filtering and downsampling.
Laplacian pyramid: the Laplacian pyramid exists to enable image reconstruction from the Gaussian pyramid. The Laplacian pyramid of an image is generated on the basis of the Gaussian pyramid of the image; each layer is the difference image between a layer of the Gaussian pyramid and the upsampled (expanded) version of the layer above it.
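For illustration, the two pyramid types described above can be built with a short sketch; the layer count and the function names below are assumptions, and the sketch relies on OpenCV's pyrDown/pyrUp, which perform the Gaussian filtering plus resampling described above:

    # Minimal sketch of Gaussian / Laplacian pyramid construction (Python + OpenCV).
    # The layer count `levels` and the function names are illustrative assumptions.
    import cv2
    import numpy as np

    def build_gaussian_pyramid(img, levels=4):
        pyr = [img.astype(np.float32)]
        for _ in range(levels - 1):
            pyr.append(cv2.pyrDown(pyr[-1]))  # Gaussian blur + 2x downsample
        return pyr

    def build_laplacian_pyramid(img, levels=4):
        gauss = build_gaussian_pyramid(img, levels)
        lap = []
        for i in range(levels - 1):
            up = cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
            lap.append(gauss[i] - up)  # the difference image described above
        lap.append(gauss[-1])  # top layer: the coarsest Gaussian level
        return lap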
Raw image: a raw image is an unprocessed and uncompressed photo format; in other words, it is the raw data produced when the complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor in the camera converts the captured light signal into a digital signal. The raw image records the original information of the image sensor, and can also record some metadata generated by the camera during shooting, such as the sensitivity (ISO) setting, shutter speed, aperture value, white balance, and the like.
The electronic device of the embodiment of the application is an electronic device comprising a camera, such as a mobile phone, a tablet personal computer (PAD), a smart wearable device, such as a smart watch, and the like.
Fig. 1 shows a schematic diagram of an electronic device, and as shown in fig. 1, the electronic device 100 may include a processor 110, a memory 120, a camera 130, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
Memory 120 may be used to store computer-executable program code that includes instructions. The memory 120 may include a stored program area and a stored data area. The stored program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. The stored data area may store data created during use of the electronic device 100 (e.g., audio data, a phonebook, etc.), and so on. In addition, memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or universal flash storage (universal flash storage, UFS), and the like. The processor 110 performs the various functional applications and data processing of the electronic device 100 by executing instructions stored in the memory 120 and/or instructions stored in a memory provided in the processor.
The camera 130 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 130, N being a positive integer greater than 1.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the invention takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
Fig. 2 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, which are, from top to bottom: an application layer, an application framework layer (also called a system framework layer), a system library and Android runtime layer, a hardware abstraction layer (hardware abstraction layer, HAL), and a kernel layer.
The application layer may include several applications (hereinafter simply referred to as applications). In an embodiment of the present application, the application layer may include: image capturing applications, such as camera applications.
The application framework layer provides an application programming interface (Application Programming Interface, API) and programming framework for applications of the application layer, including various components and services to support the android development of the developer. In the embodiment of the application, the application framework layer can comprise a camera service and the like.
The system library and Android runtime layer includes the system library and the Android runtime (Android Runtime). The system library may include a plurality of functional modules, such as a surface manager, libc, etc. The Android runtime is responsible for the scheduling and management of the Android system, and specifically includes a core library and a virtual machine. The core library includes two parts: one part is the functions that need to be called by the java language, and the other part is the core library of Android; the virtual machine is used to run Android applications developed in the java language.
The HAL layer is an interface layer between the operating system kernel and the hardware circuitry. The HAL layer includes, but is not limited to: a camera hardware abstraction layer (camera HAL), which is used to process the image stream.
The kernel layer is a layer between hardware and software. The kernel layer may include: a camera driver, etc. The camera driver is used to drive the camera.
Hereinafter, a dynamic range calibration method for an image according to an embodiment of the present application will be described with reference to the above-described configuration of the electronic device shown in fig. 1 and 2.
Fig. 3 is a schematic flow chart of a dynamic range calibration method for an image according to an embodiment of the present application, as shown in fig. 3, the method may include:
Step 301: obtaining a first image and a second image; the first image is a high dynamic range gain image; the second image is an image in YUV format; the second image is generated according to the standard raw image; and the standard raw image is a standard raw image among the raw images used for generating the first image.
Gain image means: an image of the gains of the pixels; in other words, the pixel value of each pixel in the gain image is the gain of that pixel. The gain of a pixel may be a multiplicative factor or an additive increment applied to the pixel, or the like. A gain image of high dynamic range means that the gain image has a high dynamic range.
The first image may be a single channel image and the single channel pixel value of each pixel may also be referred to as the gray value of that pixel.
Optionally, the electronic device may drive the camera to shoot multiple frames of raw images under different exposure amounts, generate an LDR image according to the multiple frames of raw images, and perform dynamic range expansion on the LDR image to obtain the first image.
The method for performing dynamic range expansion on the LDR image may be implemented by using a related dynamic range expansion algorithm, for example, an irradiance reconstruction method or a direct fusion method, which is not limited in the embodiment of the present application. Alternatively, the dynamic range expansion of the LDR image can be implemented through a neural network.
When the electronic device drives the camera to capture multiple frames of raw images under different exposure amounts, an exposure estimate can be calculated according to the brightness of the environment in which the electronic device is located, and this estimate is used as the standard exposure. Based on the standard exposure, the exposure is respectively increased and decreased to obtain multiple exposure amounts including the standard exposure, and one frame of image is captured under each of these exposure amounts, so that the multiple frames of raw images are obtained, as sketched below. In one example, the electronic device may capture the multiple frames of raw images using a related step-exposure (exposure bracketing) method.
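For illustration, a minimal sketch of such an exposure bracket is given below; the EV step values, the frame count, and the function name are assumptions rather than values specified in this embodiment:

    # Illustrative sketch of building an exposure bracket around the estimated
    # standard exposure (EV0); the steps and function name are assumptions.
    def exposure_bracket(standard_exposure, steps=(-2, -1, 0, 1, 2)):
        # Each EV step doubles (or halves) the exposure amount.
        return [standard_exposure * (2.0 ** s) for s in steps]

    # e.g. exposure_bracket(1.0) -> [0.25, 0.5, 1.0, 2.0, 4.0]; the frame
    # captured at step 0 is the standard (normalraw) frame.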
In the embodiment of the present application, the raw image captured under the standard exposure is referred to as the standard raw image. In some examples, the standard raw image described above may also be referred to as the normalraw image.
In some examples, the standard exposure described above may also be referred to as EV0. EV0 refers to the exposure at which a subject achieves correct exposure when photographed at ISO 100, aperture 1.0, and shutter 1 second, or an equivalent combination thereof.
Step 302: and carrying out dynamic range calibration on the first image by using the second image to obtain a target image, wherein the target image is a high dynamic range image.
The second image is an image in YUV format generated from the standard raw image among the multiple raw images used for generating the first image. Such an image is generally closer, in visual effect, to the images in the preview video that the image capturing application provides to the user, and from the user's perspective its visual effect differs less from the real scene. Therefore, using the second image to perform dynamic range calibration on the first image makes the target image closer to the video images previewed by the user, and improves the image quality and visual effect of the HDR image.
It should be noted that if, in some scenes, the images in the preview video provided to the user by the image capturing application are generated not from the second image but from other images, the second image may be generalized to another image whose visual effect is close to that of the images previewed by the user.
In the method, the dynamic range of the first image is calibrated using the second image, which is in YUV format and generated from the standard raw image, and the standard raw image is a standard raw image among the multiple frames of raw images used for generating the first image; therefore, the image quality and visual effect of the generated HDR image are improved, the visual effect difference between the HDR image and the preview image is reduced, and the user experience is improved.
FIG. 4 is a diagram of a user interface implementation of a dynamic range calibration method for an image according to an embodiment of the present application. As shown in fig. 4, may include:
the user opens the camera application and enters an image preview interface, such as the interface 31 shown. The image preview interface may display a preview video, and includes an image capture control 32 for the user to trigger image capture.
The user clicks the image capturing control 32, and accordingly, the camera application detects a selection operation of the user on the image capturing control 32, that is, detects an image capturing triggering operation of the user, and triggers an image capturing process, so that an HDR image is generated and stored.
It should be noted that, the method for calibrating the dynamic range of the image provided by the embodiment of the present application may be executed in the image capturing process triggered by the camera application, so that the generated HDR image is an HDR image subjected to dynamic range calibration, and the visual effect of the HDR image is improved.
It should be noted that the implementation of the user interface shown in fig. 4 is merely an example, and the method for calibrating the dynamic range of the image provided in the embodiment of the present application may also be applicable to other image shooting scenes, for example, shooting operations triggered by a user in a video shooting process or shooting operations triggered automatically.
Fig. 5 is a schematic flow chart of a method for calibrating dynamic range of an image according to an embodiment of the present application, where the method may be performed by an electronic device, and as shown in fig. 5, the method may include:
Step 501: an image capturing trigger operation is detected.
In one example, in the scenario illustrated in fig. 4, a user may click on the image capture control 32 at an image preview interface provided by the camera application, and accordingly, the electronic device detects an image capture trigger operation.
Step 502: responding to an image shooting triggering operation, and driving a camera to shoot a plurality of frames of raw images; the exposure amounts of the multiple frame raw charts are different.
Step 503: and generating an LDR image according to the multi-frame raw image, and performing dynamic range expansion on the LDR image to obtain a gain image drcy with a high dynamic range.
For convenience of explanation, the gain image of the high dynamic range will be simply referred to as a gain image hereinafter.
The high dynamic range gain image drcy described above is also referred to as the first image described above.
Step 504: and generating a reference image ispy in a YUV format according to a standard raw diagram in the multi-frame raw diagram.
Step 505: based on the exposure, the gaussian filter parameters of the gain image drcy and the gaussian filter parameters of the reference image ispy are determined.
Optionally, the electronic device may preset the gaussian filter parameters of the gain image drcy and the gaussian filter parameters of the reference image ispy corresponding to different exposure amounts, and in this step, the gaussian filter parameters of the corresponding gain image drcy and the gaussian filter parameters of the reference image ispy may be found according to the exposure amounts, so as to obtain the gaussian filter parameters of the gain image drcy and the gaussian filter parameters of the reference image ispy.
Alternatively, the Gaussian filter formula may be: g(i) = α · exp(-(i - m)² / (2σ²)). The Gaussian filter parameters described above may include α, m, and σ in the formula, where α represents the coefficient, m represents the mean, and σ represents the standard deviation. Accordingly, in the embodiment of the present application, the Gaussian filter parameters of the gain image drcy may be recorded as: drc-σ, drc-m, drc-α, and the Gaussian filter parameters of the reference image ispy may be recorded as: isp-σ, isp-m, isp-α.
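For illustration, the weighting function defined by these parameters can be sketched as follows; the function name is an assumption:

    # Sketch of the Gaussian weighting function g(i) = α · exp(-(i - m)² / (2σ²));
    # the parameters mirror drc-α/drc-m/drc-σ and isp-α/isp-m/isp-σ above.
    import numpy as np

    def gaussian_weight(i, alpha, m, sigma):
        return alpha * np.exp(-((i - m) ** 2) / (2.0 * sigma ** 2))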
Step 506: the weight map drcweight1 of the gain image drcy and the weight map ispyweight1 of the reference image ispy are generated from the gaussian filter parameters of the gain image drcy and the gaussian filter parameters of the reference image ispy.
Optionally, the step specifically may include:
updating the LUT table drcLUT of the gain image drcy and the LUT table ispLUT of the reference image ispy according to the Gaussian filter parameters of the gain image drcy and the Gaussian filter parameters of the reference image ispy; and
converting the gain image drcy into the weight map drcweight1 according to the drcLUT table, and converting the reference image ispy into the weight map ispweight1 according to the ispLUT table.
Optionally, the LUT table of the gain image drcy and the LUT table of the reference image ispy may be preset in the electronic device. In the embodiment of the present application, the LUT table of the gain image drcy is denoted as the drcLUT table, and the LUT table of the reference image ispy is denoted as the ispLUT table; the embodiment of the present application does not limit how the drcLUT table and the ispLUT table are preset in the electronic device. The process of updating the drcLUT table and the ispLUT table in this step is as follows:
updating the ispLUT table of the reference image ispy according to the parameter isp-m of the reference image ispy; specifically, the mapping relation of the gray values smaller than isp-m in the ispLUT table is updated so that each gray value smaller than isp-m is mapped to the mapping value of isp-m, that is, when i < isp-m, the mapping relation of i is updated to: ispLut[i] = ispLut[isp-m];
updating the drcLUT table of the gain image drcy according to the parameter drc-m of the gain image drcy and the updated ispLUT table of the reference image ispy; specifically, the mapping relation of the gray values smaller than drc-m in the drcLUT table is updated so that each such gray value is mapped to the larger of its mapping values in the drcLUT table and the ispLUT table, that is, when i < drc-m, the mapping relation of i is updated to: drcLut[i] = max(drcLut[i], ispLut[i]);
setting the mapping value of each gray value in the updated ispLUT table of the reference image ispy to the difference between the maximum bit width and the corresponding mapping value in the drcLUT table, that is, the mapping relation of i is updated to: ispLut[i] = MaxBitWidth - drcLut[i].
Alternatively, the gray value i may be 0 to 1024.
Optionally, for the gain image, the mapping value found from the drcLUT table by the gray value of each pixel (x, y) is used as the weight value of the pixel (x, y), that is: DRCWEIGHT[x, y] = drcLut[drc[x, y]], and the weight map drcweight1 is generated according to the weight values of the pixels; for the reference image, the mapping value found from the ispLUT table by the gray value of each pixel (x, y) is used as the weight value of the pixel (x, y), that is: ISPWEIGHT[x, y] = ispLut[isp[x, y]], and the weight map ispweight1 is generated according to the weight values of the pixels.
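For illustration, a minimal sketch of the LUT update and the weight-map lookup described above is given below; the LUT length, the MaxBitWidth value, and the function names are assumptions, not values taken from this embodiment:

    # Sketch of the LUT update of step 506 and the per-pixel weight lookup.
    import numpy as np

    MAX_BIT_WIDTH = 1024  # assumed maximum bit width

    def update_luts(drc_lut, isp_lut, drc_m, isp_m):
        drc_lut, isp_lut = drc_lut.copy(), isp_lut.copy()
        # 1) Clamp the low end of the reference-image LUT to its value at isp-m.
        isp_lut[:isp_m] = isp_lut[isp_m]
        # 2) Below drc-m, keep the larger of the two mapping values.
        drc_lut[:drc_m] = np.maximum(drc_lut[:drc_m], isp_lut[:drc_m])
        # 3) The reference LUT becomes the complement of the gain LUT.
        isp_lut = MAX_BIT_WIDTH - drc_lut
        return drc_lut, isp_lut

    def to_weight_map(image, lut):
        # Each pixel's gray value indexes the LUT to give that pixel's weight.
        return lut[image]

    # drcweight1 = to_weight_map(drcy, drc_lut)
    # ispweight1 = to_weight_map(ispy, isp_lut)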
Step 507: Gaussian filtering is carried out on the weight map drcweight1 of the gain image drcy according to the Gaussian filtering parameters of the gain image drcy to obtain a weight map drcweight2; and Gaussian filtering is carried out on the weight map ispweight1 of the reference image ispy according to the Gaussian filtering parameters of the reference image ispy to obtain a weight map ispweight2.
Optionally, before the gaussian filtering is performed on the weight map drcweight1 and the weight map ispweight1, the filling (padding) may be performed on the weight map drcweight1 and the weight map ispweight1, so that the gaussian filtering can be performed on the pixel points at the edge positions of the weight map drcweight1 and the weight map ispweight 1.
This step is an optional step; performing Gaussian filtering on the weight maps can improve the accuracy of the weight maps, as sketched below.
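For illustration, the padding and Gaussian filtering of a weight map can be sketched as follows; the kernel size, pad width, and padding mode are assumptions:

    # Sketch of step 507: pad a weight map, Gaussian-filter it, and crop back.
    import cv2
    import numpy as np

    def filter_weight_map(weight, sigma, ksize=5):
        pad = ksize // 2
        # Padding lets the edge pixels be filtered with a full neighborhood.
        padded = np.pad(weight.astype(np.float32), pad, mode='reflect')
        blurred = cv2.GaussianBlur(padded, (ksize, ksize), sigma)
        return blurred[pad:-pad, pad:-pad]  # crop back to the original size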
Step 508: the Gaussian pyramid T1 of the weight map drcweight2 is constructed, the Laplacian pyramid L1 of the gain image drcy is constructed, the Gaussian pyramid T2 of the weight map ispweight2 is constructed, and the Laplacian pyramid L2 of the reference image ispy is constructed.
The specific method for constructing the Gaussian pyramids and the Laplacian pyramids in this step is not limited. It should be noted that the constructed Gaussian pyramid T1, Gaussian pyramid T2, Laplacian pyramid L1, and Laplacian pyramid L2 should have the same number of layers and corresponding layer sizes, so that the fusion processing in the subsequent steps can be completed, as sketched below.
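For illustration, step 508 can be sketched by reusing the pyramid builders from the earlier sketch; the layer count and the variable names are assumptions:

    # Sketch of step 508: the four pyramids share the same (assumed) layer count.
    LEVELS = 4  # assumption

    T1 = build_gaussian_pyramid(drcweight2, LEVELS)   # weights of the gain image
    L1 = build_laplacian_pyramid(drcy, LEVELS)        # gain image detail levels
    T2 = build_gaussian_pyramid(ispweight2, LEVELS)   # weights of the reference image
    L2 = build_laplacian_pyramid(ispy, LEVELS)        # reference image detail levels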
Step 509: carrying out fusion processing on the Gaussian pyramid T1, the Gaussian pyramid T2, the Laplacian pyramid L1 and the Laplacian pyramid L2 to obtain a fusion pyramid F3, and carrying out fusion processing on the Gaussian pyramid T1 and the Gaussian pyramid T2 to obtain a weight pyramid W1.
Optionally, the step may include:
Carrying out fusion treatment on the Gaussian pyramid T1 and the Laplacian pyramid L1 to obtain a primary fusion pyramid F1;
carrying out fusion treatment on the Gaussian pyramid T2 and the Laplacian pyramid L2 to obtain a primary fusion pyramid F2;
And carrying out fusion treatment on the primary fusion pyramid F1 and the primary fusion pyramid F2 to obtain a secondary fusion pyramid F3.
Optionally, the fusing process is performed on the gaussian pyramid T1 and the laplacian pyramid L1, which specifically may include:
The pixel values of the corresponding points in the Gaussian pyramid T1 and the Laplacian pyramid L1 are multiplied, and the product is used as the pixel value of the corresponding point in the primary fusion pyramid F1.
Optionally, the fusing process is performed on the gaussian pyramid T2 and the laplacian pyramid L2, which specifically may include:
The pixel values of the corresponding points in the Gaussian pyramid T2 and the Laplacian pyramid L2 are multiplied, and the product is used as the pixel value of the corresponding point in the primary fusion pyramid F2.
Optionally, the primary fusion pyramid F1 and the primary fusion pyramid F2 are fused to obtain a fusion pyramid F3, which specifically may include:
the pixel values of the corresponding points of the primary fusion pyramid F1 and the primary fusion pyramid F2 are added, and the obtained sum is used as the pixel value of the corresponding point in the fusion pyramid F3.
Optionally, the fusing process is performed on the gaussian pyramid T1 and the gaussian pyramid T2 to obtain a weight pyramid W1, which may specifically include:
and adding the pixel values of the corresponding points of the Gaussian pyramid T1 and the Gaussian pyramid T2, and taking the obtained sum as the pixel value of the corresponding point in the weight pyramid W1.
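For illustration, the fusion operations of step 509 reduce to per-level, per-pixel multiplications and additions; continuing the earlier sketch (with the variable names assumed there):

    # Sketch of step 509: elementwise fusion at every pyramid level.
    F1 = [t * l for t, l in zip(T1, L1)]   # weighted gain-image details
    F2 = [t * l for t, l in zip(T2, L2)]   # weighted reference-image details
    F3 = [a + b for a, b in zip(F1, F2)]   # fusion pyramid
    W1 = [a + b for a, b in zip(T1, T2)]   # weight pyramid (normalizer)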
Step 510: reconstructing a Laplacian pyramid according to the fusion pyramid F3 and the weight pyramid W1 to obtain a target image.
Optionally, the step may include:
Dividing the pixel values of the corresponding points of the fusion pyramid F3 by the pixel values of the corresponding points of the weight pyramid W1, and taking the obtained quotient as the pixel value of the corresponding point in the target pyramid;
And carrying out layer-by-layer upsampling on the target pyramid to reconstruct the target image, as sketched below.
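For illustration, step 510 can be sketched as follows; the small constant eps that guards against division by zero is an assumption, not something stated in this embodiment:

    # Sketch of step 510: normalize by the weight pyramid, then collapse the
    # resulting pyramid from the top down (upsample and add).
    eps = 1e-6
    target_pyr = [f / (w + eps) for f, w in zip(F3, W1)]

    img = target_pyr[-1]
    for level in reversed(target_pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=level.shape[1::-1]) + level
    target_image = img  # the calibrated HDR result (the target image)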
Alternatively, the target image may be used as the HDR image obtained by shooting with the camera application.
In the method shown in fig. 5, the dynamic range of the gain image is calibrated using the reference image ispy in YUV format generated from the standard raw image, so as to generate the target image; the standard raw image is a standard raw image among the multiple frames of raw images used for generating the gain image, so that the image quality and visual effect of the generated HDR image are improved.
Fig. 6 is another schematic structural diagram of an electronic device according to an embodiment of the present application, including: camera applications, camera HAL, camera drive and camera.
Wherein, camera HAL can include: the device comprises a control module, an LDR image generation module, a dynamic range expansion module, a format conversion module and a dynamic range calibration module.
The control module can be used for communication with the camera application and the camera driver, and is also used for determining the exposure amounts at which the camera captures the raw images, and the like.
The LDR image generation module is used for generating an LDR image according to the multi-frame raw image.
The dynamic range expansion module is used for carrying out dynamic range expansion on the LDR image to obtain a gain image with a high dynamic range.
The format conversion module is used for converting the raw image into a reference image in YUV format.
The dynamic range calibration module is used for carrying out dynamic range calibration on the gain image by using the reference image to obtain a target image.
Alternatively, when the electronic device is implemented with an Android system, for example, as shown in fig. 2, the camera application may be located at the application layer of the electronic device, the camera HAL may be located at the HAL layer of the electronic device, the camera driver may be located at the kernel layer of the electronic device, and the camera may be located at the hardware layer of the electronic device. It should be noted that the foregoing is merely an example; the modules and the layers at which they are located may be adaptively changed in different operating systems, which is not limited by the embodiments of the present application.
Fig. 7 is a flowchart of a dynamic range calibration method based on the image of the structure shown in fig. 6 according to an embodiment of the present application. As shown in fig. 7, the method may include:
step 701: the camera application detects a photographing trigger operation.
Step 702: the camera application sends a photographing request to a control module in the camera HAL.
Step 703: the control module sends a photographing instruction to the camera driver.
The photographing indication may indicate the exposure amount required to photograph the raw image.
Step 704: the camera driver receives the photographing instruction, drives the camera to photograph the raw image according to the exposure, obtains a multi-frame raw image, and sends the multi-frame raw image to the control module.
Step 705: and the control module sends the multi-frame raw graph to the LDR image generation module.
Step 706: the LDR image generation module receives the multi-frame raw image, generates an LDR image according to the multi-frame raw image, and sends the LDR image to the dynamic range expansion module.
Step 707: the dynamic range expansion module dynamically expands the LDR image to obtain an HDR gain image, and the HDR gain image is sent to the dynamic range calibration module.
Step 708: the control module sends the normalraw image in the multi-frame raw images to the format conversion module.
Step 709: the format conversion module converts the normalraw image into a reference image in YUV format and sends the reference image to the dynamic range calibration module.
The order of execution between steps 705 to 707 and steps 708 to 709 is not limited.
Step 710: the dynamic range calibration module performs dynamic range calibration on the gain image by using the reference image to obtain an HDR image.
The HDR image obtained here is also the target image described above.
For a specific implementation of the dynamic range calibration module performing dynamic range calibration on the gain image using the reference image, reference may be made to steps 505 to 510 described above, which are not repeated herein.
Step 711: the dynamic range calibration module sends the target image to the control module.
Step 712: the control module sends the HDR image to the camera application, and the camera application saves the HDR image as a photographed image.
It should be noted that, if an interface for the camera application to access the dynamic range calibration module is provided in the camera HAL, the dynamic range calibration module may also directly send the target image to the camera application, without forwarding by the control module.
The image shown in fig. 8A is an image in the preview video displayed by the image capturing application for the user; the image shown in fig. 8B is an example of an HDR image obtained without dynamic range calibration; and the image shown in fig. 8C is an HDR image obtained with dynamic range calibration. Compared with the HDR image obtained without dynamic range calibration, the HDR image obtained with dynamic range calibration does not suffer from the image-darkening problem, and its visual effect is closer to that of the image in the preview video, which alleviates the problem of a large brightness difference between the HDR image and the images in the preview video.
The embodiment of the application also provides electronic equipment which comprises a processor and a memory, wherein the processor is used for realizing the method provided by the embodiment of the application.
The embodiment of the application also provides a computer readable storage medium, in which a computer program is stored, which when run on a computer, causes the computer to execute the method provided by the embodiment of the application.
The present application also provides a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the method provided by the embodiments of the present application.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, A and B exist together, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of these items, including any combination of single or plural items. For example, at least one of a, b and c may represent: a; b; c; a and b; a and c; b and c; or a, b and c, where a, b and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In several embodiments provided by the present application, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially, or in a part contributing to the prior art, or in a part of the technical solution, in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk.
The foregoing is merely exemplary embodiments of the present application, and any changes or substitutions that may be easily contemplated by those skilled in the art within the scope of the present application should be included in the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for calibrating dynamic range of an image, comprising:
obtaining a first image and a second image; the first image is a gain image with a high dynamic range, and is obtained by performing dynamic range expansion on an LDR image generated according to a multi-frame raw image; the second image is a YUV format image; the second image is generated according to a standard raw image; the standard raw image is a standard raw image among the raw images used for generating the first image;
generating a first weight map of the first image and a second weight map of the second image;
constructing a first image pyramid of the first image, a second image pyramid of the second image, a third image pyramid of the first weight map and a fourth image pyramid of the second weight map;
Generating a target image according to the first image pyramid, the second image pyramid, the third image pyramid and the fourth image pyramid, wherein the target image is a high dynamic range image;
wherein the generating a target image according to the first image pyramid, the second image pyramid, the third image pyramid, and the fourth image pyramid includes:
performing fusion processing on the first image pyramid, the second image pyramid, the third image pyramid and the fourth image pyramid to obtain a first fusion pyramid;
performing fusion processing on the third image pyramid and the fourth image pyramid to obtain a second fusion pyramid;
and reconstructing an image according to the first fusion pyramid and the second fusion pyramid to obtain the target image.
2. The method of claim 1, wherein the generating the first weight map of the first image and the second weight map of the second image comprises:
Determining Gaussian filter parameters of the first image and Gaussian filter parameters of the second image according to the exposure;
updating a first lookup table of the first image and a second lookup table of the second image according to the Gaussian filtering parameters of the first image and the Gaussian filtering parameters of the second image;
Generating a first weight map of the first image according to a first lookup table of the first image, and generating a second weight map of the second image according to a second lookup table of the second image.
3. The method of claim 2, wherein prior to constructing the third image pyramid of the first weight map and the fourth image pyramid of the second weight map, further comprising:
and performing Gaussian filtering on the first weight map and the second weight map respectively.
4. A method according to claim 3, wherein gaussian filtering the first weight map comprises:
filling the first weight map to obtain a filled first weight map; performing Gaussian filtering on the filled first weight graph according to the Gaussian filtering parameters of the first image;
And/or the number of the groups of groups,
Performing gaussian filtering on the second weight map, including:
filling the second weight map to obtain a filled second weight map;
and carrying out Gaussian filtering on the filled second weight graph according to the Gaussian filtering parameters of the second image.
5. The method according to any one of claims 1 to 4, wherein the fusing the first image pyramid, the second image pyramid, the third image pyramid, and the fourth image pyramid to obtain a first fused pyramid includes:
performing fusion processing on the first image pyramid and the third image pyramid to obtain a third fusion pyramid;
performing fusion processing on the second image pyramid and the fourth image pyramid to obtain a fourth fusion pyramid;
And carrying out fusion processing on the third fusion pyramid and the fourth fusion pyramid to obtain the first fusion pyramid.
6. The method of claim 5, wherein the fusing the first image pyramid and the third image pyramid to obtain a third fused pyramid comprises:
And the product obtained by multiplying the pixel values of the corresponding points of the first image pyramid and the third image pyramid is used as the pixel value of the corresponding point of the third fusion pyramid.
7. The method of claim 5, wherein fusing the third fusion pyramid and the fourth fusion pyramid to obtain the first fusion pyramid comprises:
and adding the pixel values of the corresponding points of the third fusion pyramid and the fourth fusion pyramid to obtain a sum which is used as the pixel value of the corresponding point of the first fusion pyramid.
8. The method of any one of claims 1 to 4, wherein the obtaining a first image and a second image comprises:
receiving a first operation of a user;
Responding to the first operation, and driving a camera to shoot a plurality of frames of raw images; the exposure amounts of the multi-frame raw images are different;
Generating the first image according to the multi-frame raw graph;
and generating the second image according to a standard raw image in the multi-frame raw image.
9. An electronic device, comprising:
A processor, a memory; wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the processor, cause the electronic device to perform the method of any of claims 1-8.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when run on a computer, causes the computer to perform the method of any of claims 1 to 8.
CN202310953742.5A 2023-07-31 2023-07-31 Dynamic range calibration method of image and electronic equipment Active CN117710264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310953742.5A CN117710264B (en) 2023-07-31 2023-07-31 Dynamic range calibration method of image and electronic equipment

Publications (2)

Publication Number Publication Date
CN117710264A CN117710264A (en) 2024-03-15
CN117710264B 2024-09-10

Family

ID=90157629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310953742.5A Active CN117710264B (en) 2023-07-31 2023-07-31 Dynamic range calibration method of image and electronic equipment

Country Status (1)

Country Link
CN (1) CN117710264B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108668093A (en) * 2017-03-31 2018-10-16 华为技术有限公司 The generation method and device of HDR image
CN114708173A (en) * 2022-02-22 2022-07-05 北京旷视科技有限公司 Image fusion method, computer program product, storage medium, and electronic device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10382674B2 (en) * 2013-04-15 2019-08-13 Qualcomm Incorporated Reference image selection for motion ghost filtering
WO2018137267A1 (en) * 2017-01-25 2018-08-02 华为技术有限公司 Image processing method and terminal apparatus
US10425599B2 (en) * 2017-02-01 2019-09-24 Omnivision Technologies, Inc. Exposure selector for high-dynamic range imaging and associated method
CN111489320A (en) * 2019-01-29 2020-08-04 华为技术有限公司 Image processing method and device
CN110599433B (en) * 2019-07-30 2023-06-06 西安电子科技大学 Double-exposure image fusion method based on dynamic scene
CN112532855B (en) * 2019-09-17 2022-04-29 华为技术有限公司 Image processing method and device
CN110717878B (en) * 2019-10-12 2022-04-15 北京迈格威科技有限公司 Image fusion method and device, computer equipment and storage medium
CN114494069B (en) * 2022-01-28 2024-10-15 广州华多网络科技有限公司 Image processing method and device, equipment, medium and product thereof
CN115170416A (en) * 2022-06-29 2022-10-11 北京空间机电研究所 Multi-exposure dynamic range enhancement method for low-light-level image
CN115242983B (en) * 2022-09-26 2023-04-07 荣耀终端有限公司 Photographing method, electronic device and readable storage medium
CN115760665A (en) * 2022-11-18 2023-03-07 深圳小湃科技有限公司 Multi-scale registration fusion method and device for images, terminal equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant